Models Should Come With a Warning Label

by Brent Gloy

It always used to make me laugh when I would buy a tool, open the owner’s manual, and have to wade through pages of warnings about all the really stupid things you could do to hurt yourself with it.  After reading the 10 pages of warnings, you finally get to 1 page of simple directions on how to actually assemble or use the tool.

We all know that those warnings are there because some poor sap put his finger on the blade of a reciprocating saw and started it up.  So, I guess it is probably a good thing that we have a lot of warnings about how the stuff we buy can hurt us.  I am really beginning to think that the same warnings should be applied to the models that seem to be so prevalent in society today.

Warning Labels

In that spirit, I’ll share just five of the warnings I would attach to models that involve complex, dynamic systems, especially where human behavior can alter the outcomes.

(1) Model is for informational and educational purposes only. The results are based on a drastic simplification of the current environment. As such, the model will not, and cannot, predict the future perfectly. Rather, the model should be used to help understand how policy, changes to behavior, and other shocks to the system might increase or decrease the predicted value.

(2) Model is only forecasting some of the factors of interest. Other factors may be outside of the scope of the model. Results should only be considered for the outputs for which the model was designed.

(3) If this model does not report a range of possible outcomes with probabilities assigned to them, you should not pay any attention to it; it is for entertainment purposes only.

(4) Never, under any circumstances, should actual outcomes be judged by comparing them to the predictions of this model. Specifically, this model is not useful for grading policy responses after the fact. Just because the model predicted more of the outcome than we actually got does not mean that we did a good or bad job. It simply means the results are different, and they could be different for many, many reasons.

(5) Model is one representation of the situation. There are many other models that are likely available to evaluate this same situation.  The results of this model may vary greatly from those produced by other models.  Some models may perform well in certain situations and poorly in others.  Users should consider how and why results differ across models.
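Warning (3) is easy to illustrate with code. The sketch below uses an entirely made-up model (a base value grown by an uncertain rate; every number is an illustrative assumption, not an estimate of anything real) to show the difference between a single point forecast and a range of outcomes with probabilities attached:

```python
import random

random.seed(42)

# Hypothetical model: outcome = base * (1 + growth), where the growth
# rate is uncertain. All numbers are illustrative assumptions only.
base = 1000.0

def simulate(n=10_000):
    """Draw the uncertain growth rate many times and collect outcomes."""
    outcomes = []
    for _ in range(n):
        growth = random.gauss(0.03, 0.02)  # assumed mean and spread
        outcomes.append(base * (1 + growth))
    return sorted(outcomes)

outcomes = simulate()
point_forecast = base * 1.03               # the misleading single number
low, high = outcomes[500], outcomes[9500]  # central 90% of outcomes

print(f"point forecast: {point_forecast:.0f}")
print(f"90% of simulated outcomes fall between {low:.0f} and {high:.0f}")
```

A reader handed only the point forecast has no way to know whether outcomes twice as large, or half as large, are plausible; the interval carries that information.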

Using Models

So why would we use models?  Well, just like that reciprocating saw, it turns out models can be really valuable in helping us understand a situation, how it might evolve, and how we might shape the outcome. In economics we use models all the time.  Perhaps the most common is the simple supply and demand framework.  If we knew all of the factors that determine the supply and demand curves (and the outcome we are modeling depended only upon supply and demand), we could predict the outcome of situations that alter those curves.  Fortunately, most people understand that supply and demand is a framework for thinking about how things might work in the world.  Often, things work as supply and demand predicts, so we use it as a conceptual framework.  But the magnitudes of the outcomes can frequently diverge quite substantially.  In other words, if we can get the direction right, we are usually doing well.
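To make the direction-versus-magnitude point concrete, here is a minimal sketch with made-up linear curves (the slopes and intercepts below are assumptions for illustration, not estimates of any real market):

```python
# Hypothetical linear market: demand Qd = a - b*P, supply Qs = c + d*P.
# All coefficients below are illustrative assumptions, not estimates.

def equilibrium(a, b, c, d):
    """Solve a - b*P = c + d*P for the market-clearing price and quantity."""
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity

# Baseline market.
p0, q0 = equilibrium(a=100, b=2, c=10, d=1)

# A positive demand shock shifts the demand intercept outward.
p1, q1 = equilibrium(a=120, b=2, c=10, d=1)

print(f"baseline:    P={p0:.2f}, Q={q0:.2f}")
print(f"after shock: P={p1:.2f}, Q={q1:.2f}")
# The direction is unambiguous: price and quantity both rise.
# But the *size* of the change depends on the slopes b and d, which are
# exactly the parameters we rarely observe in the real world.
```

Change the assumed slopes and the predicted price jump changes dramatically, even though the direction of the effect never does; that is the sense in which getting the direction right is the realistic goal.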

Models are abstractions of much more complex systems.  These abstractions can help us understand how certain factors influence the behavior of those systems.  However, we must also be cautious, because the systems we are trying to model are usually extremely complex and dynamic.  In social science, most of the phenomena we care about are complex; if they were simple, we wouldn’t need a model to understand them.  As a result, we should take most model predictions as helpful inputs in our decision-making process, but by no means should we ever do something simply because “the model says so.”

Models can give us a sense of how the world might work and how things might react if we make changes, and that is extremely useful.  It’s just important to remember the limitations. We can all become better consumers of the information provided by models.  In my opinion, we’d all be a bit better off if we understood some of the limitations and pushed for more transparency from the model-building experts.  The point is not to question their credibility, but to help us make better decisions.

Wrapping it Up

All this brings me to one of my favorite quotes:

“In some ways, predicting the economy is even more difficult than forecasting the weather, because the economy is not made up of molecules whose behavior is subject to the laws of physics, but rather of human beings who are themselves thinking about the future and whose behavior may be influenced by the forecasts that they or others make.”

Ben Bernanke, “Flexibility and Optimism in an Unpredictable World,” Boston College Law Review, September 2009, p. 942.

In short, models that rely on (or are subject to) human behavior are prone to much greater error than models of, say, rabbits in a confined environment.  Let’s keep that in mind moving forward.  Don’t throw the models out, but please, let’s consume them with a bit of caution and realize that the outcomes of the biological or physical models take no account of what is happening on the economic side of the world, and vice versa.


*This post was originally published for AEI Premium readers. The entire text, including the full list of ten warning labels, is available to AEI Premium subscribers.
