Sometimes I wonder how others manage to write short posts. In
my earlier post about forecasting, I used an analogy with
medicine to make the point that an inability to predict the future does not
invalidate a science. This was not the focus of the post, so it was a single
sentence, but some comments suggest I should have said more. So here is an extended version.
The level of output depends on a huge number of things: demand
in the rest of the world, fiscal policy, oil prices etc. It also depends on
interest rates. We can distinguish between a conditional and an unconditional
forecast. An unconditional forecast says what output will be at some date. A
conditional forecast says what will happen to output if interest rates, and
only interest rates, change. An unconditional forecast is clearly much more
difficult, because you need to get a whole host of things right. A conditional
forecast is easier to get right.
Paul Krugman is rightly fond of saying that Keynesian
economists got a number of things right following the recession: additional
debt did not lead to higher interest rates, Quantitative Easing did not lead to
hyperinflation, and austerity did reduce output. These are all conditional
forecasts. If X changes, how will Y change? An unconditional forecast says what
Y will be, which depends on forecasts of all the X variables that can influence
Y.
We can immediately see why the failure of unconditional
forecasts tells us very little about how good a model is at conditional
forecasting. A macroeconomic model may be reasonably good at saying how a
change in interest rates will influence output, but it can still be pretty poor
at predicting what output growth will be next year because it is bad at
predicting oil prices, technological progress or whatever.
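To make the distinction concrete, here is a minimal sketch in Python. Every number in it is an illustrative assumption of mine, not an estimate from any actual model: output growth depends on interest rate changes plus shocks the forecaster cannot predict. A simple fitted model recovers the policy effect well (a good conditional forecast) while its unconditional forecasts of growth still miss by roughly the size of the shocks.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative "true" economy (all numbers made up for this sketch):
# output growth falls by 0.5 points for each 1-point rise in interest
# rates, plus shocks (oil prices, technology, world demand) that no
# forecaster can predict.
true_effect = -0.5
rate_changes = rng.normal(0.0, 1.0, 200)   # observed policy changes
shocks = rng.normal(0.0, 2.0, 200)         # unforecastable shocks
growth = 2.0 + true_effect * rate_changes + shocks

# Fit the simple model: growth = a + b * rate_change
b_hat, a_hat = np.polyfit(rate_changes, growth, 1)

# Conditional question: "if rates rise by 1 point, what happens to growth?"
# The estimate comes out close to the true -0.5, so the model is useful here.
print(f"estimated effect of a 1-point rate rise: {b_hat:.2f} (true {true_effect})")

# Unconditional question: "what will growth actually be?"
# Even knowing the policy effect, the forecast misses by roughly the size
# of the shocks the model cannot predict.
uncond_error = growth - (a_hat + b_hat * rate_changes)
print(f"typical unconditional forecast miss: {np.abs(uncond_error).mean():.2f} points")

The particular numbers do not matter; the point of the sketch is only that the error in the estimated policy effect can be small while the error in the level of output remains large.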
This is why I use the analogy with medicine. Medicine can tell
us that if we eat our 5 (or 7) a day our health will tend to be better, just as macroeconomists now believe explicit
inflation targets (or something similar) help stabilise the economy. Medicine
can in many cases tell us what we can do to recover more quickly from illness,
just as macroeconomics can tell us we need to cut interest rates in a recession.
Medicine is not a precise enough science to tell each of us how our health will
change year to year, yet no one says that because it cannot make these
unconditional predictions it is not a science.
This tells us why central banks would use macroeconomic models even if they did not forecast: they want to know what impact their policy changes will have, and models give them a reasonable idea about this.
This is just one reason why Lars Syll, in a post inevitably disagreeing with me, is
talking nonsense when he says: “These forecasting models and the organization
and persons around them do cost society billions of pounds, euros and dollars
every year.” If central banks would have models anyway, then the cost of using
them to forecast is probably half a dozen economists at most, maybe fewer. Even if you double that to allow for the part-time involvement of
others, and also allow for the fact that economists in central banks are much
better paid than most academics, you cannot get to billions!
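As a rough sanity check of that claim, here is a back-of-envelope calculation; every figure in it (per-head cost, number of institutions) is an assumption of mine for illustration, not a number from the post or from Lars:

# Back-of-envelope check of the "you cannot get to billions" claim.
# Every figure below is an assumption for illustration only.
forecasting_economists = 6        # "half a dozen economists at most"
part_time_equivalents = 6         # doubled to allow for part-time involvement of others
cost_per_head = 300_000           # generous annual pay plus overheads, in pounds
institutions = 50                 # suppose many central banks do the same

annual_cost = (forecasting_economists + part_time_equivalents) * cost_per_head * institutions
print(f"rough annual cost: £{annual_cost:,}")   # about £180 million, well short of billions

Even with deliberately generous assumptions, the total stays an order of magnitude below billions per year.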
This also helps explain why policymakers like to use macroeconomic models to do unconditional forecasting, even if those forecasts are no better than intelligent guesswork, but I'll elaborate on that in a later post.
Although not entirely related to the issue of macro forecasting, this post reminded me of Bryan Caplan's post on Foote, Gerardi, and Willen's paper on the subprime crisis.
In short, the analysts at Lehman made excellent conditional forecasts but terrible unconditional ones.
http://econlog.econlib.org/archives/2013/05/conditional_ins.html
Simon, having said all that, suppose someone comes along and *is* able to forecast accurately some limited set of macro variables. And say this person has a model which is very different from mainstream models. Does the mainstream have an obligation to check it out, or are they justified in ignoring it because it's not a mainstream model?
It seems to me that if the new model compares very favorably with existing mainstream models in forecasting ability (whatever it's capable of forecasting), then the mainstream absolutely does have an obligation to try to understand why their models are inferior at forecasting these same variables.
What's your opinion?
Dear Simon,
You write:
"This tells us why central banks will use macroeconomic models even if they did not forecast, because they want to know what impact their policy changes will have, and models give them a reasonable idea about this. This is just one reason why Lars Syll, in a post inevitably disagreeing with me, is talking nonsense when he says: “These forecasting models and the organization and persons around them do cost society billions of pounds, euros and dollars every year.” If central banks would have models anyway, then the cost of using them to forecast is probably no more than half a dozen economists at most, maybe less. Even if you double that to allow for the part time involvement of others, and also allow for the fact that economists in central banks are much better paid than most academics, you cannot get to billions!"
Well, of course, billions may be too much when it comes to central banks, but I was actually not only referring to central banks here, but to the forecasting activities of all “monetary and fiscal authorities.” In that light my, admittedly, nothing more than “intelligent guess” maybe wasn't that nonsensical?
Best regards,
Lars
Whilst my own unintelligent guess is an order of magnitude lower than Lars', I think he must have a point here.
If an activity, i.e. (unconditional) forecasting, has little value, producing output that is no better than intelligent guesswork, then why subsidize it at all?
Good question, which I try and answer in my latest post:
http://mainlymacro.blogspot.co.uk/2014/08/why-central-banks-use-models-to-forecast.html
One major problem with the analogy between medicine and macroeconomics is that doctors (or at any rate health services) treat many patients. A statistical tendency identified in a large enough part of that population can then be a guide to the likelihood of an individual member of that population responding to an intervention in a particular way. Unfortunately macroeconomics deals with only one 'patient' (or at any rate a small number of patients with very varied characteristics that cannot be matched in the way that clinical trial subjects are).
It should therefore be accepted that macroeconomic predictions can only be made within a very wide range. Policies should thus be planned to be of benefit (or at least not disastrous!) for that wide range of possible scenarios. (Charles Manski's work seems a step in the right direction here but is probably not radical enough.) A continuing barrier to this approach is the persistent failure of neoclassical economics to model the most basic of economic institutions in a realistic way.
It may well become apparent that our best approach would be to ensure basic human needs are met, along with equal political and economic access for all - and then stand back and hope for (and perhaps reasonably anticipate) the best.
Isn't there a further problem with the analogy - that the bodies those doctors have treated, and that form part of the evidence base for the future, operate according to processes that don't change (significantly) over time or with the shifting whims of human consciousness? In other words, we can assume that many more variables remain constant. That's just not true of human behaviour, which is what economists are concerned with.
Isn't the problem that most people don't, and probably won't bother to, understand the difference between the two types of forecasts? That is, unconditional forecasts are easy to 'understand' and to base your investment decisions on, and thus have excessive effects on the economy? Maybe this is the cost of unconditional forecasting, not the labour cost of hiring a few economists.