Winner of the New Statesman SPERI Prize in Political Economy 2016

Sunday, 17 August 2014

Why central banks use models to forecast

One of the things I really like about writing blogs is that it puts my views to the test. After I have written them, of course, through comments and other bloggers. But also as I write them.

Take my earlier post on forecasting. When I began writing it I thought the conventional wisdom was that model-based forecasts plus judgement did slightly better than intelligent guesswork. That view was based in part on a 1989 survey by Ken Wallis, which was about the time I stopped helping to produce forecasts. If that was true, then the justification for using model-based forecasting in policy making institutions was simple: even quite small improvements in accuracy had benefits which easily exceeded the extra costs of using a model to forecast.

However, when ‘putting pen to paper’ I obviously needed to check that this was still the received wisdom. Reading a number of more recent papers suggested to me that it was not. I’m not quite sure if that is because the empirical evidence has changed, or just because studies have had a different focus, but it made me think about whether this was really the reason that policy makers tended to use model based forecasts anyway. And I decided it was probably not.

In a subsequent post I explained why policymakers will always tend to use macroeconomic models, because they need to do policy analysis, and models are much better at this than unconditional forecasting. Policy analysis is just one example of conditional forecasting: if X changes, how will Y change? To see why this helps to explain why they also tend to use these models to do unconditional forecasting (what will Y be?), let’s imagine that they did not. Suppose instead they just used intelligent guesswork.

Take output for example. Output tends to go up each year, but this trend-like behaviour is spasmodic: sometimes growth is above trend, sometimes below. However output tends to gradually revert to this trend growth line, which is why we get booms and recessions: if the level of output is above the trend line this year, it is more likely to be above than below next year. Using this information can give you a pretty good forecast for output. Suppose someone at the central bank shows that this forecast is as good as those produced by the bank’s model, and so the bank reassigns its forecasters and uses this intelligent guess instead.
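As an aside, this kind of ‘intelligent guesswork’ is easy to sketch in code: fit a linear trend by least squares, then let the current deviation from trend decay at some persistence rate. Everything below (the series, the persistence parameter `rho`, the function name) is an illustrative assumption, not anyone’s actual procedure.

```python
# Minimal sketch of "intelligent guesswork": output reverts gradually to a
# linear trend, so next period's gap is a fraction (rho) of this period's gap.
# All numbers here are illustrative, not actual data.

def trend_reversion_forecast(log_output, rho=0.7, horizon=1):
    """Forecast log output by letting the deviation from a fitted
    linear trend decay at rate rho per period."""
    n = len(log_output)
    # Fit a linear trend by ordinary least squares: log_output ~ a + b*t
    t_mean = (n - 1) / 2
    y_mean = sum(log_output) / n
    b = sum((t - t_mean) * (y - y_mean)
            for t, y in enumerate(log_output)) / sum((t - t_mean) ** 2
                                                     for t in range(n))
    a = y_mean - b * t_mean
    gap = log_output[-1] - (a + b * (n - 1))          # current deviation from trend
    future_t = n - 1 + horizon
    return a + b * future_t + (rho ** horizon) * gap  # trend + decayed gap

# If output is above trend this year, the forecast stays above trend next
# year, but by less: exactly the mean reversion described in the text.
series = [4.60, 4.63, 4.65, 4.69, 4.70]  # illustrative log GDP
print(round(trend_reversion_forecast(series), 3))  # 4.728
```

Note that nothing in this forecast depends on oil prices, interest rates, or any other economic variable, which is what causes the trouble in the story that follows.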

This intelligent guesswork gives the bank a very limited story about why its forecast is what it is. Suppose now oil prices rise. Someone asks the central bank what impact higher oil prices will have on their forecast. The central bank says none. The questioner is puzzled. Surely, they respond, higher oil prices increase firms’ costs, leading to lower output. Indeed, replies the central bank. In fact we have a model that tells us how big that effect might be. But we do not use that model to forecast, so our forecast has not changed. The questioner persists. So what oil price were you assuming when you made your forecast, they ask? We made no assumption about oil prices, comes the reply. We just looked at past output.
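The asymmetry in this exchange can be put in toy form: a forecast with an oil-price channel can condition on an oil assumption, while the pure guess cannot. The elasticity of -0.02 and both function names below are invented for illustration, not taken from any actual central bank model.

```python
# Toy illustration (invented numbers): a structural-style forecast can
# condition on an oil-price assumption; pure guesswork cannot.

def model_forecast(baseline_growth, oil_price_change, oil_elasticity=-0.02):
    """Growth forecast with an assumed oil channel: here a 10% oil price
    rise lowers forecast growth by 0.2 percentage points."""
    return baseline_growth + oil_elasticity * oil_price_change

def intelligent_guess(baseline_growth, oil_price_change):
    """The trend-reverting guess: it ignores oil prices entirely."""
    return baseline_growth

# Oil prices rise 10%: the model forecast falls, the guess is unchanged,
# which is exactly why the central bank's answers sound inconsistent.
print(model_forecast(2.0, 10.0))     # ~1.8
print(intelligent_guess(2.0, 10.0))  # 2.0
```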

You can see the problem. By using an intelligent guess to forecast, the bank appears to be ignoring information, and it seems to be telling inconsistent stories. Central banks that are accountable do not want to be put in this position. From their point of view, it would be much easier if they used their main policy analysis model, plus judgement, to also make unconditional forecasts. They can always let the intelligent guesswork inform their judgement. If these forecasts are not worse than intelligent guesswork, then the cost to them of using the model to produce forecasts - a few extra economists - is trivial.


  1. James Mitchell: 'Where are we now? The UK Recession and Nowcasting GDP Growth using Statistical Models' (2009, NIESR) is an interesting comparison of when purely statistical models do and don't outperform structural models.

  2. I'd argue another reason for models is that they provide an air of bureaucratic impartiality and also function as cover-your-arse exercises - you can always blame the model. Informed guesswork, on the other hand, will cause the forecaster's personal judgment and intelligence to be questioned, which can impede their career. By extension it can also cause people to question how the central bank is run, as they keep filling an important position with apparently incompetent people - better to "bureaucratize" the position by merely requiring formal competence in using models.

  3. "However output tends to gradually revert to this trend growth line, which is why we get booms and recessions: if the level of output is above the trend line this year, it is more likely to be above than below next year."

    I just wonder, though, whether these models give people a false sense of security and lead to hubris. Do we ever really return to a steady state? Take the big structural shifts of recent decades. The end of the recovery from WWII devastation (which, by the way, post-Keynesians like Angus Maddison pointed out back in the 80s, but which seems to have been revived recently in relation to secular stagnation) results in lower multiplier effects from macroeconomic policy. The collapse of the Soviet Union opened up new energy supplies. The re-entry of China into the international economy brought twenty percent of the world's population into it - which on the one hand lowered the prices of manufactures as so much low-skilled manufacturing production moved there (also putting pressure on DC wages and perhaps, as a result, incomes, and possibly leading to a shrinking tax base in industrialised countries), and on the other hand put renewed pressure on food and commodity prices and on labour competition - arguably both with effects on societies worldwide. Add technological change; more generally, the hollowing out of the industrial base in many advanced countries; wars; terrorist attacks; and the reduced ability of the US to act as a locomotive in the system as its weight in the international economy falls, and with it its authority.

    Those are just some of the international ones, now start on the national ones, starting with immigration and population growth including the impact on the labour market and social services of over a million low wage workers from Eastern Europe alone since 2004...

    I guess what I am asking is: is there really such a thing as a singular stationary trend when all this is going on? Even some of the business cycles may be related to these major structural shifts and not independent of them. The world is dynamic, it is changing. That is the nature of history. Can we really talk about returns to a trend?

  4. SWL always makes me feel like I just read something that was well written and made a good point, but then, for the life of me, I realize, I have no idea what the point was.

    I think the conclusion is this: economic forecasts are worthless for all advertised purposes. However, there are conversations that economists can imagine where instead of getting defensive, a banker can point to the words in an economic forecast and say "Because of that over there, with the circles and arrows and the numbers on the back indicating what each one is."

    And that's ok, because trillions of dollars are at stake and economists only cost about $100,000 per year, per economist.

    Even on those meager terms, two obvious questions arise:
    1. Aren't you forgetting about all the money spent educating those economists? Including the salary and time of SWL?
    2. When exactly are these conversations taking place? Where central bankers are forced by Congress to give clear and explicit answers to questions about the future?

  5. The real answer is quite simple: the fable of the emperor's new clothes only works because everyone knows the difference between clothed and naked. To understand the similar pronouncement about modeling, you usually have to get two years into a PhD program, and by then you are invested both financially and emotionally in the magic emperor clothes business.

    1. You are right. I would never advise people to do a PhD in economics, especially in macroeconomics, unless you really want to live the real-life experience of fully understanding the meaning of diminishing returns. Also do a little YouTubing on Sargent, Lucas and Prescott and you get a feel for what the people who do it are like.

  6. Non-economist here. I have just caught up with your series of posts on forecasting. This is a comment on the series rather than just this post.

    It was good to see your comment that economic forecasting is no better than intelligent guesswork. The interesting question is who exactly disagrees with this statement?

    What would a non-economist say? Any business executive will tell you that demand forecasting for existing products is based on intelligent guesswork. Demand forecasting for new products is often based on a finger in the air estimate. Accountants will tell you of the difficulties involved in the annual budgeting process which requires forecasting of spending up to a year in advance. Anyone who has ever speculated on financial markets knows the difficulties involved in forecasting the prices of currencies, shares, commodities etc. Even the person in the street who bets on the results of trivial events like soccer matches, tennis matches or Strictly Come Dancing will tell you of the difficulties involved in making accurate forecasts.

    You make a comparison with medicine. I agree. We don’t expect medical doctors to forecast precisely who will succumb to a disease or when. No-one expects doctors to forecast the precise spread of a contagious disease such as the current Ebola outbreak in Africa.

    In seismology, no-one expects the experts to make any forecast at all about the timing of earthquakes. If anyone did make such forecasts they would be regarded as a crank until and unless they could demonstrate that their forecasts were accurate.

    So who is it that expects precise forecasts in macroeconomics? It’s not non-economists or scientists in other fields. It is economists who discuss their subject as a social science equivalent of physics and who draw analogies with the abilities of astronomers to predict the paths of heavenly bodies. For non-economists, the problem with macro-economic forecasting is that economists often appear to vastly overrate their own forecasting abilities and also that they base the legitimacy of their proposed policy solutions on these abilities.

    Forecasting is not an end in itself. It is just one important element in managing a system. That’s true whether the system is a business or a trading system or a human body. For example, a business which recognises that its forecasts are merely intelligent guesses will supplement the forecasts with other capabilities:

    Risk management: If the forecast is likely to be wrong then what are the associated risks and how should we manage them?
    Target setting and measurement: How accurate are the forecasts? Are they getting better or worse? How accurate do they need to be to be useful? What are the inherent limitations of ANY forecast in an uncertain world? What is the target for improvement? Is the target realistic? What changes might we make to deliver the target for improvement?
    Business agility: How can we configure the business to allow it to respond quickly and effectively when actual results show a deviation from the forecast?
    Contingency planning: If a major unexpected event occurs then what actions should we take?
    Learning and problem solving: How do we diagnose and solve unexpected problems? How do we reduce the likelihood of recurrences of the same problems in the future?

    It is the absence of these capabilities in economics, or sometimes even of any apparent understanding of the need for them, that leads non-economists to distrust economists (and politicians). It is not the accuracy of the forecasting that is the problem.

    (cont’d below)

  7. On another aspect of forecasting, you recently posted that Nate Silver’s accurate forecasting of the result of the 2012 US Presidential election made him “an appropriate hero for the nerds”. I have two points.

    First, the US Presidential election system is not particularly difficult to forecast. There are only 51 contests (50 states plus DC). All electoral votes in 2012 went to the winner in each state. There are only two realistic winners in each state with virtually no tactical voting. Many results are foregone conclusions. In 2012, 19 states were won with a majority of over 20% of the total vote; a further 17 states had a majority of 10-20%; a further 11 states had a majority of 5-10%. That leaves just four states where the majority was less than 5% and where the outcome was in significant doubt. Finally, the analysis was based on opinion polls, so it involved collation of existing polling data rather than the use of the types of mathematical model used in economics. Silver did well but the forecasting problem was not particularly difficult. See table on page 6 of document below for actual 2012 results.

    Second, Silver recently attempted to forecast the outcome of the soccer World Cup. He failed. His predicted winners lost 7-1 in the semi-finals.

    The Guardian compared three sets of forecasts for the matches prior to the semi-finals: Nate Silver, Goldman Sachs and a cat parasite. The cat parasite was as good as Silver and better than Goldman Sachs – and, unlike Silver, the parasite forecast both of the semi-finals and the final correctly too.

    The Guardian also did an assessment of the likelihood of a 7-1 score in the semi-final, and the implication of such rare events for all forecasting.

    I have nothing against Nate Silver, and he did well with his election predictions. However, he is not a forecasting hero.

    1. There is a really important point here: why forecast what you know, with certainty, you cannot possibly forecast?

      You see answers like: policy makers ask us to. But policy makers ask for a lot of things. The answer has to do with a science where absolutely no claim is falsifiable and yet every claim is false. The answer is that once you start acknowledging the limits of Econ, there's no principled line to draw between the good Econ and the bad.

      That is to say, all of Econ is as valid as macro-forecasting.

    2. "the problem with macro-economic forecasting is that economists often appear to vastly overrate their own forecasting"

      Unfortunately you are seeing macroeconomics as the media portrays it, not as it actually is. All academic macroeconomists that I know agree that unconditional macro forecasting is little better than intelligent guesswork. So do those in policy making institutions, which is why the Bank of England publish their fan charts. What you are reacting to are the minority of macroeconomists who are employed by financial firms because forecasts get media exposure and because they think their clients want forecasts. Even in their case most would be honest about their capabilities if you asked them, but the media never does.

    3. This is a new one. It is along the lines of a Noah Smith post: slowly backing up the scope of academic economics until it disappears completely. Here SWL says two kinds of forecasts exist: the ones which the media covers and the good ones academics do. The good ones always include several conditions, one of which is impossible. Then, when the forecast is wrong the academic says: but the impossible thing didn't happen, therefore I'm still right. Heads I win, tails you lose.

    4. You need to read my posts more carefully if you want to make constructive comments.

