Monday, 3 September 2012

What type of model should central banks use?


As a follow-up to my recent post on alternatives to microfounded models, I thought it might be useful to give an example of where I think an alternative to the DSGE approach is preferable. I’ve talked about central bank models before, but that post was partly descriptive, and raised questions rather than giving opinions. I come off the fence towards the end of this post.

As I have noted before, some central banks have followed academic macroeconomics by developing often elaborate DSGE models for use in both forecasting and policy analysis. Now we can all probably agree it is a good idea for central banks to look at a range of model types: DSGE models, VARs, and anything else in between. (See, for example, this recent advert from Ireland.) But if the models disagree, how do you judge between them? For understandable reasons, central banks like to have a ‘core’ model, which collects their best guesses about various issues. Other models can inform these guesses, but it is good to collect them all within one framework. Trivially you need to make sure your forecasts for the components of GDP are consistent with the aggregate, but more generally you want to be able to tell a story that is reasonably consistent in macroeconomic terms.

Most central banks I know use structural models as their core model, by which I mean models that contain equations that make use of much more economic theory than a structural VAR. They want to tell stories that go beyond past statistical correlations. Twenty years ago, you could describe these models as Structural Econometric Models (SEMs). These used a combination of theory and time series econometrics, where the econometrics was generally at the single equation level. However, in the last few years a number of central banks, including the Bank of England, have moved towards making their core model an estimated DSGE model. (In my earlier post I described the Bank of England’s first attempt, BEQM, which I was involved with, but they have since replaced this with a model without that core/periphery design, more like the canonical ECB model of Smets-Wouters.)

How does an estimated DSGE model differ from a SEM? In the former, the theory should be internally consistent, and the data is not allowed to compromise that consistency. As a result, the data has much less influence over the structure of individual equations. Suppose, for example, you took a consumption function from a DSGE model, and looked at its errors in predicting the data. Suppose I could show you that these errors were correlated with asset prices: when house prices went down, people saved more. I could also give you a good theoretical reason why this happened: when asset prices were high, people were able to borrow more because the value of their collateral increased. Would I be allowed to add asset prices into the consumption function of the DSGE model? No, I would not. I would instead have to incorporate the liquidity constraints that gave rise to these effects into the theoretical model, and examine what implications they had not just for consumption, but also for other equations like labour supply or wages. If the theory involved the concept of precautionary saving, then as I indicated here, that is a non-trivial task. Only when that had been done could I adjust my model.
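To make that diagnostic concrete, here is a minimal sketch in Python, on entirely made-up data: the series names, the sample size and the 0.3 ‘true’ effect are assumptions for illustration, not estimates from any actual model. It regresses the consumption equation’s prediction errors on house price growth and reports the slope.

```python
# A sketch of the diagnostic described above, on simulated data: do the
# consumption equation's prediction errors co-move with house prices?
import numpy as np

rng = np.random.default_rng(0)
T = 120  # hypothetical quarterly sample

dlog_hp = rng.normal(0.005, 0.02, T)       # house price growth (made up)
# Pretend the DSGE equation leaves errors that partly track housing:
cons_errors = 0.3 * dlog_hp + rng.normal(0, 0.01, T)

# OLS of the errors on house price growth: a significant slope is the
# warning sign that a collateral/wealth channel is missing from the model.
X = np.column_stack([np.ones(T), dlog_hp])
beta, *_ = np.linalg.lstsq(X, cons_errors, rcond=None)
u = cons_errors - X @ beta
se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)) * u.var(ddof=2))
print(f"slope on house prices: {beta[1]:.2f} (t-ratio {beta[1] / se[1]:.1f})")
```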

In a SEM, things could move much more quickly. You could just re-estimate the consumption function with an additional term in asset prices, and start using that. However, that consumption function might well now be inconsistent with the labour supply or wage equation. The price of getting something nearer the data is that you lose the knowledge that your model is internally consistent. (The Bank’s previous model, BEQM, tried to have it both ways by adding variables like asset prices to the periphery equation for consumption, but not to the core DSGE model.)
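And a matching sketch of the SEM-style fix, again on made-up data with illustrative coefficients: re-estimate the consumption equation with the asset price term added and compare the fit. A real SEM equation would typically be an error-correction specification estimated on actual quarterly data; this only shows the mechanics, and how quick the fix is.

```python
# Sketch of the SEM-style fix: re-estimate consumption growth with a house
# price term added, and compare the fit with the original specification.
import numpy as np

rng = np.random.default_rng(1)
T = 120
dlog_y = rng.normal(0.005, 0.01, T)        # income growth (made up)
dlog_hp = rng.normal(0.005, 0.02, T)       # house price growth (made up)
dlog_c = 0.002 + 0.5 * dlog_y + 0.1 * dlog_hp + rng.normal(0, 0.008, T)

def ols(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, ((y - X @ b) ** 2).sum()

# Original equation: consumption growth on income growth only.
b0, rss0 = ols(np.column_stack([np.ones(T), dlog_y]), dlog_c)
# Augmented equation: add house price growth.
b1, rss1 = ols(np.column_stack([np.ones(T), dlog_y, dlog_hp]), dlog_c)

print(f"restricted: {np.round(b0, 3)}, RSS {rss0:.5f}")
print(f"augmented:  {np.round(b1, 3)}, RSS {rss1:.5f}")
```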

Now at this point many people think ‘Lucas critique’, and make a distinction between policy analysis and forecasting. I have explained elsewhere why I do not put it this way, but the dilemma I raise here still applies if you are interested only in policy analysis, and think internal consistency is just about the Lucas critique. A model can satisfy the Lucas critique (be internally consistent) and still give hopeless policy advice, because it is consistently wrong. A model that does not satisfy the Lucas critique can give better (albeit not perfectly robust) policy advice, because it is closer to the data.

So are central banks doing the right thing if they make their core models estimated DSGE models, rather than SEMs? Here is my argument against this development. Our macroeconomic knowledge is much richer than any DSGE model I have ever seen. When we forecast, or undertake policy analysis, we want to use as much of that knowledge as we can, particularly if that knowledge seems critical to the current situation. With a SEM we can come quite close to doing that. We can hypothesise that people are currently saving a lot because they are trying to rebuild their assets. We can look at the data to try to see how long that process may last. All this will be rough and ready, but we can incorporate what ideas we have into the forecast, and into any policy analysis around that forecast. If something else in the forecast, or policy, changes the value of personal sector net assets, the model will then adjust our consumption forecast. This is what I mean by making reasonably consistent judgements.

With a DSGE model in which neither precautionary saving nor some other balance sheet recession type of mechanism influences consumption, all we see are ‘shocks’: errors in explaining the past. We cannot put any structure on those shocks in terms of the endogenous variables in the model. So we lose this ability to be reasonably consistent. We are of course completely internally consistent with our model, but because our model is an incomplete representation of the real world we are consistently wrong. We have lost the ability to achieve our second best.
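In equation form, the contrast looks something like this (a sketch with illustrative notation, not any particular bank’s model): the estimated DSGE Euler equation sweeps everything it cannot explain into a residual with no structure, while a SEM-style equation ties the same deviation to an observable, net assets, which the rest of the model can move.

```latex
% Log-linearised DSGE Euler equation: the unexplained part is \varepsilon_t,
% a shock with no structure in terms of the model's endogenous variables
\hat{c}_t = \mathbb{E}_t \hat{c}_{t+1}
            - \tfrac{1}{\sigma}\left(\hat{r}_t - \mathbb{E}_t \hat{\pi}_{t+1}\right)
            + \varepsilon_t

% SEM-style alternative (illustrative): the deviation is tied to net assets
% a_{t-1}, so anything in the forecast that moves net assets moves consumption
\Delta c_t = \alpha + \beta \Delta y_t + \gamma \left(a_{t-1} - c_{t-1}\right) + u_t
```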

Now I cannot prove that this argument against using estimated DSGE models as the core central bank model is right. It could be that, by adding asset prices into the consumption function – even if we are right to do so – we make larger mistakes than we would by ignoring them completely, because we have not properly thought through the theory. The data provides some check against that, but it is far from foolproof. But equally you cannot prove the opposite either. This is another one of those judgement calls.

So what do I base my judgement on? Well, how about this thought experiment. It is sometime in 2005/6. Consumption is very strong, savings are low, and asset prices are high. You have good reason to think asset prices may be following a bubble. Your DSGE model has a consumption function based on an Euler equation, in which asset prices do not appear. It says a bursting house price bubble will have minimal effect. You ask your DSGE modellers if they are sure about this, and they admit they are not, and promise to come back in three years’ time with a model incorporating collateral effects. Your SEM modeller has a quick look at the data, says there does seem to be some link between house prices and consumption, and promises to adjust the model equation and redo the forecast within a week. Now choose, as a policy maker, which type of model you would rather rely on.

4 comments:

  1. They should use Stock Flow Consistent models, which predicted the crisis, unlike DSGE models.

    http://www.voxeu.org/article/no-one-saw-coming-or-did-they

    http://www.levyinstitute.org/pubs/wp_494.pdf

  2. Simon

    You get closer and closer with every post to accepting the logic behind stock flow consistent modelling (see post above) as the alternative to DSGE.

    Balance sheet effects and the impact of asset prices (collateral) are easily explained within such models, as they are based on agents' balance sheets and the effects of those balance sheets on financial services.

    These ensure internal consistency, as the quadruple accounting rule ensures that every asset and every price has a counterparty, and the interrelations of those assets and prices can be modelled using complexity theory. The maths simply drops out of the accurate accounting. (A minimal numerical sketch follows below.)

    My own latest model is an SFC model of banking including collateral and recollateralisation, which shows how asset price inflation expands the 'lending power' of banks (that is, the maximum extent to which they can expand their balance sheets and add to aggregate demand in the short run). See http://andrewlainton.wordpress.com/2012/07/22/correctly-modelling-reserves-cost-of-funding-and-collateral-in-monetary-circuit-theory/
    and http://andrewlainton.wordpress.com/2012/07/29/recollaterolisation-and-collaterol-chains-in-the-monetary-circuit-theory-banking-model/
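    For concreteness, here is a minimal sketch of the kind of model being described: Godley and Lavoie's simplest 'SIM' economy, in Python, with purely illustrative parameter values. Government spending is financed by issuing money, which is households' only asset, so household wealth equals cumulative government deficits by construction.

    ```python
    # Minimal stock-flow consistent "SIM" economy (after Godley and Lavoie).
    # Every flow has a counterpart: household saving equals the government
    # deficit, so the money stock H is consistent with the flows by construction.
    G, theta = 20.0, 0.2        # government spending and flat tax rate (made up)
    a1, a2 = 0.6, 0.4           # consumption out of disposable income and wealth
    H = 0.0                     # opening stock of money (household wealth)

    for t in range(1, 61):
        # Within-period system: Y = C + G, T = theta*Y, YD = Y - T,
        # C = a1*YD + a2*H(-1); solved analytically for Y.
        Y = (G + a2 * H) / (1 - a1 * (1 - theta))
        YD = Y * (1 - theta)
        C = a1 * YD + a2 * H
        H += YD - C             # household saving = government deficit
        if t % 20 == 0:
            print(f"t={t}: Y={Y:.1f}, H={H:.1f} (steady state Y = G/theta = {G/theta:.0f})")
    ```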

  3. Was going to leave a comment about Stock Flow Consistent models... saw these guys had beaten me to it.

    Here is a very short introduction:

    http://nakedkeynesianism.blogspot.com/2012/07/stock-flow-with-consistent-accounting.html

  4. I'm sorry but I have to disagree with your assessment. It's true that if you take a very hardcore microfoundations stand (the kind that says we should never use things like convex capital or investment adjustment costs, portfolio adjustment costs, Calvo pricing or quadratic adjustment costs, because they're too ad hoc), then yes: obviously there's a tradeoff between saying something fast but not so well, and saying something that could be deeper and more coherent but could take too long for policy or private sector decision cycles. But the types of DSGE models used as core models in some institutions are not like this. They allow you some flexibility to take into account things like precautionary saving through discount factor shocks, or credit constraints through "risk" or financing premia on bonds (a sketch of this wedge follows below). In short, you can insert the effect of house prices through shocks to the Euler equation you mention. The advantage of this is that it's easier to link to the more microfounded models than in a SEM, because we have worked-out examples in which certain types of credit constraints, or precautionary saving in response to (for example) higher unemployment risk in a recession, map into these "risk" premia (Smets and Wouters' terminology) or discount factor shocks (also known as preference shocks). You also get to at least partially keep the consistency of the DSGE approach in terms of aggregate resource constraints and stock/flow distinctions, which can be lost or hard to uncover in some SEM models. The formalisation of this started with Business Cycle Accounting (look for Chari, Kehoe and McGrattan (2007), though they may disagree with using their approach in this way). Look also at Caldara and Harrison's work on this, which was at least started at the BoE: https://editorialexpress.com/cgi-bin/conference/download.cgi?db_name=CEF2012&paper_id=390
    and the references they cite.
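    To be concrete, the wedge looks something like this (a sketch with illustrative notation): the Smets-Wouters style "risk premium" shock enters the consumption Euler equation as a wedge on the return households face, so credit constraints or precautionary motives can be mapped into its movements.

    ```latex
    % Euler equation with a "risk premium" wedge b_t on the return households face;
    % credit constraints or precautionary saving map into movements in b_t
    u'(c_t) = \beta \, \mathbb{E}_t\left[(1 + r_t)\, e^{b_t}\, u'(c_{t+1})\right]
    ```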
    The way you talk about SEMs as being highly adaptable and capable of capturing new effects is quite seductive, but if you really have to do it in a week, you'll most probably find some really bad instrument to control for any endogeneity, or you won't find any instrument and just run OLS ignoring any endogeneity, and by then you're squarely back to where this Nobel prize winner was in 1980 when questioning the link between macroeconomics and reality:
    http://www.ekonometria.wne.uw.edu.pl/uploads/Main/macroeconomics_and_reality.pdf
    and in his follow-up (which is also critical of some of the policy DSGE models, but hey, economists like being critical of everything and still finding something to say afterwards):
    http://sims.princeton.edu/yftp/CBModels/
    Bottom line: it's hard to find free lunches. The main advantage of SEMs in my mind is the usual one whenever a new competing way of doing things appears: if you're more experienced with a certain mode of analysis, then it may be better to use it even if it's not the best. This suggests that one of the main reasons SEMs would dominate DSGE models in many settings is that the managers and much of the staff are not the youngest. In which case, using a DSGE model as the core becomes more popular as some older people retire and new PhDs come in (and new Master's students as well, as DSGE modelling gradually filters into the good Master's curricula). And yes, in 15 years it's very possible the same comments will apply to why we're not using ACE models instead as the core model. Sorry for the long rant. But I just can't agree with your position.

    ReplyDelete
