

Saturday, 11 August 2012

Handling complexity within microfoundations macro


In a previous post I looked at a paper by Carroll which argued that the aggregate consumption function proposed by Friedman looks rather better than more modern intertemporal consumption theory might suggest, once you take the issue of precautionary saving seriously. The trouble was that to show this you had to run computer simulations, because the problem of income uncertainty was mathematically intractable. So how do you put this finding into a microfounded model?
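To fix ideas, here is a minimal sketch of the kind of numerical exercise involved: a finite-horizon consumption problem with uncertain income, solved by backward induction on a grid. All parameter values, grids and variable names are illustrative; this is not Carroll's code or calibration.

```python
import numpy as np

# Illustrative parameters only, not Carroll's calibration
beta, R, rho = 0.96, 1.03, 2.0           # impatience, gross return, risk aversion
y_vals = np.array([0.7, 1.0, 1.3])       # possible income draws
y_prob = np.array([0.25, 0.50, 0.25])    # their probabilities
T = 50                                    # planning horizon
m_grid = np.linspace(0.05, 10.0, 200)    # cash-on-hand grid

def u(c):
    return c ** (1 - rho) / (1 - rho)    # CRRA utility

V_next = u(m_grid)                       # final period: consume everything
policy = np.zeros((T, m_grid.size))

# Backward induction: at each date, choose consumption to maximise utility plus
# the discounted expected value of next period's cash-on-hand, taking the
# expectation over the uncertain income draw.
for t in range(T - 1, -1, -1):
    V_now = np.empty_like(m_grid)
    for i, m in enumerate(m_grid):
        c_grid = np.linspace(1e-6, m, 100)               # no borrowing allowed
        a = m - c_grid                                   # end-of-period assets
        EV = sum(p * np.interp(R * a + y, m_grid, V_next)
                 for y, p in zip(y_vals, y_prob))
        vals = u(c_grid) + beta * EV
        j = np.argmax(vals)
        V_now[i], policy[t, i] = vals[j], c_grid[j]
    V_next = V_now

# policy[0] traces out consumption as a function of cash-on-hand: it is concave,
# and consumption is held back at low wealth, which is the precautionary motive.
```

The point of the example is simply that the answer arrives as a table of numbers (the policy array), not as a closed-form consumption function that could be slotted into an analytical model.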

While I want to use the consumption and income uncertainty issue as an example of a more general problem, the example itself is very important. For a start, income uncertainty can change, and we have some evidence that its impact could be large. In addition, allowing for precautionary savings could make it a lot easier to understand important issues, like the role of liquidity constraints or balance sheet recessions.

I want to look at three responses to this kind of complexity, which I will call denial, computation and tricks. Denial is straightforward, but it is hardly a solution. I mention it only because I think that it is what often happens in practice when similar issues of complexity arise. I have called this elsewhere the streetlight problem, and suggested why it might have had unfortunate consequences in advancing our understanding of consumption and the recent recession.

Computation involves embracing not only the implications of the precautionary savings results, but also the methods used to obtain them. Instead of using computer simulations to investigate a particular partial equilibrium problem (how to optimally plan for income uncertainty), we put lots of similar problems together and use the same techniques to investigate general equilibrium macro issues, like optimal monetary policy.

This preserves the internal consistency of microfounded analysis. For example, we could obtain the optimal consumption plan for the consumer facing a particular parameterisation of income uncertainty. The central bank would then do its thing, which might include altering that income uncertainty. We then recompute the optimal consumption plan, and so on, until we get to a consistent solution.
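Schematically, the procedure is a fixed-point loop between the blocks of the model. The sketch below is purely illustrative: solve_consumer and policy_rule are hypothetical stand-ins for a numerically solved household problem and the central bank's behaviour, not functions from any actual paper.

```python
# A schematic fixed-point iteration between a household block and a policy block.
# Both functions are hypothetical placeholders for numerically solved problems.

def solve_consumer(income_uncertainty):
    # stand-in: optimal saving rises with perceived income uncertainty
    return 0.05 + 0.5 * income_uncertainty

def policy_rule(saving_rate):
    # stand-in: policy alters the income uncertainty households face
    return max(0.01, 0.10 - 0.2 * saving_rate)

uncertainty = 0.10
for it in range(100):
    saving = solve_consumer(uncertainty)      # household block, given the environment
    new_uncertainty = policy_rule(saving)     # policy block, given household behaviour
    if abs(new_uncertainty - uncertainty) < 1e-8:
        break                                 # the two blocks are mutually consistent
    uncertainty = new_uncertainty

print(f"consistent solution after {it + 1} iterations: "
      f"saving = {saving:.4f}, uncertainty = {uncertainty:.4f}")
```

In a real application each block would itself be a substantial numerical problem, but the logic of iterating to mutual consistency is the same.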

We already have plenty of papers where optimal policy is not derived analytically but through simulation.(1) However, these papers typically include microfounded equations for the model of the economy (the consumption function etc). The extension I am talking about here, in its purest form, is where nothing is analytically derived. Instead the ingredients are set out (objectives, constraints etc), and (aside from any technical details about computation) the numerical results are presented – there are no equations representing the behaviour of the aggregate economy.

I have no doubt that this approach represents a useful exercise, if robustness is investigated appropriately. Some of the very interesting comments on my earlier post did raise the question of verification, but while that is certainly an issue, I do not see it as a critical problem. But could this ever become the main way we do macroeconomics? In particular, if results from these kinds of black box exercises were not understandable in terms of simpler models or basic intuition, would we be prepared to accept them? I suspect they would be a complement to other forms of modelling rather than a replacement, and I think Nick Rowe agrees, but I may be wrong. It would be interesting to look at the experience in other fields, such as Computable General Equilibrium models in international trade.

The third way forward is to find a microfoundations 'trick'. By this I mean a set-up which can be solved analytically, but at the cost of realism or generality. Recently Carroll has done just that for precautionary saving, in a paper with Patrick Toche. In that model a representative consumer works, faces some probability of becoming unemployed (the income uncertainty), and once unemployed remains unemployed until death. The authors suggest that this set-up can capture a good deal of the behaviour that comes out of the computer simulations that Carroll discussed in his earlier paper.
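The sort of condition such a set-up delivers is a stylised Euler equation for an employed consumer who faces a per-period probability p of permanently losing their job (the notation here is illustrative rather than the paper's):

```latex
u'(c^{e}_{t}) = \beta R \left[ (1-p)\, u'(c^{e}_{t+1}) + p\, u'(c^{u}_{t+1}) \right]
```

Because unemployment is an absorbing state, the unemployed consumer faces no further income uncertainty, so that part of the problem can be handled with standard perfect-foresight methods, which is what buys the analytical tractability.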

I think Calvo contracts are a similar kind of trick. No one believes that firms plan on the basis that the probability of their prices changing is immutable, just as everyone knows that one spell of unemployment does not mean that you will never work again. In both cases the assumption is a device that allows you to capture a feature of the real world in a tractable way.
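For reference, the standard Calvo assumption in its usual textbook form: each period a firm gets to reset its price with a fixed probability 1 − θ, regardless of how long its current price has been in place. In logs, the aggregate price level and the expected length of a price spell are then

```latex
p_t = \theta\, p_{t-1} + (1-\theta)\, x_t,
\qquad
\mathbb{E}[\text{duration of a price spell}] = \frac{1}{1-\theta},
```

where x_t is the price chosen by the firms resetting in period t.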

However, these tricks do come at a cost: we can be less certain of the model's internal consistency. If we derive a labour supply function and a consumption function from the same intertemporal optimisation problem, we know these two equations are consistent with each other. We can mathematically prove it (see the sketch below). Furthermore, we are content that the underlying parameters of that problem (impatience, the utility function) are independent of other parts of the model, like monetary policy. Now Noah Smith is right that this contentment is a judgement call, but it is a familiar call. With tricks like Calvo contracts, we cannot be that confident. This is something I hope to elaborate on in a subsequent post.
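To illustrate the kind of consistency meant here: if both behavioural equations come from the same problem, say maximising the expected discounted sum of period utilities U(c, 1−n) subject to a budget constraint, then the labour supply condition and the consumption Euler equation share the same preferences by construction (written here for a constant gross real return R for simplicity):

```latex
\frac{U_{1-n}(c_t, 1-n_t)}{U_{c}(c_t, 1-n_t)} = w_t
\qquad\text{and}\qquad
U_{c}(c_t, 1-n_t) = \beta R\, \mathbb{E}_t\, U_{c}(c_{t+1}, 1-n_{t+1}).
```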

This is not to suggest that these tricks are not useful – I have used Calvo contracts countless times, and I think the model in Carroll and Toche is neat. It is instead to suggest that the methodological ground on which these models stand is rather shakier as a result. We can no longer write ‘I can prove the model is internally consistent’, only ‘I have some reasons for believing the model may be internally consistent’. Invariance to the Lucas critique becomes a much bigger judgement call.

There is another option that is implicit in Carroll’s original paper, though perhaps not a microfoundations option. We use computer simulations of the kind he presents to justify an aggregate consumption function of the sort Friedman suggested. Aggregate equations would be microfounded in this sense (there need be no reference to aggregate data), but they would not be formally (mathematically) derived. Now the big disadvantage of this approach is that there is no procedure to ensure the aggregate model is internally consistent. However, it might be much more understandable than the computation approach (we could see and potentially manipulate the equations of the aggregate model), and it could be much more realistic than using some trick. I would like to add it as a fourth possible justification for starting macro analysis with an aggregate model, where the aggregate equations are supported by reference to papers that simulate optimal consumer behaviour.
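As a sketch of what such a justified-but-not-derived aggregate equation might look like, here is one conventional rendering of a Friedman-style consumption function, with permanent income formed as a weighted average of current income and last period's estimate. The parameters k and λ are illustrative; the idea would be that simulation evidence of the kind in Carroll's paper disciplines their values and the functional form, rather than the equation being derived analytically.

```latex
C_t = k\, Y^{P}_{t},
\qquad
Y^{P}_{t} = \lambda\, Y_t + (1-\lambda)\, Y^{P}_{t-1}
```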

(1) Simulation analysis can make use of mathematically derived first order conditions, so the distinction here is not black and white. There are probably two aspects of the distinction that matter for the point at hand: generality and transparency of analysis, with perhaps the latter being more important. My own thoughts on this are not as clear as I would like.

3 comments:

  1. Can you define what you mean by "internal consistency"?

    For example, one can imagine a bubble in which everyone plans on selling the asset for more money to someone else, just as most families imagine that their children are above average. Each individual can believe that their own wage will not rise with inflation even if they simultaneously believe that aggregate wages will, etc.

    Similarly, even at the individual level, you see time inconsistency.

    So should macro models with these features (time inconsistency for each agent and aggregate inconsistency) not be considered?

    Or perhaps I am misinterpreting you -- what is the definition of internal consistency that you would like your model to have, and more importantly, why?

  2. It seems to me that this income uncertainty effect is what macroeconomists have been searching for from the start. Unfortunately it is very hard to model. There will be plenty of computation, but I think ultimately we will need to look for some tricks to make this amenable to analytical approaches before we get any real insights from it.

  3. Fed subsidizing low interest rates on federal debt? Silly argument.

    http://socialmacro.blogspot.com/2012/08/fed-subsidizing-low-interest-rates-on.html

