In a recent post I suggested one microfoundations-based argument for what Blanchard and Fischer call useful models, and I call aggregate models. Both Mark Thoma and Paul Krugman picked up on it, and I want to respond to both. While this will mostly be of interest to economists, I have not tagged this ‘for economists’ because I know from comments that some non-economists who think about the philosophy of (social) science find this interesting. If you are not in either category, this post is probably not for you.
Paul Krugman first. He makes a number of relevant points, but the bit I like best in his post is where he says “Wren-Lewis is on my side, sort of”. I really like the ‘sort of’. Let me say why.
In one sense I am on his side. I do not believe that the one and only way to think about macroeconomics is to analyse microfounded macromodels. I think too many macroeconomists today think this is the only proper way to do analysis, and this leads to a certain microfoundations fetishism which can be unhelpful. Aggregate models without microfoundations attached can be useful. On the other hand I really do not want to take sides on this issue. Most of the work I have done in the last decade has involved building and analysing microfounded macromodels, and I’ve done this because I think it is a very useful thing to do. Taking sides could too easily degenerate into a ‘for and against’ microfoundations debate – in such a debate I would be on both sides. I certainly do not agree with this: “So as I see it, the whole microfoundations crusade is based on one predictive success some 35 years ago; there have been no significant payoffs since.” The justification for aggregate models that I gave in my previous post was deliberately squarely within the microfoundations methodology because I wanted to convince, not antagonise. So ‘sort of’ suits me just fine.
What I am against is what I have called elsewhere the ‘microfoundations purist’ position. This is the view that if some macroeconomic behaviour does not have clear microfoundations, then any respectable academic macroeconomist cannot include it as part of a macromodel. Why do I think this is wrong? This brings me to Mark Thoma, who linked my piece with one he had written earlier on New Old Keynesians. Part of that piece describes why economists might, at least temporarily, forsake microfounded models in favour of a ‘useful’ (to use Blanchard and Fischer’s terminology) model from the past. To quote:
“The reason that many of us looked backward for a model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The models were largely constructed to analyze policy in the context of a Great Moderation....”
“So, if nothing in the present is adequate, you begin to look to the past. The Keynesian model was constructed to look at exactly the kinds of questions we needed to answer, and as long as you are aware of the limitations of this framework - the ones that modern theory has discovered - it does provide you with a means of thinking about how economies operate when they are running at less than full employment. This model had already worried about fiscal policy at the zero interest rate bound, it had already thought about Say's law, the paradox of thrift, monetary versus fiscal policy, changing interest and investment elasticities in a crisis, etc., etc., etc. We were in the middle of a crisis and didn't have time to wait for new theory to be developed, we needed answers, answers that the elegant models that had been constructed over the last few decades simply could not provide.”
I think this identifies a second reason why an aggregate model – a model without explicit microfoundations – might be preferred to microfounded alternatives, which Paul Krugman also covers in his point (3). This has to do with the speed at which microfoundations macro develops.
Developing new microfounded macromodels is hard. It is hard because these models need to be internally consistent. If we think that, say, consumption in the real world shows more inertia than in the baseline intertemporal model, we cannot just add some lags into the aggregate consumption function. Instead we need to think about what microeconomic phenomena might generate that inertia. We need to rework all relevant optimisation problems adding in this new ingredient. Many other aggregate relationships besides the consumption function could change as a result. When we do this, we might find that although our new idea does the trick for consumption, it leads to implausible behaviour elsewhere, and so we need to go back to the drawing board. This internal consistency criterion is partly what gives these models their strength.
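To see what this reworking involves, consider a standard habit-formation sketch (my illustration, not from the post; it assumes external habits and a constant real interest rate). In the baseline model the Euler equation links marginal utilities in adjacent periods; putting a habit term into preferences makes today's marginal utility depend on yesterday's consumption, so inertia emerges from the optimisation itself rather than from an appended lag:

```latex
% Baseline intertemporal model: the consumer maximises
% E_t \sum_{s \ge 0} \beta^s u(c_{t+s}), giving the familiar Euler equation
u'(c_t) = \beta (1+r) \, E_t \, u'(c_{t+1})

% Habit formation: period utility becomes u(c_t - h c_{t-1}), with 0 < h < 1
% and the habit stock taken as external. The Euler equation becomes
u'(c_t - h c_{t-1}) = \beta (1+r) \, E_t \, u'(c_{t+1} - h c_t)
```

Because marginal utility now depends on lagged consumption, optimal consumption adjusts sluggishly. But the same change to preferences also alters any other condition in which marginal utility appears, such as labour supply and asset-pricing relationships, which is exactly the internal-consistency check described above.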
It is very important to do all this, but it takes time. It takes even longer to convince others that this innovation makes sense. As a result, the development of microfounded macromodels is a slow affair. The most obvious example to me is New Keynesian theory. It took many years for macroeconomists to develop theories of price rigidity in which all agents maximised and expectations were rational, and still longer for them to convince each other that some of these theories were strong enough to provide a plausible basis for Keynesian-type business cycles.
A more recent example, and one more directly relevant to Mark Thoma’s discussion, is the role of various financial market imperfections in generating the possibility of a financial crisis of the type we have recently experienced. There is a lot of important and fascinating work going on in this area: Stephen Williamson surveys some of it here. But it will take some time before we work out what matters and what does not. In the meantime, what do we do? How should policy respond today?
To answer those questions, we will have to fall back on models that contain elements that appear ad hoc, by which I mean that they do not as yet have clear and widely accepted microfoundations. Those models may contain elements discussed by past economists, like Keynes or Minsky, who worked at a time before the microfoundations project took hold. Now microfoundations purists would not (I would hope) go so far as to say that kind of ad hoc modelling should not be done. What they might well say is please keep it well away from the better economic journals. Do this ad hoc stuff in central banks, by all means, but keep it out of state-of-the-art academic discourse. (I suspect this is why such models are sometimes called ‘policy models’.)
This microfoundations purist view is a mistake. It is a mistake because it confuses ‘currently has no clear microfoundations’ with ‘cannot ever be microfounded’. If you could prove the latter, then I would concede that – from a microfoundations perspective – you would not be interested in analysing this model. However, our experience shows that postulated aggregate behaviour that does not have a generally accepted microeconomic explanation today may well have one tomorrow, when theoretical development has taken place. New Keynesian analysis is a case in point. Do the purists really want to suggest that, prior to 1990 say, no academic paper should have considered the implications of price stickiness?
So here, I would suggest, is a second argument for using aggregate (or useful, or ad hoc) models. Unlike my first, it allows these models not to have any clear microfoundations at present. Such analysis should be respected if there is empirical evidence supporting the ad hoc aggregate relationship, and if the implications of that relationship could be important. In these circumstances, it would be a mistake for academic analysis to have to wait for the microfoundations work to be done. (This idea is discussed in more detail in “Internal consistency, price rigidity and the microfoundations of macroeconomics”, Journal of Economic Methodology (2011) Vol. 18, 129-146 - earlier version here.)