It is a great irony that the microfoundations project, which was
meant to make macro just another application of microeconomics, has
left macroeconomics with very few friends among other economists. The
latest broadside
comes from Paul Romer. Yes it is unfair, and yes it is wide of the
mark in places, but it will not be ignored by those outside
mainstream macro. This is partly because he discusses issues on which
modern macro is extremely vulnerable.
The first is its treatment of data. Paul’s discussion of
identification illustrates how macroeconomics needs to use all the
hard information it can get to parameterise its models. Yet
microfounded models, the only models deemed acceptable in top journals for both theoretical and empirical analysis, are normally rather selective about the data they
focus on. Evidence, both micro and macro, is either ignored because it
is inconvenient, or put on a to-do list for further research. This is
an inevitable result of making internal consistency an admissibility
criterion for publishable work.
The second vulnerability is a conservatism which also arises from
this methodology. The microfoundations criterion, taken in its strict
form, makes some processes intractable to model: for example,
sticky prices where actual menu costs are a deep parameter.
Instead DSGE modelling uses tricks, like Calvo contracts. But who
decides whether these tricks amount to acceptable microfoundations or
are instead ad hoc or implausible? The answer depends a lot on
conventions among macroeconomists, and like all conventions these
move slowly. Again this is a problem generated by the
microfoundations methodology.
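To see why Calvo contracts count as a trick, here is the device in its textbook form (a sketch of the standard assumption, not of any particular paper). Each period a firm is allowed to reset its price only with probability $1-\theta$, so with Dixit-Stiglitz demand the price level evolves as

$$P_t = \left[ \theta P_{t-1}^{1-\epsilon} + (1-\theta)(P_t^{*})^{1-\epsilon} \right]^{\frac{1}{1-\epsilon}},$$

where $P_t^{*}$ is the optimal reset price and $\epsilon$ is the elasticity of substitution between goods. The stickiness parameter $\theta$ is simply imposed rather than derived from menu costs or any other optimising decision, which is precisely why its status as an acceptable microfoundation is a matter of convention.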
Paul’s discussion of real effects from monetary policy, and the
insistence on productivity shocks as business cycle drivers, is
pretty dated. (And, as a result, it completely misleads Paul Mason
here.)
Yet it took a long time for RBC models to be replaced by New
Keynesian models, and you will still see RBC models around. Elements
of the New Classical counter-revolution of the 1980s still persist in some places. It
was only a few years ago that I listened to a seminar paper where the
financial crisis was modelled as a large negative productivity shock.
Only in a discipline
which has deemed microfoundations as the only acceptable way of
modelling can practitioners still feel embarrassed about including
sticky prices because their microfoundations (the tricks mentioned
above) are problematic. Only in that discipline can respected
macroeconomists argue that, because of these problematic
microfoundations, it is best to ignore something like sticky prices
when doing policy work: an argument that would be laughed out of
court in any other science. In no other
discipline could you have a debate
about whether it was better to model what you can microfound rather
than model what you can see. Other economists understand this, but
many macroeconomists still think this is all quite normal.
'In no other discipline could you have a debate about whether it was better to model what you can microfound rather than model what you can see.'
Well you would; contemporary historians debate just about everything. That's the beauty of open, inter-disciplinary subjects.
Can you explain to this non-economist what you find so disagreeable in Paul Mason's analysis? It read to me as much a criticism of groupthink as anything else. Are the models he proposes towards the end of his piece unfit for purpose?
Not disagreeable, just wrong. Paul Mason writes "Yet orthodox economic theory insists it would have no real effect if the central banks pulled all this support – since the equations tell them there is no correlation between monetary policy and output." That statement, which is the impression Paul Romer's article gives, might just have been true for a few years in the 1980s before New Keynesian theory arrived. Since the 1990s New Keynesian theory has been the orthodoxy, and it is used by central banks around the world. A conversation with any mainstream macroeconomist would have put Paul Mason right on this.
Thanks for your response, which I accept given your far greater knowledge of such things!
What about the rest of the piece, the arguments re groupthink, paradigm shifts and new models? Do you think he has a point?
When I read that I thought "what the hell is he talking about - either my understanding of mainstream macro is way out, or his is." Glad to discover it is indeed his. Always surprises me when people like Paul Mason don't run controversial technical articles like this past academic economists first. I definitely would.
Delete"Since the 1990s New Keynesian theory is now the orthodoxy, and is used by central banks around the world."
I think when it comes to making the call, NKT has less relevance than you might believe - even if there are hordes of economists in their research departments playing around with their models. How much of a role do you think NKT played in designing the UMP that followed the credit crunch? Insofar as economic theory was used at all, it was very old stuff: Operation Twist, or even earlier.
Simon is right. I'm an undergraduate and I've been taught a Neo-Keynesian model and have heard nothing except terse criticism of the RBC school.
pewartstoat: Yes, but I think what he misses is the extent to which these things are rooted in the methodological approach.
This is great to read.
Surely it was only a year or two ago that you were defending microfoundations - not as the only admissible methodology, it's true, but as an essential component of 'serious' macroeconomics!
I'm really glad to see that you are now exposing the limitations of the microfoundations criterion for policy-oriented work. I hope this is a precursor of a more general trend to bring macroeconomic theory back into contact with the real world, in which people (sorry, economic actors) behave in ways that satisfy more than a purely rational, maximising goal.
Now, it seems, all that is needed is to convince the high priests, aka journal editors, that macro is more than an internally consistent set of axioms and equations.
On the point of consistency, my argument has always been that microfoundations is a progressive research strategy, but it should not be the only research strategy. I have not changed my view on this for decades!
I didn't think his mockery of shocks did justice to the question, fun though it was. But I do think that when the mechanisms that determine what happens to observable variables are driven by possibly fictional, possibly meaningful shocks, and the methods by which you identify those shocks are weak, you are at risk of continuing to work with fantastical nonsense.
I think it is reasonable to suppose that there are things which happen in the economy which we cannot model but which impinge on the behaviour being modelled, and hence to insert shock processes in the relevant equations. But if you start working backwards from the observed data and the mechanisms assumed by the model to conclude that *these* shocks are driving everything, then you are not in a good place. And if you really do think that unmodelled factors in a given equation are so important, it would be nice to provide a bit more evidence as to what they are in reality.
Once upon a time we had exogenous and endogenous variables, and only endogenous variables were stochastic. Comparative statics involved changing exogenous variables. Now everything is an endogenous process, and comparative statics involves shocks. I cannot see why this is a big deal.
I think the problem is when you conclude that recessions are, for example, "really" being driven by, say, investment price shocks, because you have 'identified' large shocks there, and 'identified' minor shocks elsewhere. Does that make sense?
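For concreteness, a sketch of what 'identifying' shocks usually amounts to in this kind of exercise (standard practice in estimated DSGE models, not a reference to any particular paper). An unmodelled factor such as technology is given a time-series process, say

$$\log A_t = \rho \log A_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim N(0,\sigma^2),$$

and the realised $\varepsilon_t$ are then backed out as whatever residuals make the model's equations fit the observed data. The worry expressed above is that large backed-out residuals in one equation are then read as evidence that shocks of that type are what drive recessions.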
Really, I think the most important criticism that Romer puts forward is the one related to the (fantastical) shocks, and I think that's the one we have to clarify. I do not really understand what he is aiming at with that criticism. On Twitter I asked him and Kocherlakota for clarification, arguing that if we have models without endogenous cycles or chaos, we need shocks to preferences or production functions to generate fluctuations - unless we want to say that all fluctuations are generated by policy shocks. Kocherlakota replied that shocks to beliefs about other people's decisions could be considered. Romer said that we should find what causes fluctuations and capture that in a model, which to me is the same as saying that fluctuations should be obtained endogenously, in a deterministic fashion. But maybe I am missing something. I do not think this makes much sense: we may be unable to understand what causes demand shocks, for instance, but we may still want to understand the reaction of the economy to such shocks. So it may be legitimate to assume a shock to preferences that generates an increase in spending today, even if it is probably not literally true that people change their preferences all at once. I would like to know what you think about this.
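To make that last example concrete, a minimal sketch of the preference shock being described (an illustration, not a claim about any specific model). Let the representative household maximise

$$E_0 \sum_{t=0}^{\infty} \beta^{t} \xi_t \, u(C_t), \qquad \log \xi_t = \rho \log \xi_{t-1} + \varepsilon_t.$$

A positive $\varepsilon_t$ raises the utility of consumption today relative to tomorrow, so spending rises now. The shock stands in for an unmodelled shift in demand, whether or not anyone's preferences have literally changed at once.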
Imagine if all of physics had to be "microfounded" on quantum mechanics or all of biology had to be "microfounded" on chemistry. We would have no theory of relativity or of natural selection.
How can central banks be Keynesian (New or otherwise) without control of fiscal matters?
Because they have a different definition of Keynesian to yours.
The Lucas critique basically boiled down to this: mainstream econometricians are trying to estimate parameters that don't even exist, since relationships between macro variables are not physical constants but merely the observed consequences of micro-level choices. But models in the New Classical tradition, including New Keynesian DSGE models, (almost?) always incorporate the concept of a representative agent. Now, since nobody imagines that human preferences are identical and quasi-homothetic, we know that this representative agent cannot be a real thing in the world, and the parameters of its utility function have no more "reality" to them than the aggregate correlations found in old-mainstream or "Paleo-Keynesian" models.
So in what respect are NK models an "improvement" on their forebears?
I agree, which is what my Review of Keynesian Economics article is partly about.
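For readers who want the condition behind the 'identical and quasi-homothetic' remark above, a sketch (the standard Gorman aggregation result, stated informally). Exact aggregation requires each agent's indirect utility to be linear in wealth with a common slope,

$$v_i(p, w_i) = a_i(p) + b(p)\, w_i,$$

so that aggregate demand depends only on total wealth and not on its distribution. Absent something like this, the representative agent's "deep parameters" are composites that can shift with the distribution of wealth, which is the commenter's point.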
Delete"parameters that don't even exist"
that's a good one -- ask God if they don't exist!
A very clever blog post on microfoundations:
https://meansquarederrors.blogspot.de/2016/09/the-microfoundations-hoax.html
Deep parameters ... this is a very self-deceiving term of art in macro modelling.
Can I ask an (I am sure) extremely naïve question? Why doesn't someone derive microfoundations from macroeconomic theory (marked to market)? If it were me, I would proceed in two steps, analogous to quantum theory in physics. I would first take observed behaviour (including irrationality, lack of information, and the fact that most companies today are services rather than manufacturing ones) as the "low-energy" state. I would then derive average behaviour as the "high-energy" state, again "marked to market". This would allow me to test the resulting microfoundations against real-world data. I might incorporate considerations of "if this goes on" to capture things like Minsky-moment behaviour, and I might also go back and see what questions the resulting microfoundations raise about my macroeconomic model. I might even crowdsource the effort.
What I like about this approach (if it is at all feasible) is that it puts the onus squarely back on the journals. They want microfoundations; you have given them some, marked to market. They say these are inferior to their microfoundations; the onus is on them to prove it, and you have real-world data on your side.