There have been two strands of reaction to my last post. One has been to interpret it as yet
another salvo in the macro wars. The second has been to deny that there is an issue
here; to quote Tony Yates: “The pragmatic microfounders
and empirical macro people have won out entirely”. If people are confused,
perhaps some remarks by way of clarification might be helpful.
There are potentially three different debates going on here.
The first is the familiar Keynesian/anti-Keynesian debate. The second is whether
‘proper’ policy analysis has to be done with microfounded models, or whether
there is also an important role for more eclectic (and data-based) aggregate
models in policy analysis, like IS-LM. The third is about how far
microfoundation modellers should be allowed to go in incorporating
non-microfounded (or maybe behavioural) relationships in their models.
Although all three debates are important in their own right, in
this post I want to explore the extent to which they are linked. But I want to
say at the outset what, in my view, is not up for debate among mainstream
macroeconomists: microfounded macromodels are likely to remain the mainstay of
academic macro analysis for the foreseeable future. Many macroeconomists
outside the mainstream, and some other economists, might wish
it otherwise, but I think they are wrong to do so. DSGE models really do
tell us a lot of interesting and important things.
For those who are not economists, let’s be clear what the
microfoundations project in macro is all about. The idea is that a macro model
should be built up from a formal analysis of the behaviour of individual agents
in a consistent way. There may be just a single representative agent or, increasingly,
many heterogeneous agents. So a typical journal paper in
macro nowadays will involve lots of optimisation by individual agents as a way
of deriving aggregate relationships.
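To give non-economists a flavour of what that optimisation looks like, here is a minimal textbook sketch (my illustration, not something from any particular paper): a representative household chooses consumption $C_t$ to maximise expected lifetime utility subject to a budget constraint,

\[
\max_{\{C_t\}} \; E_0 \sum_{t=0}^{\infty} \beta^t u(C_t)
\quad \text{subject to} \quad
A_{t+1} = (1+r)(A_t + Y_t - C_t),
\]

where $\beta$ is the discount factor, $r$ the real interest rate, $A_t$ assets and $Y_t$ income. The first-order conditions deliver the consumption Euler equation

\[
u'(C_t) = \beta (1+r) \, E_t \, u'(C_{t+1}),
\]

and it is aggregate relationships derived in this way, rather than simply written down, that make up a microfounded model.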
Compare this to two alternative ways of ‘doing macro’. The
first goes to the other extreme: choose a bunch of macro variables, and just
look at the historical relationships between them (a vector autoregression, or VAR). This uses minimal
theory, and the focus is entirely on the past empirical interaction between macro
aggregates. The second sits in between these two extremes. It might start off with
aggregate macro relationships, and justify them with some eclectic mix of
theory and empirics. You can think of IS-LM as an example
of this middle way. In reality there is probably a spectrum of alternatives
here, with different mixes between theoretical consistency and consistency with the data (see this post).
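For concreteness, a VAR can be written in one line (again a generic illustration, not anything specific to the posts discussed here): stack a few macro aggregates, say output, inflation and the interest rate, into a vector $y_t$ and regress it on $p$ of its own lags,

\[
y_t = c + A_1 y_{t-1} + A_2 y_{t-2} + \dots + A_p y_{t-p} + e_t,
\]

where the coefficient matrices $A_i$ are estimated purely from the data and $e_t$ is a vector of errors. No agent optimises anywhere in this equation, which is exactly the sense in which it uses minimal theory.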
In the 1960s and 1970s, a good deal of the macro analysis in
journals was of this middle type. The trouble with the approach, as New
Classical economists demonstrated, was that the theoretical rationale behind
its equations often turned out to be inadequate or inconsistent. The Lucas
critique is the most widely quoted example of this problem: relationships
estimated from past data can break down when policy changes, because agents
adjust their expectations and behaviour to the new regime. So the
microfoundations project said: let’s do the theory properly and rigorously, so
that we do not make these kinds of errors. In fact, let’s make theoretical
(‘internal’) consistency the overriding aim, such that anything which fails on
these grounds is rejected. There were two practical costs to this approach.
First, doing this was hard, so for a time many real world complexities had to
be set aside (like the importance of banks in rationing credit, for
example, or the reluctance of firms to cut nominal wages). This led to a
second cost, which was that less notice was taken of how each aggregate macro
relationship tracked the data (‘external’ consistency). To use a jargon phrase that
sums it up quite well: internal rather than external consistency became the
test of admissibility for these models.
The microfoundations project was extremely successful, to the
point where it became generally accepted among academics that all policy analysis
should be done with microfounded models. However, I think macroeconomists are
divided about how strict to be about microfoundations: this is the distinction
between purists and pragmatists that I made here. Should every part of a model be
microfounded, or are we allowed a bit of discretion occasionally? Plenty of
‘pragmatic’ papers exist, so just referencing a few tells us very little. Tony
Yates thinks the pragmatists have won, and I think David Andolfatto in a comment on my post agrees. I would like to think
they are right, but my own experience talking to other macroeconomists suggests
they are not.
But let’s just explore what it might mean if they were right.
Macroeconomists would be quite happy incorporating non-microfounded elements
into their models, when strong empirical evidence appeared to warrant this. Referees
would not be concerned. But there is no logical reason to include only one
non-microfounded element at a time: why not allow more than one aggregate
equation to be data-based rather than theory-based? In that case, ‘very pragmatic’
microfoundation models could begin to look like the aggregate models of the
past, which used a combination of theory and empirical evidence to justify
particular equations.
I would have no problem with this, as I have argued
that these more eclectic aggregate models have an important role to play
alongside more traditional DSGE models in policy analysis, particularly
in policy-making institutions that require flexible
and robust
tools. Paul Krugman is fond of suggesting that IS-LM type models are more
useful than microfounded models, with the latter being a check on the former,
so I guess he wouldn’t worry about this either. But others do seem to want to
argue that IS-LM type models should have no place in ‘proper’ policy analysis,
at least in the pages of academic journals. If you take this view but want to
be a microfoundations pragmatist, just where do you draw the line on pragmatism?
I have deliberately avoided mentioning the K word so far. This
is because I think it is possible to imagine a world where Keynesian economics had
not been invented, but where debates over microfoundations would still take
place. For example, Heathcote et al talk about modelling ‘what you
can microfound’ versus modelling ‘what you can see’ in relation to the
incompleteness of asset markets, and I think
this is a very similar purist/pragmatist debate, even though it has
no direct connection to sticky prices.
However, in the real world, where Keynesian economics
thankfully does exist, I think it becomes problematic to be both a New Keynesian and a
microfoundations purist. First, there is Paul Krugman’s basic point.
Before New Keynesian theory, New Classical economists argued that because
sticky wages and prices were not microfounded, they should not be in our
models. (Some who are unconvinced by New Keynesian ideas still make
that case.) Were they right at the time? I think a microfoundations purist
would have to say yes, which is problematic because it seems an absurd position for a Keynesian to take. Second, in this paper I
argued that the microfoundations project, in embracing sticky prices, actually
had to make an important methodological compromise which a microfoundations
purist should worry about. I think Chari, Kehoe and
McGrattan are making similar
kinds of points. Yet my own paper arose out of conversations with New Keynesian
economists who appeared to take a purist position, which is why I wrote it.
It is clear what the attraction of microfoundations purity was
to those who wanted to banish Keynesian theory in the 1970s and 1980s. The
argument of those who championed rational expectations and intertemporal
consumption theory should have been: your existing [Keynesian] theory is full
of holes, and you really need to do better – here are some ideas that might
help, and let’s see how you get on. Instead for many it was: your theory is
irredeemable, and the problems you are trying to explain (and alleviate) are
not really problems at all. In taking that kind of position, it is quite helpful
to follow a methodology that gives you rather a lot of choice over which
empirical facts you try to be consistent with.
So it is clear why the microfoundations debate is mixed up with
the debate over Keynesian economics. It also seems clear to me that the
microfoundations approach did reveal serious problems with the Keynesian
analysis that had gone before, and that the New Keynesian analysis that has
emerged as a result of the microfoundations project is
a lot better for it. We now understand more about the dynamics of inflation
and business cycles, and as a result monetary policy is better. This shows that the
microfoundations project is progressive.
But just because a methodology is progressive does not imply
that it is the only proper way to proceed. When I wrote that focusing on
microfoundations can distort the way macroeconomists think, I was talking about
myself as much as anyone else. I feel I spend too much time thinking about
microfoundations tricks, and give insufficient attention to empirical evidence
that should have much more influence on modelling choices. I don’t think I can
just blame anti-Keynesians for this: I would argue New Keynesians also need to
be more pragmatic about what they do, and more tolerant of other ways of
building macromodels.