When David Vines asked me to contribute to an OXREP (Oxford Review of Economic Policy) issue on “Rebuilding Macroeconomic Theory”, I think he hoped I would write on how the core macro model needed to change to reflect macro developments since the crisis, with a particular eye to modelling the impact of fiscal policy. That would be an interesting paper to write, but I decided fairly quickly that I wanted to say something that I thought was much more important.
In my view the
biggest obstacle to the advance of macroeconomics is the hegemony of
microfoundations. I wanted at least one of the papers in the
collection to question this hegemony. It turned out that I was not
alone, and a few papers did the same. I was particularly encouraged
when Olivier Blanchard, in blog posts reflecting his thoughts before
writing his contribution, was thinking along the same lines.
I will talk about
the other papers when more people have had a chance to read them.
Here I will focus on my own contribution. I have been pushing a
similar line in blog posts for some time, and that experience
suggests to me that most macroeconomists working within the hegemony
have a simple mental block when they think about alternative
modelling approaches. Let me see if I can break that block here.
Imagine a DSGE model, ‘estimated’ by Bayesian techniques. To be specific, suppose it contains a standard intertemporal consumption function. Now suppose someone adds a term to the model, say unemployment in the consumption function, and thereby significantly improves the fit of the model. It is not hard to see why the fit significantly improves: unemployment could be a proxy for the uncertainty of labour income, for example. The key question becomes which is the better model with which to examine macroeconomic policy: the original DSGE model or the augmented model?
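To fix ideas, here is a stylised sketch of what that augmentation might look like (the linearised form and the unemployment coefficient are purely illustrative, not taken from the paper). A standard linearised intertemporal consumption function relates consumption today to expected consumption tomorrow and the real interest rate:

$$c_t = \mathbb{E}_t\, c_{t+1} - \sigma (r_t - \rho)$$

The augmented version adds unemployment, proxying labour income uncertainty:

$$c_t = \mathbb{E}_t\, c_{t+1} - \sigma (r_t - \rho) + \gamma u_t, \qquad \gamma < 0$$

The extra term can significantly improve the fit, but it is not derived from the household's optimisation problem, which is exactly what worries the microfoundations modeller.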
A microfoundations
macroeconomist will tend to say without doubt the original DSGE
model, because only that model is known to be theoretically
consistent. (They might instead say that only that model satisfies
the Lucas critique, but internal consistency is the more general
concept.) But an equally valid response is to say that the original
DSGE model will give incorrect policy responses because it misses an
important link between unemployment and consumption, and so the
augmented model is preferred.
There is absolutely
nothing that says that internal consistency is more important than
(relative) misspecification. In my experience, when confronted with
this fact, some DSGE modellers resort to two diversionary tactics.
The first, which is to say that all models are misspecified, is not
worthy of discussion. The second is to say that neither model is satisfactory, and that research is needed to incorporate the unemployment effect in a consistent way.
I have no problem
with that response in itself, and for that reason I have no problem
with the microfoundations project as one way to do
macroeconomic modelling. But in this particular context it is a
dodge. There will never be, at least in my lifetime, a DSGE model
that cannot be improved by adding plausible but potentially
inconsistent effects like unemployment influencing consumption. This means that, if you think models that are significantly better at fitting the data are to be preferred to the DSGE models from which they came, then these augmented models will always beat the DSGE model as a way of modelling policy.
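To make ‘significantly better at fitting the data’ concrete, here is a deliberately toy sketch of the comparison step (simulated data and plain OLS rather than Bayesian estimation of a full DSGE system; every series, coefficient, and name below is mine, for illustration only):

```python
# Toy illustration (not the paper's model): compare the fit of a baseline
# consumption equation with an augmented one that adds unemployment.
# All data here are simulated; the point is only the model-comparison step.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Simulated series: consumption growth responds to unemployment by construction.
income_growth = rng.normal(0.02, 0.01, n)
unemployment = rng.normal(0.05, 0.01, n)
consumption_growth = (0.8 * income_growth - 0.5 * unemployment
                      + rng.normal(0, 0.005, n))

# Baseline: consumption growth on income growth only (the 'core' specification).
base = sm.OLS(consumption_growth, sm.add_constant(income_growth)).fit()

# Augmented: add unemployment, the theoretically plausible extra term.
X_aug = sm.add_constant(np.column_stack([income_growth, unemployment]))
aug = sm.OLS(consumption_growth, X_aug).fit()

# A lower BIC signals better fit even after penalising the extra parameter.
print(f"baseline BIC: {base.bic:.1f}, augmented BIC: {aug.bic:.1f}")
```

In a fully Bayesian setting the analogue would be comparing marginal likelihoods, but the logic is the same: the augmented specification wins on fit, and the question posed here is whether that should settle the choice of policy model.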
What this question
tells you is that there is an alternative methodology for building
macroeconomic models that is not inferior to the microfoundations
approach. This starts with some theoretical specification, which
could be a DSGE model as in the example, and then extends it in ways
that are theoretically plausible and which also significantly improve
the model’s fit, but which are not formally derived from
microfoundations. I call that an example within the Structural Econometric Model (SEM) class, and Blanchard calls it a Policy Model.
An important point I
make in my paper is that these are not competing methodologies, but
instead they are complementary. SEMs as I describe them here start
from microfounded theory. (Of course SEMs can also start from
non-microfounded theory, but the pros and cons of that are a different debate that I want to avoid here.) As a finished product they provide many
research agendas for microfoundation modelling. So DSGE modelling can
provide the starting point for builders of SEMs or Policy Models, and
these models when completed provide a research agenda for DSGE
modellers.
Once you see this
complementarity, you can see why I think macroeconomics would develop
much more rapidly if academics were involved in building SEMs as well
as building DSGE models. The mistake the New Classical Counter Revolution made was to dismiss previous ways of modelling the economy, instead of augmenting them with additional approaches.
Each methodology on its own will develop much more slowly than the
two combined. Another way of putting it is that research based on SEMs is more efficient than the puzzle resolution approach used today.
In the paper, I try
to imagine what would have happened if the microfoundations project
had just augmented the macroeconomics of the time (which was SEM
modelling), rather than dismissing it out of hand. I think we have good evidence that an active complementarity between SEM and microfoundations modelling would have led to the links between the financial and real sectors being investigated in depth before the financial crisis. The microfoundations hegemony chose the wrong puzzles to look at, deflecting macroeconomics from the more important empirical issues. The same thing may happen again if that hegemony continues.