I gave a short talk yesterday with this title, which takes some of the main points from my paper in the OxREP 'Rebuilding Macro' volume. It is mainly of interest to economists, or those interested in economic methodology or the history of macroeconomic thought. When I talk about macroeconomics and macroeconomists below I mean mainstream academic economists.
I want to talk today
about where macroeconomics went wrong. Now it seems that this is a
topic where everyone has a view. But most of those views have a common
theme, and that is a dislike of DSGE models. Yet DSGE models are
firmly entrenched in academic macroeconomics, and in the thinking of pretty well every
economist who has done a PhD, which is why the Bank of England's
core model is DSGE. To understand why DSGE is so entrenched, I need
to tell the story of the New Classical Counter Revolution (NCCR).
If you had to pick a
paper that epitomised the NCCR it would be “After Keynesian
Macroeconomics” by Lucas and Sargent. Now from the title you would
think this was an attack on Keynesian economics, and in part it was.
But we know that attack ultimately failed. Very soon after the NCCR we
had the birth of New Keynesian economics that recast key aspects of Keynesian economics within a microfoundations [1] framework, and is now the way nearly
all central banks think about stabilisation policy. But if you read the
text of Lucas and Sargent it is mainly a manifesto about how to do
macroeconomics, or what I think we can reasonably call the
methodology of macroeconomics. And on that front their revolution was
successful, and it is why nearly all academic macro is DSGE.
Before Lucas and Sargent, complete macroeconomic models, of both a theoretical and empirical kind, had justified their aggregate equations using an eclectic mix of theory and econometrics. Microfoundations were used as a guide to aggregate equation specification, but if an equation fell apart in statistical terms when confronted with the data it would not become part of an empirical model, and would be shunned in theoretical models. Of course 'falling apart' is a very subjective criterion, and every effort would be made to try and make an equation consistent with microfoundations, but typically a lot of the dynamics in these models were what we would now call ad hoc, which in this case meant data-based.
Lucas famously
showed that models of this kind were subject to what we call the
Lucas critique [2], and that forms an important part of Lucas and Sargent's
paper. They argue that the only certain way to get round that
critique is to build the model from internally consistent
microfoundations. But they also ask why wouldn’t you want to build
any macroeconomic model that way? Why wouldn’t you want a model
where you could be sure that aggregate outcomes were the result of
agents behaving in a consistent manner?
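To see the force of the critique, here is a textbook-style sketch (the notation is mine, not Lucas and Sargent's). Suppose the structural relationship is an expectations-augmented Phillips curve,

\[
\pi_t = \mathbb{E}_{t-1}\pi_t - \alpha\,(u_t - u^*), \qquad \alpha > 0,
\]

and suppose the prevailing policy regime happens to make expected inflation track past inflation, so that \(\mathbb{E}_{t-1}\pi_t = \gamma\,\pi_{t-1}\). An econometrician fitting the reduced form

\[
\pi_t = \gamma\,\pi_{t-1} - \alpha\,(u_t - u^*)
\]

will estimate a coefficient on lagged inflation that looks structural but is really a property of the policy regime: change the regime and the estimated equation stops being valid. Deriving the model from the structural relationships and expectations directly is how microfoundations aim to avoid this.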
If you want to
crystallise why this was a methodological revolution, think about
what we might call admissibility criteria for macro models. In
pre-NCCR models equations were selected through an eclectic mixture
of theory-fit and evidence-fit. In RBC and later DSGE models, internal
theoretical consistency is the admissibility criterion. Or to put it
another way, a DSGE model never got rejected because one of its
equations didn’t fit the data, but if one equation had a
theoretical foundation that was inconsistent with the others it would
certainly not be published in the better journals.
Have a look at
almost any macro paper in a top journal today, and compare it to a
similar paper before the NCCR, and you can see we have been through a
methodological revolution. Unfortunately many economists who have only ever been taught, and have only ever known, DSGE just think of this as progress. But it is not just progress, because DSGE models involve a shift away from the data. This is inevitable once you change the admissibility criterion away from fit with the data. It means macroeconomists start focusing on models where it is easy to ensure internal theoretical consistency, and away from macroeconomic phenomena that are clear in the data but more difficult to microfound.
If you are expecting
me at this point to say that DSGE models were where macroeconomics
went wrong, you will be disappointed. I spent the last 15 years of my
research career building and analysing DSGE models, and I learnt a
lot as a result. The mistake was the revolution part. In the US, DSGE
models replaced traditional modelling within about a decade [3]. In my
view DSGE models should have coexisted with more traditional
modelling, each tolerating the other.
To get a glimpse of
how that can happen look at the UK, where a traditional
macromodelling scene remained active until the end of the millennium.
Traditional models didn't stand still, but changed by adopting many
of the ideas from DSGE such as rational expectations. Here the
account gets a little personal, because before I did DSGE I built one
of those models, called COMPACT. There are not many macroeconomists
who have built and operated both traditional and DSGE models, which I
think gives me some insight into the merits of both.
COMPACT was a
rational expectations New Keynesian model with explicit credit
constraints in a Blanchard-Yaari type consumption function, a vintage
production model, and variety effects on trade. So in terms of
theoretical ideas it was far richer than any DSGE model I
subsequently worked with. Most of COMPACT’s behavioural equations
were econometrically estimated, but it was not an internally
consistent model like DSGE.
COMPACT had an
explicit but exogenous credit constraint variable in the model
because in our view it was impossible to model consumption behaviour
over time without it. Our work was based heavily on work in the UK by
John Muellbauer, and Chris Carroll was coming to similar conclusions
for the US. But DSGE models never faced that issue because they
worked with de-trended data. Let me spell out why that was important.
Empirical work was establishing that you could not begin to understand consumption behaviour over a 20 or 30 year time horizon without seeing how the financial sector had changed over time, and at least one traditional macroeconomic model was incorporating that finding before the end of the last millennium. Extensive work on exactly that issue did not begin using DSGE models until after the financial crisis, a crisis in which changes in the financial sector had a critical impact on the real economy. DSGE was behind the curve, but more traditional macroeconomics was not.
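To give a flavour of what an explicit but exogenous credit constraint variable looks like in practice, here is a deliberately stylised, solved-out consumption function in the spirit of Muellbauer's work (my own illustrative notation, not COMPACT's actual equation):

\[
\Delta \ln c_t = \alpha + \beta\,CCI_t + \gamma\,\Delta \ln y_t + \lambda\,(\ln y_{t-1} + \theta \ln A_{t-1} - \ln c_{t-1}) + \varepsilon_t,
\]

where \(c\) is consumption, \(y\) household income, \(A\) household assets, and \(CCI_t\) is a credit conditions index measuring how easy it is for households to borrow. The index moves slowly, over decades, as the financial sector liberalises or tightens, which is why an equation estimated over a 20 or 30 year span cannot do without it, while a model fitted to de-trended data can appear to.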
Now I don’t think it is
fanciful to think that if at least some macroeconomists had continued
working with more traditional data-based models alongside those doing
DSGE, at least one of those modelling groups would have thought to endogenise the financial sector that was determining those varying credit constraints.
So the claim I want
to make is rather a big one. If DSGE models had continued alongside
more traditional, data-based modelling, economists would have been
much better prepared for the financial crisis when it came. If these two methodologies had learnt from each other, DSGE models might have started focusing on the financial sector before the crisis. Of course
I would never suggest that macroeconomics could have predicted that crisis, but
macroeconomists would certainly have had much more useful things to
say about the impact on the economy when it happened.
Just being able to imagine this counterfactual illustrates that moving to DSGE involved
losses as well as gains. It inevitably made models less rich and
moved them further away from the data in areas that were difficult
but not impossible to model in a theoretically consistent way. The
DSGE methodological revolution set out so clearly in Lucas and
Sargent's paper changed the focus of macroeconomics away from things
we now know were of critical importance.
I’ve been talking
about this since I started writing a blog at the end of 2011, but
recently we have seen similar messages from Paul Romer and Olivier
Blanchard in this OxREP volume. What I have called here traditional
models, and in the paper I call Structural Econometric Models,
Blanchard provocatively calls policy models. It was provocative because most academic macroeconomists think DSGE models are the only models that can do policy analysis 'properly', but Blanchard suggests policymakers want models that are close to the data more than they want a guarantee of internal consistency, and they want models that are quick and easy to adapt to unfolding problems. The US Fed, although it has a DSGE model, also has a more traditional model with similarities to COMPACT, and guess which model plays the major role in the policy process?
[1] Microfoundations means deriving aggregate equations from microeconomic optimisation behaviour.
[2] The Lucas critique argued that many equations of traditional macroeconomic models embodied beliefs about macro policy, and so if policy changed the equations would no longer be valid.
[3] The difficulty of identification in single-equation estimation, highlighted by Sims in 1980, probably also contributed.