As the Bank of England finally publishes
[1] its new core model COMPASS, I have been thinking more about what the
central bank’s core model should look like. (This post
from Noah Smith reacts, I think, indirectly to the same event.) Here I will not
talk in terms of the specification of individual equations, but in terms of the methodology
on which the model is based.
In 2003, Adrian Pagan produced a report
on modelling and forecasting at the Bank of England. It included the following
diagram.
One interpretation is that the Bank has a fixed amount of
resources available, and so this curve is a production possibility frontier. Although
Pagan did not do so, we could also think of policymakers as having conventional
preferences over these two goods: a balance between knowing, on the one hand,
that a forecast or policy is based on historical evidence and, on the other,
that it makes sense in terms of how we think people behave.
I think there are two groups of macroeconomists who will
feel that this diagram is nonsense. The first I might call idealists. They will
argue that in any normal science data and theory go together – there is no
trade-off. The second group, which I will call purists, will recognise two
points on this curve (DSGE and VARs), but deny that there is anything in
between. I suspect most macroeconomists under 40 will fall into this group.
The purists cannot deny that it is possible to construct hybrid
models that are an eclectic mix of some more informal theory and rather more estimation
than DSGE models involve, but they will deny that they make any sense as
models. They will argue that a model is either theoretically coherent or it is
not – we cannot have degrees of theoretical coherence. In terms of theory,
there are either DSGE models, or (almost certainly) incorrect models.
At the time Pagan wrote his report, the Bank had a hybrid
model of sorts, but it was in the process of constructing BEQM, which was a combination
of a DSGE core and a much more data-based periphery. (I had a small role in the
construction of both BEQM and its predecessor: I describe the structure of BEQM
here.)
It has now moved to COMPASS, which is a much more straightforward DSGE
construct. However, judgements can be imposed on COMPASS, reflecting a vast
range of other information in the Bank’s suite of models, as well as inputs
from more informal reasoning.
The existence of a suite of models that can help fashion
judgements imposed on COMPASS may guard against large errors, but the type of
model used as the core means of producing forecasts and policy advice remains
significant. Unlike the idealists I recognise that there is a choice between
data and theory coherence in social science, and unlike the purists I believe
hybrid models are a valid
alternative to DSGE models and VARs. So I think there is an optimum point
on this frontier, and my guess is that DSGE models are not it. The basic reason
I believe this is the need policymakers have to adjust reasonably quickly
to new data and ideas, a case I have argued in a previous post.
Yet I suspect it will take a long time before central banks
recognise this, because most macroeconomists are taught that such hybrid models
are simply wrong. If you are one of those economists, probably the best way I can
persuade you that this position is misguided is to ask you to read this paper from
Chris Carroll. [2] It discusses Friedman’s account of the permanent income
hypothesis (PIH). For many years graduate students have been taught that while the PIH
was a precursor to the intertemporal model that forms the basis of modern
macro, Friedman’s suggestions that the marginal propensity to consume out of
transitory income might be around a third, and that permanent income was more
dependent on near-future expectations than simple discounting would suggest,
were unfortunate reflections of the fact that he did not do the optimisation
problem formally.
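To make the contrast concrete, one stylised way of writing Friedman’s two suggestions is the following. This is an illustrative sketch rather than Friedman’s exact formulation, and the symbols κ, λ and β are labels of my own choosing:

```latex
% Consumption responds to permanent and transitory income separately;
% Friedman's empirical estimate put the transitory MPC at roughly a third.
c_t = \kappa\, y^{P}_t + \lambda\, y^{T}_t, \qquad \lambda \approx \tfrac{1}{3}

% Permanent income as a weighted average of expected future income, with
% weights declining faster than the market discount factor 1/(1+r), so
% near-future expectations count for more than simple discounting implies.
y^{P}_t = (1-\beta)\sum_{i=0}^{\infty} \beta^{i}\, \mathbb{E}_t\, y_{t+i},
\qquad \beta < \frac{1}{1+r}
```

In the textbook certainty-equivalence version of the intertemporal model, by contrast, the transitory MPC collapses to the small annuity value of the shock and the weights are pinned down by the market discount rate; it is precisely this looser parameterisation that the purist view rules out.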
Instead, Carroll suggests that the PIH may be a reasonable
approximation to how an optimising consumer might behave as they anticipate the
inevitable credit constraints that come with old age. There is also a lot of
empirical evidence that consumers do indeed quickly consume something like a
third of unexpected temporary income shocks. In other words Friedman’s mix of
theoretical ideas and empirical evidence would have done rather better at
forecasting consumption than anything microfounded that has supplanted it. If
that can be true for consumption, it could be true for every other macroeconomic
relationship, and therefore for a complete macromodel.
[1] A short rant about the attitude of the Bank of England
to the outside world. COMPASS has been used for almost two years, yet it has
only just been published. I have complained about this before,
so let me just say this. To some senior officials at the Bank, this kind of lag
makes perfect sense: let’s make sure the model is fit for purpose before
exposing it to outside scrutiny. Although this may be optimal in terms of
avoiding Bank embarrassment and hassle, it is obviously not optimal in terms of
social welfare. The more people who look at the model, the sooner any problems
may be uncovered. I am now rather in favour
of delegation in macroeconomics, but delegation must be accompanied by the
maximum possible openness and accountability.
[2] This recently published interview
covers some related themes, and is also well worth reading (HT Tim
Taylor).