As the Bank of England finally publishes
[1] its new core model COMPASS, I have been thinking more about what the
central bank’s core model should look like. (This post
from Noah Smith reacts, I think, indirectly to the same event.) Here I will not
talk in terms of the specification of individual equations, but about the methodology
on which the model is based.
In 2003, Adrian Pagan produced a report
on modelling and forecasting at the Bank of England. It included the following
diagram.
One interpretation is that the Bank has a fixed amount of
resources available, and so this curve is a production possibility frontier. Although
Pagan did not do so, we could also think of policymakers as having conventional
preferences over these two goods: some balance between on the one hand knowing
a forecast or policy is based on historical evidence and on the other that it
makes sense in terms of how we think people behave.
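Given conventional preferences over the two goods, the choice of a point on the frontier is the usual tangency problem. A toy numerical sketch, assuming a quarter-circle frontier and Cobb-Douglas preferences (both invented purely for illustration; nothing here comes from Pagan's report):

```python
# Toy illustration: data coherence d and theory coherence t lie on a
# concave frontier; a policymaker with Cobb-Douglas preferences picks
# the interior tangency. The functional forms are my own assumptions.

def frontier_theory(d):
    """Theory coherence attainable at data coherence d, for d in [0, 1]."""
    return (1 - d**2) ** 0.5  # quarter-circle frontier, purely illustrative

def best_mix(alpha=0.5, grid=1000):
    """Grid-search the d that maximises utility d**alpha * t**(1 - alpha)."""
    best_d, best_u = 0.0, -1.0
    for i in range(1, grid):
        d = i / grid
        u = d**alpha * frontier_theory(d) ** (1 - alpha)
        if u > best_u:
            best_d, best_u = d, u
    return best_d
```

With equal weights (alpha = 0.5) the optimum sits in the interior of the frontier rather than at either corner, which is the point of the diagram: neither pure data coherence nor pure theory coherence maximises a policymaker's utility.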
I think there are two groups of macroeconomists who will
feel that this diagram is nonsense. The first I might call idealists. They will
argue that in any normal science data and theory go together – there is no
trade-off. The second group, which I will call purists, will recognise two
points on this curve (DSGE and VARs), but deny that there is anything in
between. I suspect most macroeconomists under 40 will fall into this group.
The purists cannot deny that it is possible to construct hybrid
models that are an eclectic mix of some more informal theory and rather more estimation
than DSGE models involve, but they will deny that they make any sense as
models. They will argue that a model is either theoretically coherent or it is
not – we cannot have degrees of theoretical coherence. In terms of theory,
there are either DSGE models, or (almost certainly) incorrect models.
At the time Pagan wrote his report, the Bank had a hybrid
model of sorts, but it was in the process of constructing BEQM, which was a combination
of a DSGE core and a much more data-based periphery. (I had a small role in the
construction of both BEQM and its predecessor: I describe the structure of BEQM
here.)
It has now moved to COMPASS, which is a much more straightforward DSGE
construct. However, judgements can be imposed on COMPASS, reflecting a vast
range of other information in the Bank’s suite of models, as well as inputs
from more informal reasoning.
The existence of a suite of models that can help fashion
judgements imposed on COMPASS may guard against large errors, but the type of
model used as the core means of producing forecasts and policy advice remains
significant. Unlike the idealists I recognise that there is a choice between
data and theory coherence in social science, and unlike the purists I believe
hybrid models are a valid
alternative to DSGE models and VARs. So I think there is an optimum point
on this frontier, and my guess is that DSGE models are not it. The basic reason
I believe this is the need policymakers have to adjust reasonably quickly
to new data and ideas, and I have argued this case in a previous post.
Yet I suspect it will take a long time before central banks
recognise this, because most macroeconomists are taught that such hybrid models
are simply wrong. If you are one of those economists, probably the best way I can
persuade you that this position is misguided is to ask you to read this paper from
Chris Carroll. [2] It discusses Friedman’s account of the permanent income
hypothesis (PIH). For many years graduate students have been taught that while PIH
was a precursor to the intertemporal model that forms the basis of modern
macro, Friedman’s suggestion that the marginal propensity to consume out of
transitory income might be around a third, and that permanent income was more
dependent on near future expectations than simple discounting would suggest,
were unfortunate reflections of the fact that he didn’t do the optimisation
problem formally.
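To be concrete about the rule in question, here is a toy sketch of Friedman's PIH as an empirical rule, with a made-up geometric weighting standing in for "more dependent on near future expectations than simple discounting would suggest" (all numbers and functional forms are my own illustrative assumptions, not Friedman's):

```python
# Illustrative sketch: consumption equals estimated permanent income plus
# roughly one third of any transitory income shock. Permanent income is a
# weighted average of expected income that leans heavily on the near future.

def friedman_consumption(expected_income, transitory_shock,
                         mpc_transitory=1/3, decay=0.6):
    """Toy version of Friedman's empirical rule.

    expected_income: list of expected income levels, current period first.
    The geometric weights (decay < 1) put most weight on the near future,
    rather than discounting evenly over a long horizon.
    """
    weights = [decay**t for t in range(len(expected_income))]
    permanent = sum(w * y for w, y in zip(weights, expected_income)) / sum(weights)
    return permanent + mpc_transitory * transitory_shock
```

With a flat expected income path of 100 and a transitory windfall of 30, the rule implies consumption of 110: permanent income of 100 plus a third of the windfall.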
Instead, Carroll suggests that the PIH may be a reasonable
approximation to how an optimising consumer might behave as they anticipate the
inevitable credit constraints that come with old age. There is also a lot of
empirical evidence that consumers do indeed quickly consume something like a
third of unexpected temporary income shocks. In other words, Friedman’s mix of
theoretical ideas and empirical evidence would have done rather better at
forecasting consumption than anything microfounded that has supplanted it. If
that can be true for consumption, it could be true for every other macroeconomic
relationship, and therefore for a complete macromodel.
[1] A short rant about the attitude of the Bank of England
to the outside world. COMPASS has been used for almost two years, yet it has
only just been published. I have complained about this before,
so let me just say this. To some senior officials at the Bank, this kind of lag
makes perfect sense: let’s make sure the model is fit for purpose before
exposing it to outside scrutiny. Although this may be optimal in terms of
avoiding Bank embarrassment and hassle, it is obviously not optimal in terms of
social welfare. The more people who look at the model, the sooner any problems
may be uncovered. I am now rather in favour
of delegation in macroeconomics, but delegation must be accompanied by the
maximum possible openness and accountability.
[2] This recently published interview
covers some related themes, and is also well worth reading (HT Tim
Taylor).
Models should work.
So imho the first lesson that should be learnt is that if things do not work, do something about it, and do not wait a couple of years or let political decisions get the better of you. Like we see now.
While at the same time this is an unusual situation, so some lessons can be learnt, but a lot will be rather unique to this once-in-a-century event.
Take the other extreme, the VAR. It works ok in approximately 95% of cases; only in the extremes are there problems. It is doubtful whether a normal distribution should be assumed, but as that is usually the only one people grasp, it also has its advantages. Handle the 5% of extremes in another way: alarm bells ringing if certain things happen, and alert levels depending on, say, substantial imbalances/dangers in the set-up.
The Effective Demand research is developing a new model for monetary policy. It shows that we are near the end of the business cycle, and as such we should already have a higher Fed funds rate. The logic is that the economy has shifted down to a new level, and trying to keep the same monetary framework as before just won't work. Here are two links to the model.
http://effectivedemand.typepad.com/ed/2013/05/monetary-policy-of-effective-demand-the-basics.html
http://effectivedemand.typepad.com/ed/2013/05/equation-for-z-coefficient-of-monetary-policy.html
Perhaps I'm an idealist, but surely the problem with "theoretical coherence" as a measure is that the "theories" we're talking about don't have a good record of prediction. So basing a model on them has limited appeal from a policy point of view. Noah Smith goes through some other possible uses...
Another fantastic article, although that may be becoming a tautology in your case.
More publicity downunder, therefore.
The Bank of England might want to pick a different name. The US Army already has a model called COMPASS:
https://dap.dau.mil/aphome/das/Lists/Software%20Tools/DispForm.aspx?ID=24
Of course, it is used for very different purposes.
Thanks for this post. I disagree with one of your sentences. In normal science, theory and data should go together, even for idealists. Data going together with theory is a requirement of reason, not a fact. In Theory and Practice, Kant explains that if theory and empirics differ, it is because of a lack of theory, not because theory and empirics are different fields that do not superimpose exactly. Pagan probably does not oppose the two points of view but just measures the lack of theory.
From a practical point of view (making a forecast or giving advice), if your theory is insufficient, what should you do? Use ad hoc rules that explain the data well but you don't know why? Keep your model because you understand the mechanism, even if it delivers poor advice? Use both? Every reasonable guy will say: both! Hybrid models are useful (a weighted average of a VAR and a DSGE is a hybrid model).
From a theoretical point of view (increasing knowledge), if your theory is insufficient to explain the data, do you keep a statistical model and say "done"? Do you change the assumptions of your model to get closer to the data? Do you create a hybrid model? Every reasonable guy will say: change assumptions! Hybrid models are nonsense here: you explain nothing with an ad hoc assumption.
Do you suggest that people under 40 ignore the difference between "making a forecast or giving advice" and "increasing knowledge"? I wonder, because I am…
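The parenthetical point in the comment above, that a weighted average of a VAR and a DSGE forecast is itself a hybrid model, can be made concrete in a couple of lines. A minimal sketch of generic forecast pooling (the weights and numbers are illustrative assumptions, not from the post or the comment):

```python
# Forecast combination: a convex combination of two model forecasts is
# itself a forecast, and in this sense a "hybrid model" of the two.

def combine_forecasts(var_forecast, dsge_forecast, weight_var=0.5):
    """Pool two forecast paths period by period.

    weight_var is the weight on the VAR forecast; 1 - weight_var goes to
    the DSGE forecast. In practice the weights might be chosen from past
    forecast performance.
    """
    return [weight_var * v + (1 - weight_var) * d
            for v, d in zip(var_forecast, dsge_forecast)]
```

For example, pooling a VAR path of [2.0, 2.2] with a DSGE path of [1.0, 1.2] at equal weights gives [1.5, 1.7], a forecast neither model produces on its own.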