Thursday, 19 December 2013

More on the illusion of superiority

For economists, and those interested in methodology

Tony Yates responds to my comment on his post on microfoundations, but really just restates the microfoundations purist position. (Others have joined in - see links below.) As Noah Smith confirms, this is the position that many macroeconomists believe in, and many are taught, so it’s really important to see why it is mistaken. There are three elements I want to focus on here: the Lucas critique, what we mean by theory and time.

My argument can be put as follows: an ad hoc but data-inspired modification to a microfounded model (what I call an eclectic model) can produce a better model than a fully microfounded model. Tony responds “If the objective is to describe the data better, perhaps also to forecast the data better, then what is wrong with this is that you can do better still, and estimate a VAR.” This idea of “describing the data better”, or forecasting, is a distraction, so let’s say I want a model that provides a better guide for policy actions. So I do not want to estimate a VAR. My argument still stands.
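
To be concrete about the alternative Tony mentions, here is a minimal sketch of estimating a VAR, in Python using statsmodels. The data are simulated placeholders and the lag length is an arbitrary assumption, so this shows the mechanics only:

    # Minimal VAR sketch: simulated stand-in data, assumed lag length of 4.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(0)
    # Placeholder quarterly series standing in for output growth and inflation.
    data = pd.DataFrame(rng.normal(size=(200, 2)),
                        columns=["output_growth", "inflation"])

    model = VAR(data)
    results = model.fit(4)  # four lags, an assumed choice
    print(results.summary())

    # Forecast eight quarters ahead from the last observed lags.
    forecast = results.forecast(data.values[-results.k_ar:], steps=8)

The point of the sketch: a VAR fits the joint dynamics of the series directly, with almost no economic structure imposed, which is why it can describe and forecast well while remaining silent on the policy experiments at issue here.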

But what about the Lucas critique? Surely that says that only a microfounded model can avoid the Lucas critique. Tony says we might not need to worry about the Lucas critique if policy changes are consistent with what policy has done in the past. I do not need this, so let’s make our policy changes radical. My argument still stands. The reason is very simple. A misspecified model can produce bad policy. These misspecification errors may far outweigh any errors due to the Lucas critique. Robert Waldmann is I think making the same point here. (According to Stephen Williamson, even Lucas thinks that the Lucas critique is used as a bludgeon to do away with ideas one doesn't like.)

Stephen thinks that I think the data speaks directly to us. What I think is that the way a good deal of research is actually done involves a constant interaction between data and theory. We observe some correlation in the data and think about why that might be. We get some ideas. These ideas are what we might call informal theory. Now the trouble with informal theory is that it may be inconsistent with the rest of the theory in the model - that is why we build microfounded models. But this takes time, and in the meantime, because the informal theory may well be roughly OK, I can incorporate it in my eclectic model.[1] In fact we could have a complete model that uses informal theory - what Blanchard and Fischer call useful models. The defining characteristic of microfounded models is not that they use theory, but that the theory they use can be shown to be internally consistent.

Now Tony does end by saying “ad-hoc modifications seem attractive if they are a guess at what a microfounded model would look like, and you are a policymaker who can’t wait, and you find a way to assess the Lucas-Critique errors you might be making.” I have dealt with the last point – it’s perfectly OK to say the Lucas critique may apply to my model, but that is a price worth paying to use more evidence than a microfounded model does to better guide policy. For the sake of argument let’s also assume that one day we will be able to build a microfounded model that is consistent with this evidence. (As Noah says, I’m far too deferential, but I want to persuade rather than win arguments.) [2] In that case, if I’m a policy maker who cannot wait for this to happen, Tony will allow me my eclectic model.

This is where time comes in. Tony’s position is that policymakers in a hurry can do this eclectic stuff, but we academics should just focus on building better microfoundations. There are two problems with this. First, building better microfoundations can take a very long time. Second, there is a great deal that academics can say using eclectic, or useful, models.

The most obvious example of this is Keynesian business cycle theory. Go back to the 1970s. The majority of microfoundations modellers at that time, New Classical economists, said price rigidity should not be in macromodels because it was not microfounded. I think Tony, if he had been writing then, would have been a little more charitable: policymakers could put ad hoc price rigidities into models if they must, but academics should just use models without such rigidities until those rigidities could be microfounded.

This example shows us clearly why eclectic models (in this case with ad hoc price rigidities) can be a far superior guide for policy than the best microfounded models available at the time. Suppose policymakers in the 1970s, working within a fixed exchange rate regime, wanted to devalue their currency because they felt it had become overvalued after a temporary burst of domestic inflation. Those using microfounded models would have said there was no point - any change in the nominal exchange rate would be immediately offset by a change in domestic prices. (Actually they would probably have asked how the exchange rate can be overvalued in the first place.) Those using eclectic models with ad hoc price rigidities would have known better. Would those eclectic models have got things exactly right? Almost certainly not, but they would have said something useful, and pointed policy in the right direction.
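
To make the flexible-price logic explicit (my notation; the post itself gives no equations), define the real exchange rate as

    Q_t = S_t P^*_t / P_t

where S_t is the nominal exchange rate (the domestic price of foreign currency), P^*_t is foreign prices and P_t domestic prices. In a fully flexible-price model a devaluation raises S_t but P_t rises in proportion, leaving Q_t - and hence competitiveness - unchanged. With sticky prices P_t adjusts only slowly, so the devaluation moves Q_t for a sustained period, which is exactly what the policymakers were trying to achieve.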

Should academic macroeconomists in the 1970s have left these policymakers to their own devices, and instead got on with developing New Keynesian theory? In my view some should have worked away at New Keynesian theory, because it has improved our understanding a lot, but this took a decade or two to become accepted. (Acceptance that, alas, remains incomplete.) But in the meantime they could also have done lots of useful work with the eclectic models that incorporated price stickiness, such as working out what policies should accompany the devaluation. Which of course in reality they did: microfoundations hegemony was less complete in those days.

Today I think the situation is rather different. Nearly all the young academic macroeconomists I know want to work with DSGE models, because that is what gets published. They are very reluctant to add what might be regarded as ad hoc elements to these models, however strong the evidence and informal theory that might support such a modification. They are also understandably unclear about what counts as ad hoc and what does not. The situation in central banks is not so very different.

This is a shame. The idea that the only proper way to do macro that involves theory is to work with fully microfounded DSGE models is simply wrong. I think it can distort policy, and can hold back innovation. If our DSGE models were pretty good descriptions of the world then this misconception might not matter too much, but the real world keeps reminding us that they are not. We really should be more broad-minded.

[1] Suppose there is some correlation in the past that appears to have no plausible informal theory that might explain it. Including that in our eclectic model would be more problematic, for reasons Nick Rowe gives.


[2] I suggest why this might not be the case here. Nick Rowe discusses one key problem, while comments on my earlier post discuss others.

19 comments:

  1. Good sensible post IMO.

    " Nearly all the young academic macroeconomists I know want to work with DSGE models, because that is what gets published."

    That is very worrying. I would be worried if 0% were working on DSGE, but 100% seems just as bad.

    And there seems to be more to macro than just DSGE models vs DSGE+ad hoc elements models. Does all macro have to be model-building? Some of them should be thinking about something else.

    Oh well. Looking on the bright side: that leaves less competition for us old guys who don't want to do that!

  2. Engineers have known this for as long as there have been engineers. They have no delusions about being research physicists. The work is firmly founded upon science, but the practical applications always involve things like empirically derived coefficients and relationships and well-tested rules of thumb. Engineers don't wait for perfectly constructed theories before making decisions. Bridges and dams were built long before we even had physics.

  3. "Bridges and dams were built long before we even had physics."

    There was always physics. Those who think we invented physics in the last 500 years are mistaken. Old physics was more "analog" than quantitative, but it worked for things like the Parthenon, the pyramids and bridges. http://philsci-archive.pitt.edu/10144/

  4. When human labor is constantly being displaced by machines, involuntary unemployment rises and the DSGE models break down. Those who are displaced, instead of maximizing some utility and behaving rationally, try to minimize the utility of others and act irrationally on purpose. This is the new digital world.

  5. Once again, a very very good post.

    Just one thing: You say that it would have been OK to include price rigidity before it had a micro foundation – but you do not say what additional benefit those micro foundations brought once in place. What do they teach us that an ad hoc solution would not?

    Replies
    1. That current inflation depends on expected inflation next period, rather than expected inflation in the same period. If there is any element of rationality in expectations, that matters.
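
      In symbols (standard textbook forms, added here for concreteness): the microfounded New Keynesian Phillips curve is

          \pi_t = \beta E_t \pi_{t+1} + \kappa x_t

      whereas the older, ad hoc expectations-augmented form is

          \pi_t = E_{t-1} \pi_t + \kappa x_t

      where \pi is inflation and x the output gap. The timing of the expectation term is the substantive difference the microfoundations deliver.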

  6. PS: This whole discussion also lacks the term "opportunity cost". Nick Rowe will create a hundred relatively simple models in the time it takes most economists to create a single DSGE model - and approximately no one will want to read the DSGE model when it is finished.

    Replies
    1. Sometimes the extra work involved in DSGE modelling is useful, because you realise there are important issues you had not properly thought about. However, sometimes microfoundations seem like a form of pedantry. Can we have the former without the latter?

  7. SW-L:
    " For the sake of argument let’s also assume that one day we will be able to build a microfounded model that is consistent with this evidence. (As Noah says, I’m far too deferential ...)"

    Even if one assumes this (loath though one is to encourage faith-based reasoning), it does not imply that the said One True Model (Praise be its microfoundations) will look anything like the current generation of models, or have similar implications. The faithful usually implicitly make another assumption -- a sort of monotonicity or "smooth convergence" assumption -- that the said One True Model will be more detailed, more precise and more realistic than their current models, but will not have implications significantly different from their current favourite models. This is pure faith, without any logical or empirical backing.

    Kevin Hoover (talking of Woodford's book):
    " ... eschatological justification: the current models are to be believed, not because of what we can demonstrate about their current success, but because they are supposed to be the ancestors of models – not now existing – that, in the fullness of time, will be triumphant. To state this argument is enough, in my mind, to dismiss it."

  8. "Essentially, all models are wrong, but some are useful" George E. P. Box

    Replies
    1. Nice old rhetoric, but what does it mean?

  9. Something is ad hoc until somebody high-status has done it; then it becomes standard practice.

    On a more optimistic note, the more things like this:
    http://elsa.berkeley.edu/~saez/galleons-NBER-july2013.pdf
    and this:
    http://people.bu.edu/chamley/papers/Demand-130903.pdf
    get published, the more scope there will be for young macro researchers to do something other than DSGE.

  10. It's always struck me that a lot of "microfoundations" aren't actually microfounded. Take the example that people are talking about: Calvo pricing. Under what possible model of individual behaviour would the owner of a firm set his pricing policy by picking a random number and then making a small price increase?

    The virtue of the Calvo equation is that it makes things simpler. The fact that this is thought of as a "microfoundation" rather than an "ad hoc assumption" is surely ideology.
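
    For concreteness, here is the standard form of the assumption under discussion (textbook notation, supplied for reference rather than taken from the comment): each firm may reset its price in any period with fixed probability 1 - \theta, independently of its history, so with a CES aggregator of elasticity \epsilon the price level evolves as

        P_t^{1-\epsilon} = \theta P_{t-1}^{1-\epsilon} + (1 - \theta)(P_t^*)^{1-\epsilon}

    where P_t^* is the optimal reset price. Nothing in that law of motion derives from an individual firm's decision about when to change its price - the timing is an exogenous lottery, which is precisely the commenter's point.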

    Replies
    1. I think it is much worse than that. As my comments on SWL's previous microfoundations post suggest, I have little time for mathematical modelling of the economy as a method of investigation, but as far as I can tell, the way Calvo pricing works is that, given the price adjustment, as much trade occurs as is demanded at that price for one period. If I am right, this is obviously seriously unrealistic - can you imagine a trader simply accepting whatever volume of trade results from a price adjustment, regardless of what that volume is?

      When I first saw this in academic seminars, I asked my colleagues why they did not make the price adjustment a function of volume, and was told that this would make the model mathematically intractable. So, economic modelling is diverted into la-la land for the sake of mathematical convenience!

    2. It is an 'as if' justification - see the paper I wrote in the Journal of Economic Methodology on just this: http://ideas.repec.org/a/taf/jecmet/v18y2011i2p129-146.html. But, if that paper is right, you are also right to be suspicious, which is why some still think Keynesian analysis is not properly microfounded.

    3. Thanks Simon, but I am afraid I don't have access to such academic journals.

      From my point of view, trying to understand how the economy works from mechanisms as a biologist tries to understand how an organism works, an "as if" motivation cannot qualify as a microfoundation if it is not "what people do".

      Presumably, the use of Calvo pricing engenders large fluctuations in output, because those suppliers whose prices are too low are assumed to supply as much as demanded, while those who set prices too high sell little and have no opportunity to correct that for one period, so the modellers think that they have made progress because their model produces business-cycle-like variations.

  11. To repeat and expand a point I made on the previous microfoundations post, I believe that in many cases, to obtain conclusions from DSGE models, it is necessary to perform some mathematical simplification like "log-linearising around the steady state" and then to solve numerically (a minimal illustration appears below). It seems to me that this must involve the loss of an unknown amount of the microeconomic understanding incorporated in the basic equations representing the economy. Moreover, I suspect that, mathematically sophisticated as they like to pose as being, most economists do not go into the mathematics of such solution methods, preferring to leave them to the computer package in which the model is written, like Mathematica, so they may well not have much of a feel for how much this affects their results. So, potentially, much of the microfounding effort is wasted anyway.

    It is about time that the education authorities put academic economics into "special measures", as they are wont to do with poorly performing schools these days, and installed a real scientist (biologists would perhaps be the most appropriate type) as the head of every academic department to impose reform.
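
    A minimal illustration of the simplification described above (my example of the standard technique, not the commenter's): take an exact resource constraint C_t + I_t = Y_t, write each variable as X_t = \bar{X} e^{\hat{x}_t}, where hats denote log deviations from the steady state \bar{X}, and drop terms of second order and higher. This yields

        (\bar{C}/\bar{Y}) \hat{c}_t + (\bar{I}/\bar{Y}) \hat{i}_t = \hat{y}_t

    The discarded higher-order terms are where the curvature of the original equations lives; in particular, first-order solutions impose certainty equivalence, so the effects of risk and of large shocks in the microfounded equations simply drop out.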

  12. Two very stupid remarks: first, who guarantees us that consumers satisfy all the properties in the first chapter of Mas-Colell? I think some advances have been made in micro, but I still find it difficult to see them reflected in DSGE microfoundations. Moreover, there is not only empirical macro evidence, but also empirical micro evidence that should be considered. I sense a kind of vicious circle in which the "microfoundations guys" tell us that we should model agents as infinitely lived optimisers, using "as if" arguments and saying that macro observations confirm it, while telling us not to use macro observations because they are not derived from microfoundations - which are themselves derived from the same macro data, and so on. Second, Lucas is probably right, but who guarantees us that micro behaviour will remain the same after a policy intervention? Who guarantees us that consumers will optimise in the same way at 2% inflation as at 10%? What happens if my utility function also changes along with policies?

  13. " If our DSGE models were pretty good descriptions of the world then this misconception might not matter too much, but the real world keeps reminding us that they are not. "

    This is the key point. If you want to discover anything real, you have to stay empirical. A model may sound whacked, but if it predicts better, it is the better model, however elegant the alternative. (Just ask Kepler -- those ugly ellipses vs. the elegant circles...)

    This is the basis of all of science. If economics wants to be scientific, it has to abandon models which *don't work* in favor of ones which do, even if the ones which do seem ad-hoc, confused, and internally contradictory. Physics relied on a set of models which weren't consistent for DECADES before the wave-particle duality was resolved -- and that was fine.

    DSGE for economics is garbage because no DSGE model has ever given a useful prediction.

