

Wednesday, 10 October 2018

Talk on where macroeconomics went wrong


I gave a short talk yesterday with this title, which takes some of the main points from my paper in the OxREP 'Rebuilding Macro' volume. It is mainly of interest to economists, or those interested in economic methodology or the history of macroeconomic thought. When I talk about macroeconomics and macroeconomists below I mean mainstream academic economists.

I want to talk today about where macroeconomics went wrong. Now it seems that this is a topic where everyone has a view. But most of those views have a common theme, and that is a dislike of DSGE models. Yet DSGE models are firmly entrenched in academic macroeconomics, and in pretty well every economist who has done a PhD, which is why the Bank of England’s core model is DSGE. To understand why DSGE is so entrenched, I need to tell the story of the New Classical Counter Revolution (NCCR).

If you had to pick a paper that epitomised the NCCR it would be “After Keynesian Macroeconomics” by Lucas and Sargent. Now from the title you would think this was an attack on Keynesian economics, and in part it was. But we know that part of the revolution failed. Very soon after the NCCR we had the birth of New Keynesian economics, which recast key aspects of Keynesian economics within a microfoundations [1] framework, and is now the way nearly all central banks think about stabilisation policy. But if you read the text of Lucas and Sargent, it is mainly a manifesto about how to do macroeconomics, or what I think we can reasonably call the methodology of macroeconomics. And on that their revolution was successful, which is why nearly all academic macro is DSGE.

Before Lucas and Sargent, complete macroeconomic models, of both a theoretical and empirical kind, justified their aggregate equations using an eclectic mix of theory and econometrics. Microfoundations were used as a guide to aggregate equation specification, but if an equation fell apart in statistical terms when confronted with data it would not become part of an empirical model, and would be shunned for inclusion in theoretical models. Of course ‘falling apart’ is a very subjective criterion, and every effort would be made to try to make an equation consistent with microfoundations, but typically a lot of the dynamics in these models were what we would now call ad hoc, which in this case meant data based.

Lucas famously showed that models of this kind were subject to what we call the Lucas critique [2], and that forms an important part of the Lucas and Sargent paper. They argue that the only certain way to get round that critique is to build the model from internally consistent microfoundations. But they also ask why wouldn’t you want to build any macroeconomic model that way? Why wouldn’t you want a model where you could be sure that aggregate outcomes were the result of agents behaving in a consistent manner?

If you want to crystallise why this was a methodological revolution, think about what we might call admissibility criteria for macro models. In pre-NCCR models equations were selected through an eclectic mixture of theory-fit and evidence-fit. In the RBC and later DSGE models internal theoretical consistency is an admissibility criterion. Or to put it another way, a DSGE model never got rejected because one of its equations didn’t fit the data, but if one equation had a theoretical foundation that was inconsistent with the others it would certainly not be published in the better journals.

Have a look at almost any macro paper in a top journal today, and compare it to a similar paper before the NCCR, and you can see we have been through a methodological revolution. Unfortunately many economists who have only been taught, and have only ever known, DSGE just think of this as progress. But it is not just progress, because DSGE models involve a shift away from the data. This is inevitable if you change the admissibility criteria away from the data. It inevitably means macroeconomists start focusing on models where it is easy to ensure internal theoretical consistency, and away from macroeconomic phenomena that are clear in the data but more difficult to microfound.

If you are expecting me at this point to say that DSGE models were where macroeconomics went wrong, you will be disappointed. I spent the last 15 years of my research career building and analysing DSGE models, and I learnt a lot as a result. The mistake was the revolution part. In the US, DSGE models replaced traditional modelling within almost a decade [3]. In my view DSGE models should have coexisted with more traditional modelling, each tolerating the other.

To get a glimpse of how that can happen look at the UK, where a traditional macromodelling scene remained active until the end of the millennium. Traditional models didn’t stand still, but changed by adopting many of the ideas from DSGE such as rational expectations. Here the account gets a little personal, because before I did DSGE I built one of those models, called COMPACT. There are not many macroeconomists who have built and operated both traditional and DSGE models, which I think gives me some insight into the merits of both.

COMPACT was a rational expectations New Keynesian model with explicit credit constraints in a Blanchard-Yaari type consumption function, a vintage production model, and variety effects on trade. So in terms of theoretical ideas it was far richer than any DSGE model I subsequently worked with. Most of COMPACT’s behavioural equations were econometrically estimated, but it was not an internally consistent model like DSGE.
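
For readers who don’t know the reference, here is a minimal sketch of the standard Blanchard-Yaari result under log utility (my notation, not COMPACT’s actual specification): aggregate consumption is proportional to total wealth,

$$C_t = (p + \theta)\,(W_t + H_t),$$

where $p$ is the constant probability of death, $\theta$ the rate of time preference, $W_t$ non-human wealth and $H_t$ human wealth (discounted expected labour income). Credit constraints matter in this setting because they limit how far consumers can borrow against $H_t$.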

COMPACT had an explicit but exogenous credit constraint variable in the model because in our view it was impossible to model consumption behaviour over time without it. Our work was based heavily on work in the UK by John Muellbauer, and Chris Carroll was coming to similar conclusions for the US. But DSGE models never faced that issue because they worked with de-trended data. Let me spell out why that was important. Empirical work was establishing that you could not begin to understand consumption behaviour over a 20-30 year time horizon without seeing how the financial sector had changed over time, and at least one traditional macroeconomic model was incorporating that finding before the end of the last millennium. Extensive work on exactly that issue did not begin using DSGE models until after the financial crisis, when changes in the financial sector had a critical impact on the real economy. DSGE was behind the curve, but more traditional macroeconomics was not.
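
To see what such an equation might look like, here is a minimal illustrative sketch in the spirit of the Muellbauer-style solved-out consumption functions this work built on (the notation is mine, not COMPACT’s actual equation):

$$\Delta \ln c_t = \alpha_0 + \beta\, cc_t + \lambda \Big( \ln y_t + \gamma \frac{A_{t-1}}{y_{t-1}} - \ln c_{t-1} \Big) + \varepsilon_t,$$

where $c$ is consumption, $y$ income, $A$ net financial wealth, and $cc_t$ an index of credit conditions, exogenous in COMPACT. Omit credit conditions, or remove them along with the trend, and an equation like this will look stable in one financial regime and break down in the next.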

Now I don’t think it is fanciful to think that if at least some macroeconomists had continued working with more traditional data-based models alongside those doing DSGE, at least one of those modelling teams would have thought to endogenise the financial sector that was determining those varying credit constraints.

So the claim I want to make is rather a big one. If DSGE models had continued alongside more traditional, data-based modelling, economists would have been much better prepared for the financial crisis when it came. If these two methodologies had learnt from each other, DSGE models might have started focusing on the financial sector before the crisis. Of course I would never suggest that macroeconomics could have predicted that crisis, but macroeconomists would have certainly had much more useful things to say about the impact on the economy when it happened.

Just being able to imagine this being true illustrates that moving to DSGE involved losses as well as gains. It inevitably made models less rich and moved them further away from the data in areas that were difficult but not impossible to model in a theoretically consistent way. The DSGE methodological revolution set out so clearly in Lucas and Sargent's paper changed the focus of macroeconomics away from things we now know were of critical importance.

I’ve been talking about this since I started writing a blog at the end of 2011, but recently we have seen similar messages from Paul Romer and Olivier Blanchard in this OxREP volume. What I have called traditional models here, and in the paper call Structural Econometric Models, Blanchard provocatively calls policy models. It was provocative because most academic macroeconomists think DSGE models are the only models that can do policy analysis ‘properly’, but Blanchard suggests policymakers want models that are closer to the data more than they want a guarantee of internal consistency, and they want models that are quick and easy to adapt to unfolding problems. The US Fed, although it has a DSGE model, also has a more traditional model with similarities to COMPACT, and guess which one plays the major role in the policy process?

[1] Microfoundations means deriving aggregate equations from microeconomic optimisation behaviour.
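
As a textbook illustration (not specific to any model discussed above): a representative household chooses consumption to maximise expected discounted utility subject to its budget constraint,

$$\max_{\{c_{t+s}\}} \; E_t \sum_{s=0}^{\infty} \beta^s u(c_{t+s}) \quad \text{s.t.} \quad a_{t+s+1} = (1+r_{t+s})\,a_{t+s} + y_{t+s} - c_{t+s},$$

and the first-order condition,

$$u'(c_t) = \beta\, E_t\big[(1+r_{t+1})\,u'(c_{t+1})\big],$$

then becomes the model’s aggregate consumption equation, rather than whatever dynamic specification best fits the aggregate data.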

[2] The Lucas critique argued that many equations of traditional macroeconomic models embodied beliefs about macro policy, and so if policy changed the equations would no longer be valid.
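
A stylised version of the canonical example: suppose only money surprises move output, $y_t = \bar{y} + \alpha(m_t - E_{t-1} m_t)$, and policy follows the rule $m_t = \rho m_{t-1} + \varepsilon_t$. Under rational expectations $E_{t-1} m_t = \rho m_{t-1}$, so the reduced form an econometrician would estimate is

$$y_t = \bar{y} + \alpha\, m_t - \alpha\rho\, m_{t-1}.$$

The coefficient on lagged money is $-\alpha\rho$: change the policy rule (a new $\rho$) and the estimated equation breaks down, even though the structural parameter $\alpha$ and agents’ behaviour are unchanged.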

[3] The difficulty of identification in single equation estimation, highlighted by Sims in 1980, probably also contributed.

8 comments:

  1. Where economics went wrong
    Comment on Simon Wren-Lewis on ‘Talk on where macroeconomics went wrong’

    Simon Wren-Lewis’ talk is a fine example of what may be called an epicycle explanation. Essentially, like a Ptolemaic astronomer, he argues that it was with the 23rd epicycle that an error sneaked in, and that this explains why the theory failed.

    Simon Wren-Lewis maintains: “The mistake was the revolution part. In the US, DSGE models replaced traditional modelling within almost a decade. In my view DSGE models should have coexisted with more traditional modelling, each tolerating the other.”

    No. Both DSGE modeling and more traditional modeling are methodologically defective and the first step on the way forward is to bury both for good at the Flat-Earth-Cemetery. As Joan Robinson put it: “Scrap the lot and start again.”

    What is needed is NOT the repair of the 23rd epicycle but a paradigm shift from false microfoundations to true macrofoundations. Nothing less will do.

    This is the state of economics. Provably false:
    • profit theory, for 200+ years,
    • microfoundations, for 140+ years,
    • macrofoundations, for 80+ years,
    • the application of elementary logic and mathematics since the founding fathers.

    The four main approaches ― Walrasianism, Keynesianism, Marxianism, Austrianism ― are mutually contradictory, axiomatically false, and materially/formally inconsistent. Because of this, economic policy guidance NEVER had a sound scientific foundation, from Adam Smith/Karl Marx onward to DSGE and New Keynesianism.

    It was Keynes who spotted the fatal flaw of mainstream economics: “For if orthodox economics is at fault, the error is to be found not in the superstructure, which has been erected with great care for logical consistency, but in a lack of clearness and of generality in the premises.” (Keynes)

    In the same vein: “For it can fairly be insisted that no advance in the elegance and comprehensiveness of the theoretical superstructure can make up for the vague and uncritical formulation of the basic concepts and postulates, and sooner or later ... attention will have to return to the foundations.” (Hutchison)

    Clearly, it is microfoundations that are false. The Walrasian approach is defined by this axiom set: “HC1 economic agents have preferences over outcomes; HC2 agents individually optimize subject to constraints; HC3 agent choice is manifest in interrelated markets; HC4 agents have full relevant knowledge; HC5 observable outcomes are coordinated, and must be discussed with reference to equilibrium states.” (Weintraub) Every model that is built upon HC1 to HC5 or contains a subset thereof is false.

    On the other hand, the Keynesian macrofoundations approach is defined by this set of foundational propositions: “Income = value of output = consumption + investment. Saving = income − consumption. Therefore saving = investment.” (GT, p. 63) Every model that is built upon this elementary syllogism is false, in particular, all I=S/IS-LM models and the whole of MMT.

    Because both the microfoundations approach and the macrofoundations approach are axiomatically false, roughly 90 percent of the content of peer-reviewed journals is scientifically worthless.

    A paradigm shift is imperative.#1 Whoever still accepts and applies Walrasian microfoundations or Keynesian macrofoundations or a combination thereof goes straight to the Flat-Earth-Cemetery.

    It has been known for 2000+ years: “When the premises are certain, true, and primary, and the conclusion formally follows from them, this is demonstration, and produces scientific knowledge of a thing.” (Aristotle) Economists’ premises have NEVER been certain, true, and primary. Economics went wrong from the very beginning.

    Egmont Kakarot-Handtke

    #1 From false microfoundations to true macrofoundations
    https://axecorg.blogspot.com/2018/02/from-false-microfoundations-to-true.html

  2. Hello Simon,

    What are your thoughts on Richard Werner's ideas? What do you agree/disagree on? A brief summary would be sufficient.

    Something is seriously awry with the predictive powers of most economic theories. If scientists and engineers were as inaccurate as economists, most of them would be imprisoned for incompetence and negligence.

    Regards
    CH

  3. Micro optimisation foundations amount to a certain range of ABMs. As modelers add more boundedly rational agents with heterogeneous information bundles, and learning processes become endemic, time adjustments become micro-founded. Facing this emergent process as optimally minimized, a macro policy agent becomes explicit, and comparative dynamics can emerge.
  4. "If you want to crystallise why this was a methodological revolution, think about what we might call admissibility criteria for macro models. In pre-NCCR models equations were selected through an eclectic mixture of theory-fit and evidence-fit. In the RBC and later DSGE models internal theoretical consistency is an admissibility criteria. Or to put it another way, a DSGE model never got rejected because one of its equations didn’t fit the data, but if one equation had a theoretical foundation that was inconsistent with the others it would certainly not be published in the better journals."


    Models can't predict anything; they can only reaffirm what has already happened. To rely on models is no better than believing in crystal balls.

    Mark Blyth makes that very point here, talking to City analysts:

    https://www.youtube.com/watch?v=lq3s-Ifx1Fo




  5. Thank you for your careful, considerate post. I agree with you that macro took on a methodological constraint unnecessarily, albeit for some fairly good reasons (I mean, who doesn't want internally consistent, 'micro-founded' and dynamic models?), and it's clear, in retrospect, how much of a mistake that was.

    Akin to Rodrik's arguments, diversity of models and approaches should be a welcome thing in all of economics, where there isn't, really, a single unified starting point that will always work well, unlike, say, classical (or even quantum!) mechanics. This should be truer of macro too: as nice as it is to have DSGE for everything, when you think about it, it wasn't even designed to solve a lot of the issues that we're interested in... and as for de-trended data, oh boy...

    I feel that we are making progress in that respect, and more approaches are being considered, but we really need people at the top both to be open to new approaches to macro and to try them themselves. As a young academic macro person, I feel discouraged from taking very bold approaches due to tenure concerns.

  6. Obviously, you disagree with the strong perspective that macroeconomics involves emergent phenomena that cannot be modeled from the agent level. Why should anyone believe otherwise, if you wish to comment? And what good are models that don't fit data?

    Why have an Euler equation, for example, when it doesn't fit the data?

  7. You do not address the glaring flaw in the claims of rigour for DSGE models: that the foundations on which they are built are of sand.

    Real human beings are quite complicated. They are loss-averse, biased by framing and priming, generous, envious, vengeful, altruistic, communitarian, and tribal. They have notions of fairness in labour contracts and of just prices and queue priority in shortages. They act at all times under incomplete and asymmetric information, and know this is true of others.

    The construct of economic man as rational agent is simply false as a generalisation. Building an economic model on that basis is little better than building a climate model on the micro-basis of impetus and phlogiston, or an ecological one on Lamarckian adaptation.

    The best one can say of economic man is that he does reflect one important aspect of human nature, and is mathematically tractable. We don't know enough about the other aspects to model them and estimate plausible parameters. Fine, go ahead. But the assumption is heroic adhoccery, exactly on a par with Keynes' consumption function and animal spirits.

    Is there any methodological virtue in consistency with one set of arbitrary ad hoc assumptions rather than another? I can't see any. If you can get a better fit to the data, use whatever works.

    The DSGE programme does have a quite different attraction. The rational agent slides under the counter from being a technically useful selection from the palette of human behaviour to being the only way humans behave. The policy inferences acquire the moral force of inevitability.

    Critics would say that the economists who do this are ideologues: their model of a human reflects the values and behaviour of the capitalist entrepreneurs of whom they are (Roman) clients. As a human ideal, it's simply contemptible. As a way of doing science, it's blinkered scholasticism.

  8. Bernanke talks about the FRB/US model, the Fed's "workhorse model". It had "various ad hoc adjustments, based on historical case studies, anecdotes, and judgement", but it still provided little guidance to staff on "how to think about the likely economic effects of the crisis". Even if the staff at the Fed had correctly forecast all the financial variables in the model [maybe correctly guessing the evolution of financial sector conditions], the FRB/US model would still have significantly under-predicted the magnitude, and speed of the rise, of unemployment. So was this model, used to make predictions in August 2008 [peak unemployment = 6%] and October 2008 [peak unemployment = 7.25%], a traditional model or DSGE?

    Bernanke says that although FRB/US had an extensive financial sector, it did not include much role for "credit factors". The original DSGE comment was, "we didn't model the banking sector". Now it's getting more complicated.

    Bernanke and others were responsible for the credit literature / financial accelerator literature, which focused on normal business cycles. Historical modelling had looked at the post-war period [no major financial crisis]. He also mentions technical reasons why "credit factors" were not included [in the standard model].

    This explains a "blind spot". Economists were in no position to deal with a major financial crisis, either for lack of "credit literature", lack of historical work, or technical reasons. So there were what Bernanke calls "deficiencies" in economics, in the credit literature and the historical work. You add that models failed to progress, so that the financial sector wasn't endogenised. This sector determined credit constraints. No credit, no money, no consumption. This, in the UK, was due to either reduced activity, or zero activity, in the traditional modelling area. You mention traditional models using rational expectations; I am not sure if you are saying that is a plus point [diversity, evolution] or a bad point. I believe you blame the lack of activity in the traditional model area on something internal to economics: journal editing policy.

    Others would note external factors. You say traditional modelling slowed, or was phased out, in the UK in 2000. This was when the Financial Services and Markets Act was passed, the FSA was set up, and light touch regulation began. Bernanke says the US housing boom began after 2000. Three of the big four UK universal banks subsequently played a big role in it. Bernanke blames the crisis itself on the failure of state and private regulation. If US and UK policy was to take risks with financial stability, it would be convenient if the economics profession was safely out of the way. Which of course it was.
