

Wednesday, 30 July 2014

Methodological seduction

Mainly for macroeconomists or those interested in economic methodology. I first summarise the discussion from two earlier posts (here and here), and then address why this matters.

If there is such a thing as the standard account of scientific revolutions, it goes like this:

1) Theory A explains body of evidence X

2) Important additional evidence Y comes to light (or just happens)

3) Theory A cannot explain Y, or can only explain it by means which seem contrived or ‘degenerate’. (All swans are white, and the black swans you saw in New Zealand are just white swans after a mud bath.)

4) Theory B can explain X and Y

5) After a struggle, theory B replaces A.

For a more detailed schema due to Lakatos, which talks about a theory’s ‘core’ and ‘protective belt’ and tries to distinguish between theoretical evolution and revolution, see this paper by Zinn, which also considers the New Classical counterrevolution.

The Keynesian revolution fits this standard account: ‘A’ is classical theory, Y is the Great Depression, ‘B’ is Keynesian theory. Does the New Classical counterrevolution (NCCR) also fit, with Y being stagflation?

My argument is that it does not. Arnold Kling makes the point clearly. In his stage one, Keynesian/Monetarist theory adapts to stagflation, using the Friedman/Phelps accelerationist Phillips curve. Stage two involves rational expectations, the Lucas supply curve and other New Classical ideas. As Kling says, “there was no empirical event that drove the stage two conversion.” Judging from this, I think Paul Krugman also agrees, although perhaps with an odd quibble.

Now of course the counterrevolutionaries do talk about the stagflation failure, and there is no dispute that stagflation left the Keynesian/Monetarist framework vulnerable. The key question, however, is whether points (3) and (4) are correct. On (3), Zinn argues that changes to Keynesian theory to account for stagflation were progressive rather than contrived, and I agree. I also agree with John Cochrane that this adaptation was still empirically inadequate, and that further progress needed rational expectations (see this separate thread), but as I note below the old methodology could (and did) incorporate this particular New Classical innovation.
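For readers who want the mechanics, here is a minimal sketch of the Friedman/Phelps adaptation in its standard textbook form (the notation is generic, not taken from any of the posts linked to above). With adaptive expectations, so that expected inflation equals last period’s inflation, the accelerationist Phillips curve is

\[ \pi_t = \pi_{t-1} + \alpha (u^{*} - u_t), \qquad \alpha > 0 \]

where u* is the natural rate of unemployment. Holding unemployment below u* makes inflation rise period after period, and once expectations have ratcheted up, inherited inflation persists even with unemployment at or above u*: exactly the combination of high inflation and high unemployment that the original Phillips curve could not deliver.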

More critically, (4) did not happen: New Classical models were not able to explain the behaviour of output and inflation in the 1970s and 1980s, or in my view the Great Depression either. Yet the NCCR was successful. So why did (5) happen, without (3) and (4)?

The new theoretical ideas New Classical economists brought to the table were impressive, particularly to those just schooled in graduate micro. Rational expectations is the clearest example. Ironically, the innovation that had allowed conventional macro to explain stagflation, the accelerationist Phillips curve, also made it appear unable to adapt to rational expectations. But if that were all, then you need to ask why New Classical ideas could not have been gradually assimilated into the mainstream. Many of the counterrevolutionaries did not want this (as this note from Judy Klein via Mark Thoma makes clear), because they had an (ideological?) agenda which required the destruction of Keynesian ideas. However, once the basics of New Keynesian theory had been established, it was quite possible to incorporate concepts like rational expectations or Ricardian Equivalence into a traditional structural econometric model (SEM), which is what I spent a lot of time doing in the 1990s.

The real problem with any attempt at synthesis is that a SEM is always going to be vulnerable to the key criticism in Lucas and Sargent, 1979: without a completely consistent microfounded theoretical base, there was the near certainty of inconsistency brought about by inappropriate identification restrictions. How serious this problem was, relative to the alternative of being theoretically consistent but empirically wide of the mark, was seldom asked.   

So why does this matter? For those who are critical of the total dominance of the current microfoundations methodology in macro, it is important to understand its appeal. I do not think this comes from macroeconomics being dominated by a ‘self-perpetuating clique that cared very little about evidence and regarded the assumption of perfect rationality as sacrosanct’, although I do think the ideological preoccupations of many New Classical economists have an impact on what is regarded as de rigueur in model building even today. Nor do I think most macroeconomists are ‘seduced by the vision of a perfect, frictionless market system.’ As with economics more generally, the game is to explore imperfections rather than ignore them. The more critical question is whether the starting point of a ‘frictionless’ world constrains realistic model building in practice.

If mainstream academic macroeconomists were seduced by anything, it was a methodology - a way of doing the subject which appeared closer to what at least some of their microeconomic colleagues were doing at the time, and which was very different to the methodology of macroeconomics before the NCCR. The old methodology was eclectic and messy, juggling the competing claims of data and theory. The new methodology was rigorous! 

Noah Smith, who does believe stagflation was important in the NCCR, says at the end of his post: “this raises the question of how the 2008 crisis and Great Recession are going to affect the field”. However, if you think as I do that stagflation was not critical to the success of the NCCR, the question you might ask instead is whether there is anything in the Great Recession that challenges the methodology established by that revolution. The answer that I, and most academics, would give is absolutely not – instead it has provided the motivation for a burgeoning literature on financial frictions. To speak in the language of Lakatos, the paradigm is far from degenerate.  

Is there a chance of the older methodology making a comeback? I suspect the place to look is not in academia but in central banks. John Cochrane says that after the New Classical revolution there was a split, with the old-style way of doing things surviving among policymakers. I think this was initially true, but over the last decade or so DSGE models have become standard in many central banks. At the Bank of England, the main model used to be a SEM; it was replaced by a hybrid DSGE/SEM, which was in turn replaced by a DSGE model. The Fed operates both a DSGE model and a more old-fashioned SEM. It is in central banks that the limitations of DSGE analysis may be felt most acutely, as I suggested here. But central bank economists are trained by academics. Perhaps those that are seduced are bound to remain smitten.


7 comments:

  1. "The Keynesian revolution fits this standard account: ‘A’ is classical theory, Y is the Great Depression, ‘B’ is Keynesian theory. Does the New Classical counterrevolution (NCCR) also fit, with Y being stagflation?"

    I don't have a comment on the main body of the post, but I think your gloss of the rise of Keynesianism follows more from what Keynes claimed than from what actually occurred. I was just reading Roger Backhouse's excellent one-volume history of economics ("The Ordinary Business of Life") and he makes just this argument, as do many other historians of economics. Keynes built on the Wicksellian tradition as well as a 50-year-old Cambridge monetary tradition. Many of the policy ideas he proposed were already in practice, supported by intuitions that his theories may have formalized, but certainly did not revolutionize. Put another way, Keynes explicitly constructed a "classical" tradition to revolutionize, but that tradition is not what many of his colleagues were actually doing as of 1935.

    I think what Keynes actually did - and perhaps the New Classicals as well, I know much less about them - is closer to Kuhn's model of revolutions, where the new theory does not simply provide better answers to old questions, but actually changes the terms of the debate entirely, such that the new and the old are not entirely commensurable. Keynes - and especially his interpreters like Hicks - provided the formalism that future research would have to contend with, but did so by deliberately misconstruing the recent past of economic theory.

    Replies
    1. This is one of those cases where (self-imposed) length constraints got in the way. The additional point I would have made was that there were methodological innovations that greatly helped Keynesian economics, like national accounts data and econometrics. Keynes did contribute to the emergence of the former.

    2. "The Keynesian revolution fits this standard account: ‘A’ is classical theory, Y is the Great Depression, ‘B’ is Keynesian theory."

      Dan beat me to it. I can't remember coming across any pre-Keynesian economist who fits Keynes' description of a "classical" economist. (RBC theorists come closest, but they come 50 years too late). In many ways the monetarist counter-revolution and New Keynesian continuation was a successful attempt to link up with that older tradition. Wicksell didn't get re-integrated until Woodford.

  2. It's still not difficult to imagine a high level of unemployment causing an incoming political party to nationalise the central bank and indulge in non-mainstream economic policy, as happened in the UK in 1945.

    At the rate things are going, it may be the only way for the stimulati (sic) to get to try their policies out.

  3. The answer that I, and most academics, would give is absolutely not – instead it has provided the motivation for a burgeoning literature on financial frictions. To speak in the language of Lakatos, the paradigm is far from degenerate.

    I'm pretty sure Lakatos wouldn't have used the presence or absence of a "burgeoning literature" as an indicator of whether or not a research program was degenerate!!

    There is a lot of literature about financial frictions, but it is so far all really, really disappointing. These papers typically take the form of:

    1) clever little reference to something in game theory, usually signalling
    2) surprisingly crude proof that the result in 1) could, in some situation, lead to self-reinforcing behaviour in asset prices
    3) massive great clunking deus ex machina comes in to say "But Fundamental Value Is X (or sometimes theta) And So Then There Will Be A Crash"
    4) straight on to some sort of credit rationing model of the sort that's been available since the 80s

    What makes me worry about the health of the research program is exactly that there are so many "financial frictions" models, all equally good (all equally "meh") and with no sign of a criterion by which one might judge which was more relevant to the real world. The problem is that, IMO, the only such criterion we are going to get is going to involve a load of detailed and specific institutional knowledge(*) and that it is this institutional detail and specificity that's missing, not anything that can be bolted on from a rational expectations/microfoundation paradigm.

    (*) By "specific institutional knowledge", I mean that the guys whose model involves an aggregate equity/asset ratio for the banking system are at least starting down the right path, but miles and miles away from where they need to be.

  4. "I think this was initially true, but over the last decade or so DSGE models have become standard in many central banks."

    Standard in terms of what? Mark Carney has said that "no theorist can capture the complexities of central banking".

    When it comes to making a decision, especially in reacting to a crisis, they are not going to rely on a DSGE or any other model. Full stop. To do so would be insane. They will go by historical case studies and other experience, and make the best judgement possible at the time, which will necessarily involve a lot of subjectivity.

    The models, if they are used at all, will be to back up their view. If they don't, they will be made to do so. The model follows the decision. The decision is not driven by the model.

  5. A fascinating post, developing some very interesting themes from some of your other posts, but one thing puzzles me-

    "there is no dispute that stagflation left the Keynesian/Monetarist framework vulnerable"

    Let's say that monetarism proper* was basically IS/LM Keynesianism + Friedmanite demand function for money + accelerationist Phillips Curve + permanent income hypothesis, in order of importance to Friedman's macroeconomic framework. That seems to be the consensus summary of what Friedman actually put forward when he set out a formal model in the early 1970s.

    The demand function for money was important primarily to motivate a k-percent rule and to a lesser extent to dismiss the problems of classical liquidity traps by extending the Keynesian interest theory to include more than just money & bonds. Stagflation wasn't a problem for this hypothesis, though there were separate problems with "missing money" in the same period e.g. the "missing M3" of 1976 onwards in the USA.

    The permanent income hypothesis was important insofar as it reduced the appeal of activist fiscal policy. Obviously stagflation wasn't a problem for the PIH.

    That leaves us with the accelerationist PC. Although this only came relatively late, in the late 1960s, you can argue that it was central to linking monetarism with the quantity theory of money. If one agrees with Patinkin that the key proposition of the QTM was the long-run neutrality of money, then monetarism pre-1967 wasn't particularly QTM-orientated. After all, the key thesis of the "Monetary History of the United States" was the SHORT-run NON-neutrality of money and the alleged central role of changes in the money stock in driving contractions like the Great Depression. The accelerationist PC gave a rationale to the long-run neutrality claim by explaining why a change in the money supply wouldn't permanently affect real output given that it would uncontroversially do so in the short run, and thus linked monetarism firmly with the QTM tradition. And as you correctly note, IS-LM Keynesianism + an accelerationist PC deals with stagflation perfectly well.
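    To spell out that neutrality step in the standard formulation (a sketch in generic notation, not tied to Friedman's own exposition): with the accelerationist Phillips curve

    \[ \pi_t = \pi_{t-1} + \alpha (u^{*} - u_t), \qquad \alpha > 0 \]

    a steady state with constant inflation, \pi_t = \pi_{t-1}, requires u_t = u^{*}. Faster money growth can hold unemployment below the natural rate only while inflation keeps accelerating; once inflation settles at any constant rate, unemployment must return to u^{*}. Hence money is neutral in the long run even though it is non-neutral in the short run.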

    So I really think you had it right in a previous post: it was methodology all the way down. Far from methodology following the paradigm (in a Kuhnian way), the microfoundations methodology drove the paradigm-shift in favour of New Classical macroeconomics and other microfoundational theories, including New Keynesianism. Adopting IS-LM + an accelerationist PC was an empirically adequate shift for Keynesians, and monetarism was an empirically adequate approach until the breakdown of money demand functions in the 1980s. (Even then, in the UK M0 was a pretty good guide until it was discontinued nearly 10 years ago, but given the problems with other aggregates it would have been a very bad idea to adopt a monetarist policy based on it.) The macroeconomic revolutions since the 1970s have not been motivated by empirical concerns with Keynesianism + a big healthy dose of Friedman (or monetarism with a big healthy extraction of Friedman, depending on how you look at it!).

    Also, I think that you're right to focus on methodology rather than ideology. Friedman was just as out of place in the new macroeconomic world as Tobin or Minsky. The issue was not markets vs. the state but rather what would constitute an adequate macroeconomic model. If there was an "ideology" (an awful, intellectually lazy phrase that we could do without) at work in Lucas et al., it was a philosophical preference for formal rigour and mathematical idealization rather than a political theory.

    * I.e. not including predecessors to monetarism like Hume, Ricardo, and Fisher.

