

Wednesday 19 August 2015

Reform and revolution in macroeconomics

Mainly for economists

Paul Romer has a few recent posts (start here, most recent here) where he tries to examine why the saltwater/freshwater divide in macroeconomics happened. A theme is that this cannot all be put down to New Classical economists wanting a revolution, and that a defensive/dismissive attitude from the traditional Keynesian status quo also had a lot to do with it.

I will leave others to discuss what Solow said or intended (see for example Robert Waldmann). However, I have no doubt that many among the then Keynesian status quo did react in a defensive and dismissive way. They were, after all, on incredibly weak ground. That ground was not large econometric macromodels, but a single equation: the traditional Phillips curve. This had inflation at time t depending on expectations of inflation at time t and on the deviation of unemployment/output from its natural rate. Add rational expectations to that and deviations from the natural rate become random, and Keynesian economics becomes irrelevant. As a result, too many Keynesian macroeconomists saw rational expectations (and therefore all things New Classical) as an existential threat, and reacted to that threat by attempting to rubbish rational expectations rather than by questioning the traditional Phillips curve. And so the status quo lost. [1]
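
To spell out the logic, here is a minimal sketch in my own notation (the algebra is mine, not anything from the original exchanges). Write the traditional Phillips curve as

\pi_t = E_{t-1}\pi_t + \alpha (y_t - y_t^n) + \varepsilon_t, \quad \alpha > 0,

where y_t - y_t^n is the output gap and \varepsilon_t a supply shock. Under rational expectations the forecast error \pi_t - E_{t-1}\pi_t is unpredictable given information at t-1, so \alpha (y_t - y_t^n) + \varepsilon_t must be unpredictable too: the output gap is just noise, and no systematic (and therefore predictable) demand policy can affect it. The New Keynesian Phillips curve, \pi_t = \beta E_t \pi_{t+1} + \kappa (y_t - y_t^n), replaces expectations of current inflation with expectations of future inflation, and this result disappears.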

We now know this defeat was temporary, because New Keynesians came along with their version of the Phillips curve and we got a new ‘synthesis’. But that took time, and you can describe what happened in the intervening period in two ways. You could say that the New Classicals always had the goal of overthrowing (rather than improving) Keynesian economics, thought that they had succeeded, and simply ignored New Keynesian economics as a result. Or you could say that the initially unyielding reaction of traditional Keynesians created an adversarial way of doing things whose persistence Paul both deplores and is trying to explain. (I have no particular expertise on which story is nearer the truth. I went with the first in this post, but I’m happy to be persuaded by Paul and others that I was wrong.) In either case the idea is that if there had been more reform rather than revolution, things might have gone better for macroeconomics.

The point I want to discuss here is not about Keynesian economics, but about something even more fundamental: how evidence is treated in macroeconomics. You can think of the New Classical counter-revolution as having two strands. The first involves Keynesian economics, and is the one everyone likes to talk about. But the second was perhaps even more important, at least to how academic macroeconomics is done. This was the microfoundations revolution, which brought us first RBC models and then DSGE models. As Paul writes:

“Lucas and Sargent were right in 1978 when they said that there was something wrong, fatally wrong, with large macro simulation models. Academic work on these models collapsed.”

The question I want to raise is whether for this strand as well, reform rather than revolution might have been better for macroeconomics.

Two points on the quote above from Paul. First, of course not many academics worked directly on large macro simulation models at the time, but what a large number did do was either time series econometric work on individual equations that could be fed into these models, or analysis of small aggregate models whose equations were not microfounded but were instead justified by an eclectic mix of theory and empirics. That work within academia did largely come to a halt, and was replaced by microfounded modelling.

Second, Lucas and Sargent’s critique was fatal in the sense of determining what academics subsequently did (and how they regarded these econometric simulation models), although they got a lot of help from Sims (1980). But it was not fatal in a more general sense. As Brad DeLong points out, these econometric simulation models survived in both the private and public sectors (in the US Fed, for example, or the UK OBR). In the UK they survived within the academic sector until the late 1990s, when academics helped kill them off.

I am not suggesting for one minute that these models are an adequate substitute for DSGE modelling. There is no doubt in my mind that DSGE modelling is a good way of doing macro theory, and I have learnt a lot from doing it myself. It is also obvious that there was a lot wrong with large econometric models in the 1970s. My question is whether it was right for academics to reject them completely, and, much more importantly, to abandon the econometric work that academics once did and that fed into them.

It is hard to get academic macroeconomists trained since the 1980s to address this question, because they have been taught that these models and techniques are fatally flawed because of the Lucas critique and identification problems. But DSGE models as a guide for policy are also fatally flawed because they are too simple. The unique property that DSGE models have is internal consistency. Take a DSGE model, and alter a few equations so that they fit the data much better, and you have what could be called a structural econometric model. It is internally inconsistent, but because it fits the data better it may be a better guide for policy.
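
As a stylised illustration of what altering a few equations might mean (my example, not one taken from any particular model): a DSGE model might contain the log-linearised consumption Euler equation

c_t = E_t c_{t+1} - \sigma (i_t - E_t \pi_{t+1}),

which follows from intertemporal optimisation but typically tracks aggregate consumption poorly. A structural econometric modeller might instead estimate something like

c_t = \lambda c_{t-1} + (1-\lambda) E_t c_{t+1} - \sigma (i_t - E_t \pi_{t+1}) + \gamma y_t + \varepsilon_t,

letting the data determine \lambda and \gamma. The second equation is no longer consistent with the microfoundations of the rest of the model, but if it fits consumption much better it may give more reliable policy simulations.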

What happened in the UK in the 1980s and 1990s is that structural econometric models evolved to minimise Lucas critique problems by incorporating rational expectations (and other New Classical ideas as well), and time series econometrics improved to deal with identification issues. If you like, you can say that structural econometric models became more like DSGE models, but where internal consistency was sacrificed when it proved clearly incompatible with the data.
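
To give a concrete (and hypothetical) example of what minimising Lucas critique problems looked like in practice: suppose an equation contains an expectation of a future variable, say y_t = a E_t x_{t+1} + b z_t + \varepsilon_t. Rather than proxying the expectation with a fitted distributed lag, the modeller replaces it with the realised value, giving y_t = a x_{t+1} + b z_t + (\varepsilon_t - a \eta_{t+1}), where \eta_{t+1} = x_{t+1} - E_t x_{t+1} is the rational expectations forecast error, and estimates the equation using instruments dated t or earlier, which are orthogonal to \eta_{t+1}. Because expectations are modelled explicitly rather than buried in estimated lag distributions, the equation's coefficients are less likely to shift when the policy regime changes.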

These points are very difficult to get across to those brought up to believe that structural econometric models of the old-fashioned kind are obsolete, and fatally flawed in a more fundamental sense. You will often be told that to forecast you can either use a DSGE model or some kind of (virtually) atheoretical VAR, or that policymakers have no alternative when doing policy analysis but to use a DSGE model. Both statements are simply wrong.

There is a deep irony here. At a time when academics doing other kinds of economics have done less theory and become more empirical, macroeconomics has gone in the opposite direction, adopting wholesale a methodology that prioritised the internal theoretical consistency of models above their ability to track the data. An alternative - where DSGE modelling informed and was informed by more traditional ways of doing macroeconomics - was possible, but the New Classical and microfoundations revolution cast that possibility aside.

Did this matter? Were there costs to this strand of the New Classical revolution?

Here is one answer. While it is nonsense to suggest that DSGE models cannot incorporate the financial sector or a financial crisis, academics tend to avoid addressing why the multitude of work now going on did not occur before the financial crisis. It is sometimes suggested that before the crisis there was no cause to do so. This is not true. Take consumption, for example. Looking at the (non-filtered) time series for UK and US consumption, it is difficult to avoid attaching significant importance to the gradual evolution of credit conditions over the last two or three decades (see the references to work by Carroll and Muellbauer that I give in this post). If this kind of work had received greater attention (as it almost certainly would have done from structural econometric modellers), that would have focused minds on why credit conditions changed, which in turn would have raised issues involving the interaction between the real and financial sectors. If that had been done, macroeconomics might have been better prepared to examine the impact of the financial crisis.
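
To give a flavour of the kind of equation involved (a sketch only: the actual specifications in Carroll's and Muellbauer's work are richer), a 'solved-out' consumption function might look like

\log(c_t/y_t) = \alpha_0 + \alpha_1 CCI_t + \alpha_2 (LA_t/y_t) + (\alpha_3 + \alpha_4 CCI_t)(HW_t/y_t) + \alpha_5 E_t \Delta \log y_{t+1} + \varepsilon_t,

where CCI_t is an index of credit conditions, LA_t/y_t the ratio of net liquid assets to income and HW_t/y_t the ratio of housing wealth to income. Estimating something like this on unfiltered data forces you to ask why CCI_t trends upwards from the 1980s onwards, and so to think about how household balance sheets interact with the financial sector.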

It is not just Keynesian economics where reform rather than revolution might have been more productive as a consequence of Lucas and Sargent, 1979.


[1] The point is not whether expectations are generally rational or not. It is that any business cycle theory that depends on irrational inflation expectations appears improbable. Do we really believe business cycles would disappear if only inflation expectations were rational? PhDs of the 1970s and 1980s understood that, which is why most of them rejected the traditional Keynesian position. Also, as Paul Krugman points out, many Keynesian economists were happy to incorporate New Classical ideas. 

22 comments:

  1. Whatever happened to the Welfare Economics of the 50's?

    1. I too am interested in welfare capitalism/activation as a result of the change in state econometric models in the 70s/80s. What I suspect is a new brake on change (from DSGE) compared with any previous period in history is that the financial market is a significant new actor. PMs/ministers of finance are now dependent on sentiment... and sentiment hates change.

  3. Have you seen this paper by Ray Fair?

    http://fairmodel.econ.yale.edu/rayfair/pdf/2015A.htm

    I would be interested in your reaction.

    I know nothing about him - one other macroeconomist I mentioned this paper to suggested that reading it would be a waste of time.

    1. I think the paper is worth reading. Fair is one of the few academics who continued to develop large macro models. He tried to address the criticisms rather than abandoning the approach. I would also point out that central banks and other real-world analysts never abandoned the approach either. This paper is worth considering.

  4. "It is not just Keynesian economics where reform rather than revolution might have been more productive as a consequence of Lucas and Sargent, 1979."

    At some point you are going to have to let the flat-earth foundation go altogether, whatever that means for internal consistency. Science arrives at a theory by first exhausting observable (e.g. historical) fact. You do not look at data and say "this theory explains it" (it most probably will, but is it the right explanation?). Or, worse still, you do not ask "how can we fit this into the standard model?"

    Theories and models are reference points, and in most cases at least a few credible ones need to be looked at. They are to be referred to after our own independent investigations have sufficiently considered all available evidence.

  5. "You will often be told that to forecast you can either use a DSGE model or some kind of (virtually) atheoretical VAR,"

    What is your view of VAR-based analysis? If articles are submitted to journals that essentially draw their conclusions from a VAR analysis, should they be recommended?

  6. "As a result, too many Keynesian macroeconomists saw rational expectations (and therefore all things New Classical) as an existential threat, and reacted to that threat by attempting to rubbish rational expectations, rather than questioning the traditional Phillips curve."

    What if they rubbished rational expectations because they thought it was rubbish? Is that not sufficient grounds? And perhaps both rational expectations and the natural rate are questionable concepts that should be...questioned?

  7. Very nice piece. I agree with almost everything. I think something else played into this as well. The computational sophistication of the Lucas/Prescott line of research was too much for young academics to resist. Even today, grad students take great pride in boasting that they're using 60 cores to solve their models. It was easy to publish and so much more interesting than the old stuff.

    1. "Even today, grad students take great pride in boasting that they're using 60 cores to solve their models."

      What would be nice is if they could boast that their models will help solve problems like long-term youth unemployment or extreme poverty in Africa, or help implement macro policy that avoids financial crises. But those are not the kind of things the likes of Sargent or many grad students and academic economists can boast about or are actually particularly interested in. Rational expectations models will not help us with these problems. That should be very obvious by now, if it wasn't at the beginning. You have to look at what the actual causes of these problems are. You should be using the knowledge of historians, sociologists, anthropologists and others whose work involves going out into the field and who can help tell you what they are. Rational expectations models, and the huge amounts of time people spend messing around with them, distract rather than help us.

  8. Simon,

    Firstly, I personally feel that Romer has muddied the waters in raising the issues he has around the saltwater/freshwater divide. Blaming Solow for Lucas and Sargent going off the rails is ludicrous. Secondly, Romer is covering up his own ideological predilections by protecting Lucas and Sargent. All under the cover of mathiness and scientificism, ironically.

    You have written in one of your posts (linked above) that Lucas and Sargent's "After Keynesian Macroeconomics" paper is a classic. In my mind it is a classic for negative reasons. It is replete with a brutal and rancorous hubris which I find completely distasteful. RE was in a way a refreshing response to the failures to understand the Phillips curve phenomenon. I believe one of the main reasons RE took hold the way it did was because economists were stunned by the developments of the 1970s. However, RE was too estranged from reality to be of any lasting use. Then to top it off, L & S plastered it on to silly unrealistic micro-based models and pushed it into every corner of analysis they could dream up to deal with weaknesses in their models. Prior to the stagflation of the 1970s, economics focussed on demand side analysis. The burst of inflation that rattled economic theorists and modellers brought supply side analysis to the fore.

    It seems every economic cataclysm deals a death blow to the orthodoxy of the day. The Great Depression saw the demise of classical thinking and the rise of Keynesianism. The stagflation of the 1970s saw the demise of the Neoclassical Synthesis. The events of 2008/2009 have equally forced the re-evaluation of orthodox economics and perhaps will see the demise of RE.

    Henry

  9. Sensible post.

    IIRC the late 70's, it initially seemed to me possible that someone would eventually build a more realistic version of Lucas 72, but it didn't happen, and seemed less possible the more I thought about it. So the proto New Keynesian models looked a better bet, at least for the time being. (And when I first read Kydland and Prescott on RBC I thought it was nuts.)

    Internal consistency is one advantage of DSGE. A second advantage is that you can talk about agents' welfare under alternative policies, which you couldn't really do in the old Keynesian econometric macromodels.

  10. Why is it that the rise in consumption can't be traced to the fall in the price of investment goods and the associated fall in real interest rates?

    Correct me if I'm wrong, but my understanding is that from the end of the 80s the relative price of investment goods has been falling; the implication is that you need more investment to absorb a given amount of saving. The result is lower real rates that both encourage more investment and discourage saving.

    If I understand correctly the saving rate in both the US and UK has fallen over this time period, as have their real interest rates. All seems quite consistent.

  11. Thank you for the information you are providing to us; it is awesome and really it seems worthy to me.

  12. Simon, what is your view on Steve Keen's Minsky and related models?

  13. RE is interesting.

    In New Zealand, RBNZ has been resolutely forecasting a return to 2% (CPI) inflation, 18 mths out, for about six years.

    In fact, it recently increased its headline interest rate four times in twelve months expecting that to occur. Currently we appear to be at 0% but still, RBNZ predicts 2%, 18 mths out.

    We, the general public, know that the target for the RBNZ is 2%, and we expect RBNZ to achieve it, so we predict 2% inflation, 18 mths out, as well (maybe it's dropped just a little lately, by 0.1 or 0.2, reflecting lower confidence in RBNZ).

    I think the general public is rational, in this case, at least, so where does that place the RBNZ?

    And if both parties are rational even given this potted history, where does that leave RE?

  14. Potemkonomics
    Comment on ‘Reform and Revolution in Macroeconomics’

    Your summary of what Lucas said or meant about Keynesianism and of what Keynesians said about rational expectations reminds one of the definition of scholarship: “Yet a good deal of what is published is, at best, trivial stuff, putting me in mind of that observation: ‘Rubbish is rubbish, but the history of rubbish is scholarship.’” (Haack, 1996, p. 301)

    The history of rubbish clearly shows that the representative economist does not understand the basics of scientific methodology.

    “Research is in fact a continuous discussion of the consistency of theories: formal consistency insofar as the discussion relates to the logical cohesion of what is asserted in joint theories; material consistency insofar as the agreement of observations with theories is concerned.” (Klant, 1994, p. 31)

    So, one always needs BOTH: formal AND material consistency. Economics as ‘inexact and separate science’ (Hausman, 1992) is content with one.

    “The unique property that DSGE models have is internal consistency. Take a DSGE model, and alter a few equations so that they fit the data much better, and you have what could be called a structural econometric model. It is internally inconsistent, but because it fits the data better it may be a better guide for policy.” (See intro)

    Note well: an inconsistent model underlies policy guidance. And this is exactly why a genuine scientist like Feynman characterized the whole proto-scientific exercise as farce.

    The actual problem of economics is not that this or that model is insufficient. The problem is that both the New Classical and the New Keynesian approaches are failures. Reform is not an option.

    The scientific revolution still stands where Keynes left it. “The classical theorists resemble Euclidean geometers in a non-Euclidean world who, discovering that in experience straight lines apparently parallel often meet, rebuke the lines for not keeping straight -- as the only remedy for the unfortunate collisions which are occurring. Yet, in truth, there is no remedy except to throw over the axiom of parallels and to work out a non-Euclidean geometry. Something similar is required to-day in economics.” (Keynes, 1973, p. 16)

    This was the agenda in 1936 and this is the agenda today. New Classicals and New Keynesians have only prolonged the detour.

    Egmont Kakarot-Handtke

    References
    Haack, S. (1996). Preposterism and Its Consequences. In E. F. Paul, F. D. Miller, and J. Paul (Eds.), Scientific Innovation, Philosophy, and Public Policy, pages 296–315. Cambridge, New York, NY: University of Cambridge.
    Hausman, D. M. (1992). The Inexact and Separate Science of Economics. Cambridge: Cambridge University Press.
    Keynes, J. M. (1973). The General Theory of Employment, Interest and Money. The Collected Writings of John Maynard Keynes Vol. VII. London, Basingstoke: Macmillan. (1936).
    Klant, J. J. (1994). The Nature of Economic Thought. Aldershot, Brookfield, VT: Edward Elgar.

  15. Even economists' assumptions about the nature of money are wrong. Money is neither a store nor a measure of value. It is a store and measure of demand: demand for the things and services of real value that an economy produces. Without those things and services, money *has* no value.

    Economists don't even know how to properly calculate their *own* contribution to an economy. The application of economic theory to the value of the production of an economy is not additive, it is multiplicative.

    Economics is more of a mess than economists may have the courage to acknowledge, which would be too bad for everybody.

    See the relevant posts at my blog.

  16. What about Wynne Godley, who continued trying to build stock-flow consistent macro models, with no RatEx, and sectors rather than representative agents, right through the mid 2000s? http://estes.levy.org/pubs/wp_494.pdf

  17. Simon,
    I'm just curious: what's your take on recent academic work from top DSGE researchers on hybrid models that combine Bayesian VARs or dynamic factor models with DSGE-inspired priors, and then let the data decide on an optimal tightness of the priors? Also, by the time you add all the extra shock processes to operational DSGE models, don't you get essentially the same flexibility as with simultaneous equations models (e.g. the Bank of England forecast methodology)? Though I would agree that more academics should work on formalising and improving these hybrid methods.

    1. ICYMI

      DSGE is beyond repair. The problem goes much deeper. As a matter of fact, the representative economist cannot tell the difference between profit and income, which, clearly, is the precondition for understanding how the market economy works.

      There is no hope for New Classicals, New Keynesians, Post Keynesians or MMT in general, or for Godley/Lavoie and Wren-Lewis in particular.

      For details of the bigger picture see the cross-references
      http://axecorg.blogspot.com/2015/03/profit-cross-references.html

  18. I think the recent debate over “what went wrong with macro” misses an important point. While the academic economics profession may have largely walked away from large models, not everyone did. First, I would note that Ray Fair at Yale continued to pursue the modeling strategy laid out by the Cowles Commission. His strategy was driven by the view that practical modeling of the economy had to be tied to both data and theoretical foundations. Ray talks about that approach in the link below.

    http://fairmodel.econ.yale.edu/rayfair/blog1.htm

    I would also point out that the practical world never gave up on this approach. The Federal Reserve and other central banks have continued to use large macro models. In addition, economists working in the private sector have continued to pursue this modeling strategy.

    I would note that those using this modeling strategy have responded to the “Lucas critique” by trying to take account of “micro foundations” in how they have specified such models. Reasonable people can disagree on whether or not the Lucas critique has been addressed adequately in these efforts.

    But I don’t think it’s accurate to suggest that large models were wholly abandoned, even by the academic economics profession.


Unfortunately because of spam with embedded links (which then flag up warnings about the whole site on some browsers), I have to personally moderate all comments. As a result, your comment may not appear for some time. In addition, I cannot publish comments with links to websites because it takes too much time to check whether these sites are legitimate.