Winner of the New Statesman SPERI Prize in Political Economy 2016


Showing posts with label Sargent. Show all posts

Thursday, 27 August 2015

The day macroeconomics changed

It is of course ludicrous, but who cares. The day of the Boston Fed conference in 1978 is fast taking on a symbolic significance. It is the day that Lucas and Sargent changed how macroeconomics was done. Or, if you are Paul Romer, it is the day that the old guard spurned the ideas of the newcomers, and ensured we had a New Classical revolution in macro rather than a New Classical evolution. Or if you are Ray Fair (HT Mark Thoma), who was at the conference, it is the day that macroeconomics started to go wrong.

Ray Fair is a bit of a hero of mine. When I left the National Institute to become a formal academic, I had the goal (with the essential help of two excellent and courageous colleagues) of constructing a new econometric model of the UK economy, which would incorporate the latest theory: in essence, it would be New Keynesian, but with additional features like allowing variable credit conditions to influence consumption. Unlike a DSGE it would as far as possible involve econometric estimation. I had previously worked with the Treasury’s model, and then set up what is now NIGEM at the National Institute by adapting a global model used by the Treasury, and finally I had been in charge of developing the Institute’s domestic model. But creating a new model from scratch within two years was something else, and although the academics on the ESRC board gave me the money to do it, I could sense that some of them thought it could not be done. In believing (correctly) that it could, Ray Fair was one of the people who inspired me.

I agree with Ray Fair that what he calls Cowles Commission (CC) type models, and I call Structural Econometric Model (SEM) type models, together with the single equation econometric estimation that lies behind them, still have a lot to offer, and that academic macro should not have turned its back on them. Having spent the last fifteen years working with DSGE models, I am more positive about their role than Fair is. Unlike Fair, I want “more bells and whistles on DSGE models”. I also disagree about rational expectations: the UK model I built had rational expectations in all the key relationships.

Three years ago, when Andy Haldane suggested that DSGE models were partly to blame for the financial crisis, I wrote a post that was critical of Haldane. What I thought then, and continue to believe, is that the Bank had the information and resources to know what was happening to bank leverage, and it should not have used DSGE models as an excuse for not being more public about its concerns at the time.

However, if we broaden this out from the Bank to the wider academic community, I think he has a legitimate point. I have talked before about the work that Carroll and Muellbauer have done which shows that you have to think about credit conditions if you want to explain the pre-crisis time series for UK or US consumption. DSGE models could avoid this problem, but more traditional structural econometric (aka CC) models would find it harder to do so. So perhaps if academic macro had given greater priority to explaining these time series, it would have been better prepared for understanding the impact of the financial crisis.

What about the claim that only internally consistent DSGE models can give reliable policy advice? For another project, I have been rereading an AEJ Macro paper written in 2008 by Chari et al, where they argue that New Keynesian models are not yet useful for policy analysis because they are not properly microfounded. They write “One tradition, which we prefer, is to keep the model very simple, keep the number of parameters small and well-motivated by micro facts, and put up with the reality that such a model neither can nor should fit most aspects of the data. Such a model can still be very useful in clarifying how to think about policy.” That is where you end up if you take a purist view about internal consistency, the Lucas critique and all that. It in essence amounts to the following approach: if I cannot understand something, it is best to assume it does not exist.


Friday, 11 July 2014

Rereading Lucas and Sargent 1979

Mainly for macroeconomists and those interested in macroeconomic thought

Following this little interchange (me, Mark Thoma, Paul Krugman, Noah Smith, Robert Waldmann, Arnold Kling), I reread what could be regarded as the New Classical manifesto: Lucas and Sargent’s ‘After Keynesian Economics’ (hereafter LS). It deserves to be cited as a classic, both for the quality of ideas and the persuasiveness of the writing. It does not seem like something written 35 years ago, which is perhaps an indication of how influential its ideas still are.

What I want to explore is whether this manifesto for the New Classical counter revolution was mainly about stagflation, or whether it was mainly about methodology. LS kick off their article with references to stagflation and the failure of Keynesian theory. A fundamental rethink is required. What follows next is I think crucial. If the counter revolution is all about stagflation, we might expect an account of why conventional theory failed to predict stagflation - the equivalent, perhaps, to the discussion of classical theory in the General Theory. Instead we get something much more general - a discussion of why identification restrictions typically imposed in the structural econometric models (SEMs) of the time are incredible from a theoretical point of view, and an outline of the Lucas critique.

In other words, the essential criticism in LS is methodological: the way empirical macroeconomics has been done since Keynes is flawed. SEMs cannot be trusted as a guide for policy. In only one paragraph do LS try to link this general critique to stagflation:

“Though not, of course, designed as such by anyone, macroeconometric models were subjected to a decisive test in the 1970s. A key element in all Keynesian models is a trade-off between inflation and real output: the higher is the inflation rate, the higher is output (or equivalently, the lower is the rate of unemployment). For example, the models of the late 1960s predicted a sustained U.S. unemployment rate of 4% as consistent with a 4% annual rate of inflation. Based on this prediction, many economists at that time urged a deliberate policy of inflation. Certainly the erratic ‘fits and starts’ character of actual U.S. policy in the 1970s cannot be attributed to recommendations based on Keynesian models, but the inflationary bias on average of monetary and fiscal policy in this period should, according to all of these models, have produced the lowest unemployment rates for any decade since the 1940s. In fact, as we know, they produced the highest unemployment rates since the 1930s. This was econometric failure on a grand scale.”

There is no attempt to link this stagflation failure to the identification problems discussed earlier. Indeed, they go on to say that they recognise that particular empirical failures (by inference, like stagflation) might be solved by changes to particular equations within SEMs. Of course that is exactly what mainstream macroeconomics was doing at the time, with the expectations augmented Phillips curve.
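For readers who want the mechanics spelled out: the Friedman and Phelps amendment just mentioned can be written, in standard textbook notation rather than anything taken from LS themselves, roughly as follows.

```latex
% Expectations-augmented Phillips curve (standard textbook form):
%   inflation = expected inflation + a term in the unemployment gap
\pi_t = \pi_t^{e} + \beta \,(u^{*} - u_t), \qquad \beta > 0
% With adaptive expectations, e.g. \pi_t^{e} = \pi_{t-1}, holding
% unemployment below u^{*} raises inflation period after period,
% so the apparent trade-off between inflation and unemployment
% vanishes in the long run.
```

The point for the argument here is that once the expectations term is added, sustained inflationary policy no longer buys permanently low unemployment, which is exactly the change to a particular equation within a SEM that mainstream macroeconomics was making at the time.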

In the schema due to Lakatos, a failing mainstream theory may still be able to explain previously anomalous results, but only in such a contrived way that it makes the programme degenerate. Yet, as Jesse Zinn argues in this paper, the changes to the Phillips curve suggested by Friedman and Phelps appear progressive rather than degenerate. True, this innovation came from thinking about microeconomic theory, but innovations in SEMs had always come from a mixture of microeconomic theory and evidence. 

This is why LS go on to say: “We have couched our criticisms in such general terms precisely to emphasise their generic character and hence the futility of pursuing minor variations within this general framework.” The rest of the article is about how, given additions like a Lucas supply curve, classical ‘equilibrium’ analysis may be able to explain the ‘facts’ about output and unemployment that Keynes thought classical economics was incapable of doing. It is not about how these models are, or even might be, better able to explain the particular problem of stagflation than SEMs.

In their conclusion, LS summarise their argument. They say:

“First, and most important, existing Keynesian macroeconometric models are incapable of providing reliable guidance in formulating monetary, fiscal and other types of policy. This conclusion is based in part on the spectacular recent failures of these models, and in part on their lack of a sound theoretical or econometric basis.”

Reading the paper as a whole, I think it would be fair to say that these two parts were not equal. The focus of the paper is the lack of a sound theoretical or econometric basis for SEMs, rather than the failure to predict or explain stagflation. As I will argue in a subsequent post, it was this methodological critique, rather than any superior empirical ability, that led to the success of this manifesto.