
Tuesday, 20 September 2016

Paul Romer on macroeconomics

It is a great irony that the microfoundations project, which was meant to make macro just another application of microeconomics, has left macroeconomics with very few friends among other economists. The latest broadside comes from Paul Romer. Yes it is unfair, and yes it is wide of the mark in places, but it will not be ignored by those outside mainstream macro. This is partly because he discusses issues on which modern macro is extremely vulnerable.

The first is its treatment of data. Paul’s discussion of identification illustrates how macroeconomics needs to use all the hard information it can get to parameterise its models. Yet microfounded models, the only models deemed acceptable in top journals for both theoretical and empirical analysis, are normally rather selective about the data they focus on. Both micro and macro evidence is either ignored because it is inconvenient, or put on a to-do list for further research. This is an inevitable result of making internal consistency an admissibility criterion for publishable work.

The second vulnerability is a conservatism which also arises from this methodology. The microfoundations criterion, taken in its strict form, makes some processes intractable to model: modelling sticky prices with actual menu costs as the deep parameter, for example. Instead DSGE modelling uses tricks, like Calvo contracts. But who decides whether these tricks amount to acceptable microfoundations or are instead ad hoc or implausible? The answer depends a lot on conventions among macroeconomists, and like all conventions these move slowly. Again this is a problem generated by the microfoundations methodology.
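For those who have not met the trick: under Calvo contracts each firm is allowed to reset its price in any given period with some fixed probability 1 − θ, regardless of how long its price has been stuck or how far it is from the optimum. Aggregating the resulting pricing decisions gives the familiar New Keynesian Phillips curve

π_t = β E_t π_{t+1} + κ x_t

where x_t is the output gap and the slope κ is decreasing in θ. (That is the standard textbook sketch, not anything specific to Romer’s piece.) The probability θ stands in for whatever menu costs and other frictions keep prices sticky, but it is hard to argue that the chance of being allowed to change your price is a deep parameter in any meaningful sense.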

Paul’s discussion of real effects from monetary policy, and the insistence on productivity shocks as business cycle drivers, is pretty dated. (And, as a result, it completely misleads Paul Mason here.) Yet it took a long time for RBC models to be replaced by New Keynesian models, and you will still see RBC models around. Elements of the New Classical counter revolution of the 1980s still persist in some places. It was only a few years ago that I listened to a seminar paper where the financial crisis was modelled as a large negative productivity shock.

Only in a discipline which has deemed microfoundations the only acceptable way of modelling can practitioners still feel embarrassed about including sticky prices because their microfoundations (the tricks mentioned above) are problematic. Only in that discipline can respected macroeconomists argue that, because of these problematic microfoundations, it is best to ignore something like sticky prices when doing policy work: an argument that would be laughed out of court in any other science. In no other discipline could you have a debate about whether it was better to model what you can microfound rather than model what you can see. Other economists understand this, but many macroeconomists still think this is all quite normal.

Wednesday, 19 August 2015

Reform and revolution in macroeconomics

Mainly for economists

Paul Romer has a few recent posts (start here, most recent here) where he tries to examine why the saltwater/freshwater divide in macroeconomics happened. A theme is that this cannot all be put down to New Classical economists wanting a revolution, and that a defensive/dismissive attitude from the traditional Keynesian status quo also had a lot to do with it.

I will leave others to discuss what Solow said or intended (see for example Robert Waldmann). However I have no doubt that many among the then Keynesian status quo did react in a defensive and dismissive way. They were, after all, on incredibly weak ground. That ground was not large econometric macromodels, but one single equation: the traditional Phillips curve. This had inflation at time t depending on expectations of inflation at time t, and the deviation of unemployment/output from its natural rate. Add rational expectations to that and you show that deviations from the natural rate are random, and Keynesian economics becomes irrelevant. As a result, too many Keynesian macroeconomists saw rational expectations (and therefore all things New Classical) as an existential threat, and reacted to that threat by attempting to rubbish rational expectations, rather than questioning the traditional Phillips curve. As a result, the status quo lost. [1]
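To see just how weak, here is the argument in two lines (the notation is mine). Write the traditional Phillips curve as

π_t = E_{t-1}π_t + α(y_t − y*_t) + ε_t,   α > 0,

where E_{t-1}π_t is the expectation of period t inflation formed beforehand and ε_t is an unforecastable shock. If that expectation is rational it equals the true mathematical expectation of π_t, so taking expectations of both sides leaves E_{t-1}(y_t − y*_t) = 0: the output gap is just unforecastable noise, which no systematic (and therefore anticipated) policy can affect. That is the precise sense in which Keynesian economics ‘becomes irrelevant’.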

We now know this defeat was temporary, because New Keynesians came along with their version of the Phillips curve and we got a new ‘synthesis’. But that took time, and you can describe what happened in the time in between in two ways. You could say that the New Classicals always had the goal of overthrowing (rather than improving) Keynesian economics, thought that they had succeeded, and simply ignored New Keynesian economics as a result. Or you could say that the initially unyielding reaction of traditional Keynesians created an adversarial way of doing things whose persistence Paul both deplores and is trying to explain. (I have no particular expertise on which story is nearer the truth. I went with the first in this post, but I’m happy to be persuaded by Paul and others that I was wrong.) In either case the idea is that if there had been more reform rather than revolution, things might have gone better for macroeconomics.

The point I want to discuss here is not about Keynesian economics, but about something even more fundamental: how evidence is treated in macroeconomics. You can think of the New Classical counter revolution as having two strands. The first involves Keynesian economics, and is the one everyone likes to talk about. But the second was perhaps even more important, at least for how academic macroeconomics is done. This was the microfoundations revolution, which brought us first RBC models and then DSGE models. As Paul writes:

“Lucas and Sargent were right in 1978 when they said that there was something wrong, fatally wrong, with large macro simulation models. Academic work on these models collapsed.”

The question I want to raise is whether for this strand as well, reform rather than revolution might have been better for macroeconomics.

Two points on the quote above from Paul. First, of course not many academics worked directly on large macro simulation models at the time, but what a large number did do was either time series econometric work on individual equations that could be fed into these models, or analysis of small aggregate models whose equations were not microfounded, but instead justified by an eclectic mix of theory and empirics. That work within academia did largely come to a halt, and was replaced by microfounded modelling.

Second, Lucas and Sargent’s critique was fatal in terms of what academics subsequently did (and how they regarded these econometric simulation models), although they got a lot of help from Sims (1980). But it was not fatal in a more general sense. As Brad DeLong points out, these econometric simulation models survived in both the private and public sectors (in the US Fed, for example, or the UK OBR). In the UK they survived within the academic sector until the late 1990s, when academics helped kill them off.

I am not suggesting for one minute that these models are an adequate substitute for DSGE modelling. There is no doubt in my mind that DSGE modelling is a good way of doing macro theory, and I have learnt a lot from doing it myself. It is also obvious that there was a lot wrong with large econometric models in the 1970s. My question is whether it was right for academics to reject them completely and, much more importantly, to abandon the econometric work they once did that fed into them.

It is hard to get academic macroeconomists trained since the 1980s to address this question, because they have been taught that these models and techniques are fatally flawed because of the Lucas critique and identification problems. But DSGE models as a guide for policy are also fatally flawed because they are too simple. The unique property that DSGE models have is internal consistency. Take a DSGE model, and alter a few equations so that they fit the data much better, and you have what could be called a structural econometric model. It is internally inconsistent, but because it fits the data better it may be a better guide for policy.

What happened in the UK in the 1980s and 1990s is that structural econometric models evolved to minimise Lucas critique problems by incorporating rational expectations (and other New Classical ideas as well), and time series econometrics improved to deal with identification issues. If you like, you can say that structural econometric models became more like DSGE models, but where internal consistency was sacrificed when it proved clearly incompatible with the data.

These points are very difficult to get across to those brought up to believe that structural econometric models of the old-fashioned kind are obsolete, and fatally flawed in a more fundamental sense. You will often be told that to forecast you can either use a DSGE model or some kind of (virtually) atheoretical VAR, or that policymakers have no alternative when doing policy analysis but to use a DSGE model. Both statements are simply wrong.

There is a deep irony here. At a time when academics doing other kinds of economics have done less theory and become more empirical, macroeconomics has gone in the opposite direction, adopting wholesale a methodology that prioritised the internal theoretical consistency of models above their ability to track the data. An alternative - where DSGE modelling informed and was informed by more traditional ways of doing macroeconomics - was possible, but the New Classical and microfoundations revolution cast that possibility aside.

Did this matter? Were there costs to this strand of the New Classical revolution?

Here is one answer. While it is nonsense to suggest that DSGE models cannot incorporate the financial sector or a financial crisis, academics tend to avoid addressing why at least some of the multitude of work now going on did not occur before the financial crisis. It is sometimes suggested that before the crisis there was no cause to do so. This is not true. Take consumption for example. Looking at the (non-filtered) time series for UK and US consumption, it is difficult to avoid attaching significant importance to the gradual evolution of credit conditions over the last two or three decades (see the references to work by Carroll and Muellbauer I give in this post). If this kind of work had received greater attention (attention that structural econometric modellers would almost certainly have given it), that would have focused minds on why credit conditions changed, which in turn would have addressed issues involving the interaction between the real and financial sectors. If that had been done, macroeconomics might have been better prepared to examine the impact of the financial crisis.

It is not just in the case of Keynesian economics that reform, rather than revolution, might have been the more productive response to Lucas and Sargent (1979).


[1] The point is not whether expectations are generally rational or not. It is that any business cycle theory that depends on irrational inflation expectations appears improbable. Do we really believe business cycles would disappear if only inflation expectations were rational? PhDs of the 1970s and 1980s understood that, which is why most of them rejected the traditional Keynesian position. Also, as Paul Krugman points out, many Keynesian economists were happy to incorporate New Classical ideas. 

Friday, 11 July 2014

Rereading Lucas and Sargent 1979

Mainly for macroeconomists and those interested in macroeconomic thought

Following this little interchange (me, Mark Thoma, Paul Krugman, Noah Smith, Robert Waldmann, Arnold Kling), I reread what could be regarded as the New Classical manifesto: Lucas and Sargent’s ‘After Keynesian Macroeconomics’ (hereafter LS). It deserves to be cited as a classic, both for the quality of ideas and the persuasiveness of the writing. It does not seem like something written 35 years ago, which is perhaps an indication of how influential its ideas still are.

What I want to explore is whether this manifesto for the New Classical counter revolution was mainly about stagflation, or whether it was mainly about methodology. LS kick off their article with references to stagflation and the failure of Keynesian theory. A fundamental rethink is required. What follows next is I think crucial. If the counter revolution is all about stagflation, we might expect an account of why conventional theory failed to predict stagflation - the equivalent, perhaps, to the discussion of classical theory in the General Theory. Instead we get something much more general - a discussion of why identification restrictions typically imposed in the structural econometric models (SEMs) of the time are incredible from a theoretical point of view, and an outline of the Lucas critique.
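To give a flavour of the identification point, here is a deliberately stripped-down example (mine, not theirs). Suppose the data are generated by a demand curve q_t = −β p_t + γ x_t + u_t and a supply curve q_t = δ p_t + v_t. From equilibrium observations on prices and quantities alone you cannot recover β or δ: you can only do so if you can credibly claim that some variable, like x_t here, shifts one curve but is excluded from the other. The large SEMs of the time achieved identification by imposing a great many exclusion restrictions of exactly this kind, and the LS argument, roughly, was that theory gave little reason to believe most of them, particularly once expectations (which in principle depend on anything useful for forecasting) enter every behavioural equation.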

In other words, the essential criticism in LS is methodological: the way empirical macroeconomics has been done since Keynes is flawed. SEMs cannot be trusted as a guide for policy. In only one paragraph do LS try to link this general critique to stagflation:

“Though not, of course, designed as such by anyone, macroeconometric models were subjected to a decisive test in the 1970s. A key element in all Keynesian models is a trade-off between inflation and real output: the higher is the inflation rate, the higher is output (or equivalently, the lower is the rate of unemployment). For example, the models of the late 1960s predicted a sustained U.S. unemployment rate of 4% as consistent with a 4% annual rate of inflation. Based on this prediction, many economists at that time urged a deliberate policy of inflation. Certainly the erratic ‘fits and starts’ character of actual U.S. policy in the 1970s cannot be attributed to recommendations based on Keynesian models, but the inflationary bias on average of monetary and fiscal policy in this period should, according to all of these models, have produced the lowest unemployment rates for any decade since the 1940s. In fact, as we know, they produced the highest unemployment rates since the 1930s. This was econometric failure on a grand scale.”

There is no attempt to link this stagflation failure to the identification problems discussed earlier. Indeed, they go on to say that they recognise that particular empirical failures (by inference, like stagflation) might be solved by changes to particular equations within SEMs. Of course that is exactly what mainstream macroeconomics was doing at the time, with the expectations-augmented Phillips curve.

In the schema due to Lakatos, a failing mainstream theory may still be able to explain previously anomalous results, but only in such a contrived way that it makes the programme degenerate. Yet, as Jesse Zinn argues in this paper, the changes to the Phillips curve suggested by Friedman and Phelps appear progressive rather than degenerate. True, this innovation came from thinking about microeconomic theory, but innovations in SEMs had always come from a mixture of microeconomic theory and evidence. 

This is why LS go on to say: “We have couched our criticisms in such general terms precisely to emphasise their generic character and hence the futility of pursuing minor variations within this general framework.” The rest of the article is about how, given additions like a Lucas supply curve, classical ‘equilibrium’ analysis may be able to explain the ‘facts’ about output and unemployment that Keynes thought classical economics was incapable of doing. It is not about how these models are, or even might be, better able to explain the particular problem of stagflation than SEMs.

In their conclusion, LS summarise their argument. They say:

“First, and most important, existing Keynesian macroeconometric models are incapable of providing reliable guidance in formulating monetary, fiscal and other types of policy. This conclusion is based in part on the spectacular recent failures of these models, and in part on their lack of a sound theoretical or econometric basis.”

Reading the paper as a whole, I think it would be fair to say that these two parts were not given equal weight. The focus of the paper is the lack of a sound theoretical or econometric basis for SEMs, rather than the failure to predict or explain stagflation. As I will argue in a subsequent post, it was this methodological critique, rather than any superior empirical ability, that led to the success of this manifesto.