
Thursday, 27 August 2015

The day macroeconomics changed

It is of course ludicrous, but who cares. The day of the Boston Fed conference in 1978 is fast taking on a symbolic significance. It is the day that Lucas and Sargent changed how macroeconomics was done. Or, if you are Paul Romer, it is the day that the old guard spurned the ideas of the newcomers, and ensured we had a New Classical revolution in macro rather than a New Classical evolution. Or if you are Ray Fair (HT Mark Thoma), who was at the conference, it is the day that macroeconomics started to go wrong.

Ray Fair is a bit of a hero of mine. When I left the National Institute to become a formal academic, I had the goal (with the essential help of two excellent and courageous colleagues) of constructing a new econometric model of the UK economy, which would incorporate the latest theory: in essence, it would be New Keynesian, but with additional features like allowing variable credit conditions to influence consumption. Unlike a DSGE it would as far as possible involve econometric estimation. I had previously worked with the Treasury’s model, and then set up what is now NIGEM at the National Institute by adapting a global model used by the Treasury, and finally I had been in charge of developing the Institute’s domestic model. But creating a new model from scratch within two years was something else, and although the academics on the ESRC board gave me the money to do it, I could sense that some of them thought it could not be done. In believing (correctly) that it could, Ray Fair was one of the people who inspired me.

I agree with Ray Fair that what he calls Cowles Commission (CC) type models, and I call Structural Econometric Model (SEM) type models, together with the single equation econometric estimation that lies behind them, still have a lot to offer, and that academic macro should not have turned its back on them. Having spent the last fifteen years working with DSGE models, I am more positive about their role than Fair is. Unlike Fair, I want “more bells and whistles on DSGE models”. I also disagree about rational expectations: the UK model I built had rational expectations in all the key relationships.

Three years ago, when Andy Haldane suggested that DSGE models were partly to blame for the financial crisis, I wrote a post that was critical of Haldane. What I thought then, and continue to believe, is that the Bank had the information and resources to know what was happening to bank leverage, and it should not have used DSGE models as an excuse for not being more public about its concerns at the time.

However, if we broaden this out from the Bank to the wider academic community, I think he has a legitimate point. I have talked before about the work that Carroll and Muellbauer have done which shows that you have to think about credit conditions if you want to explain the pre-crisis time series for UK or US consumption. DSGE models could avoid this problem, but more traditional structural econometric (aka CC) models would find it harder to do so. So perhaps if academic macro had given greater priority to explaining these time series, it would have been better prepared for understanding the impact of the financial crisis.
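
To give a rough sense of what that involves (the notation here is mine, a sketch rather than Carroll and Muellbauer’s actual specification), an aggregate consumption function of this kind might look something like

$$\ln C_t = \alpha_0 + \alpha_1 \ln Y_t + \alpha_2 \frac{A_t}{Y_t} + \alpha_3 CC_t + \varepsilon_t$$

where $C_t$ is consumption, $Y_t$ labour income, $A_t$ household wealth and $CC_t$ an index of credit conditions. If $\alpha_3$ matters empirically, as their work suggests, a model that leaves $CC_t$ out will misread both the pre-crisis consumption boom and what happened when credit conditions tightened.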

What about the claim that only internally consistent DSGE models can give reliable policy advice? For another project, I have been rereading an AEJ Macro paper written in 2008 by Chari et al, where they argue that New Keynesian models are not yet useful for policy analysis because they are not properly microfounded. They write “One tradition, which we prefer, is to keep the model very simple, keep the number of parameters small and well-motivated by micro facts, and put up with the reality that such a model neither can nor should fit most aspects of the data. Such a model can still be very useful in clarifying how to think about policy.” That is where you end up if you take a purist view about internal consistency, the Lucas critique and all that. It in essence amounts to the following approach: if I cannot understand something, it is best to assume it does not exist.


Friday, 11 July 2014

Rereading Lucas and Sargent 1979

Mainly for macroeconomists and those interested in macroeconomic thought

Following this little interchange (me, Mark Thoma, Paul Krugman, Noah Smith, Robert Waldman, Arnold Kling), I reread what could be regarded as the New Classical manifesto: Lucas and Sargent’s ‘After Keynesian Macroeconomics’ (hereafter LS). It deserves to be cited as a classic, both for the quality of ideas and the persuasiveness of the writing. It does not seem like something written 35 years ago, which is perhaps an indication of how influential its ideas still are.

What I want to explore is whether this manifesto for the New Classical counter revolution was mainly about stagflation, or whether it was mainly about methodology. LS kick off their article with references to stagflation and the failure of Keynesian theory. A fundamental rethink is required. What follows next is I think crucial. If the counter revolution is all about stagflation, we might expect an account of why conventional theory failed to predict stagflation - the equivalent, perhaps, to the discussion of classical theory in the General Theory. Instead we get something much more general - a discussion of why identification restrictions typically imposed in the structural econometric models (SEMs) of the time are incredible from a theoretical point of view, and an outline of the Lucas critique.
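
To illustrate the Lucas critique with a deliberately stylised example (my notation, not anything in LS): suppose output responds to unanticipated money, and money follows a policy rule,

$$y_t = \theta\,(m_t - E_{t-1} m_t) + \varepsilon_t, \qquad m_t = \psi\, y_{t-1} + \eta_t.$$

Substituting the rule into the first equation gives $y_t = \theta m_t - \theta\psi\, y_{t-1} + \varepsilon_t$, so the coefficient an econometrician estimates on lagged output is a mixture of a structural parameter ($\theta$) and a policy parameter ($\psi$). Change the policy rule and that estimated coefficient changes too, which is why equations estimated under the old regime cannot be trusted to evaluate a new one.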

In other words, the essential criticism in LS is methodological: the way empirical macroeconomics has been done since Keynes is flawed. SEMs cannot be trusted as a guide for policy. In only one paragraph do LS try to link this general critique to stagflation:

“Though not, of course, designed as such by anyone, macroeconometric models were subjected to a decisive test in the 1970s. A key element in all Keynesian models is a trade-off between inflation and real output: the higher is the inflation rate, the higher is output (or equivalently, the lower is the rate of unemployment). For example, the models of the late 1960s predicted a sustained U.S. unemployment rate of 4% as consistent with a 4% annual rate of inflation. Based on this prediction, many economists at that time urged a deliberate policy of inflation. Certainly the erratic ‘fits and starts’ character of actual U.S. policy in the 1970s cannot be attributed to recommendations based on Keynesian models, but the inflationary bias on average of monetary and fiscal policy in this period should, according to all of these models, have produced the lowest unemployment rates for any decade since the 1940s. In fact, as we know, they produced the highest unemployment rates since the 1930s. This was econometric failure on a grand scale.”

There is no attempt to link this stagflation failure to the identification problems discussed earlier. Indeed, they go on to say that they recognise that particular empirical failures (by inference, like stagflation) might be solved by changes to particular equations within SEMs. Of course that is exactly what mainstream macroeconomics was doing at the time, with the expectations augmented Phillips curve.
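
Schematically (my shorthand rather than anything in LS), the 1960s-vintage trade-off and the Friedman-Phelps amendment are

$$\pi_t = f(u_t) \quad\text{versus}\quad \pi_t = E\pi_t + f(u_t - u^*).$$

In the amended version an attempt to hold unemployment below the natural rate $u^*$ generates ever-rising inflation rather than a stable trade-off, and the amendment is a change to one equation of the SEM, not a rejection of the framework.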

In the schema due to Lakatos, a failing mainstream theory may still be able to explain previously anomalous results, but only in such a contrived way that it makes the programme degenerate. Yet, as Jesse Zinn argues in this paper, the changes to the Phillips curve suggested by Friedman and Phelps appear progressive rather than degenerate. True, this innovation came from thinking about microeconomic theory, but innovations in SEMs had always come from a mixture of microeconomic theory and evidence. 

This is why LS go on to say: “We have couched our criticisms in such general terms precisely to emphasise their generic character and hence the futility of pursuing minor variations within this general framework.” The rest of the article is about how, given additions like a Lucas supply curve, classical ‘equilibrium’ analysis may be able to explain the ‘facts’ about output and unemployment that Keynes thought classical economics was incapable of doing. It is not about how these models are, or even might be, better able to explain the particular problem of stagflation than SEMs.

In their conclusion, LS summarise their argument. They say:

“First, and most important, existing Keynesian macroeconometric models are incapable of providing reliable guidance in formulating monetary, fiscal and other types of policy. This conclusion is based in part on the spectacular recent failures of these models, and in part on their lack of a sound theoretical or econometric basis.”

Reading the paper as a whole, I think it would be fair to say that these two parts were not equal. The focus of the paper is about the lack of a sound theoretical or econometric basis for SEMs, rather than the failure to predict or explain stagflation. As I will argue in a subsequent post, it was this methodological critique, rather than any superior empirical ability, that led to the success of this manifesto.



Sunday, 16 December 2012

Mistaking models for reality

In a recent post, Paul Krugman used a well known Tobin quote: it takes a lot of Harberger triangles to fill an Okun gap. For non-economists, this means that the social welfare costs of resource misallocations because prices are ‘wrong’ (because of monopoly, taxation etc) are small compared to the costs of recessions. Stephen Williamson takes issue with this idea. His argument can be roughly summarised as follows:

1) Keynesian recessions arise because prices are sticky, and therefore 'wrong', so their costs are not fundamentally different from resource misallocation costs.

2) Models of price stickiness exaggerate these costs, because their microfoundations are dubious.

3) If the welfare costs of price stickiness were significant, why are they not arbitraged away?

I’ve heard these arguments, or variations on them, many times before.[1] So let’s see why they are mistaken, taking the points in roughly reverse order.

Keynesian recessions arise because of deficient demand. If you want to think of this as being because some price is wrong, in my view that price is the real interest rate. Now flexible wages and prices might get you the right real interest rate, either because they encourage monetary policy to do the right thing (by changing inflation), or because a particular monetary policy combines with inflation expectations to generate the appropriate real interest rate. However when nominal interest rates hit zero and there are inflation targets, flexible prices may not be enough (as argued here), so there may be no flexible price solution that gets rid of the costs of recession. At the very least, that suggests that recessions are a bit different from, say, the costs of monopoly or distortionary taxation. It also tells you why they cannot be arbitraged away by the actions of individuals. (See also Nick Rowe on this.)
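
To spell out the real interest rate point a little (a sketch of the argument rather than a full model): the real rate is the nominal rate less expected inflation,

$$r_t = i_t - E_t\pi_{t+1},$$

so with the nominal rate stuck at zero and inflation expectations anchored near the target $\pi^*$, the real rate cannot fall much below $-\pi^*$. If restoring demand requires a real rate lower than that, no amount of wage and price flexibility will deliver it.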

What we have in a recession is a coordination problem. If everyone were to spend more, the additional output would generate incomes that matched the spending. If monetary policy cannot induce that coordination, then individuals could try and persuade someone with a great deal of spending power who could borrow freely and very cheaply to embark on additional spending. The obvious someone is the government, and the real puzzle is why governments have been so reluctant to arbitrage away recessions in this way.  

The second point is horribly wrong, and it explains the title of this post. The problem with modelling price rigidity is that there are too many plausible reasons for this rigidity - too many microfoundations. (Alan Blinder’s work is a classic reference here.) Microfounded models typically choose one for tractability. It is generally possible to pick holes in any particular tractable story behind price rigidity (like Calvo contracts). But it does not follow that these models of Keynesian business cycles exaggerate the size of recessions. It seems much more plausible to argue completely the opposite: because microfounded models typically only look at one source of nominal rigidity, they underestimate its extent and costs.
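
For non-economists, the Calvo device works roughly as follows: each period a firm gets to reset its price only with some fixed probability $1-\theta$. In the textbook New Keynesian case this delivers a Phillips curve of the form

$$\pi_t = \beta E_t \pi_{t+1} + \kappa x_t,$$

where the slope $\kappa$ falls as $\theta$ (the degree of price stickiness) rises. This is one tractable story among the several that Blinder’s survey evidence suggests operate in practice, which is exactly why a model built around a single mechanism is more likely to understate nominal rigidity than to exaggerate it.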

I could make the same point in a slightly different way. Let’s suppose that we do not fully understand what causes recessions. What we do understand, in the simple models we use, accounts for small recessions, but not large ones. Therefore, large recessions cannot exist. The logic is obviously faulty, but too many economists argue this way. There appears to be a danger in only ‘modelling what we understand’ that modellers can go on to confuse models with reality.

Let’s move from wage and price stickiness to the major cost of recessions: unemployment. The way that this is modelled in most New Keynesian set-ups based on representative agents is that workers cannot supply as many hours as they want. In that case, workers suffer the cost of lower incomes, but at least they get the benefit of more leisure. Here is a triangle, maybe (see Nick Rowe again). Now this grossly underestimates the cost of recessions. One reason is heterogeneity: many workers carry on working the same number of hours in a recession, but some become unemployed. Standard consumer theory tells us this generates larger aggregate costs, and with more complicated models this can be quantified. However the more important reason, which follows from heterogeneity, is that the long term unemployed typically do not console themselves with the thought that at least they have more leisure time, and so are not so badly off. Instead they feel rejected, inadequate, despairing, and it scars them for life. Now that may not be in the microfounded models, but that does not make these feelings disappear, and certainly does not mean they should be ignored.
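
A toy example of the heterogeneity point (an illustration, not a calibration): suppose a recession cuts aggregate labour income by 5%. With log utility, spreading the loss evenly costs each worker $\ln(0.95) \approx -0.051$ in utility terms. If instead 90% of workers are untouched and 10% lose half their income, aggregate income falls by the same 5% but average utility falls by $0.1 \times \ln(0.5) \approx -0.069$. Concave utility means concentrated losses hurt more, and that is before we get to the psychological costs that no utility function in these models captures.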

It is for this reason that I have always had mixed feelings about representative agent models that measure the costs of recessions and inflation in terms of the agent’s utility.[2] In terms of modelling it has allowed business cycle costs to be measured using the same metric as the costs of distortionary taxation and under/over provision of public goods, which has been great for examining issues involving fiscal policy, for example. Much of my own research over the last decade has used this device. But it does ignore the more important reasons why we should care about recessions. Which is perhaps OK, as long as we remember this. The moment we actually think we are capturing the costs of recessions using our models in this way, we once again confuse models with reality.




[1] A classic example comes from Robert Lucas. This includes the rather unfortunate statement that the “central problem of depression prevention has been solved”, but I don’t think that should be used as evidence against the more substantive claim of the paper, which is that the gains from stabilising the business cycle are relatively small. This assertion has been criticised even if we stick with New Keynesian representative agent models (see this paper by Canzoneri, Cumby and Diba), but the problems I outline below are more fundamental.

[2] For non-economists: twenty years ago most Keynesian analysis measured the success of policy (social welfare) by how well it stabilised inflation and the output gap, and the relative importance of inflation compared to output was a ‘choice for policy makers’. Since work by Michael Woodford, a similar measure of social welfare can be derived from the utility of individual agents, often using pages of maths, but the importance of output compared to inflation is then a function of this utility and the model’s structure and parameters.
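
In rough terms (standard textbook notation, not tied to any particular paper), the older approach specified a loss function like

$$L_t = \pi_t^2 + \lambda x_t^2,$$

with the weight $\lambda$ on the output gap chosen by the policy maker, whereas the Woodford-style derivation obtains the same quadratic form as an approximation to household utility, with $\lambda$ pinned down by the model’s parameters (in the basic New Keynesian model it depends on the slope of the Phillips curve and the elasticity of substitution between goods).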