Winner of the New Statesman SPERI Prize in Political Economy 2016



Friday, 12 August 2016

Blanchard on DSGE

Olivier Blanchard, former director of the IMF’s research department, has written a short critical piece about DSGE models. Forget all the econblog reaction that essentially says he has been too kind: DSGE completely dominates academic macroeconomics, and there is no way that all these academics are going to suddenly decide this research programme is a waste of time. (I happen to think Blanchard is right that it isn’t a waste of time.) What is at issue is not the existence of DSGE models, but their hegemony.

One of Blanchard’s recommendations is that DSGE “has to become less imperialistic. Or, perhaps more fairly, the profession (and again, this is a note to the editors of the major journals) must realize that different model types are needed for different tasks.” The most important part of that sentence is the bit in brackets. He talks about a distinction between fully microfounded models and ‘policy models’. The latter used to be called Structural Econometric Models (SEMs), and they are the type of model that Lucas and Sargent famously attacked.

These SEMs have survived as the core model used in many important policy institutions (the Bank of England excepted) for good reason, but DSGE-trained academics have followed Lucas and Sargent in viewing them as not ‘proper macroeconomics’. Their reasoning is simply wrong, as I discuss here. As Blanchard notes, it is the editors of top journals who need to realise this, and stop insisting that all aggregate models have to be microfounded. The moment they allow space for eclecticism, academics will be able to choose which methods they use.

Blanchard has one other ‘note for editors’ remark, and it also gets to the heart of the problem with today’s macroeconomics. He writes “Not every discussion of a new mechanism should be required to come with a complete general equilibrium closure.” The example he discusses, and which I have also used in this context, is consumption. DSGE modellers have of course often departed from the simple Euler equation, but I suspect the ways they have done this (rule of thumb consumers, habits) reflect analytical convenience rather than realism.

What sometimes seems to be missing in macro nowadays is a connection between people working on partial equilibrium analysis (like consumption) and general equilibrium modellers. Top journal editors’ preference for the latter means that the former is less highly valued. In my view this has already had important costs. I have argued that the failure to take seriously the strong evidence about the importance of changes in credit availability for consumption played an important part in the inability of macroeconomics to adequately model the response to the financial crisis (for more discussion see here and here). Even if you do not accept that, the failure of most DSGE models to include any kind of precautionary saving behaviour does not seem right when DSGE has a monopoly on ‘proper modelling’. [1]

Criticism of the DSGE hegemony from those outside economics, from macroeconomists who are not part of it, or even from economic policymakers has so far had little impact on those all-important journal editors. Perhaps similar comments from one of the best macroeconomists in the world might.

[1] I discuss the reasons why this may have occurred in relation to Chris Carroll’s work here.

Thursday, 27 August 2015

The day macroeconomics changed

It is of course ludicrous, but who cares. The day of the Boston Fed conference in 1978 is fast taking on a symbolic significance. It is the day that Lucas and Sargent changed how macroeconomics was done. Or, if you are Paul Romer, it is the day that the old guard spurned the ideas of the newcomers, and ensured we had a New Classical revolution in macro rather than a New Classical evolution. Or if you are Ray Fair (HT Mark Thoma), who was at the conference, it is the day that macroeconomics started to go wrong.

Ray Fair is a bit of a hero of mine. When I left the National Institute to become a formal academic, I had the goal (with the essential help of two excellent and courageous colleagues) of constructing a new econometric model of the UK economy, which would incorporate the latest theory: in essence, it would be New Keynesian, but with additional features like allowing variable credit conditions to influence consumption. Unlike a DSGE it would as far as possible involve econometric estimation. I had previously worked with the Treasury’s model, and then set up what is now NIGEM at the National Institute by adapting a global model used by the Treasury, and finally I had been in charge of developing the Institute’s domestic model. But creating a new model from scratch within two years was something else, and although the academics on the ESRC board gave me the money to do it, I could sense that some of them thought it could not be done. In believing (correctly) that it could, Ray Fair was one of the people who inspired me.

I agree with Ray Fair that what he calls Cowles Commission (CC) type models, and I call Structural Econometric Models (SEMs), together with the single equation econometric estimation that lies behind them, still have a lot to offer, and that academic macro should not have turned its back on them. Having spent the last fifteen years working with DSGE models, I am more positive about their role than Fair is. Unlike Fair, I want “more bells and whistles on DSGE models”. I also disagree about rational expectations: the UK model I built had rational expectations in all the key relationships.

Three years ago, when Andy Haldane suggested that DSGE models were partly to blame for the financial crisis, I wrote a post that was critical of Haldane. What I thought then, and continue to believe, is that the Bank had the information and resources to know what was happening to bank leverage, and it should not use DSGE models as an excuse for not having been more public about its concerns at the time.

However, if we broaden this out from the Bank to the wider academic community, I think he has a legitimate point. I have talked before about the work that Carroll and Muellbauer have done which shows that you have to think about credit conditions if you want to explain the pre-crisis time series for UK or US consumption. DSGE models could avoid this problem, but more traditional structural econometric (aka CC) models would find it harder to do so. So perhaps if academic macro had given greater priority to explaining these time series, it would have been better prepared for understanding the impact of the financial crisis.

What about the claim that only internally consistent DSGE models can give reliable policy advice? For another project, I have been rereading an AEJ Macro paper written in 2008 by Chari et al, where they argue that New Keynesian models are not yet useful for policy analysis because they are not properly microfounded. They write “One tradition, which we prefer, is to keep the model very simple, keep the number of parameters small and well-motivated by micro facts, and put up with the reality that such a model neither can nor should fit most aspects of the data. Such a model can still be very useful in clarifying how to think about policy.” That is where you end up if you take a purist view about internal consistency, the Lucas critique and all that. It in essence amounts to the following approach: if I cannot understand something, it is best to assume it does not exist.


Wednesday, 19 August 2015

Reform and revolution in macroeconomics

Mainly for economists

Paul Romer has a few recent posts (start here, most recent here) where he tries to examine why the saltwater/freshwater divide in macroeconomics happened. A theme is that this cannot all be put down to New Classical economists wanting a revolution, and that a defensive/dismissive attitude from the traditional Keynesian status quo also had a lot to do with it.

I will leave others to discuss what Solow said or intended (see for example Robert Waldmann). However I have no doubt that many among the then Keynesian status quo did react in a defensive and dismissive way. They were, after all, on incredibly weak ground. That ground was not large econometric macromodels, but one single equation: the traditional Phillips curve. This had inflation at time t depending on expectations of inflation at time t, and the deviation of unemployment/output from its natural rate. Add rational expectations to that and you show that deviations from the natural rate are random, and Keynesian economics becomes irrelevant. As a result, too many Keynesian macroeconomists saw rational expectations (and therefore all things New Classical) as an existential threat, and reacted to that threat by attempting to rubbish rational expectations, rather than questioning the traditional Phillips curve. As a result, the status quo lost. [1]
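To see why the ground was so weak, write the traditional Phillips curve in the form just described (the notation is mine and purely illustrative, with y_t output, y_t^n its natural level and \alpha > 0):

\pi_t = \pi_t^e + \alpha\,(y_t - y_t^n) + \varepsilon_t .

With backward-looking expectations this leaves plenty of room for demand management. But impose rational expectations, \pi_t^e = E_{t-1}\pi_t, and take expectations of both sides: the inflation terms cancel, leaving E_{t-1}(y_t - y_t^n) = 0. Deviations from the natural rate are then just unforecastable noise, which is exactly the existential threat described above.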

We now know this defeat was temporary, because New Keynesians came along with their version of the Phillips curve and we got a new ‘synthesis’. But that took time, and you can describe what happened in the time in between in two ways. You could say that the New Classicals always had the goal of overthrowing (rather than improving) Keynesian economics, thought that they had succeeded, and simply ignored New Keynesian economics as a result. Or you could say that the initially unyielding reaction of traditional Keynesians created an adversarial way of doing things whose persistence Paul both deplores and is trying to explain. (I have no particular expertise on which story is nearer the truth. I went with the first in this post, but I’m happy to be persuaded by Paul and others that I was wrong.) In either case the idea is that if there had been more reform rather than revolution, things might have gone better for macroeconomics.

The point I want to discuss here is not about Keynesian economics, but about even more fundamental things: how evidence is treated in macroeconomics. You can think of the New Classical counter revolution as having two strands. The first involves Keynesian economics, and is the one everyone likes to talk about. But the second was perhaps even more important, at least to how academic macroeconomics is done. This was the microfoundations revolution, that brought us first RBC models and then DSGE models. As Paul writes:

“Lucas and Sargent were right in 1978 when they said that there was something wrong, fatally wrong, with large macro simulation models. Academic work on these models collapsed.”

The question I want to raise is whether for this strand as well, reform rather than revolution might have been better for macroeconomics.

First, two points on the quote above from Paul. Of course not many academics worked directly on large macro simulation models at the time, but what a large number did do was either time series econometric work on individual equations that could be fed into these models, or analysis of small aggregate models whose equations were not microfounded, but instead justified by an eclectic mix of theory and empirics. That work within academia did largely come to a halt, and was replaced by microfounded modelling.

Second, Lucas and Sargent’s critique was fatal in terms of what academics subsequently did (and how they regarded these econometric simulation models), although they got a lot of help from Sims (1980). But it was not fatal in a more general sense. As Brad DeLong points out, these econometric simulation models survived in both the private and public sectors (in the US Fed, for example, or the UK OBR). In the UK they survived within the academic sector until the late 1990s, when academics helped kill them off.

I am not suggesting for one minute that these models are an adequate substitute for DSGE modelling. There is no doubt in my mind that DSGE modelling is a good way of doing macro theory, and I have learnt a lot from doing it myself. It is also obvious that there was a lot wrong with large econometric models in the 1970s. My question is whether it was right for academics to reject them completely and, much more importantly, to abandon the econometric work they once did that fed into them.

It is hard to get academic macroeconomists trained since the 1980s to address this question, because they have been taught that these models and techniques are fatally flawed because of the Lucas critique and identification problems. But DSGE models as a guide for policy are also fatally flawed because they are too simple. The unique property that DSGE models have is internal consistency. Take a DSGE model, and alter a few equations so that they fit the data much better, and you have what could be called a structural econometric model. It is internally inconsistent, but because it fits the data better it may be a better guide for policy.

What happened in the UK in the 1980s and 1990s is that structural econometric models evolved to minimise Lucas critique problems by incorporating rational expectations (and other New Classical ideas as well), and time series econometrics improved to deal with identification issues. If you like, you can say that structural econometric models became more like DSGE models, but where internal consistency was sacrificed when it proved clearly incompatible with the data.

These points are very difficult to get across to those brought up to believe that structural econometric models of the old fashioned kind are obsolete, and fatally flawed in a more fundamental sense. You will often be told that to forecast you can either use a DSGE model or some kind of (virtually) atheoretical VAR, or that policymakers have no alternative when doing policy analysis than to use a DSGE model. Both statements are simply wrong.

There is a deep irony here. At a time when academics doing other kinds of economics have done less theory and become more empirical, macroeconomics has gone in the opposite direction, adopting wholesale a methodology that prioritised the internal theoretical consistency of models above their ability to track the data. An alternative - where DSGE modelling informed and was informed by more traditional ways of doing macroeconomics - was possible, but the New Classical and microfoundations revolution cast that possibility aside.

Did this matter? Were there costs to this strand of the New Classical revolution?

Here is one answer. While it is nonsense to suggest that DSGE models cannot incorporate the financial sector or a financial crisis, academics tend to avoid asking why so little of the multitude of work now going on was done before the financial crisis. It is sometimes suggested that before the crisis there was no cause to do so. This is not true. Take consumption, for example. Looking at the (non-filtered) time series for UK and US consumption, it is difficult to avoid attaching significant importance to the gradual evolution of credit conditions over the last two or three decades (see the references to work by Carroll and Muellbauer I give in this post). If this kind of work had received greater attention (as it almost certainly would have done from structural econometric modellers), that would have focused minds on why credit conditions changed, which in turn would have raised issues involving the interaction between the real and financial sectors. If that had been done, macroeconomics might have been better prepared to examine the impact of the financial crisis.

It is not just for Keynesian economics that reform rather than revolution, in response to Lucas and Sargent (1979), might have been the more productive path.


[1] The point is not whether expectations are generally rational or not. It is that any business cycle theory that depends on irrational inflation expectations appears improbable. Do we really believe business cycles would disappear if only inflation expectations were rational? PhDs of the 1970s and 1980s understood that, which is why most of them rejected the traditional Keynesian position. Also, as Paul Krugman points out, many Keynesian economists were happy to incorporate New Classical ideas. 

Friday, 3 April 2015

Do not underestimate the power of microfoundations

Mainly for economists

Brad DeLong asks why the New Keynesian (NK) model, which was originally put forth as simply a means of demonstrating how sticky prices within an RBC framework could produce Keynesian effects, has managed to become the workhorse of modern macro, despite its many empirical deficiencies. (Recently Stephen Williamson asked the same question, but I suspect from a different perspective!) Brad says his question is closely related to the “question of why models that are microfounded in ways we know to be wrong are preferable in the discourse to models that try to get the aggregate emergent properties right.”

I would guess the two questions are in fact exactly the same. The NK model is the microfounded way of doing Keynesian economics, and microfounded (DSGE) models are de rigueur in academic macro, so any mainstream academic wanting to analyse business cycle issues from a Keynesian perspective will use a variant of the NK model. Why are microfounded models so dominant? From my perspective this is a methodological question, about the relative importance of ‘internal’ (theoretical) versus ‘external’ (empirical) consistency.

As macro 50 years ago was very different, it is an interesting methodological question to ask why things changed, even if you think the change has greatly improved how macro is done (as I do). I would argue that the New Classical (counter) revolution was essentially a methodological revolution. However there are two problems with having such a discussion. First, economists are usually not comfortable talking about methodology. Second, it will be a struggle to get macroeconomists below a certain age to admit this is a methodological issue. Instead they view microfoundations as just putting right inadequacies with what went before.

So, for example, you will be told that internal consistency is clearly an essential feature of any model, even if it is achieved by abandoning external consistency. You will hear how the Lucas critique proved that any non-microfounded model is inadequate for doing policy analysis, rather than it simply being one aspect of a complex trade-off between internal and external consistency. In essence, many macroeconomists today are blind to the fact that adopting microfoundations is a methodological choice, rather than simply a means of correcting the errors of the past.

I think this has two implications for those who want to question the microfoundations hegemony. The first is that the discussion needs to be about methodology, rather than individual models. Deficiencies with particular microfounded models, like the NK model, are generally well understood, and from a microfoundations point of view simply provide an agenda for more research. Second, lack of familiarity with methodology means that this discussion cannot presume knowledge that is not there. (And arguing that it should be there is a relevant point for economics teaching, but is pointless if you are trying to change current discourse.) That makes discussion difficult, but I’m not sure it makes it impossible.


Friday, 10 October 2014

Are DSGE models distorting policy? - a test case

The debate about the current state of academic macroeconomics continues, but it has reached a kind of equilibrium. Heterodox economists, some microeconomists and many others are actively hostile to the currently dominant macro methodology. Regardless, academic macroeconomists in the papers they write carry on using, almost exclusively, microfounded DSGE models. [1] Critics say this methodology was crucial in missing the financial crisis, but academic macroeconomists respond by highlighting all the work currently being done on financial frictions. I personally think missing the crisis was down to failings of a different kind, but that DSGE did hold back our ability to understand the impact of the crisis. However what I want to suggest here is a forward looking test.

Many of the difficult choices in conducting monetary (and sometimes fiscal) policy involve trade-offs between inflation and unemployment. We saw this in the UK particularly after the crisis, with inflation going well above target during the depth of the recession. What you do in those circumstances depends critically on the costs of excess inflation compared to the costs of higher unemployment. Is 1% higher unemployment worth more or less than 1% higher inflation to society as a whole?

What do New Keynesian DSGE models say about this trade-off? They do not normally model unemployment, but they do model the output gap, which we can relate to unemployment. Their answer is that inflation is much the more important variable, by a factor of ten or more. One reason they do this is that they implicitly assume the unemployed enjoy all the extra leisure time at their disposal. I have discussed other reasons here.
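To put a number on that claim: the welfare criterion derived from a standard New Keynesian model is, to a second-order approximation, something like

L_t = \pi_t^2 + \lambda\, x_t^2 ,

where x_t is the output gap and \lambda depends on model parameters (this is the stylised textbook form, not any particular published calibration). In typical derived calibrations \lambda comes out well below 0.1, so a one per cent inflation gap is treated as at least ten times as costly as a one per cent output gap.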

Empirical evidence, and frankly common sense, suggests this is the wrong answer. Thanks to the emergence of a literature that looks at empirical measures of wellbeing, we now have clear evidence that unemployment matters more than inflation. Sometimes, as in this study by Blanchflower et al, it matters much more. Another recent study by economists at the CEP shows that “life satisfaction of individuals is between two and eight times more sensitive to periods when the economy is shrinking than at times of growth”, which as well as being related to the unemployment/inflation trade-off raises additional issues around asymmetry.

So the DSGE models appear to be dead wrong. Furthermore the reasons why they are wrong are not deeply mysterious, and certainly not mysterious enough to make us question the evidence. For example prolonged spells of unemployment have well documented scarring effects (in part because employers cannot tell if unemployment was the result of bad luck or bad performance), which may even affect the children of the unemployed. So it is not as if economists cannot understand the empirical evidence.

Does that mean that the DSGE models are deeply flawed? No, it means they are much too simple. Does that mean that the work behind them (deriving social welfare functions from individual utility) is a waste of time? I would again say no. I have done a little work of this kind, and I understood some things much better by doing so. Will these models ever get close to the data? I do not know, but I think we will learn more interesting and useful things in the attempt. The microfoundations methodology is, in my view, a progressive research strategy.

So academics are right to carry on working with these models. But many academic macroeconomists go further than this. They argue that only microfounded DSGE models can provide a sound basis for policy advice. If you press them they will say that maybe it is OK for policymakers to use more ‘ad hoc’ models, but there is no place for these in the academic journals. In my view this is absolutely wrong for at least two reasons.

First, models that are clearly still at the early development stage should not be used to guide policy when we can clearly do better. In this particular case we can easily do better just by using ad hoc social welfare functions on top of an existing DSGE model. (The Lucas critique does not apply, which is why I like this example.) Yes these hybrid models will be ‘internally inconsistent’, but they are clearly better! Second, to confine academics to just doing development work on prototype experimental models is stupid: academic economists can have many useful things to say starting with aggregate models (as here, for example), and this is not something that policymakers alone have the resources (or sometimes the inclination) to do. (We also know that academics will give policy advice, whatever models they use!) Analysis using these more ad hoc but realistic models should be scrutinised in high quality academic journals.

Let’s be even more concrete. Take the debate over whether we should have a higher (than 2%) inflation target (or some other kind of target), because of the risks of hitting the zero lower bound. If this debate just involves microfounded DSGE models which clearly overweight inflation relative to unemployment, then these models will be guilty of distorting policy. This is not a matter of running some variants away from microfounded parameters (as in this comprehensive analysis, for example), but of adopting realistic parameters as the base case. If this is not done, then microfounded DSGE models will be guilty of distorting this policy discussion.

[1] A few elderly bloggers, who use both DSGE and more ‘ad hoc’ models and think the critics have a point, are regarded by at least some academics as simply past their sell-by date.


Monday, 14 April 2014

The Fed’s macroeconomic model

There has been some comment on the decision of the US central bank (the Fed) to publish its main econometric model in full. In terms of openness I agree with Tony Yates that this is a great move, and that the Bank of England should follow. The Bank publishes some details of its model (somewhat belatedly, as I noted here), but as Tony argues this falls some way short of what is now provided by the Fed.

However I think Noah Smith makes the most interesting point: unlike the Bank's model, the model published by the Fed is not a DSGE model. Instead, it is what is often called a Structural Econometric Model (SEM): a pretty ad hoc mixture of theory and econometric estimation that would not please either a macro theorist or a time series econometrician. As Noah notes, they use this model for forecasting and policy analysis. Noah speculates that the Fed’s move to publish a model of this kind indicates that they are perhaps less embarrassed about using a SEM than they once were. I’ve no idea if this is true, but for most academic macroeconomists it raises a puzzling question - why are they still using this type of model? If the Bank of England can use a DSGE model as their core model, why doesn’t the Fed?

I have discussed the question of what type of model a central bank should use before. In addition, I have written many posts (most recently here) advocating the advantages of augmenting DSGE models and VARs with this kind of middle way approach. For various reasons, this middle way approach will be particularly attractive to a policy making organisation like a central bank, but I also think that a SEM can play a role in academic analysis. For the moment, though, let me just focus on policy analysis by policy makers.

Consider a particular question: what is the impact of a temporary cut in income taxes? What kind of methods should an economist employ to answer this question? We could estimate reduced forms/VARs relating variables of interest (output, inflation etc) to changes in income taxes in the past. However there are serious problems with this approach. The most obvious is that the impact of past changes in taxes will depend on the reaction of monetary policy at the time, and whether monetary policy will act in a similar way today. Results will also depend on how permanent past changes in taxes were expected to be. I would not want to suggest that these issues make reduced form estimation a waste of time, but they do indicate how difficult it will be to get a good answer using this approach. Similar problems arise if we relate growth to debt, money to prices (a personal reflection here) and so on. Macro reduced form analysis relating policy variables to outcomes is very fragile.

An alternative would be for the economist to build a DSGE model, and simulate that. This has a number of advantages over the reduced form estimation approach. The nature of the experiment can be precisely controlled: the fact that the tax cut is temporary, how it is financed, what monetary policy is doing etc. But any answer is only going to be as good as the model used to obtain it. A prerequisite for a DSGE model is that all relationships have to be microfounded in an internally consistent way, and there should be nothing ad hoc in the model. In practice that can preclude including things that we suspect are important, but that we do not know exactly how to model in a microfounded manner. We model what we can microfound, not what we can see.

A specific example that is likely to be critical to the impact of a temporary income tax cut is how the consumption function treats income discounting. If future income is discounted at the rate of interest, we get Ricardian Equivalence. This same theory tells us that the marginal propensity to consume (mpc) out of windfall gains in income is very small, yet there is a great deal of evidence to suggest the mpc lies somewhere around a third or more. (Here is a post discussing one study from today’s Mark Thoma links.) DSGE models can try to capture this by assuming a proportion of ‘income constrained’ consumers, but is that all that is going on? Another explanation is that unconstrained consumers discount future labour income at a much greater rate than the rate of interest. This could be because of income uncertainty and precautionary saving, but these are difficult to microfound, so DSGE models typically ignore them.
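To see what is at stake, take the textbook benchmark in which consumption is an annuity out of financial wealth A_t plus human wealth, with future income y discounted at the real interest rate r (an illustrative form, not the Fed’s or anyone else’s actual equation):

c_t = \frac{r}{1+r}\left( A_t + \sum_{j=0}^{\infty}\frac{E_t\, y_{t+j}}{(1+r)^{j}} \right) .

A debt-financed temporary tax cut raises current disposable income but lowers expected future income by the same present value, so consumption is unchanged: Ricardian Equivalence. The same formula puts the mpc out of a windfall at roughly r/(1+r), a few pence in the pound rather than a third. Discounting future labour income at a rate δ well above r breaks both results, because the offsetting future taxes (and distant future income generally) then carry much less weight. But, as just noted, DSGE models typically assume this away.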

The Fed model does not. To quote: “future labor and transfer income is discounted at a rate substantially higher than the discount rate on future income from non-human wealth, reflecting uninsurable individual income risk.” My own SEM that I built 20+ years ago, Compact, did something similar. My colleague, John Muellbauer, has persistently pursued estimating consumption functions that use an eclectic mix of data and theory, and as a result has been incorporating the impact of financial frictions in his work long before it became fashionable.

So I suspect the Fed uses a SEM rather than a DSGE model not because they are old fashioned and out of date, but because they find it more useful. (Actually this is a little more than a suspicion.) Now that does not mean that academics should be using models of this type, but it should at least give pause to those academics who continue to suggest that SEMs are a thing of the past.


Friday, 14 February 2014

Are New Keynesian DSGE models a Faustian bargain?

Some write as if this were true. The story is that after the New Classical counter revolution, Keynesian ideas could only be reintroduced into the academic mainstream by accepting a whole load of New Classical macro within DSGE models. This has turned out to be a Faustian bargain, because it has crippled the ability of New Keynesians to understand subsequent real world events.

Is this how it happened? It is true that New Keynesian models are essentially RBC models plus sticky prices. But is this because New Keynesian economists were forced to accept the RBC structure, or did they voluntarily do so because they thought it was a good foundation on which to build?

One way of looking at this (and I’ll argue at the end that it misses a key element) is to think about the individual components of models. If you do this, the Faustian bargain story looks implausible. Let’s start with the mainstream before the New Classical revolution. This was the famous post-war neoclassical synthesis popularised by Paul Samuelson, which integrated traditional Keynesian and Classical models in a common overall framework. While prices were sticky we were in a Keynesian world, but once prices had adjusted the world was Classical.

In terms of components, the RBC model is just the classical macromodel with two key additions. The first is rational expectations. The second is intertemporal optimisation by agents. (In non-jargon, it takes seriously the ability of agents to choose when they consume by saving or borrowing, rather than simply assuming they just consume a fixed proportion of their current income. This is often called the consumption smoothing model, because typically consumers smooth consumption relative to income e.g. by saving for retirement.) In both cases I do not think Keynesian economists were forced to adopt these ideas against their better judgement. Instead I think quite the opposite is true: both ideas were readily adopted because they appeared to be a distinct improvement on previous methods.
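In its simplest form, intertemporal optimisation delivers the familiar consumption Euler equation (written here without any frictions, and just as an illustration of the idea):

u'(c_t) = \beta\,(1+r)\,E_t\, u'(c_{t+1}) ,

where \beta is the household’s discount factor and r the real interest rate. With \beta(1+r) = 1 and no uncertainty this gives c_t = c_{t+1}: consumption is smoothed relative to income, rather than being a fixed proportion of current income.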

The key point here is that they were an improvement on previous practice. It does not mean that economists thought they were the final answer, or indeed that they were half adequate answers. Instead they were a better foundation to build on compared to what had gone before. I’ve argued this for rational expectations before, but I also think it is true for intertemporal consumption. I find it very difficult to think about more complex ideas, like liquidity constraints or precautionary saving, without starting with consumption smoothing.

I have talked about the real world events that convinced me of this, but let me make the same point here in a more informal way. When teaching on the Oxford masters programme, I give students a question: if they won a large sum, would they spend it over the next year, spread it over the next few years, spend a significant proportion now but save the rest, or save nearly all of it? The last response is the answer given by the simple intertemporal model, but I argue that the first two responses make perfect sense if you are a credit constrained student. However, I tell my audience that those who gave the first answer are not intending to do a PhD after finishing the masters, while those who gave the second are, because they expect the credit constraint to last longer. The serious point is that credit constrained consumers do not automatically consume all of a temporary increase in income. If the period over which income is higher is shorter than the period over which they expect to be constrained, they will smooth their additional consumption.
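A back-of-the-envelope version of that logic, with purely illustrative numbers: a constrained student who wins an amount W and expects the constraint to bind for roughly T more years will raise spending by about

\Delta c \approx W / T

a year while it lasts. Someone finishing this year spends most of the windfall within the year; someone expecting three or four more constrained years spreads it over them.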

So, in terms of the components of New Keynesian models, I can see little that most modellers would love to junk if it wasn’t for those nasty New Classicals. [1] But what this ignores is methodology, and the fact that the RBC model is a microfounded Classical model. (By microfounded, I mean that every macroeconomic relationship has to be formally derived from optimisation by individual agents.) Yet here again, I doubt that most New Keynesian modellers adopted the microfoundations perspective against their better judgement. Instead I suspect most saw the power of the microfoundations approach (in analysing consumption, in particular), recognised the dangers in ad hoc theorising about dynamics (as in the traditional Phillips curve), and thought there was no contest.

The more interesting question is whether this has turned out to be a Faustian pact between macroeconomics and microfoundations ex post. To be more precise, by putting all our macroeconomic model building eggs in one microfounded basket, have we significantly slowed down the pace at which macroeconomists can say something helpful about the rapidly changing real world? That is a question I have written a lot about (e.g. here, and here) and no doubt will write more, but the key point I want to make now is this. If there was a Faustian bargain, I think we should acknowledge that most Keynesian economists agreed to it for good reasons, and that they were not forced into it by others.



[1] I must add a caveat here, although it is rather controversial. I think one sense in which RBC models have cast an annoying shadow is the idea that we must have models in which labour supply is endogenous. Often it would make things simpler if we could assume a fixed labour supply, and my own view is that for many issues we would lose little empirical relevance if we did so. Here I do think New Keynesians are too deferential to the always silly idea of trying to explain movements in unemployment as simply a labour supply choice.   

Monday, 8 October 2012

DSGE critics and future directions for macro


Microfounded macromodels, aka DSGE models, hold a dominant position in academic macro, and their influence in central banks is increasing. (The Bank of England’s core model is DSGE, but the approach has not yet quite achieved a similar dominance in the Fed or elsewhere.) At the risk of gross oversimplification, you can class the critics of this situation into two groups: the reformers and revolutionaries. The reformers (like myself) see DSGE analysis as always forming a central part of macro, but want greater diversity, with in particular more analysis using time series econometrics. The revolutionaries want to confine DSGE analysis to a much more minor role, if not the bin.

In a sense debate between these critics is a bit pointless. We are standing on the same train platform, agreed on the direction of travel, but at the moment the train shows no sign of moving. There is a danger that we spend too much time arguing about when the train should stop, and not thinking enough about how to get it going in the first place. Nevertheless I think it is worth having the debate, if only because of tactics. Those DSGE modellers who are sympathetic to reform can easily become defenders of the status quo in the face of more extreme attacks.

So let me give one argument for reform rather than revolution that I have only made implicitly before. When I studied macro and then began working as a macroeconomist, mainstream macro was divided into schools of thought. In an environment where both inflation and unemployment were high, you had monetarists saying that you just needed to control the money supply, some Keynesians arguing that we should focus on unemployment because it had nothing to do with inflation, and New Classicals saying unemployment was not even a problem. Each school had its models, and each claimed empirical backing. Econometric analysis was not strong enough to discriminate between schools. Different schools tended to talk across each other, and anyone trying to look for common ground or ultimate sources of disagreement had a hard time, and ended up writing lists. For a policymaker or student it must have seemed like a nightmare, and no wonder many chose which school to follow based on its ideological associations.

In my view microfoundations brought some order to this chaos (see this from here). Now for heterodox economists who think the microfoundation approach is fundamentally flawed, this is a problem: we are looking at alternatives through the wrong lens. But for those who think that, for at least some problems, basic micro reasoning is a good place to start, microfoundations provided a common language with which to discuss and appreciate different points of view. Note that this is not an argument for complete synthesis, but just a shared language.

As Diane Coyle noted about the conference we both recently attended, the UK’s social science funding agency (the ESRC) is considering what kind of research in macro is needed post crisis, and therefore what funding initiatives might be appropriate. Here I want to present a cautionary tale. Macro is dominated by US economists of course, but one area where the UK was strong was in the building and empirical evaluation of econometric macromodels. This reflected strength in time series econometrics (David Hendry, Hashem Pesaran and Andrew Harvey, to name just three), and was embodied in the ESRC Macroeconomic Modelling Bureau, directed by Ken Wallis from 1983 to 1999. However, with the intellectual tide moving ever more strongly in favour of calibrated DSGE models, macro papers by those involved in this area were not hitting the top journals. Partly as a result, the ESRC (which really means the academic and other macroeconomists advising the ESRC) decided to discontinue funding for the centre.[1]

I thought that was a huge mistake at the time, and that conviction has been reinforced by recent events.[2] What the Bureau did was bring modellers from policy institutions and academics together around the concrete endeavour of comparing the models used by those institutions. At the very least, modellers became aware of alternative perspectives, and the models used by policymakers were subject to critique. This has now been lost. The moral I draw from this mistake is that it is dangerous to sacrifice strengths to fashion. The UK retains strengths in time series macro: one of the strongest papers at the conference was presented by John Muellbauer, whose work on financial liberalisation and consumption I have discussed before. However, the UK also has a number of economists producing strong work in the DSGE tradition, and this should also be encouraged. What the UK really lacks (and this is the key message of the report Diane cites) is academic macroeconomists, but the reason for that is for another post.





[1] The Centre was co-funded by the Treasury and the Bank of England, and the absence of strong support from these institutions may also have been important in this decision. Both institutions were of course subject to the same intellectual tide, and may have had mixed feelings about being open to external critique.
[2] Unfortunately this was not the first time lack of support from the ESRC killed off a very innovative and productive macro research team. Many of the issues involved in optimal policy analysis in rational expectations models were first investigated by David Currie and Paul Levine in the 1980s, but funding support for this team was not renewed by the academics advising the ESRC. 

Thursday, 4 October 2012

Was the financial crisis the fault of DSGE modelling?


I am, like many, in awe of Bank of England director and economist Andy Haldane. However I did wince a bit at his recent Vox piece. He looks at the extent to which economists are to blame for the financial crisis, and makes two interrelated claims. Having noted that central banks have traditionally been concerned with the “interplay of bank money and credit and the wider economy”, he suggests that this changed in the decade or so before the crisis. He then says

“Two developments – one academic, one policy-related – appear to have been responsible for this surprising memory loss. The first was the emergence of micro-founded dynamic stochastic general equilibrium (DGSE) models in economics. Because these models were built on real-business-cycle foundations, financial factors (asset prices, money and credit) played distinctly second fiddle, if they played a role at all. 
The second was an accompanying neglect for aggregate money and credit conditions in the construction of public policy frameworks. Inflation targeting assumed primacy as a monetary policy framework, with little role for commercial banks' balance sheets as either an end or an intermediate objective. And regulation of financial firms was in many cases taken out of the hands of central banks and delegated to separate supervisory agencies with an institution-specific, non-monetary focus.”

There is obviously some truth in this, but are these really major factors behind the financial crisis? Imagine looking at the following chart in 2005 or 2006. The increase in leverage that began in 2000 is both dramatic and unprecedented. (Much the same is true for the US.) Was this ignored because central bankers said this variable is not in their DSGE models? In my experience those involved in monetary policy look at a vast amount of information, particularly on the financial side, even though none of it appears in standard DSGE models, and even though their ultimate target might be inflation. For some reason monetary policy makers discounted the risks this explosion in leverage posed, or felt for some reason unable to warn others about it, but I very much doubt these reasons had anything to do with DSGE models.

[Chart: UK bank leverage. Source: Bank of England Financial Stability Report, June 2012]
I say this because to place too much weight on the culpability of DSGE models and inflation targeting can lead to overreaction, and may sideline more fundamental issues. (I don’t, by the way, think Haldane himself falls into either trap: see this interview for example.) Let me take overreaction first. It is one thing to claim, as I have, that the microfoundations approach embodied in DSGE models encouraged macroeconomists to avoid modelling difficult (from that perspective) issues like the role of financial institutions in credit provision. It is quite another to suggest, as some do, that DSGE models are incapable of doing this. This second claim was false before the crisis (e.g. Bernanke, Gertler & Gilchrist, 1999), and has clearly been shown to be false by the post crisis explosion of DSGE work on financial frictions. Forming a rough consensus around a reasonably simple and tractable model of the crisis that can also assess the subsequent policy response will not happen overnight (it never does), and I suspect it will involve tricks which microfoundation purists will complain about, but I’m pretty certain it will happen.

Andy Haldane talks about the need to model the interconnections (networks) of actors and institutions in order to understand how sudden crises can emerge. This must be right, and recent work[1] that begins to do this looks very interesting. However what seems to me critical in avoiding future crises is to understand why leverage increased (and was allowed to increase) in the first place, rather than the specifics of how it unravelled. As I suggested here, we may find more revealing answers by thinking about the political economy of how banks influenced regulations and regulators, rather than by thinking about the dynamics of networks. We should also look at the incentives within banks, and why short term behaviour in the financial sector may be increasing, as Haldane himself has suggested. Investigating networks is clearly interesting, important and should be pursued, but other avenues involving perhaps more conventional economics and political economy may turn out to be at least as informative in understanding how the crisis was allowed to develop.

Postscript

I wrote this before reading this from Diane Coyle, which is well worth reading if you think all microeconomists must be in favour of DSGE. We both went to the conference she mentions, and my own rather different reactions to it partly inspired my post. I'd like to say more about this quite soon.






[1] See, for example P Gai, A Haldane and S Kapadia, Journal of Monetary Economics, Vol. 58, Issue 5, pages 453-470, 2011, and other recent work by Kapadia.

Monday, 3 September 2012

What type of model should central banks use?


As a follow up to my recent post on alternatives to microfounded models, I thought it might be useful to give an example of where I think an alternative to the DSGE approach is preferable. I’ve talked about central bank models before, but that post was partly descriptive, and raised questions rather than gave opinions. I come off the fence towards the end of this post.

As I have noted before, some central banks have followed academic macroeconomics by developing often elaborate DSGE models for use in both forecasting and policy analysis. Now we can all probably agree it is a good idea for central banks to look at a range of model types: DSGE models, VARs, and anything else in between. (See, for example, this recent advert from Ireland.) But if the models disagree, how do you judge between them? For understandable reasons, central banks like to have a ‘core’ model, which collects their best guesses about various issues. Other models can inform these guesses, but it is good to collect them all within one framework. Trivially you need to make sure your forecasts for the components of GDP are consistent with the aggregate, but more generally you want to be able to tell a story that is reasonably consistent in macroeconomic terms.

Most central banks I know use structural models as their core model, by which I mean models that contain equations that make use of much more economic theory than a structural VAR. They want to tell stories that go beyond past statistical correlations. Twenty years ago, you could describe these models as Structural Econometric Models (SEMs). These used a combination of theory and time series econometrics, where the econometrics was generally at the single equation level. However, in the last few years a number of central banks, including the Bank of England, have moved towards making their core model an estimated DSGE model. (In my earlier post I described the Bank of England’s first attempt, BEQM, which I was involved with, but they have since replaced this with a model without that core/periphery design, more like the canonical ECB model of Smets and Wouters.)

How does an estimated DSGE model differ from a SEM? In the former, the theory should be internally consistent, and the data is not allowed to compromise that consistency. As a result, data has much less influence over the structure of individual equations. Suppose, for example, you took a consumption function from a DSGE model, and looked at its errors in predicting the data. Suppose I could show you that these errors were correlated with asset prices: when house prices went down, people saved more. I could also give you a good theoretical reason why this happened: when asset prices were high, people were able to borrow more because the value of their collateral increased. Would I be allowed to add asset prices into the consumption function of the DSGE model? No, I would not. I would instead have to incorporate the liquidity constraints that gave rise to these effects into the theoretical model, and examine what implications it had for not just consumption, but also other equations like labour supply or wages. If the theory involved the concept of precautionary saving, then as I indicated here, that is a non-trivial task. Only when that had been done could I adjust my model.

In a SEM, things could move much more quickly. You could just re-estimate the consumption function with an additional term in asset prices, and start using that. However, that consumption function might well now be inconsistent with the labour supply or wage equation. For the price of getting something nearer the data, you lose the knowledge that your model is internally consistent. (The Bank’s previous model, BEQM, tried to have it both ways by adding variables like asset prices to the periphery equation for consumption, but not to the core DSGE model.)
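For concreteness, the kind of single equation a SEM modeller might estimate looks something like this (an illustrative error-correction form in my own notation, not any particular central bank’s equation):

\Delta \ln c_t = a_0 + a_1\,\Delta \ln y_t + a_2\,\Delta \ln hp_t - a_3\left[\,\ln c_{t-1} - b_1 \ln y_{t-1} - b_2 \ln w_{t-1}\,\right] + \varepsilon_t ,

where y is real disposable income, hp is house prices and w is net wealth. Adding the a_2 and b_2 terms is an afternoon’s re-estimation; in a DSGE model the same effect has to be derived from collateral or liquidity constraints before it is allowed anywhere near the consumption equation.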

Now at this point many people think Lucas critique, and make a distinction between policy analysis and forecasting. I have explained elsewhere why I do not put it this way, but the dilemma I raise here still applies if you are just interested in policy analysis, and think internal consistency is just about the Lucas critique. A model can satisfy the Lucas critique (be internally consistent), and give hopeless policy advice because it is consistently wrong. A model that does not satisfy the Lucas critique can give better (albeit not perfectly robust) policy advice, because it is closer to the data.

So are central banks doing the right thing if they make their core models estimated DSGE models, rather than SEMs? Here is my argument against this development. Our macroeconomic knowledge is much richer than any DSGE model I have ever seen. When we try and forecast, or look at policy analysis, we want to use as much of that knowledge as we can, particularly if that knowledge seems critical to the current situation. With a SEM we can come quite close to doing that. We can hypothesise that people are currently saving a lot because they are trying to rebuild their assets. We can look at the data to try and see how long that process may last. All this will be rough and ready, but we can incorporate what ideas we have into the forecast, and into any policy analysis around that forecast. If something else in the forecast, or policy, changes the value of personal sector net assets, the model will then adjust our consumption forecast. This is what I mean about making reasonably consistent judgements.

With a DSGE model without precautionary saving or some other balance sheet recession type idea influencing consumption, all we see are ‘shocks’:  errors in explaining the past. We cannot put any structure on those shocks in terms of endogenous variables in the model. So we lose this ability to be reasonably consistent. We are of course completely internally consistent with our model, but because our model is an incomplete representation of the real world we are consistently wrong. We have lost the ability to do our second best.

Now I cannot prove that this argument against using estimated DSGE models as the core central bank model is right. It could be that, by adding asset prices into the consumption function – even if we are right to do so – we make larger mistakes than we would by ignoring them completely, because we have not properly thought through the theory. The data provides some check against that, but it is far from foolproof. But equally you cannot prove the opposite either. This is another one of those judgement calls.

So what do I base my judgement on? Well, how about this thought experiment. It is sometime in 2005/6. Consumption is very strong, saving is low and asset prices are high. You have good reason to think asset prices may be following a bubble. Your DSGE model has a consumption function based on an Euler equation, in which asset prices do not appear. It says a bursting house price bubble will have minimal effect. You ask your DSGE modellers if they are sure about this, and they admit they are not, and promise to come back in three years’ time with a model incorporating collateral effects. Your SEM modeller has a quick look at the data, says there does seem to be some link between house prices and consumption, and promises to adjust the model equation and redo the forecast within a week. Now choose, as a policy maker, which type of model you would rather rely on.

Tuesday, 3 July 2012

Ideology and Falsification in Macroeconomics


This is a response to a post by Stephen Williamson and two by Noah Smith. All are linked to a Paul Krugman post, which commented on, and had the same broad message as, one of my own. See also Mark Thoma here.
Both Paul Krugman and I argued that the reason for the apparently disputatious nature of macro lay in politics and ideology. To paraphrase my own take, the antagonism to a Keynesian reading of events since the financial crisis has ideological roots, based on a distrust of government intervention. This distrust is most apparent in the austerity versus stimulus debate on fiscal policy.
Underneath Stephen Williamson’s obvious personal dislike for what Krugman is doing, there is a serious challenge to this view. He writes
“Modern macroeconomics has been much more concerned with science than with politics. Robert Solow, David Cass, Tjalling Koopmans, Len Mirman, and Buzz Brock were not thinking about politics when they developed the theory that Kydland and Prescott used in their early work. I don't think Kydland and Prescott had politics on their mind in 1982, nor was Mike Woodford thinking about politics when he adapted Kydland and Prescott's work to come up with New Keynesian theory.”
Details aside, I think this will strike a chord with many academic macroeconomists. They are just trying to advance the discipline, and are certainly not trying to defend some ideological viewpoint. There are lots of interesting new ideas being explored in modern macro, producing high quality work with important implications. Most importantly, this work can be appreciated by most mainstream fellow researchers. Unlike the days of old, where members of different schools of thought talked across each other, we now have a shared language as a result of the microfoundation of macro.
I agree with everything in the previous paragraph, which is perhaps where I differ from some other Keynesians.[1] But I also obviously agree with what I wrote on ideology and macro. So how can I square this circle? The first, and probably critical, point to make is that when I and most others talk about antagonistic macroeconomic debates, we are referring to debates over current macroeconomic policy rather than the details of some macroeconomic research. The second point is that New Keynesian theory builds on Real Business Cycle foundations, and is therefore in theoretical terms not an alternative to it.
So, for example, I have no problem appreciating a seminar where the presenter explores a flex price DSGE model where fluctuations are only caused by productivity shocks, but where some relevant feature of the real world (some ‘friction’) is being added to earlier models. I might learn something about how to model this new feature, and what its macroeconomic implications might be. It could be the case that these techniques and results are completely negated once you added sticky prices into the model, but normally this is not the case. However, when it comes to looking at the impact of contractionary fiscal policy on the macroeconomy today, I will select a quite different model: sticky prices are essential, because we need to work in a world where we have demand deficiency.
The current reversion of macro back into schools of thought relates to policy advice. It’s about which models we select, and which we reject, when telling governments what to do. So the interesting question that arises is how this selection process takes place. Why do I insist we need to focus on aggregate demand to understand what is happening today, while others take a different view?
In the idealised Popperian description of scientific progress, it is evidence that provides this selection process. The moment that a piece of evidence is found that contradicts a theory or model, that theory will be rejected, and a new theory will emerge that is consistent with all known evidence. What is missing in macro, says Noah Smith, is this falsification process. I think the late and great Mark Blaug would wholeheartedly agree.
Now I could at this stage talk about the limits to falsificationism, and how it particularly fails to apply to economics. But I think Noah is essentially right. For a number of reasons I’ve talked about elsewhere, the microfoundation of macro and DSGE modelling downplays the role of evidence. More specifically, it allows modellers to be selective about which evidence they focus on (the ‘puzzle’). This may be fine for writing papers, but when it comes to model selection to tackle policy problems it is weak. And this weakness lets in the ideological factors I talked about.
So that is how I can be both supportive of current academic macro, and believe that macroeconomic policy advice is contaminated by ideology. I want to add one additional thought. Noah talked about this problem as one involving broken institutions, and I found that strange at first. But let’s go back to my fictional seminar involving a model where cycles are generated by productivity shocks. Even though the model is missing what I believe causes most business cycles, I can still learn something from the seminar, so it would be quite inappropriate for me to denounce the paper and storm out in disgust. Equally, academics should be free to choose what they think are interesting avenues to explore. The problem comes when the policy maker has to choose between models. But how and where exactly should evidence be exerting a greater influence? Is the problem in the selection processes of journals? Should there be more econometric analysis of structural macro relationships in journals, or do VARs tell us all we need to know? If nothing is missing from the academic journals, should policy making institutions employ more staff to do this kind of work? If you think that ideology plays too large a role in macro policy, but refuse to believe that this must always be so, these are interesting questions.        
               
               



[1] This is a different question from whether modern macro has advanced our understanding of the current crisis. Here I agree that old fashioned tools (be they 1970s macro – see Robert Gordon, for example – or the General Theory itself) have proved their worth, but I also suspect that in time a more complete understanding will include more modern (and as yet undeveloped) elements.