Winner of the New Statesman SPERI Prize in Political Economy 2016



Thursday, 19 April 2018

Did macroeconomics give up on explaining recent economic history?


The debate that continues about whether a Phillips curve still exists partly reflects the situation in various countries where unemployment has fallen to levels that had previously led to rising inflation, but this time wage inflation seems pretty static. In all probability this reflects two things: the existence of hidden unemployment, and a fall in the NAIRU. See Bell and Blanchflower on both for the UK.

The idea that the NAIRU can move gradually over time leads many to argue that the Phillips curve itself becomes suspect. In this post I tried to argue this is a mistake. It is also a mistake to think that estimating the position of the NAIRU is a mug's game. It is what central banks have to do if they take a structural approach to modelling inflation (and what other reasonable approaches are there?). This raises the question of why analysis of how the NAIRU moves is not a more prominent part of macro.

The following account may be way off, but I want to set it down because I am not aware of seeing it outlined elsewhere. I want to start with my account of why modern macro left the financial sector out of its models before the crisis. To cut a long story short, a focus on business cycle dynamics meant that medium term shifts in the relationship between consumption and income were largely ignored. Those who did study these shifts convincingly related them to changes in financial sector behaviour. Had more attention been paid to this, we might have seen much more analysis and understanding of finance-to-real linkages.

Could the same story be told about the NAIRU? As with medium term trends in consumption, there is a literature on medium term movements in the NAIRU (or structural unemployment), but it does not tend to get into the top journals. One of the reasons, as with consumption, is that such analysis tends to be what modern macroeconomists would call ad hoc: it uses lots of theoretical ideas, none of which are carefully microfounded within the same paper. That is not a choice by those who do this kind of empirical work, but a necessity.

Much the same could apply to other key macro aggregates like investment. When economists ask whether investment is currently unusually high or low, they typically draw graphs and calculate trends and averages. We should be able to do much better than that. We should instead be looking at the equation that best captures the past 30-odd years of investment data, and asking whether it currently over- or under-predicts. The same is true for equilibrium exchange rates.

It was not just the New Classical Counter Revolution in macro that led to this downgrading of what we might call structural time series analysis of key macro relationships. Equally responsible was Sims' famous 1980 paper ‘Macroeconomics and Reality’, which attacked the type of identification restrictions used in time series analysis and proposed VAR methods instead. This perfect storm relegated the time series analysis that had been the bread and butter of macroeconomics to the minor journals.

I do not think it is too grandiose to claim that as a consequence macroeconomics gave up on trying to explain recent macroeconomic history: what could be called the medium term behaviour of macroeconomic aggregates, or why the economy did what it did over the last 30 or 40 years. Macro focused on the details of how business cycles worked, instead of how business cycles linked together.

Leading macroeconomists involved in policy see the same gaps, but express this dissatisfaction in a different way (with the important exception of Olivier Blanchard). For example John Williams, who has just been appointed to run the New York Fed, calls here for the next generation of DSGE models to focus on three areas. First, they need a greater focus on modelling the labour market and the degree of slack, which I think amounts to the same thing as how the NAIRU changes over time. Second, he talks about a greater focus on medium- or long-run developments on both the ‘supply’ and ‘demand’ sides of the economy. The third of course involves incorporating the financial sector.

Perhaps one day DSGE models will do all this, although I suspect the macroeconomy is so complex that there will always be important gaps in what can be microfounded. But if it does happen, it will not come anytime soon. It is time that macroeconomics revisited the decisions it made around 1980, and realised that the deficiencies in traditional time series analysis that it highlighted were not as great as later generations have imagined. Macroeconomics needs to start trying to explain recent macroeconomic history once again.



Saturday, 6 January 2018

Why the microfoundations hegemony holds back macroeconomic progress

When David Vines asked me to contribute to an OXREP (Oxford Review of Economic Policy) issue on “Rebuilding Macroeconomic Theory”, I think he hoped I would write on how the core macro model needed to change to reflect macro developments since the crisis, with a particular eye to modelling the impact of fiscal policy. That would be an interesting paper to write, but I decided fairly quickly that I wanted to say something that I thought was much more important.

In my view the biggest obstacle to the advance of macroeconomics is the hegemony of microfoundations. I wanted at least one of the papers in the collection to question this hegemony. It turned out that I was not alone, and a few papers did the same. I was particularly encouraged when Olivier Blanchard, in blog posts reflecting his thoughts before writing his contribution, was thinking along the same lines.

I will talk about the other papers when more people have had a chance to read them. Here I will focus on my own contribution. I have been pushing a similar line in blog posts for some time, and that experience suggests to me that most macroeconomists working within the hegemony have a simple mental block when they think about alternative modelling approaches. Let me see if I can break that block here.

Imagine a DSGE model, ‘estimated’ by Bayesian techniques. To be specific, suppose it contains a standard intertemporal consumption function. Now suppose someone adds a term into the model, say unemployment into the consumption function, and thereby significantly improves the fit of the model. It is not hard to think why the fit significantly improves: unemployment could be a proxy for the uncertainty of labour income, for example. The key question becomes which is the better model with which to examine macroeconomic policy: the DSGE or the augmented model?
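
To make the thought experiment concrete, here is a minimal single-equation sketch in Python. It is not the Bayesian system estimation described above, and the data and coefficients are simulated purely for illustration: it simply compares a bare-bones Euler-equation regression for consumption growth with one augmented by unemployment, using BIC as a crude stand-in for Bayesian model comparison.

```python
# Minimal sketch (simulated data, illustrative only): does adding unemployment
# to a consumption equation 'significantly improve the fit'?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200
real_rate = rng.normal(0.02, 0.01, T)                   # ex ante real interest rate
unemp = np.clip(rng.normal(0.06, 0.015, T), 0.0, None)  # unemployment rate
# Simulated consumption growth: Euler-equation logic plus a precautionary
# response to unemployment (the effect the augmented model is meant to pick up).
dc = 0.5 * real_rate - 0.8 * unemp + rng.normal(0.0, 0.01, T)

X_base = sm.add_constant(real_rate)                            # 'DSGE-style' Euler equation
X_aug = sm.add_constant(np.column_stack([real_rate, unemp]))   # augmented equation

base = sm.OLS(dc, X_base).fit()
aug = sm.OLS(dc, X_aug).fit()

print(f"BIC, baseline Euler equation: {base.bic:.1f}")
print(f"BIC, augmented with unemployment: {aug.bic:.1f}")
# A clearly lower BIC for the augmented equation is the single-equation
# analogue of the 'significantly better fit' referred to in the text.
```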

A microfoundations macroeconomist will tend to say without doubt the original DSGE model, because only that model is known to be theoretically consistent. (They might instead say that only that model satisfies the Lucas critique, but internal consistency is the more general concept.) But an equally valid response is to say that the original DSGE model will give incorrect policy responses because it misses an important link between unemployment and consumption, and so the augmented model is preferred.

There is absolutely nothing that says that internal consistency is more important than (relative) misspecification. In my experience, when confronted with this fact, some DSGE modellers resort to two diversionary tactics. The first, which is to say that all models are misspecified, is not worthy of discussion. The second is that neither model is satisfactory, and research is needed to incorporate the unemployment effect in a consistent way.

I have no problem with that response in itself, and for that reason I have no problem with the microfoundations project as one way to do macroeconomic modelling. But in this particular context it is a dodge. There will never be, at least in my lifetime, a DSGE model that cannot be improved by adding plausible but potentially inconsistent effects like unemployment influencing consumption. Which means that, if you think models that are significantly better at fitting the data are to be preferred to the DSGE models from whence they came, then these augmented models will always beat the DSGE model as a way of modelling policy.

What this question tells you is that there is an alternative methodology for building macroeconomic models that is not inferior to the microfoundations approach. This starts with some theoretical specification, which could be a DSGE model as in the example, and then extends it in ways that are theoretically plausible and which also significantly improve the model’s fit, but which are not formally derived from microfoundations. I call that an example within the Structural Econometric Model (SEM) class, and Blanchard calls it a Policy Model.

An important point I make in my paper is that these are not competing methodologies, but instead they are complementary. SEMs as I describe them here start from microfounded theory. (Of course SEMs can also start from non-microfounded theory, but the pros and cons of that are a different debate I want to avoid here.) As a finished product they provide many research agendas for microfoundations modelling. So DSGE modelling can provide the starting point for builders of SEMs or Policy Models, and these models when completed provide a research agenda for DSGE modellers.

Once you see this complementarity, you can see why I think macroeconomics would develop much more rapidly if academics were involved in building SEMs as well as building DSGE models. The mistake the New Classical Counter Revolution made was to dismiss previous ways of modelling the economy, instead of augmenting these ways with additional approaches. Each methodology on its own will develop much more slowly than the two combined. Another way of putting it is that research based on SEMs is more efficient than the puzzle resolution approach used today. 

In the paper, I try to imagine what would have happened if the microfoundations project had just augmented the macroeconomics of the time (which was SEM modelling), rather than dismissing it out of hand. I think we have good evidence that an active complementarity between SEM and microfoundations modelling would have led to in-depth investigation of the links between the financial and real sectors before the financial crisis. The microfoundations hegemony chose the wrong puzzles to look at, deflecting macroeconomics from the more important empirical issues. The same thing may happen again if the microfoundations hegemony continues.



Sunday, 15 January 2017

Blanchard joins calls for Structural Econometric Models to be brought in from the cold

Mainly for economists

Ever since I started blogging I have written posts on macroeconomic methodology. One objective was to try and convince fellow macroeconomists that Structural Econometric Models (SEMs), with their ad hoc blend of theory and data fitting, were not some old fashioned dinosaur, but a perfectly viable way to do macroeconomics and macroeconomic policy. I wrote this with the experience of having built and published papers with both SEMs and DSGE models.

Olivier Blanchard’s third post on DSGE models does exactly the same thing. The only slight confusion is that he calls them ‘policy models’, but when he writes

“Models in this class should fit the main characteristics of the data, including dynamics, and allow for policy analysis and counterfactuals.”

he can only mean SEMs. [1] I prefer the term SEMs to policy models because it describes what is in the tin: structural because these models utilise lots of theory, and econometric because they try to match the data.

In a tweet, Noah Smith says he is puzzled: “What else is the point of DSGEs??” besides advising on policy, he asks. This post tries to help him and others see how the two classes of model can work together.

The way I would estimate a SEM today (but not necessarily the only valid way) would be to start with an elaborate DSGE model. But rather than estimate this model using Bayesian methods, I would use it as a theoretical template with which to start econometric work, either on an equation by equation basis or as a set of sub-systems. Where lag structures or cross equation restrictions were clearly rejected by the data, I would change the model to more closely match the data. If some variables had strong power in explaining others but were not in the DSGE specification, but I could think of reasons for a causal relationship (i.e. why the DSGE specification was inadequate), I would include them in the model. That would become the SEM. [2]
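
As a rough illustration of that workflow (a sketch on simulated data, not the procedure used for any actual model; the variable names are hypothetical), the following Python fragment takes a theory-implied specification for a single equation, tests whether extra dynamics and a candidate variable such as credit conditions are demanded by the data, and adopts the richer equation if the template's restrictions are rejected.

```python
# Sketch of the equation-by-equation workflow described above, on simulated data:
# start from the theory-implied specification, then test whether richer dynamics
# or an extra causal variable are demanded by the data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
T = 300
df = pd.DataFrame({
    "income": rng.normal(0.0, 1.0, T),
    "credit": rng.normal(0.0, 1.0, T),  # candidate variable absent from the DSGE template
})
# Simulated 'consumption' with a credit-conditions effect the template leaves out.
df["cons"] = 0.6 * df["income"] + 0.3 * df["credit"] + rng.normal(0.0, 0.5, T)
df["cons_lag"] = df["cons"].shift(1)
df = df.dropna()

template = smf.ols("cons ~ income", data=df).fit()                       # DSGE-implied equation
candidate = smf.ols("cons ~ income + cons_lag + credit", data=df).fit()  # SEM candidate

f_stat, p_value, _ = candidate.compare_f_test(template)
print(f"F-test of the template's restrictions: p = {p_value:.4f}")
if p_value < 0.05:
    print("Restrictions rejected: adopt the augmented equation in the SEM.")
else:
    print("Restrictions not rejected: keep the theory template.")
```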

If that sounds terribly ad hoc to you, that is right. SEMs are an eclectic mix of theory and data. But SEMs will still be useful to academics and policymakers who want to work with a model that is reasonably close to the data. What those I call DSGE purists have to admit is that because DSGE models do not match the data in many respects, they are misspecified and therefore any policy advice from them is invalid. The fact that you can be sure they satisfy the Lucas critique is not sufficient compensation for this misspecification.

By setting the relationship between a DSGE and a SEM in the way I have, it makes it clear why both types of model will continue to be used, and how SEMs can take their theoretical lead from DSGE models. SEMs are also useful for DSGE model development because their departures from DSGEs provide a whole list of potential puzzles for DSGE theorists to investigate. Maybe one day DSGE will get so good at matching the data that we no longer need SEMs, but we are a long way from that.

Will what Blanchard and I call for happen? It already does to a large extent at the Fed: as Blanchard says what is effectively their main model is a SEM. The Bank of England uses a DSGE model, and the MPC would get more useful advice from its staff if this was replaced by a SEM. The real problem is with academics, and in particular (as Blanchard again identified in an earlier post) journal editors. Of course most academics will go on using DSGE, and I have no problem with that. But the few who do instead decide to use a SEM should not be automatically shut out from the pages of the top journals. They would be at present, and I’m not confident - even with Blanchard’s intervention - that this is going to change anytime soon.


[1] What Ray Fair, longtime builder and user of his own SEM, calls Cowles Commission models.

[2] Something like this could have happened when the Bank of England built BEQM, a model I was a consultant on. Instead the Bank chose a core/periphery structure which was interesting, but ultimately too complex even for the economists at the Bank.

Friday, 13 January 2017

Miles on Haldane on Economics in Crises

Anything that says economics is in crisis always gets a lot of attention, particularly after Brexit (because economists are so pessimistic about its outcome), and Andy Haldane’s public comments were no exception. But his former Monetary Policy Committee colleague David Miles has hit back, saying Haldane is wrong and economics is not in crisis. David is right, but (perhaps inevitably) he slightly overstates his case.

First, an obvious point that is beyond dispute. Economics is much more than macroeconomics and finance. Look at an economics department, and you will typically find that fewer than 20% of its members are macroeconomists, and in some departments there can be just a single macroeconomist. Those working on labour economics, experimental economics, behavioural economics, public economics, microeconomic theory and applied microeconomics, econometric theory, industrial economics and so on would not have felt their sub-discipline was remotely challenged by the financial crisis.

David Miles is also right that economists have not found it difficult to explain the basic story of the financial crisis from the tools that they already had at their disposal. Here I will tell again a story about an ESRC seminar held at the Bank of England about whether other subjects like the physical sciences could tell economists anything useful post-crisis. It was by invitation only, Andy Haldane was there throughout, and for some reason I was there and asked to give my impressions at the end. In the background document there was a picture a bit like this.
[Chart: UK bank leverage, the ratio of total assets to shareholder claims. Source: Bank of England Financial Stability Report, June 2012. Added by popular request 17/1/17.] [3]

I made what I hope is a correct observation. Show most economists a version of this chart just before the crisis, and they would have become very concerned. Some might have had their concern reduced by assurances and stories about how new risk management techniques made the huge increase in leverage seen in the years just before the crisis perfectly safe, but I think most would not. In particular, many macroeconomists would have said what about systemic risk?

The problem before the financial crisis was that hardly anyone looked at this data. There is one institution that surely would have looked at data like this, and that was the Bank of England. As Peter Doyle writes:

“ .. it was not “economics” that missed the GFC, but, dare I say it (and amongst some others), the Bank of England.”

If there is a discussion of the increase in bank leverage and the consequent risks to the economy in any Inflation Reports in 2006 and 2007 I missed it. I do not think we have been given a real account of why the Bank missed what was going on: who looked at the data, who discussed it etc. I think we should know, if only for history’s sake.

What I think David Miles could have said but didn’t is that macroeconomists were at fault in taking the financial sector for granted, and therefore typically not including key finance to real interactions in their models. [1] As a result, the crisis has inspired a wave of new research that tries to make up for that, but this involves using existing ideas and applying them to macroeconomic models. There has also been new work using new techniques that has tried to look at network effects, which Andy Haldane mentions here. Whether this work could be usefully applied much more widely, as he suggests, is not yet clear, and to say that until that happens there is a crisis in economics is just silly.

The failure to forecast that consumers would reduce their savings ratio after the Brexit vote is a typical kind of forecasting error. Whether they would have done this anyway, and if not what it was about the Brexit vote and its aftermath that inspired it, we will probably never know for sure. This kind of mistake happens all the time in macro forecasting, which is why comparisons to weather forecasting and Michael Fish are not really apt. [2] That is what David Miles means by saying it is a non-event.

What is hardly ever said, so I make no apologies for doing so once more, is that macroeconomic theory has in some ways ‘had a good crisis’. Basic Keynesian macroeconomic theory says you don’t worry about borrowing in a recession because interest rates will not rise, and they have not. New Keynesian theory says creating loads of new money will not lead to runaway inflation, and it has not. Above all else, macroeconomic theory and most evidence said that the turn to austerity in 2010 would delay or weaken the recovery, and that is exactly what happened. As Paul Krugman often says, it is quite rare for macroeconomics to be so fundamentally tested, and it passed that test. We should be talking not about a phoney crisis in economics, but about why policymakers today have ignored economics, and thereby lost their citizens the equivalent of a lot of money.

[1] In the COMPACT model I built in the early 1990s, credit conditions played an important role in consumption decisions, reflecting the work of John Muellbauer. But as I set out here, proposals to continue the model and develop further financial/real linkages were rejected by economists and the ESRC because it was not a DSGE model.

[2] Weather forecasts for the next few days are more accurate than macro forecasts, although perhaps longer term forecasts are more comparable. But more fundamentally, while the weather is a highly complex system like the economy, it is made up of physical processes that are predictable in a way human behaviour will never be. As a result, I doubt that simply having more data will have much impact on the ability to forecast the economy.

[3] Total assets are the size of the bank's balance sheet. Shareholder claims are the part of those assets that belong to shareholders, and which therefore represent a cushion that can absorb losses without the bank facing bankruptcy. So at the peak of the financial crisis, banks had over 60 times as many assets as that cushion. That makes a bank very vulnerable to losses on those assets.
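
To make the arithmetic in this footnote explicit, here is a trivial sketch with made-up round numbers chosen to match the roughly 60-times figure in the chart:

```python
# Illustrative leverage arithmetic (made-up round numbers).
total_assets = 6000.0        # size of the bank's balance sheet
shareholder_claims = 100.0   # the equity cushion belonging to shareholders

leverage = total_assets / shareholder_claims
loss_that_exhausts_cushion = shareholder_claims / total_assets

print(f"Leverage: {leverage:.0f}x")
print(f"Fall in asset values that wipes out the cushion: {loss_that_exhausts_cushion:.1%}")
# At 60x leverage, a fall in asset values of under 2% is enough to make
# the bank insolvent, which is why such leverage is so dangerous.
```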

Tuesday, 11 October 2016

Ricardian Equivalence, benchmark models, and academics response to the financial crisis

Mainly for economists

In his further thoughts on DSGE models (or perhaps his response to those who took up his first thoughts), Olivier Blanchard says the following:
“For conditional forecasting, i.e. to look for example at the effects of changes in policy, more structural models are needed, but they must fit the data closely and do not need to be religious about micro foundations.”

He suggests that there is wide agreement about the above. I certainly agree, but I’m not sure most academic macroeconomists do. I think they might say that policy analysis done by academics should involve microfounded models. Microfounded models are, by definition, religious about microfoundations and do not fit the data closely. Academics are taught in grad school that all other models are flawed because of the Lucas critique, an argument which assumes that your microfounded model is correctly specified.

It is not only academics who think policy has to be done using microfounded models. The core model used by the Bank of England is a microfounded DSGE model. So even in this policy making institution, their core model does not conform to Blanchard’s prescription. (Yes, I know they have lots of other models, but still. The Fed is closer to Blanchard than the Bank.)

Let me be more specific. The core macromodel that many academics would write down involves two key behavioural relationships: a Phillips curve and an IS curve. The IS curve is purely forward looking: consumption depends on expected future consumption. It is derived from an infinitely lived representative consumer, and as a result Ricardian Equivalence holds in this benchmark model. [1]
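
In standard textbook notation (a generic statement of the benchmark, not taken from any particular paper), the two relationships are usually written as below, with x_t the output gap, π_t inflation, i_t the nominal interest rate and r^n_t the natural real rate. Note that government debt and taxes appear nowhere, which is Ricardian Equivalence at work in this setting.

```latex
% Forward-looking IS curve, from the representative consumer's Euler equation
x_t = \mathbb{E}_t x_{t+1} - \frac{1}{\sigma}\left( i_t - \mathbb{E}_t \pi_{t+1} - r^{n}_t \right)

% New Keynesian Phillips curve
\pi_t = \beta \, \mathbb{E}_t \pi_{t+1} + \kappa \, x_t
```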

Ricardian Equivalence means that a bond financed tax cut (which will be followed by tax increases) has no impact on consumption or output. One stylised empirical fact that has been confirmed by study after study is that consumers do spend quite a large proportion of any tax cut. That they should do so is not some deep mystery: many consumers are credit constrained, whereas the model's intertemporal consumer never is. In that particular sense academics’ core model does not fit Blanchard’s prescription that it should “fit the data closely”.

Does this core model influence the way some academics think about policy? I have written about how, before the financial crisis, mainstream macroeconomics neglected the importance of shifting credit conditions for consumption, and speculated that this neglect owed something to the insistence on microfoundations. That links the methodology macroeconomists use, or more accurately their belief that other methodologies are unworthy, to policy failures (or at least inadequacy) associated with that crisis and its aftermath.

I wonder if the benchmark model also contributed to a resistance among many (not a majority, but a significant minority) to using fiscal stimulus when interest rates hit their lower bound. In the benchmark model increases in public spending still raise output, but some economists do worry about wasteful expenditures. For these economists tax cuts, particularly if aimed at those who are non-Ricardian, should be an attractive alternative means of stimulus, but if your benchmark model says they will have no effect, I wonder whether this (consciously or unconsciously) biases you against such measures.

In my view, the benchmark models that academic macroeconomists carry round in their heads should be exactly the kind Blanchard describes: aggregate equations which are consistent with the data, and which may or may not be consistent with current microfoundations. They are the ‘useful models’ that Blanchard talked about in his graduate textbook with Stan Fischer, although then they were confined to chapter 10! These core models should be under constant challenge from partial equilibrium analysis, estimation in all its forms and analysis using microfoundations. But when push comes to shove, policy analysis should be done with models that are the best we have at meeting all those challenges, and not models with consistent microfoundations.


[1] Recognising this point, some might add some ‘rule of thumb’ consumers into the model. This is fine, as long as you do not continue to think the model is microfounded. If these rule of thumb consumers spend all their income because of credit constraints, what happens when these constraints are expected to last for more than the next period? Does the model correctly predict what would happen to consumption if the proportion of rule of thumb consumers changes? It does not.  

Saturday, 24 September 2016

What is so bad about the RBC model?

This post has its genesis in a short Twitter exchange storified by Brad DeLong.

DSGE models, the models that mainstream macroeconomists use to model the business cycle, are built on the foundations of the Real Business Cycle (RBC) model. We (almost) all know that the RBC project failed. So how can anything built on these foundations be acceptable? As Donald Trump might say, what is going on here?

The basic RBC model contains a production function relating output to capital (owned by individuals) and labour plus a stochastic element representing technical progress, an identity relating investment and capital, a national income identity giving output as the sum of consumption and investment, marginal productivity conditions (from profit maximisation by perfectly competitive representative firms) giving the real wage and real interest rate, and the representative consumer’s optimisation problem for consumption, labour supply and capital. (See here, for example.)
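
For readers who want the ingredients just listed written out, a bare-bones version looks roughly like this (standard textbook notation; the AR(1) technology process is the usual assumption rather than anything specific to one paper):

```latex
% Representative household chooses consumption, labour and capital to maximise
\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \, u(C_t, 1 - N_t)
% subject to the technology and resource constraints
Y_t = A_t K_t^{\alpha} N_t^{1-\alpha}          % production function
Y_t = C_t + I_t                                % national income identity
K_{t+1} = (1-\delta) K_t + I_t                 % capital accumulation
% perfect competition implies the marginal productivity conditions
w_t = (1-\alpha) \, Y_t / N_t, \qquad r_t = \alpha \, Y_t / K_t - \delta
% stochastic technical progress
\log A_t = \rho \log A_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim \mathrm{iid}(0, \sigma^2_{\varepsilon})
```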

What is the really big problem with this model? Not problems along the lines of ‘I would want to add this’, but problems more like ‘I would not even start from here’. Let’s ignore capital, because in the bare-bones New Keynesian model capital does not appear. If you were to say the primacy given to shocks to technical progress, I would agree that is a big problem: all the behavioural equations should contain stochastic elements which can also shock this economy, but New Keynesian models do this to varying degrees. If you were to say the assumption of labour market clearing, I would also agree that is a big problem.

However none of the above is the biggest problem in my view. The biggest problem is the assumption of continuous goods market clearing aka fully flexible prices. That is the assumption that tells you monetary policy has no impact on real variables. Now an RBC modeller might say in response how do you know that? Surely it makes sense to see whether a model that does assume price flexibility could generate something like business cycles?

The answer to that question is no, it does not. It does not because we know it cannot for a simple reason: unemployment in recessions is involuntary, and this model cannot generate involuntary unemployment, but only voluntary variations in labour supply as a result of short term movements in the real wage. Once you accept that higher unemployment in recessions is involuntary (and the evidence for that is very strong), the RBC project was never going to work.

So how did RBC models ever get off the ground? Because the New Classical revolution said everything we knew before that revolution should be discounted because it did not use the right methodology. And also because the right methodology - the microfoundations methodology - allowed the researcher to select what evidence (micro or macro) was admissible. That, in turn, is why the microfoundations methodology has to be central to any critique of modern macro. Why RBC modellers chose to dismiss the evidence on involuntary unemployment I will leave as an exercise for the reader.

The New Keynesian (NK) model, although it may have just added one equation to the RBC model, did something which corrected its central failure: the failure to acknowledge the pre-revolution wisdom about what causes business cycles and what you had to do to combat them. In that sense its break from its RBC heritage was profound. Is New Keynesian analysis still hampered by its RBC parentage? The answer is complex (see here), but can be summarised as no and yes. But once again, I would argue that what holds back modern macro much more is its reliance on its particular methodology.

One final point. Many people outside mainstream macro feel happy to describe DSGE modelling as a degenerative research strategy. I think that is a very difficult claim to substantiate, and is hardly going to convince mainstream macroeconomists. The claim I want to make is much weaker, and that is that there is no good reason why microfoundations modelling should be the only research strategy employed by academic economists. I challenge anyone to argue against my claim.




Friday, 16 September 2016

Economics, DSGE and Reality: a personal story

As I do not win prizes very often, I thought I would use the occasion of this one to write something much more personal than I normally allow myself. But this mini autobiography has a theme involving something quite topical: the relationship between academic macroeconomics and reality, and in particular the debate over DSGE modelling and the lack of economics in current policymaking. [1]

I first learnt economics at Cambridge, a department which at that time was hopelessly split between different factions or ‘schools of thought’. I thought if this is what being an academic is all about I want nothing to do with it, and instead of doing a PhD went to work at the UK Treasury. The one useful thing about economics that Cambridge taught me (with some help from tutorials with Mervyn King) was that mainstream economics contained too much wisdom to be dismissed as fundamentally flawed, but also (with the help of John Eatwell) that economics of all kinds could easily be bent by ideology.

My idea that by working at the Treasury I could avoid clashes between different schools of thought was of course naive. Although the institution I joined had a well developed and empirically orientated Keynesian framework [2], it immediately came under attack from monetarists, and once again we had different schools using different models and talking past each other. I needed more knowledge to understand competing claims, and the Treasury kindly paid for me to do a masters at Birkbeck, with the only condition being that I subsequently return to the Treasury for at least two years. Birkbeck at the time was also a very diverse department (including John Muellbauer, Richard Portes, Ron Smith, Ben Fine and Laurence Harris), but unlike Cambridge it was a faculty where dedication to teaching trumped factional warfare.

I returned to a Treasury which, while I was away, had seen the election of Margaret Thatcher and had its (correct) advice about the impact of monetarism completely rejected. I was, largely by accident, immediately thrust into controversy: first by being given the job of preparing a published paper evaluating the empirical evidence for monetarism, and then by internally evaluating the economic effects of the 1981 budget. (I talk about each here and here.) I left for a job at NIESR exactly two years after I returned from Birkbeck. It was partly that experience that informed this post about giving advice: when your advice is simply ignored, there is no point giving it.

NIESR was like a halfway house between academia and the Treasury: research, but with forecasting rather than teaching. I became very involved in building structural econometric models and doing empirical research to back them up. I built the first version of what is now called NIGEM (a world model widely used by policy-making and financial institutions), and with Stephen Hall incorporated rational expectations and other New Classical elements into NIESR's domestic model.

At its best, NIESR was an interface between academic macro and policy. It worked very well just before 1990, when with colleagues I showed that entering the ERM at an overvalued exchange rate would lead to a UK recession. A well respected Financial Times journalist responded that we had won the intellectual argument, but that he was still going with his heart that we should enter at 2.95 DM/£. The Conservative government did likewise, and the recession of 1992 inevitably followed.

This was the first public occasion where academic research that I had organised could have made a big difference to UK policy and people’s lives, but like previous occasions it did not do so because others were using simplistic and perhaps politically motivated reasoning. It was also the first occasion on which I saw, at close quarters, academics who had not done similar research use their influence to support simplistic reasoning. It is difficult to overstate the impact that had on me: being centrally involved in a policy debate, losing that debate for partly political reasons, and subsequently seeing your analysis vindicated but at the cost of people becoming unemployed.

My time at NIESR convinced me that I would find teaching more fulfilling than forecasting, so I moved to academia. The publications I had produced at NIESR were sufficient to allow me to become a professor. I went to Strathclyde University at Glasgow partly because they agreed to give temporary funding to two colleagues at NIESR to come with me so we could bid to build a new UK model. [3] At the time the UK’s social science research funding body, the ESRC, allocated a significant proportion of its funds to support econometric macromodels, subject to competitions every 4 years. It also funded a Bureau at Warwick university that analysed and compared the main UK models. This Bureau at its best allowed a strong link between academia and policy debate.

Our bid was successful, and in the model called COMPACT I would argue we built the first large-scale UK structural econometric model that was New Keynesian but which also incorporated innovative features like an influence of (exogenous) financial conditions on intertemporal consumption decisions. [4] We deliberately avoided forecasting, but I was very pleased to work with the IPPR in providing model based economic analysis in regular articles in their new journal, many written with Rebecca Driver.

Our efforts impressed the academics on the ESRC board that allocated funds, and we won another 4 years of funding, and both projects were subsequently rated outstanding by academic assessors. But the writing was on the wall for this kind of modelling in the UK, because it did not fit the ‘it has to be DSGE’ edict from the US. A third round of funding, which wanted to add more influences from the financial sector into the model using ideas based on work by Stiglitz and Greenwald, was rejected because our approach was ‘old fashioned’, i.e. not DSGE. (The irony given events some 20 years later is immense, and helped inform this paper.)

As my modelling work had always been heavily theory based, I had no problem moving with the tide, and now at Exeter University with Campbell Leith we began a very successful stream of work looking at monetary and fiscal policy interactions using DSGE models. [5] We obtained a series of ESRC grants for this work, again all subsequently rated as outstanding. Having to ensure everything was microfounded created, I think, more heat than light, but I learnt a great deal from this work, which would prove invaluable over the last decade.

The work on exchange rates got revitalised with Gordon Brown’s 5 tests for Euro entry, and although the exchange rate with the Euro was around 1.6 at the time, the work I submitted to the Treasury implied an equilibrium rate closer to 1.4. When the work was eventually published it had fallen to around 1.4, and stayed there for some years. Yet as I note here, that work again used an ’old fashioned’ (non DSGE) framework, so it was of no interest to journals, and I never had time to translate it (something Obstfeld and Rogoff subsequently did, but ignoring all that had gone before). I also advised the Bank of England on building its ‘crossover’ DSGE/econometric model (described here).

Although my main work in the 2000s was on monetary and fiscal policy, the DSGE framework meant I had no need to follow evolving macro data, in contrast to the earlier modelling work. With Campbell and Tatiana I did use that work to help argue for an independent fiscal council in the UK, a cause I first argued for in 1996. This time Conservative policymakers were listening, and our paper helped make the case for the OBR.

My work on monetary and fiscal interaction also became highly relevant after the financial crisis when interest rates hit their lower bound. In what I hope by now is a familiar story, governments from around the world first went with what macroeconomic theory and evidence would prescribe, and then in 2010 dramatically went the opposite way. The latter event was undoubtedly the underlying motivation for me starting to write this blog (coupled with the difficulty I had getting anything I wrote published in the Financial Times or Guardian).

When I was asked to write an academic article on the fiscal policy record of the Labour government, I discovered not just that the Coalition government’s constant refrain was simply wrong, but also that the Labour opposition seemed uninterested in what I found. Given what I found only validated what was obvious from key data series, I began to ask why no one in the media appeared to have done this, or was interested (beyond making fun) in what I had found. Once I started looking at what and how the media reported, I realised this was just one of many areas where basic economic analysis was just being ignored, which led to my inventing the term mediamacro.

You can see from all this why I have a love/hate relationship with microfoundations and DSGE. It does produce insights, and it also ended the school of thought mentality within mainstream macro, but more traditional forms of macromodelling also had virtues that were lost with DSGE. Which is why those who believe microfounded modelling is a dead end are wrong: it is an essential part of macro, but it should not be all of academic macro. What I think this criticism can do is two things: revitalise non-microfounded analysis, and also stop editors taking what I have called ‘microfoundations purists’ too seriously.

As for macroeconomic advice and policy, you can see that austerity is not the first time good advice has been ignored at considerable cost. And for the few that sometimes tell me I should ‘stick with the economics’, you can see why given my experience I find that rather difficult to do. It is a bit like asking a chef to ignore how bad the service is in his restaurant, and just stick with the cooking. [6]

[1] This exercise in introspection is also prompted by having just returned from a conference in Cambridge, where I first studied economics. I must also admit that the Wikipedia page on me is terrible, and I have never felt it kosher to edit it myself, so this is a more informative alternative.

[2] Old, not new Keynesian, and still attached to incomes policies. And with a phobia about floating rates that could easily become ‘the end is nigh’ stuff (hence 1976 IMF).

[3] I hope neither regret their brave decision: Julia Darby is now a professor at Strathclyde and John Ireland is a deputy director in the Scottish Government.

[4] Consumption was of the Blanchard Yaari type, which allowed feedback from wealth to consumption. It was not all microfounded, and therefore not fully internally consistent, but it did attempt to track individual data series.

[5] The work continued when Campbell went to Glasgow, but I also began working with Tatiana Kirsanova at Exeter. I kept COMPACT going enough to be able to contribute to this article looking at flu pandemics, but even there one referee argued that the analysis did not use a ‘proper’ (i.e DSGE) model.

[6] At which point I show my true macro credentials in choosing analogies based on restaurants.  

Sunday, 4 September 2016

More on Stock-Flow Consistent models

This is a follow-up to this post, prompted by this Bank of England paper, which builds a stock-flow consistent model for the UK. If you are not familiar with the term ‘stock-flow consistent’ (SFC) then read on, because in a sense this post is all about why I think the way the authors and others define this class of models is misleading.

SFC models are popular with Post-Keynesians, and the definition you find on Wikipedia is “a family of macroeconomic models based on a rigorous accounting framework, which guarantees a correct and comprehensive integration of all the flows and the stocks of an economy.” Now I suspect any mainstream macroeconomist would immediately respond that any DSGE model is also stock-flow consistent in this sense. This point is made in a post by Noah Smith, and it is completely valid, although otherwise I think his account of the weaknesses of SFC models is wide of the mark.
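
To see what the accounting discipline amounts to, and why a DSGE model's budget constraints deliver it too, here is a toy two-sector sketch (hypothetical numbers, nothing to do with the Bank's model): the government's deficit is the household sector's surplus, and each stock is just the cumulated flow.

```python
# Toy stock-flow consistency check for a two-sector economy (households and
# government). Hypothetical numbers; the point is only that the accounting
# identities hold period by period.
household_wealth = 100.0   # households' stock of government bonds
government_debt = 100.0    # the same stock, seen from the government's side

for period in range(1, 4):
    deficit = 5.0                # government spends 5 more than it taxes
    household_saving = deficit   # by accounting, the private surplus mirrors it

    government_debt += deficit        # stocks are cumulated flows
    household_wealth += household_saving

    # Sector financial balances sum to zero...
    assert abs(household_saving - deficit) < 1e-12
    # ...and every liability is someone else's asset.
    assert abs(household_wealth - government_debt) < 1e-12
    print(f"Period {period}: debt = {government_debt}, household wealth = {household_wealth}")
```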

If you think this is a trivial debate about titles, consider this description of the pros and cons of SFC compared to DSGE models, taken from the paper:

[Table from the paper: the ‘pros’ and ‘cons’ of SFC models relative to DSGE models, referred to by number below.]

Take the cons (merits of DSGE compared to SFC) first. Number one is almost definitional: DSGE models have to be microfounded, but SFC models start with aggregate relationships. But that is not a defining feature of SFC models, because there is a long tradition of macro modelling that is not microfounded but starts with aggregates, a tradition that begins well before DSGEs with the simultaneous creation of national accounts data, econometrics and Keynesian economics. This tradition goes by many names: ‘Structural Econometric Models’ (SEMs), ‘Cowles Commission’ (favoured by Ray Fair) or most recently ‘policy models’ (see Blanchard). I’ll just call them aggregate models here.
A key question, therefore, is what marks SFC models out from other aggregate models? The authors obviously think there is something, because of their second ‘con’. The third and fourth ‘cons’ are common to many large SEMs. (I once wrote a paper on how to mitigate the first of these problems.) The fifth ‘con’ just follows from the first.

At first sight the sixth ‘con’ does the same, but I would argue that if there is anything that characterises SFCs among aggregate models it is this. Aggregate models would generally involve an extensive discussion of the theoretical origins of the relationships they used, but if this paper is anything to go by that is less true for SFCs. If you think this last point is unfair, look at the discussion of the consumption function (before equation 4).

This failure to acknowledge the existence of other aggregate models is even more apparent among the ‘pros’. The first and second can be true for any model, including a DSGE model, but the third is critical. It is true, but again it is also true for many aggregate and some DSGE models. As I argue in my previous post, the key point about the archetypal DSGE model is that it does not need to track household wealth, because there is no attempt by consumers (given the theory) to achieve some target value of wealth.

The fourth is true for any model, including DSGE models. The fifth is true for any aggregate model as long as expectations variables are explicitly identified. The sixth is also almost bound to be true of any aggregate model, because starting with aggregates and being eclectic (and potentially internally inconsistent) with theory allows you to match the data more closely than DSGEs do.

To summarise, if you were to ask how this model compares to other aggregate (non-microfounded) models, the answer would probably be that it takes theory less seriously and it has a rather elaborate financial side.

The New Classical counter revolution had many good and bad consequences, but one of the undesirable ones was, it seems, to define the equivalent of a year zero in macroeconomics, where nothing created before (or even after) this revolution that was not in the New Classical tradition is deemed to exist. The same should not be true for heterodox economists. If you are going to effectively return to a pre-DSGE tradition, please do not pretend that tradition did not exist.

There is a well known UK professor of econometrics who was very fond of admonishing authors who failed to cite work that they were either extending or just copying. The intention here is not just to do the same. One of the big dangers with any kind of elaborate aggregate model is that you can get bizarre model properties from not thinking enough about the theory, or imposing enough because of the theory. Knowing some of the authors I doubt that has happened in this case. But it would be a mistake for others to believe that the properties of their model show the importance of accounting rather than the theory they have used.