Winner of the New Statesman SPERI Prize in Political Economy 2016


Wednesday, 29 May 2013

Data, Theory and Central Bank Models

As the Bank of England finally publishes [1] its new core model COMPASS, I have been thinking more about what the central bank’s core model should look like. (This post from Noah Smith, I think, reacts indirectly to the same event.) Here I will not talk in terms of the specification of individual equations, but about the methodology on which the model is based.

In 2003, Adrian Pagan produced a report on modelling and forecasting at the Bank of England. It included the following diagram.

[Pagan’s diagram: a curve trading off the degree of theoretical coherence against the degree of empirical coherence]

One interpretation is that the Bank has a fixed amount of resources available, and so this curve is a production possibility frontier. Although Pagan did not do so, we could also think of policymakers as having conventional preferences over these two goods: some balance between on the one hand knowing a forecast or policy is based on historical evidence and on the other that it makes sense in terms of how we think people behave.

I think there are two groups of macroeconomists who will feel that this diagram is nonsense. The first I might call idealists. They will argue that in any normal science data and theory go together – there is no trade-off. The second group, which I will call purists, will recognise two points on this curve (DSGE and VARs), but deny that there is anything in between. I suspect most macroeconomists under 40 will fall into this group.

The purists cannot deny that it is possible to construct hybrid models that are an eclectic mix of more informal theory and rather more estimation than DSGE models involve, but they will deny that these make any sense as models. They will argue that a model is either theoretically coherent or it is not – we cannot have degrees of theoretical coherence. In terms of theory, there are either DSGE models, or (almost certainly) incorrect models.

At the time Pagan wrote his report, the Bank had a hybrid model of sorts, but it was in the process of constructing BEQM, which was a combination of a DSGE core and a much more data-based periphery. (I had a small role in the construction of both BEQM and its predecessor: I describe the structure of BEQM here.) It has now moved to COMPASS, which is a much more straightforward DSGE construct. However, judgements can be imposed on COMPASS, reflecting a vast range of other information in the Bank’s suite of models, as well as inputs from more informal reasoning.

The existence of a suite of models that can help fashion the judgements imposed on COMPASS may guard against large errors, but the type of model used as the core means of producing forecasts and policy advice remains significant. Unlike the idealists, I recognise that there is a choice between data and theory coherence in social science, and unlike the purists I believe hybrid models are a valid alternative to DSGE models and VARs. So I think there is an optimum point on this frontier, and my guess is that DSGE models are not it. The basic reason I believe this is the need policymakers have to adjust reasonably quickly to new data and ideas, and I have argued this case in a previous post.

Yet I suspect it will take a long time before central banks recognise this, because most macroeconomists are taught that such hybrid models are simply wrong. If you are one of those economists, probably the best way I can persuade you that this position is misguided is to ask you to read this paper from Chris Carroll. [2] It discusses Friedman’s account of the permanent income hypothesis (PIH). For many years graduate students have been taught that, while the PIH was a precursor to the intertemporal model that forms the basis of modern macro, Friedman’s suggestions that the marginal propensity to consume out of transitory income might be around a third, and that permanent income was more dependent on near future expectations than simple discounting would suggest, were unfortunate reflections of the fact that he did not do the optimisation problem formally.

Instead, Carroll suggests that the PIH may be a reasonable approximation to how an optimising consumer might behave as they anticipate the inevitable credit constraints that come with old age. There is also a lot of empirical evidence that consumers do indeed quickly consume something like a third of unexpected temporary income shocks. In other words, Friedman’s mix of theoretical ideas and empirical evidence would have done rather better at forecasting consumption than anything microfounded that has supplanted it. If that can be true for consumption, it could be true for every other macroeconomic relationship, and therefore for a complete macromodel.
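
To make the contrast concrete, here is a stylised statement of the two positions (my notation, not Carroll’s). Friedman’s consumption function makes consumption proportional to permanent income, with measured income the sum of permanent and transitory parts, and his empirical judgement was that the response to transitory income is around a third:

\[
C_t = k\,Y^P_t, \qquad Y_t = Y^P_t + Y^T_t, \qquad \frac{\partial C_t}{\partial Y^T_t} \approx \tfrac{1}{3}
\]

The textbook certainty-equivalent intertemporal model instead spreads a transitory windfall over the whole remaining horizon, implying something like

\[
\frac{\partial C_t}{\partial Y^T_t} \approx \frac{r}{1+r} \approx 0.05
\]

for a long-lived consumer facing a real interest rate of around 5%. The empirical evidence referred to above sits much closer to Friedman’s third than to the microfounded figure.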


[1] A short rant about the attitude of the Bank of England to the outside world. COMPASS has been used for almost two years, yet it has only just been published. I have complained about this before, so let me just say this. To some senior officials at the Bank, this kind of lag makes perfect sense: let’s make sure the model is fit for purpose before exposing it to outside scrutiny. Although this may be optimal in terms of avoiding Bank embarrassment and hassle, it is obviously not optimal in terms of social welfare. The more people who look at the model, the sooner any problems may be uncovered.  I am now rather in favour of delegation in macroeconomics, but delegation must be accompanied by the maximum possible openness and accountability.

[2] This recently published interview covers some related themes, and is also well worth reading (HT Tim Taylor).

Monday, 27 May 2013

Debating Helicopter Money (while the lunatics continue to run the asylum)

Vox has published an excellent summary account of a debate between Adair Turner and Michael Woodford (skilfully moderated by Lucrezia Reichlin) on helicopter money. My line on helicopter money has been that it is formally equivalent to fiscal expansion coupled with an increase in the central bank’s inflation target (or whatever nominal target it uses), and so adds nothing new to current policy discussions.  I think the Woodford/Turner debate confirms that basic point, but as this may not be obvious from the discussion (it was a debate), let me try to make the argument here.

Two Equivalences and Two Red Herrings

Turner calls his proposal ‘Outright Money Financing’ (OMF), so let’s use that term to avoid confusion with other versions of helicopter money. Under OMF, the central bank would decide to permanently print a certain amount of extra money, which the government would spend in a way of its choosing. The alternative that Woodford proposes is that the government spends more money by issuing debt, but the central bank buys that debt by printing more money (Quantitative Easing), gives any interest it receives straight back to the government, and promises to ‘never’ sell the debt. (If it reaches maturity, it uses the proceeds to buy more.) Let’s call this Indirect Money Financing (IMF). If everyone realises what is going on in each case, and policymakers stick to their plans, the two policies have the same impact.  That is the first equivalence, which both Woodford and Turner seem to be happy with.
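
One way to see the first equivalence is to consolidate the government and central bank accounts. In my notation (not used in the Vox piece), with spending G, taxes T, debt held outside the central bank B and base money M, the consolidated budget constraint is

\[
G_t - T_t + i_t B_{t-1} = \Delta B_t + \Delta M_t
\]

Under OMF the extra spending is financed directly by \(\Delta M_t\). Under IMF the government issues debt, but the central bank immediately buys it with new money, rebates the interest and rolls it over forever, so B (debt held by the private sector) never rises and, once the two balance sheets are consolidated, only \(\Delta M_t\) is doing the financing.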

In both cases, more base money has been permanently created. This will have implications for the inflation or NGDP level that the central bank attempts to achieve. If you believe that in the long run there is a stable relationship between the amount of base money in the economy and the price level, then permanently printing more base money must at some point raise inflation for a time. In other words, the central bank cannot independently control both base money and inflation. This leads to a second equivalence: we can either talk about long run levels of base money or average future levels of inflation: one is implied by the other. [1]
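
A minimal way to formalise this, assuming a stable long run velocity of base money V and long run output Y determined by supply:

\[
M_t V = P_t Y_t \quad \Rightarrow \quad \frac{\Delta P}{P} \approx \frac{\Delta M}{M} - \frac{\Delta Y}{Y}
\]

A permanently higher level of M then requires a permanently higher price level, and the only way to get there is a period of above-target inflation. Choosing the long run level of base money and choosing average future inflation are the same choice.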

The point of either OMF or IMF is to combine short run fiscal stimulus with raising the long run level of prices. Sometimes advocates of helicopter money suggest that there is no reason for long run prices to be higher. However, as long as there is a link between how fiscal deficits are financed and the long run price level, keeping average inflation constant requires that the money financing be temporary, so in that sense it is even closer to current Quantitative Easing. Much of the discussion below is relevant to that case too.

Now the red herrings. First, although Woodford argues in terms of nominal GDP targets rather than inflation targets, the distinction is irrelevant to this debate, if both targets are perfectly credible (see more below). Under both schemes the central bank has some form of nominal target, and its money creation has to be consistent with this. Second, this particular debate has nothing to do with the form of fiscal expansion: under Turner’s proposal the central bank decides the aggregate amount of OMF, but the government directs where the helicopter distributes its money, which could be over schools and hospitals, the population as a whole, or just taxpayers.

How can the two proposals differ?

So if the two policies can be formally equivalent, where are the differences? The first point to make is that if the government and central bank were a single entity, there would be no difference at all. Under IMF the government would be selling debt to itself. So any difference has to involve the fact that the central bank is an independent actor.

The promise to raise future inflation to stimulate the economy today suffers from a well known time inconsistency problem. The promise works if it is believed, but when the future comes there is an incentive to go back on the promise. So how likely is it that the central bank will act on that incentive? Using the second equivalence noted above, let’s talk about the central bank going back on its promise to make money creation permanent. One argument for OMF is that reneging is easier under IMF, where the central bank can simply sell some of its government debt. If instead the only way of taking money out of the system is by persuading the government to raise taxes (or cut spending), reneging seems less likely. Thus OMF is a commitment device for the central bank not to renege on future inflation targets, which sophisticated agents may recognise. This argument seems a little tenuous to me, as it assumes that OMF takes place to such an extent that the central bank in effect loses the ability to raise short term interest rates sufficiently to control inflation. I cannot see any central bank willingly undertaking this amount of OMF.

On the fiscal side, agents may base their assessment of future tax liabilities on the published government debt numbers, and fail to account for the additional future revenue the government will receive from the central bank as it passes the interest on its debt back to the government. Here agents are sophisticated enough to base their spending plans on an assessment of future tax liabilities, but naive about how these liabilities are calculated. Possible, I suppose, but then the government just needs to start publishing figures for debt held by the private sector, excluding the central bank.

A naive government may feel constrained by its fiscal targets, and so feel it is unable to undertake bond financed fiscal stimulus, but may be prepared to contemplate money financed fiscal stimulus. That seems quite plausible, until you note that, under the second equivalence above, any switch from bond to money financing that was not reversed should be coupled with a temporary increase in the central bank’s inflation target. A temporary period of OMF that was later undone would not raise average future inflation, but equally it would do nothing to change long run levels of debt. In that case surely everyone would just start counting OMF as (soon to be) debt.

Turner argues that OMF would discipline governments more than IMF, because central banks would determine the amount of OMF. This argument also seems strained. For example in the UK, where the government sets the inflation target and we have Quantitative Easing, the problem is that the government is borrowing too little, not too much. Woodford worries that OMF blurs the lines between who takes fiscal and who takes monetary policy decisions, which is why he prefers IMF. However, if the government decides how OMF is spent, and retains control over aggregate borrowing, it is difficult to see the force of this argument.

So I think OMF and IMF are pretty well equivalent. As I’m in favour of IMF (fiscal expansion and more future inflation), this implies I am in favour of OMF. If my earlier posts have appeared critical, I think that is because some proponents of helicopter money seemed to deny the equivalence to IMF. However, if pretending that helicopter money is monetary rather than fiscal policy could convince some policymakers to change course, maybe the end justifies the means. Unfortunately Eurozone consolidation continues unchecked, the UK government brushes aside advice from the IMF to relax austerity, and the US recovery is dampened as the sequester bites. The intellectual case against austerity is overwhelming (besides the Krugman piece I mentioned in my last post, see also Martin Wolf here), and more and more academics are suggesting we need higher inflation (to take just five: Mankiw, Rogoff, Krugman, Ball, Crafts), yet only Japan has been moved to change course. The lunatics and asylum jibe is of course unfair, but just how long will it take policymakers to start addressing the problems of today, rather than the half-imagined problems of the past?

[1] Some comments on earlier posts have disputed this point. There is not space to discuss this here, as we need to consider not only the detailed mechanics of monetary policy but also the nature of money itself.  Obviously if the long run price level is really independent of how deficits are financed, then much of the discussion here becomes unnecessary.


Thursday, 23 May 2013

The Liquidity Trap and Macro Textbooks


Over the last eleven days something unusual has happened – I have not only failed to post a blog of my own, but I have not even read anyone else’s posts. Instead I have taken advantage of a sabbatical term to take a break [1] in Umbria, during a time of year when it is still cool enough to walk, but not too cold in the Piano Grande. Before I left I did write a couple of things that I thought I might quickly post while away, but with the help of the Italians’ penchant for starting dinner late and eating four (or more) courses that idea somehow got lost.  

So I’m spending part of today catching up, and reminding myself why Paul Krugman and Martin Wolf are such great writers. (For example, from the former, a masterful analysis of the descent into worldwide austerity, and from the latter, a perfect short account of why when it comes to government debt the Eurozone really is different.) What I want to pick up on here is this Krugman post, where he questions the description in a Nick Crafts piece of higher inflation as a way out of the liquidity trap as being ‘textbook’. (See also Ryan Avent.)

So is raising inflation expectations to avoid the liquidity trap textbook or not? Let’s take the 2000 edition of the best-selling undergraduate macro textbook. Here ‘liquidity trap’ does not appear in the index. There is a page on Japan in the 1990s, and in that there is one paragraph on how expanding the money supply, even if it was not able to lower interest rates, could stimulate demand by raising inflation expectations and therefore reducing real interest rates. One paragraph among 500+ pages is not enough to make something ‘textbook’, so it seems as if Paul Krugman has a point.
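
The mechanism in that one paragraph is just the Fisher relation. With i the nominal interest rate, \(\pi^e\) expected inflation and r the real rate:

\[
r = i - \pi^e, \qquad i \ge 0
\]

At the lower bound i = 0, so \(r = -\pi^e\): the only way monetary policy can push the real rate down further is to raise expected inflation.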

Yet how can this be? It is not one of those cases where textbooks struggle to catch up with recent events, because the Great Depression was a clear example of the liquidity trap at work. How can perhaps the major macroeconomic event of the 20th century, which arguably gave rise to the discipline itself, have so little influence on how monetary policy is discussed? Yet it is possible to argue that the discussion is there, in an oblique form. A standard way of analysing the Great Depression within the context of IS-LM, which this popular textbook takes, is to contrast the ‘spending hypothesis’ with the ‘money hypothesis’: was the depression an inevitable result of a negative shock to the IS curve, or, as Friedman argued, could better monetary policy have prevented this shock hitting output?

A standard objection to the money hypothesis is that nominal interest rates did (after a time) fall to their lower bound. The counterargument, which the textbook also suggests, is that, if the money supply had not contracted, long run neutrality would imply that eventually inflation would have had to be higher, and therefore real interest rates on average would have been lower. So in one way the story about how higher inflation could avoid a slump is there.

What is missing is the link with inflation targeting. Because textbooks focus on the fiction of money supply targeting when giving their basic account of how monetary policy works, and then mention inflation targeting as a kind of add-on without relating it to the basic model, they fail to point out how a fixed inflation target cuts off this inflation expectations route to recovery. Quantitative Easing (QE) does not change this, because without higher inflation targets any increase in the money supply will not be allowed to be sustained enough to raise inflation. In this way inflation targeting institutionalises the failure of monetary policy that Friedman complained about in the 1930s. Where most of our textbooks fail is in making this clear.    


[1] Sometimes known as holidays, these are things that we Europeans are forced to take many more of than Americans, leading to great frustration and misery (or maybe not).

Friday, 10 May 2013

Sheedy on NGDP targeting and debt contracts


An argument that is sometimes made for a monetary policy that targets a path for nominal GDP (NGDP) is that it reduces risk for most borrowers who take out debt contracts with repayments fixed in nominal terms (see, for example, Nick Rowe here). However, as far as I am aware, this argument has not been quantified in a way that allows it to be compared with the more familiar benefits of inflation targeting. A recent paper by Kevin Sheedy does just that.

Before getting to the punchline, it is worth setting out the argument more precisely. A good deal of the borrowing that goes on in the economy is to smooth consumption over the life cycle. We borrow when young and incomes are low, and pay back that borrowing in middle age when incomes are high. To do this, we almost certainly have to borrow using a contract that specifies a fixed nominal repayment. The problem with this is that our future nominal incomes are uncertain - partly for individual reasons, but also because we have little idea how the economy as a whole is going to perform in the future. If the real economy grows strongly, and our real incomes grow with it, repaying the debt will be much easier than if the economy grows slowly.

As most individuals are risk averse, this is a problem. In an ‘ideal’ world this could be overcome by issuing what economists call state contingent contracts, which would be a bit like a personal version of equities issued by firms. If economic growth is weak, I have a contract that allows me to reduce the payments on my debt. However most people cannot take out debt contracts of that kind, or insure against the aggregate risk involved in nominal debt contracts. We have what economists call an incomplete market, which imposes costs.

Monetary policy can reduce these costs by trying to stabilise the path of nominal GDP, because it reduces the risks faced by borrowers. Of course monetary policy cannot remove the uncertainty about real GDP growth, but if periods of weak growth are accompanied by periods of moderately higher inflation, then this is not a problem from the borrower’s point of view. (Koenig discusses this in detail in a paper here.)
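
A toy simulation may make the risk sharing point concrete. This is my own illustrative sketch, with made-up numbers, not a calculation from Sheedy’s or Koenig’s papers: a borrower owes a fixed nominal repayment, real growth is uncertain, and we compare the repayment as a share of nominal income under a (perfectly achieved) inflation target and under an NGDP level target.

import random

random.seed(0)
nominal_debt = 100.0      # fixed nominal repayment due next period
ngdp_target = 105.0       # targeted level of nominal GDP next period
inflation_target = 0.02   # 2% inflation target

burdens_it, burdens_ngdp = [], []
for _ in range(10000):
    real_gdp = 100.0 * (1 + random.gauss(0.03, 0.02))  # uncertain real growth

    # Inflation targeting: the price level path is fixed whatever happens
    # to real GDP, so the repayment burden moves with the real economy.
    p_it = 1.0 + inflation_target
    burdens_it.append(nominal_debt / (p_it * real_gdp))

    # NGDP level targeting: the price level moves to offset real GDP
    # surprises, so nominal income (and hence the burden) is pinned down.
    p_ngdp = ngdp_target / real_gdp
    burdens_ngdp.append(nominal_debt / (p_ngdp * real_gdp))

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print("std of debt burden, inflation target:", round(sd(burdens_it), 4))
print("std of debt burden, NGDP target:     ", round(sd(burdens_ngdp), 4))

Under the NGDP target the burden is constant by construction (the fixed repayment divided by targeted nominal GDP); under the inflation target all of the aggregate real GDP risk is loaded onto the borrower.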

How do we quantify this benefit of NGDP targeting, and compare it to the benefits of inflation targets? Sheedy defines a ‘natural’ debt to GDP ratio, which is the private debt to GDP ratio that would prevail if financial markets were complete. Under certain conditions the natural debt to GDP ratio is likely to be constant, and Sheedy suggests that departures from this benchmark are unlikely to be great. So a goal for monetary policy could be to close as far as possible the gap between the actual and natural debt to GDP ratio, in an analogous way to policy trying to close the output gap. To do this it would target the path of NGDP.

The current standard way of modelling the welfare costs of inflation, due to Woodford, is to measure the cost of the distortion in relative prices caused by prices changing at different times to keep up with aggregate inflation. This suggests monetary policy should have an inflation target rather than a NGDP target. (I note here that typically in the literature these costs are far greater than costs associated with output gaps.) What Sheedy does is set up a model which has these costs of inflation present, but also has the costs of nominal debt contracts discussed earlier. With these two different goals, an optimal monetary policy will go for some combination of inflation targeting and NGDP targeting. The key question is which kind of costs are more important. [1]

Sheedy’s answer is that the costs of nominal debt contracts are more important. The optimal monetary policy gives a 95% weight to the NGDP target, and just 5% to the inflation target. Now of course this is just one result from a highly stylised model, and Sheedy shows that it is sensitive to assumptions about the duration of debt contracts and the degree of risk aversion. Nevertheless it is a very interesting result.
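
In stylised terms (this is my shorthand for the result, not the paper’s notation), the optimal policy minimises something like

\[
L_t = \omega\,\hat{n}_t^{\,2} + (1-\omega)\,\pi_t^2, \qquad \omega \approx 0.95
\]

where \(\hat{n}_t\) is the deviation of nominal GDP from its target path and \(\pi_t\) is inflation.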

A very simplistic way of describing why this may be very important is as follows. If the focus of monetary policy is always on the cost of inflation, NGDP targets will appear, to non-economists at least (e.g. politicians), to be second best. They are a nominal anchor, so we will not get runaway inflation or deflation by adopting them, but why not just target inflation directly? Who cares about nominal GDP anyway? This paper suggests a simple answer - borrowers care. If we see monetary policy as being important to the proper functioning of financial markets, as we now do, then reducing the risk faced by borrowers is a legitimate goal for policy. It may make sense for inflation to be high when real growth is low, and vice versa, because this reduces the risks faced by borrowers. I think a politician who was not beholden to creditors could sell that.


[1] For those interested in government debt, there is an interesting parallel in the literature. Some authors (e.g. Chari, V. V. and Kehoe, P. J. (1999), “Optimal fiscal and monetary policy”, in J. B. Taylor and M. Woodford (eds.), Handbook of Macroeconomics, vol. 1C, Elsevier, chap. 26, pp. 1671-1745) developed the idea that nominal government debt contracts could be a useful way of avoiding costly changes in distortionary taxes following fiscal shocks, because inflation could change real debt. However Schmitt-Grohe and Uribe (Schmitt-Grohe, S. and Uribe, M. (2004), “Optimal fiscal and monetary policy under sticky prices”, Journal of Economic Theory, 114(2): 198-230) showed that once you added nominal rigidity to the model, so that inflation was costly, inflation costs dwarfed any gains. This makes the fact that we get the opposite result with private debt contracts particularly interesting, although, as Sheedy and others have noted, this may be partly because the earlier literature assumed short maturity government debt.

Tuesday, 7 May 2013

Is UKIP the UK's Tea Party?


The UK Independence Party (UKIP) received a quarter of the vote in the recent council elections. UKIP has been described as the UK equivalent of the Tea Party movement in the US. Its policies are certainly very much to the right: large tax cuts (possibly one flat rate) financed by huge reductions in public spending, except for defence spending which would increase, a five year freeze on immigration, a return to coal-based power stations and no more wind farms, more prison places, and so on. Unlike the Tea Party, UKIP currently fights against the Conservative Party rather than working within it, but too much should not be made of this difference. UKIP’s leader says that he could cooperate with the Conservatives if they ditched Cameron as their leader, and there is some sympathy for UKIP’s aims within the Conservative Party.

One common reading of UKIP’s rise is that it represents the disaffected right wing of conservatism, generated by Cameron’s attempt to move the Conservative Party to the centre ground. However there is one problem with this interpretation - the Conservative Party has not moved to the centre ground. Apart from the odd token issue like gay marriage, the supposed move to the centre was just spin. Under the cloak of the need to reduce debt, the government has embarked on a programme to shrink the size of the state that goes well beyond anything attempted by Margaret Thatcher. Reforms to the National Health Service and education involve the large scale private provision of services or governance.

In terms of the distribution of income, the Conservative Party seem happy with increasing inequality and poverty. The 50% tax band on top incomes has been reduced to 45%, while welfare spending is cut. The impact on child poverty will emulate what Margaret Thatcher achieved, according to the latest analysis from the Institute for Fiscal Studies (IFS) released today. Updating figures that I reported recently to take account of the latest welfare cuts, the IFS estimate that the percentage of children in poverty (in families with incomes below 60% of the median) will rise from 17.5% in 2010 to 23.5% by 2020. Crucially, according to the IFS the rise in this measure of relative child poverty is entirely the result of the tax and benefit reforms introduced by this government. This is, quite literally, a government created increase in child poverty.

So if we have a government whose major economic and social policies are very much to the right of the political spectrum, how do we account for the rise of UKIP? One possible story is that the Conservatives are a victim of their own spin. In trying to ditch the image of the ‘nasty’ party that came to be associated with the Thatcher era, they have alienated those who rather liked her openly right wing style. The parallels with the US may be quite close. Just as the Tea Party is associated with, and definitely encouraged by, Fox News and talk radio, so the values of UKIP are the values of the UK tabloid press, by which we essentially mean Murdoch’s Sun and the Daily Mail.

Yet there is one key feature of UKIP which has no US parallel, and that is in its name - independence from the European Union (EU). Since Thatcher’s time the Conservative Party has been seriously split in its attitude to the EU. The leadership knows that a decision to exit the EU would almost certainly have serious negative consequences for the economy, which is why most business leaders would be horrified by the prospect of leaving. However, the tabloid press have run a relentless campaign to highlight any negative aspects of EU membership. It has to be said that Eurozone governance in particular gives them plenty to work with, but when that is not enough, stories are simply made up, in true tabloid style. While the Conservative leadership seems in its element going along with the tabloid attack on the welfare state, when it comes to the EU the game becomes appeasement.

How all this turns out may depend on how the economy performs over the next two years, and how effective the Labour Party are at shifting the public debate. If the recovery is strong, and we see more of this kind of political naivety from Labour (yes, Ed did not read my post on the B word), then the tabloids will push Cameron as far as they can on Europe, but then rally round to support the Conservatives at the election. On the other hand if the recovery is weak, and if Labour is more effective (like here) and looks like winning, then the tabloids may continue to encourage UKIP, in the hope of engineering a reconciliation of the right under a new Conservative opposition leader hostile to the EU.

But these are dangerous games. Much of UKIP’s popularity comes from straightforward dissatisfaction with falling living standards and a lack of good jobs. It is expressed as hostility to immigration and welfare recipients, in part because the tabloid press suggest these are the cause of the problems, rather than government economic policy [1]. Even if the recovery is strong and the Labour opposition weak, that may not be enough to allow the tabloids to put this genie back in its bottle. They may find those who recently voted UKIP as difficult to control as the Republican Party has found the Tea Party.

[1] Some good posts on these tabloid myths: NEF here, the Guardian here, Ian Mulheirn here, and much more detail from Alex Marsh here.   

Monday, 6 May 2013

More on Naive Fiscal Cynicism


Paul Krugman is absolutely right that one of the rationales for the IMF and others framing fiscal policy in terms of the ‘speed of consolidation’ is a belief that left to themselves politicians will always and everywhere let debt rise. As he points out, the facts for the US tell a rather different story. Here I want to add a couple of additional points by looking at experience outside the US.

As should be well known by now, UK government debt was over 100% of GDP between the wars, but declined rapidly and consistently to 50% from the Second World War to the mid-1970s. (See here, or this useful UK site.) The chart below, based on OBR data, takes up the story since then.

[Chart: UK debt to GDP ratio (%), based on OBR data]

So perhaps the simplest description is that debt to GDP began to level off at around 40% of GDP (which was the target from 1998 to 2008), until the Great Recession hit. There is a lot of interesting detail here, but that would distract from the main point, which is that the UK is another clear counterexample to the idea that governments always let their debt rise.

However, deficit bias is not a figment of international policymakers’ imagination. As I note here with Lars Calmfors, and others have noted before us, government debt in the OECD area as a whole almost doubled (40% to 75% of GDP) between the mid-70s and the mid-90s. This deficit bias came not from the UK, and not much from the US, but from Europe and Japan. (To see data on individual countries or country groups, see this nice IMF resource. [1])

So the only reasonable conclusion is that deficit bias is a problem, but not one that afflicts every government at all times. As Lars and I document, there have been various studies that have attempted to draw lessons from this diversity of experience: for example, coalition governments may be more prone to deficit bias, while institutional set-ups where the finance ministry is strong are less prone. My own support for fiscal councils partly stems from a desire to find an institutional mechanism to help counter deficit bias. There is no magic bullet here, and institutional solutions will differ depending on national constitutional characteristics.

If this last point seems obviously reasonable, then let’s change the dimension from across countries to across time and states of the world. While we may observe a tendency towards deficit bias in normal times, in times like now when government debt is high we seem to be observing a quite different bias - a bias towards austerity. The apparent consensus of 2008/9 that we needed fiscal stimulus looks like the aberration rather than the rule. An international organisation wanting to push sound economics needs to be doing more than arguing for a slower speed of consolidation. By advocating fiscal consolidation when the opposite is required, you risk discrediting your advice at other times.


Here is another analogy. Doctors quite rightly encourage us to take more exercise. That is because we have a tendency to sit around too much, but this bias is not universal across types of people or ages. Yet when we catch flu, the doctor prescribes rest, not exercise. Indeed, a good doctor may well be specific about the length of rest required, because they know too many patients think they are better before they have fully recovered. If a doctor told us that while we had flu we should cut down from 30 minutes of exercise to 15 or 10, we might begin to doubt their credentials.

[1] To get the debt series, click on the blue subheading ‘Real GDP growth’.


Saturday, 4 May 2013

Blanchard on Fiscal Policy


I was recently rather negative about the way the IMF frames the fiscal policy debate around the right speed of consolidation. In my view this always prioritises long run debt control over fiscal stimulus at the zero lower bound (ZLB), and so starts us off on the wrong foot when thinking about the current conjuncture. It’s the spirit of 2011 rather than the spirit of 2009.

Blanchard and Leigh have a recent Vox post, which allows me to make this point in perhaps a clearer way, and also to link it to a recent piece by David Romer. The Vox post is entitled “fiscal consolidation: at what speed”, but I want to suggest the rest of the article undermines the title. The first three sections are under the subtitle “Less now, more later”. They discuss the (now familiar from the IMF) argument that fiscal multipliers will be significantly larger in current circumstances, the point that output losses are more painful when output is low, and the dangers of hysteresis. I have no quarrel with anything written here, except the subtitle, of which more below.

A more interesting section is the one subtitled “More now, less later”. This section starts by noting that the textbook case for consolidation is that high debt crowds out productive capital and increases tax distortions. Yet these issues are not discussed further. The article does not say why, but the reason is pretty obvious. While both are long term concerns, they are not relevant at the ZLB.

Instead the section focuses on default, and multiple equilibria. After running through the standard De Grauwe argument, the text then says: “This probably exaggerates the role that central banks can play: Knowing whether the market indeed exhibits the good or the bad equilibrium, and what the interest rate associated with the good equilibrium might be is far from easy to assess, and the central bank may be reluctant to take what could be excessive risk onto its balance sheet.” This is more a description of ECB excuses before OMT than an argument.

More interesting is what comes next. Does default risk actually imply more austerity now, less later? I totally agree with the following: “The evidence shows that markets, to assess risk, look at much more than just current debt and deficits. In a word, they care about credibility.” “How best to achieve credibility? A medium-term plan is clearly important. So are fiscal rules, and, where needed, retirement and public health care reforms which reduce the growth rate of spending over time. The question, in our context, is whether frontloading increases credibility.”

So here we come to a critical point. Does more now, less later, actually increase the credibility of consolidation? If it does not, then the only argument for frontloading austerity disappears. The next paragraph discusses econometric evidence from the crisis, and concludes it is ambiguous. The whole rationale for more now, less later, is hanging by a thread. And there is just one paragraph left! Let me reproduce it in full.

“The econometric evidence is rough, however, and may not carry the argument. Adjustment fatigue and the limited ability of current governments to bind the hands of future governments are also relevant. Tough decisions may need to be taken before fatigue sets in. One must realise that, in many cases, the fiscal adjustment will have to continue well beyond the tenure of the current government. Still, these arguments support doing more now.”

Is this paragraph intentionally weak and contradictory? If credible fiscal adjustment requires consolidation by future governments, why does doing more now add to credibility? You could equally well argue that overdoing it now, because of the adverse reaction it creates (‘fatigue’ !?), turns future governments (and the electorate) away from consolidation, and so makes it less credible.

So what we have is an article that appears to be a classic ‘on the one hand, on the other’ type, but is in fact a convincing argument for ‘less now, more later’. Perhaps that is intentional. But even if it is, I’m still unhappy. Although the arguments on multipliers, output gaps and hysteresis appear under the subtitle ‘less now, more later’, they in fact imply ‘stimulus now, consolidation later’, once you take the ZLB seriously. If you are walking along a path, and there is a snake blocking your way, you don’t react by walking towards it more slowly!

Why does this matter? Let me refer to recent comments David Romer made about the ‘Rethinking Macro’ IMF conference, which he suggests avoided the big questions. For example he notes “I heard virtually no discussion of larger changes to the fiscal framework.” He goes on (my italics)

“Another fiscal idea that has received little attention either at the conference or in the broader policy debate is the idea of fiscal rules or constraints. For example, one can imagine some type of constitutional rule or independent agency (or a combination, with a constitutional rule enforced by an independent agency) that requires highly responsible fiscal policy in good times, and provides a mechanism for fiscal stimulus in a downturn that is credibly temporary.”

As I argued here, it is not a matter of having a fiscal rule for consolidation that allows you to just ease up a bit at the ZLB. What we need is a rule that obliges governments to switch from consolidation to stimulus at or near the ZLB. Otherwise, the next time a large crisis hits (and Romer plausibly suggests that could be sooner rather than later), we will have to go through all of this stuff once again.
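
To see what such a rule might involve, here is a deliberately crude sketch (my own stylisation, with made-up numbers, not a proposal from Romer or anyone else): consolidate when debt is above target in normal times, but switch to stimulus whenever the policy rate is at or near the ZLB.

def fiscal_stance(policy_rate, debt_ratio, zlb_threshold=0.5, debt_target=60.0):
    """Recommended change in the structural deficit, in % of GDP."""
    if policy_rate <= zlb_threshold:
        # Monetary policy is constrained, so the rule mandates stimulus
        # regardless of the debt position.
        return +2.0
    if debt_ratio > debt_target:
        # Normal times with debt above target: consolidate gradually.
        return -0.5
    return 0.0

print(fiscal_stance(policy_rate=0.5, debt_ratio=90.0))  # at the ZLB: stimulus
print(fiscal_stance(policy_rate=4.0, debt_ratio=90.0))  # normal times: consolidate

The point of writing the switch into the rule itself is that the stimulus is credibly temporary, rather than an ad hoc easing up on consolidation.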

Microfounded Social Welfare Functions


More on Beauty and Truth for economists


I have just been rereading Ricardo Caballero’s Journal of Economic Perspectives paper entitled “Macroeconomics after the Crisis: Time to Deal with the Pretense-of-Knowledge Syndrome”. I particularly like this quote:

The dynamic stochastic general equilibrium strategy is so attractive, and even plain addictive, because it allows one to generate impulse responses that can be fully described in terms of seemingly scientific statements. The model is an irresistible snake-charmer.


I thought of this when describing here (footnote [5]) Woodford’s derivation of social welfare functions from the representative agent’s utility. Although it has now become a standard part of the DSGE toolkit, I remember when I had to really work through the maths for this paper. I recall how exciting it was, first to be able to say something about policy objectives that was more than ad hoc, and second to see how terms in second order Taylor expansions nicely cancelled out when first order conditions describing optimal individual behaviour were added.

This kind of exercise can tell us some things that are interesting. But can it provide us with a realistic (as opposed to model consistent) social welfare function that should guide many monetary and fiscal policy decisions? Absolutely not. As I noted in that recent post, these derived social welfare functions typically tell you that deviations of inflation from target are much more important than output gaps - ten or twenty times more important. If this was really the case, and given the uncertainties surrounding measurement of the output gap, it would be tempting to make central banks pure (not flexible) inflation targeters - what Mervyn King calls inflation nutters.
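
For concreteness, the derived objective typically takes a quadratic form (this is the standard Woodford result; the numbers are rough orders of magnitude rather than from any particular paper):

\[
W = -\,\Omega\, E_0 \sum_{t=0}^{\infty} \beta^t \left( \pi_t^2 + \lambda x_t^2 \right), \qquad \lambda \approx 0.05
\]

where \(x_t\) is the output gap. A \(\lambda\) of around 0.05 or smaller is what ‘ten or twenty times more important’ means, and the recalculation I suggest below amounts to re-running results with \(\lambda = 1\).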


Where does this result come from? The inflation term in Woodford’s derivation of social welfare comes from relative price distortions when prices are sticky due to Calvo contracts. Let’s assume for the sake of argument that these costs are captured correctly. The output gap term comes from sticky prices leading to fluctuations in consumption and fluctuations in labour supply. Lucas famously argued [1] that the former are small. Again, for the sake of argument, let’s focus on fluctuations in labour supply.

Many DSGE models use sticky prices and not sticky wages, so labour markets clear. They tend, partly as a result, to assume labour supply is elastic. Gaps between the marginal product of labour and the marginal rate of substitution between consumption and leisure become small. Canzoneri and coauthors show here how sticky wages and more inelastic labour supply will increase the cost of output fluctuations: agents are now working more or less as a result of fluctuations in labour demand, and inelasticity means that these fluctuations are more costly in terms of utility. Canzoneri et al argue that labour supply inelasticity is more consistent with micro evidence.
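
In the notation these models use (my summary of the standard definitions, not Canzoneri et al’s), the relevant object is the wedge between the marginal product of labour and the marginal rate of substitution between consumption and leisure:

\[
\text{wedge}_t = \mathit{MPN}_t - \mathit{MRS}_t, \qquad \mathit{MRS}_t = -\,\frac{U_{N,t}}{U_{C,t}}
\]

With clearing labour markets and elastic labour supply this wedge stays small, so fluctuations in hours are cheap in utility terms; with sticky wages and inelastic labour supply the same fluctuations in hours produce large swings in the wedge, and hence much larger welfare costs.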

Just as important, I would suggest, is heterogeneity. The labour supply of many agents is largely unaffected by recessions, while others lose their jobs and become unemployed. Now this will matter in ways that models in principle can quantify. Large losses for a few are more costly than the same aggregate loss equally spread. Yet I believe even this would not come near to describing the unhappiness the unemployed actually feel (see Chris Dillow here). For many there is a psychological/social cost to unemployment that our standard models just do not capture. Other evidence tends to corroborate this happiness data.

So there are two general points here. First, simplifications made to ensure DSGE analysis remains tractable tend to diminish the importance of output gap fluctuations. Second, the simple microfoundations we use are not very good at capturing how people feel about being unemployed. What this implies is that conclusions about inflation/output trade-offs, or the cost of business cycles, derived from microfounded social welfare functions in DSGE models will be highly suspect, and almost certainly biased.

Now I do not want to use this as a stick to beat up DSGE models, because often there is a simple and straightforward solution. Just recalculate any results using an alternative social welfare function where the cost of output gaps is equal to the cost of inflation. For many questions addressed by these models the results will be robust, which is worth knowing. If they are not, that is worth knowing too. So it’s a virtually costless thing to do, with clear benefits.

Yet it is rarely done. I suspect the reason why is that a referee would say ‘but that ad hoc (aka more realistic) social welfare function is inconsistent with the rest of your model. Your complete model becomes internally inconsistent, and therefore no longer properly microfounded.’ This is so wrong. It is modelling what we can microfound, rather than modelling what we can see. Let me quote Caballero again

“[This suggests a discipline that] has become so mesmerized with its own internal logic that it has begun to confuse the precision it has achieved about its own world with the precision that it has about the real one.”

As I have argued before (post here, article here), those using microfoundations should be pragmatic about the need to sometimes depart from those microfoundations when there are clear reasons for doing so. (For an example of this pragmatic approach to social welfare functions in the context of US monetary policy, see this paper by Chen, Kirsanova and Leith.) The microfoundation purist position is a snake charmer, and has to be faced down.

[1] Lucas, R. E. (2003), “Macroeconomic Priorities”, American Economic Review 93(1): 1-14.