Winner of the New Statesman SPERI Prize in Political Economy 2016


Saturday, 30 August 2014

The ECB and the Bundesbank

There can be no doubt that some of the responsibility for the current Eurozone recession has to be laid at the feet of the ECB. Some of that might in turn be due to the way the ECB was set up. Specifically:

1) That the ECB sets its own definition of price stability

2) This definition is asymmetric (below, but close to, 2%)

3) No dual mandate, or even acknowledgement of the importance of the output gap

4) Minimal accountability, because of a concern about political interference

It is generally thought that the ECB was created in the Bundesbank’s image. Tony Yates goes even further back in this post. Yet the irony is that the ECB abandoned the defining feature of Bundesbank policy, which could be providing significant help in current circumstances.

The defining feature of Bundesbank policy was a money supply target. Whereas the UK and US experience with money supply targeting was disastrous and short-lived, the Bundesbank maintained its policy of targeting money for many years. There is little doubt that this was partly because the Bundesbank was in practice quite flexible, and the money target was often missed. Nevertheless the Bundesbank felt that maintaining that money target played an important role in conditioning expectations, and there is some evidence that it was correct in believing this.

When the ECB was created, it adopted a ‘twin pillar’ approach. The first pillar was the inflation target, and the second pillar involved looking at money. It was generally thought that the second pillar was partly a gesture to Bundesbank practice, and subsequently most analysis has focused on the inflation target.

There are very good reasons for abandoning money supply targets: they frequently send the wrong signals, and are generally unreliable in theory and practice. However a monetary aggregate should be related to nominal GDP (NGDP), and you do not need to be a market monetarist to believe there are much better reasons for following an NGDP target. What an NGDP target does for sure is make you care about real GDP, which would go a long way to correcting points (2) and (3) above. What it can also do, if you target a path for the level of NGDP, is provide a partial antidote for a liquidity trap, as I discuss here. More generally, it can utilise most effectively the power of expectations, which is why arguably the preeminent monetary economist of our time has endorsed such targets.
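The difference a level target makes can be seen with a toy piece of arithmetic (my illustration, with assumed numbers): under a growth-rate target a lost year is forgotten, but under a level target the shortfall must be made up, and it is the anticipation of that catch-up growth which works on expectations.

```python
# Toy arithmetic contrasting an NGDP growth-rate target with a target
# for the *path* of the NGDP level. All numbers are illustrative.

def catch_up_growth(target_growth, actual_growth):
    """Growth needed next year to get NGDP back on its target level path
    after one year of `actual_growth` instead of `target_growth`."""
    target_level = (1 + target_growth) ** 2   # where the path says we should be
    current_level = 1 + actual_growth         # where one bad year leaves us
    return target_level / current_level - 1

# Growth-rate target: after a year of zero growth, just aim for 4% again.
growth_rate_requirement = 0.04
# Level-path target: the lost 4% has to be made up as well.
level_path_requirement = catch_up_growth(0.04, 0.0)
```

The promise of above-normal growth in the catch-up phase is precisely what can lower expected real interest rates in a liquidity trap.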

So do not blame the Bundesbank for the flawed architecture of the ECB. The ECB abandoned the critical aspect of Bundesbank policy, which was to target an aggregate closely related to nominal GDP. ECB policy has suffered as a result.

Friday, 29 August 2014

Eurozone delusions

I have already had a number of interesting comments on my previous post which illustrate how confused the Eurozone macroeconomic debate has become. The confusion arises because talk of fiscal policy reminds people of Greece, the bailout and all that. That is not what we are talking about here. We are talking about what happens when the Eurozone’s monetary policy stops working.

If Eurozone monetary policy was working, the Eurozone would be experiencing additional (monetary) stimulus everywhere, and average inflation would be 2%. Because Germany had an inflation rate below that of France and Italy from 2000 to 2007, it now has to have an inflation rate above theirs: something like 3% in Germany and 1% in countries like France and Italy for a number of years. If ECB monetary policy was working, Germany would get no choice in this, because it is part of what Germany signed up to when joining the Euro.

Monetary policy is not working because of the liquidity trap, so we instead have average Eurozone inflation at about 0.5%, with Germany at 1% and France/Italy at nearer zero. That implies a huge waste of Eurozone resources. That waste can be avoided, in a standard textbook manner, by at least suspending the Stability and Growth Pact (SGP), and preferably by a coordinated fiscal stimulus.
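The arithmetic here is just a weighted average of national inflation rates. A minimal sketch, assuming for simplicity that Germany and the rest of the Eurozone carry equal weight (actual GDP weights differ, so this is purely illustrative):

```python
# Back-of-envelope Eurozone average inflation as a weighted average of
# member inflation rates. The equal weight on Germany versus the rest
# is an assumption made purely for illustration.

def ez_average(german_inflation, rest_inflation, german_weight=0.5):
    return german_weight * german_inflation + (1 - german_weight) * rest_inflation

# Adjustment working as intended: Germany 3%, France/Italy 1% -> 2% average.
working = ez_average(0.03, 0.01)
# What we observe instead: Germany 1%, rest near zero -> about 0.5% average.
actual = ez_average(0.01, 0.0)
```

The point of the sketch is that hitting the 2% average while relative prices adjust requires German inflation above 2%, not that every country sits at 2%.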

Why is this not happening? There are two explanations: ignorance or greed. Ignorance is a non-scientific belief that fiscal stimulus cannot or should not substitute for monetary policy in a liquidity trap. Greed is that Germany wants to avoid having 3% inflation, and because it controls its own fiscal policy it can.

Those who say that Germany would be ‘helping out’ France and Italy by agreeing to suspend the SGP and enact a stimulus therefore have it completely wrong. If things were working normally, Germany would be getting a (monetary) stimulus, whether it liked it or not. What Germany is doing is taking advantage of the fact that monetary policy is broken, at the rest of the Eurozone’s expense. Germany gains a small advantage (lower inflation), but the Eurozone as a whole suffers a much larger cost.

Often greed fosters ignorance. It is unfortunate but not surprising that many in Germany think this is all about Greece and transfers and structural reform, because that is what they keep being told. How many of its leaders and opinion makers understand what is going on but want to disguise the fact that Germany is taking advantage of other Eurozone members I cannot say. What is far more inexplicable is that the rest of the Eurozone is allowing Germany to get away with it.

Thursday, 28 August 2014

Lessons of the Great Depression for the Eurozone

It is easier to consider the problems of the Eurozone by first thinking about the Eurozone as a whole, and then thinking about distribution between countries. In both cases, the Eurozone is making exactly the same mistakes that were made in the Great Depression of the 1920s/30s.

The Eurozone is currently suffering from a chronic lack of aggregate demand. The OECD estimates an output gap of nearly -3.5% in 2013. Monetary policy is either unable or unwilling to do much about this, so fiscal stimulus is required. This is the first lesson from the Great Depression that is being ignored. Instead of stimulus we have austerity imposed by the Stability and Growth Pact (SGP).

Within the Eurozone, we have a problem created by Germany undercutting pretty well every other economy in the 2000-2007 period. I am not suggesting this was a deliberate policy, but the consequences were not appreciated by any Eurozone government at the time. Some correction has occurred since 2007, but it is incomplete. The second lesson of the Great Depression and the Gold Standard is that achieving correction through deflation and trying to cut wages is both hard and unnecessarily painful.

The solution eventually arrived at in the 1920s/30s was a series of devaluations (leaving the Gold Standard). That is not possible within the Eurozone. However adjustment is much less costly if it is achieved by raising prices in the country that is too competitive, rather than reducing prices in those that are uncompetitive. In practical terms we are not talking about very much here: a period with inflation in Germany at 3%, and at a little above 1% elsewhere, should be sufficient. Instead we now have inflation in Germany of 1% and in the rest of the Eurozone only a little above zero.

Relative unit labour costs (2000=100): source, OECD Economic Outlook May 2014

At this point a sort of moral indignation overcomes economic logic in the debate. Many Germans ask: why should we suffer 3% inflation to help put right irresponsible policy elsewhere? This is illogical, because it sees inflation below the Eurozone average as a virtue rather than a sin. A country within a monetary union obtaining inflation below the average (as Germany did in the early 2000s) is not a sign of virtue but a sign of a problem, just as it is when other union members exceed the average.

A country cannot undercut its competitors forever. Any country experiencing below average Eurozone inflation should expect that this will be followed at some point by above average inflation. If the Eurozone could achieve average 2% inflation over the next few years that would mean 3% inflation in Germany - that is part of the Euro contract. To the extent that German policymakers attempt to renege on this contract by either preventing the ECB using unconventional means to achieve its target, or insisting on maintaining the deflationary SGP, then they become directly responsible for the misery that the Eurozone is currently going through.

I have not mentioned at any point levels of debt or structural reforms. Both are distractions for the current problem of inadequate demand and below target inflation. They are relevant only in that they allow policymakers to distract attention from the basic issues. Two of the major lessons of the Great Depression are to use fiscal stimulus to get out of a liquidity trap, and that it is far too painful to insist that uncompetitive countries should bear all the costs of readjustment. The Eurozone has failed to learn either lesson.


Wednesday, 27 August 2014

Filling the gap: monetary policy or tax cuts or government spending

Suppose there is a shortfall in aggregate demand associated with a rise in involuntary unemployment in a simple closed economy with no capital. Do we try and raise private consumption (C) or government consumption (G)? If the former, why do we prefer to use monetary policy rather than tax cuts?

If consumers have stable preferences over privately and publicly produced goods, then ideally we want to keep the ratio C/G at its optimal level. So if the aggregate demand gap is caused by a sudden fall in C, we will want to do something to raise C. As real interest rates are the price of current versus future consumption, the obvious first best policy is to set nominal interest rates to achieve the real interest rate that gets C to a value that eliminates the consumption shortfall. That is the basic intuition behind the modern preference to use monetary policy as the stabilisation instrument of choice: part of what I have called the consensus assignment.

In classical or real business cycle models this happens by magic. It normally goes by the term ‘price flexibility’, but it is magic because it is rarely explained how a lack of aggregate demand gets translated into lower real interest rates. In the real world, the magicians are central banks. Note that I have not mentioned anything about implementation lags associated with monetary or fiscal policies, which is one of the reasons you will find in the textbooks for the consensus assignment. My reason for preferring monetary policy is more intrinsic than that.

What happens if the aggregate demand shortfall occurs because ‘supply’ increases through technical progress? Once again the first best policy is to lower interest rates to increase consumption, but we would also want to raise public consumption to keep the optimal C/G ratio.

Finally consider a more difficult shock - a ‘cost-push’ shock to the Phillips curve that raises inflation for a given level of output and aggregate demand. We know that we want policy to reduce output (to create a negative demand gap) to partially reduce inflation, assuming that both the output gap and inflation are costly. In this case it is less obvious that monetary policy is first best. However, as Fabian Eser, Campbell Leith and I showed in this paper, it still is. It turns out we can complicate the model in some ways (but not others) and the result that we use just monetary policy to maximise social welfare still holds.

If we return to the case of a demand gap caused by a fall in consumption, suppose we cannot use monetary policy because nominal rates are stuck at zero. As we want to increase private consumption, the obvious alternative to try is a tax cut. If we had access to a lump sum tax (a tax that is independent of income, like the poll tax), and if consumers responded to a tax cut, then this would work pretty well too. There are two problems: Ricardian Equivalence, and the absence of lump sum taxes.

If Ricardian Equivalence held completely, tax cuts would be totally ineffective at stimulating consumption, but the consistent evidence is that Ricardian Equivalence does not hold. That evidence does suggest, however, that at least half and perhaps more of any tax cut would be saved, which means that tax cuts would have to be relatively large in money terms compared to the consumption gap. It also adds a degree of uncertainty to their effectiveness. If there is some financial limit on the size of any stimulus package (as often seems to be the case), this puts tax changes that rely on income effects at a severe disadvantage. Even if financial limits are not present, the relative ineffectiveness of tax cuts in stimulating consumption is a problem for another reason.
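The arithmetic behind "relatively large in money terms" is simple enough to sketch. The numbers below are hypothetical; the only assumption is that spending out of a tax cut scales with the marginal propensity to consume.

```python
# If a fraction of any tax cut is saved, the cut needed to close a given
# consumption gap scales with the inverse of the marginal propensity to
# consume (mpc). Numbers are hypothetical.

def required_tax_cut(consumption_gap, mpc):
    """Tax cut needed when households spend only a fraction `mpc` of it."""
    if mpc <= 0:
        raise ValueError("under full Ricardian Equivalence no tax cut will do")
    return consumption_gap / mpc

gap = 10.0                                      # consumption shortfall (say, in billions)
cut_if_half_saved = required_tax_cut(gap, 0.5)  # half saved: cut must be twice the gap
cut_if_more_saved = required_tax_cut(gap, 0.3)  # more saved: larger still
```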

Lump sum taxes do not exist, so some distortionary tax (a tax that influences incentives) has to be used. This means that a tax cut violates tax smoothing: the idea that the best policy is to keep tax distortions constant. A constant tax rate of 30% is better than a rate of 10% in odd years and 50% in even years. So filling the consumption gap with a cut in the income tax rate (to be followed by increases in that rate) has a cost. The more tax cuts are saved, the bigger the cost. It is highly unlikely that this cost will be sufficient to stop us trying to fill the consumption gap, because unemployment costs are far greater than uneven tax distortions. However there are costs, unlike the first best of changing real interest rates.
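The standard way to see the tax smoothing point is to assume the distortion cost is convex in the tax rate (quadratic is the usual textbook assumption, and the one used here):

```python
# Why a steady tax rate beats an oscillating one with the same average:
# if the distortion cost is convex in the rate (quadratic here, a
# standard textbook assumption), variability is costly in itself.

def total_distortion(rates):
    return sum(t ** 2 for t in rates)

steady = total_distortion([0.30, 0.30])       # 30% in both years
oscillating = total_distortion([0.10, 0.50])  # 10% then 50%, same average rate
```

Both paths raise the same average rate, but the oscillating one generates a larger total distortion, which is the cost of using temporary tax cuts as stimulus.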

In contrast, using public spending to fill any demand gap is much more straightforward, as its impact on demand and employment is more predictable. But it too has a cost: we get the C to G balance wrong (too much G compared to C). Chris House has a recent post on tax cuts versus government spending as alternative means of fiscal stimulus. (Noah Smith wrote a subsequent post and Chris responded.) The proposition he wants to put forward is that government spending should only be used as a stimulus measure if its social benefits outweigh its social costs. I’m not sure that is a very helpful way of thinking about it. Far better, in my view, is to accept that the demand gap must be plugged (because the costs of not doing so are very large), and then work out the way of doing that which leads to the lowest collateral damage. That might well be an increase in G rather than a tax cut. It will almost certainly be so if there is a financial limit on the size of the stimulus.

The same reasoning can and should be applied to unconventional monetary policy, but that has to be another post.



Monday, 25 August 2014

Austerity, France and Memories

Just a day after ECB President Draghi acknowledges the problems caused by European fiscal consolidation, President Hollande of France effectively sacks his economy minister for speaking out against austerity. There was a key difference of course: Draghi was careful to say that “we are operating within a set of fiscal rules – the Stability and Growth Pact – which acts as an anchor for confidence and that would be self-defeating to break.” In contrast French economy minister Montebourg apparently called for a “major change” in economic policy away from austerity, and complained about “the most extreme orthodoxy of the German right”.

Whatever the politics of what just happened in France, the economic logic is with Montebourg rather than Draghi and Hollande. Once you acknowledge that fiscal consolidation is a problem, you also have to agree that the Stability and Growth Pact (SGP) is a problem, because that is what is driving fiscal austerity in the Eurozone. The best that Draghi could do to disguise this fact is talk about an “anchor for confidence”, to which the response has to be: confidence in what? He must know full well that it was his own OMT that ended the 2010-12 crisis, not the enhanced SGP.

Writing for the Washington Post recently, Matt O’Brien asks didn’t you guys learn anything from the 1930s? That the left in particular appears to ignore these lessons seems strange. In the UK part of the folklore of the left is the fate of Ramsay MacDonald. He led the Labour government from 1929, which eventually fell apart in 1931 over the issue of whether unemployment benefits should be cut in an effort to get loans to stay on the Gold Standard. The UK abandoned the Gold Standard immediately afterwards, but Ramsay MacDonald continued as Prime Minister of a national government, and has been tagged a ‘traitor’ by many on the left ever since.

Not that France needs to look to the UK to see the disastrous and futile attempts to use austerity to stabilise the economy in a depression. By at least one account, the villain in the French case was the Banque de France, which in the 1920s used every means at its disposal to argue the case for deflation in order to return to the Gold Standard at its pre-war parity, and it was instrumental in helping to bring down the left wing Cartel government. When it did rejoin the Gold Standard in 1928, the subsequent imports of gold helped exert a powerful deflationary force on the global economy.

So why has the European left in general, and the French left in particular, not learnt the lessons of the 1920s and 1930s? Why do most mainstream left parties in Europe appear to accept the need to follow the SGP straightjacket as unemployment continues to climb? Perhaps part of the answer lies in more recent memories. After many years in the political wilderness, François Mitterrand was elected President in 1981, and his government became the first left-wing government in 23 years. In the UK and US high inflation was being met with tight monetary policy, but he and his government took a different course, using fiscal measures to support demand, and hoping that productivity improvements that followed would tame inflation. Although the demand stimulus did help France avoid the sharp recession suffered by its neighbours, inflation remained high in 1981 (not helped by increases in minimum wages and other measures that raised costs) and rose in 1982, at a time when inflation elsewhere was falling. The sharp deterioration in the trade balance that followed led to pressure on the Franc, and the government’s fiscal measures were reversed. Economic policy changed course.

To a macroeconomist, this story is very different from today, where Eurozone inflation is 0.4% and French inflation 0.5%. However, the political story of the early 1980s associates fiscal stimulus and demand expansion with ‘socialist policies’ that failed, and their abandonment with Mitterrand staying in power until 1995. When the markets again turned on fiscal excess in Greece in 2010, perhaps many on the left thought they would once again have to subjugate their political instincts to market pressure and undertake fiscal consolidation. Unfortunately it was not the 1980s, but events over 50 years earlier, that represented the better historical parallel.


Saturday, 23 August 2014

Draghi at Jackson Hole

To understand the significance of yesterday's speech (useful extract from FT Alphaville here), it is crucial to know the background. The ECB has in the past appeared to be a centre of what Paul De Grauwe calls balanced-budget fundamentalism. I defined this as a belief that we needed fiscal consolidation (austerity) even when we were in a liquidity trap (i.e. interest rates were at or very close to their zero lower bound). Traditionally ECB briefings would not be complete without a ritual call for governments to undertake structural reforms and to continue with fiscal consolidation.

An important point about these calls from the central bank for fiscal consolidation is that they predate the 2010 Eurozone crisis. As I noted in an earlier post, the ECB’s own research found that “the ECB communicates intensively on fiscal policies in both positive as well as normative terms. Other central banks more typically refer to fiscal policy when describing foreign developments relevant to domestic macroeconomic developments, when using fiscal policy as input to forecasts, or when referring to the use of government debt instruments in monetary policy operations.” The other point to note, of course, is that the ECB had in the past always called for fiscal consolidation, whatever the macroeconomic situation.

How can we explain both this obsession with fiscal consolidation, and the ECB’s lack of inhibition in its public statements? I suspect some might argue that the ECB feels especially vulnerable to fiscal dominance - the idea that fiscal profligacy will force the monetary authority to print money to cover deficits. In my earlier post I suggested this was not plausible, because in reality the ECB was less vulnerable in this respect than other central banks. Unfortunately I think the true explanation is rather simpler, and we get an indication from the Draghi speech. There he says:

“Thus, it would be helpful for the overall stance of policy if fiscal policy could play a greater role alongside monetary policy, and I believe there is scope for this, while taking into account our specific initial conditions and legal constraints. These initial conditions include levels of government expenditure and taxation in the euro area that are, in relation to GDP, already among the highest in the world. And we are operating within a set of fiscal rules – the Stability and Growth Pact – which acts as an anchor for confidence and that would be self-defeating to break.”

The big news is the first sentence, which suggests that Draghi does not (at least now) believe in balanced-budget fundamentalism. Instead this speech follows the line taken by Ben Bernanke, who made public his view that fiscal consolidation in the US was not helping the Fed do its job (and who was quite unjustifiably criticised in some quarters for doing so). However note also the second sentence, which clearly implies that the size of the state in Euro area countries is too large. Whether you believe this to be true or not, it is an overtly political statement. I think part of the problem is that Draghi and the ECB as a whole do not see it as such - instead they believe that large states simply generate economic inefficiencies, so calling for less government spending and taxation is similar to calling for other ‘structural reforms’ designed to improve efficiency and growth.

The simple explanation for the ECB’s obsession, until now, with fiscal consolidation is that its members take the neoliberal position as self evident, and that their lack of accountability to the democratic process allows them to believe this is not political.

As a result, it might be possible to argue that the ECB never believed in balanced-budget fundamentalism, but instead kept on calling for fiscal consolidation after the Great Recession through a combination of zero lower bound denial, panic after the debt funding crisis, and a belief that achieving a smaller state remained an important priority. It is hard to believe that members of the ECB, unlike other central banks, were unaware of the substantial literature confirming that fiscal consolidation is contractionary: there does not seem to be any difference in educational or professional backgrounds between members of the ECB and Fed, for example.

Should we celebrate the fact that Draghi is now changing the ECB’s tune, and calling for fiscal expansion? The answer is of course yes, because it may begin to break the hold of balanced-budget fundamentalism on the rest of the policy making elite in the Eurozone. However we also need to recognise its limitations and dangers. As the third sentence of the quote above indicates, Draghi is only talking about flexibility within the Stability and Growth Pact rules, and these rules are the big problem.

The danger comes from the belief that the size of the state should be reduced. Whether this is right or not, it leads Draghi later on in his speech to advocate balanced budget cuts in taxes. He says: “This strategy could have positive effects even in the short-term if taxes are lowered in those areas where the short-term fiscal multiplier is higher, and expenditures cut in unproductive areas where the multiplier is lower.” My worry is that in reality such combinations are hard to find, and that what we might get instead is the more conventional balanced budget multiplier, which will make things worse rather than better.  
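The worry can be made concrete with textbook multipliers (the values below are illustrative, not estimates): in the conventional case the spending multiplier exceeds the tax multiplier, so a balanced-budget switch from spending to tax cuts lowers demand.

```python
# Conventional balanced budget arithmetic: cutting spending by `shift`
# to finance an equal tax cut changes output by
# (tax multiplier - spending multiplier) * shift.
# Multiplier values are purely illustrative.

def balanced_budget_effect(shift, spending_multiplier, tax_multiplier):
    return (tax_multiplier - spending_multiplier) * shift

# Textbook ranking: the spending multiplier above the tax multiplier.
effect = balanced_budget_effect(10.0, spending_multiplier=1.0, tax_multiplier=0.5)
# effect is negative: the package helps in the short term only if the
# ranking of the two multipliers is reversed.
```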

Friday, 22 August 2014

Types of unemployment

For economists

This post completes a discussion of a new paper by Pascal Michaillat and Emmanuel Saez. My earlier post outlined their initial model that just had a goods market with yeoman farmers, but with search costs in finding goods to consume. Here I want to look at their main model where there are firms, and a labour market as well as a goods market.

The labour market has an identical search structure to the goods market. We can move straight to the equivalent diagram to the one I reproduced in my previous post.



The firm needs ‘recruiters’ to hire productive workers (n). As labour market tightness (theta) increases, any vacancy is less likely to result in a hire. In the yeoman farmer model capacity k was exogenous. Here it is endogenous, and linked to n using a simple production function. Labour demand is given by profit maximisation. Employing one extra producer has a cost that depends on the real wage w, but also on the cost of recruiting and hence on labour market tightness theta. The extra worker generates revenue equal to the sales they enable, but only because by raising capacity they make a visit more likely to result in a sale. The difference between a firm’s output (y) and its capacity (k, now given by the production function and n) the paper calls ‘idle time’ for workers. As y<k, workers are always idle some of the time. So, crucially, profit maximisation determines capacity, not output. Output is influenced by capacity, but it is also influenced by aggregate demand.

Now consider an increase in the aggregate demand for goods, caused by - for example - a reduction in the price level. That results in more visits to producers, which will lead to more sales (trades=output). This leads firms to want to increase their capacity, which means increasing employment. (More employment raises capacity and so reduces goods market tightness x, but the net effect of an increase in aggregate demand is higher x, so workers’ idle time falls.) This increases labour market tightness and reduces unemployment.

Here I think the discussion in the paper (bottom of page 28) might be a little confusing. It notes that in fixed price models like Barro and Grossman, in a regime that is goods demand constrained, an increase in demand will raise employment by exactly the amount required to meet demand (providing we stay within that regime). It then says that in their model the mechanism is different, because aggregate demand determines idle time, which in turn affects labour demand and hence unemployment. I would prefer to put it differently. In this model a firm responds to an increase in aggregate demand in two ways: by increasing employment (as in fixed price models) but also by reducing worker idle time. The advantage of adding the second mechanism is that, as aggregate demand varies, it generates pro-cyclical movements in productivity. (There are of course other means of doing this, like employment adjustment costs.)

There are additional interesting comparisons with this earlier fixed price literature. In this model unemployment can be of three ‘types’: classical (w too high), Keynesian (aggregate demand too low), but also frictional. This model can also generate four ‘regimes’, each corresponding to some combination of real wage and price. However, unlike the fixed price models, these regimes are all determined by the same set of equations (there are no discontinuities), and are relative to the efficient level of goods and labour market tightness.

For me, this is one of the neat aspects of the model. We do not need to ask whether demand is greater or less than ‘supply’, but equally we do not presume that output is always independent of ‘supply’. Instead output is always less than capacity, just as unemployment (workers actually looking for work) is always positive. One way to think about this is that actual output is always a combination of ‘supply’ (capacity) and demand (visits), a combination determined by the matching function. This is what matching allows you to do. What this also means is that increases in supply in either the goods market (technical progress) or labour market will increase both output and employment, even if prices remain fixed. In Keynesian models additional supply will only increase output if something boosts aggregate demand, but that is not the case here. However, if the equilibrium was efficient before this supply shock, output will be inefficiently low after it unless something happens to increase aggregate demand (e.g. prices fall).
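A minimal numerical sketch of that idea (the Cobb-Douglas matching function and all parameter values are my assumptions for illustration, not the paper's calibration):

```python
# Toy goods market: trades (output) come from a matching function over
# visits (demand) and capacity (supply). The Cobb-Douglas form and all
# parameters are assumptions for illustration, not the paper's calibration.

def trades(visits, capacity, efficiency=0.7, eta=0.5):
    """Output y = efficiency * visits^eta * capacity^(1-eta), capped at capacity."""
    return min(capacity, efficiency * visits ** eta * capacity ** (1 - eta))

k = 1.0                                   # capacity
y_before = trades(visits=1.0, capacity=k)
y_after = trades(visits=1.2, capacity=k)  # an increase in aggregate demand

# Output stays below capacity (workers are idle some of the time), but
# more visits mean more trades, so idle time falls with demand.
idle_before, idle_after = k - y_before, k - y_after
```

Note also that raising `capacity` with `visits` held fixed increases trades in this toy setup, which mirrors the point that supply increases raise output here even with fixed prices.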

The aggregate demand framework in the model, borrowed from fixed price models, is rather old fashioned, but there is no barrier to replacing it with a more modern dynamic analysis of a New Keynesian type. Indeed, this is exactly what the authors have done in a companion paper.

The paper ends with an empirical analysis of the sources of fluctuations in unemployment. It suggests that unemployment fluctuations are driven mostly by aggregate demand shocks. (This is also well covered in their Vox post.) This ties in with the message of Michaillat’s earlier AER paper, where he argued that in recessions, frictional unemployment is low and most unemployment is caused by depressed labour demand. What this paper adds is a goods market where changes in aggregate demand can be the source of depressed labour demand, and therefore movements in unemployment.    



Thursday, 21 August 2014

UK 2015: 2010 Déjà vu, but without the excuses

Things can go wrong when policymakers do not ask the right questions, or worse still ask the wrong questions. Take my analysis of alternative debt reduction paths for the UK following the 2015 elections. There I assumed that the economic recovery would continue as planned, with gradually rising interest rates, achieving 4% growth in nominal GDP each year. I set out a slow, medium and fast path for getting the debt to GDP ratio down, and George Osborne’s plan. On the latter I wrote: “I cannot see any logic to such rapid deficit and debt reduction, so it seems to be a political ruse to either label more reasonable adjustment paths as somehow spendthrift, or to continue to squeeze the welfare state.”
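The debt arithmetic behind exercises of this kind is standard, and can be sketched as follows (a simplified sketch that ignores stock-flow adjustments; only the 4% NGDP growth figure comes from the scenario above, the other numbers are illustrative):

```python
# Standard debt-ratio arithmetic: with nominal GDP growth g and an
# overall deficit d (both as shares of GDP), the debt-to-GDP ratio
# follows b' = b/(1+g) + d. Simplified: no stock-flow adjustments.

def debt_path(b0, deficit, growth, years):
    path = [b0]
    for _ in range(years):
        path.append(path[-1] / (1 + growth) + deficit)
    return path

# With 4% NGDP growth, a steady 2% deficit pulls the ratio towards
# d*(1+g)/g, i.e. roughly 52% of GDP, from any starting point.
path = debt_path(b0=0.8, deficit=0.02, growth=0.04, years=30)
```

The slow convergence is the point: modest deficits combined with nominal growth bring the ratio down gradually, which is why very rapid consolidation needs some further justification.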

Ah, said some, that is all very well, but you are ignoring what might happen if we have another financial crisis. That will send debt back up again. The implication was that the Osborne plan might make sense if you allowed for this kind of occasional but severe shock. In a subsequent post I showed that this was not the case. However this also illustrates a clear example of asking the wrong question. Rather than setting policy today on the basis of something that might happen in 30+ years time, we should be worrying about much more immediate risks.

The question that should have been asked is what happens if we have a rather more modest negative economic shock in the next five years. The list of possibilities is endless: deflation in the Eurozone, the crisis in Iraq and Syria gets worse, Ukraine blows up, things go wrong in China etc. We can hope that they do not happen, but good macroeconomic policy needs to allow for the fact that they might.

That is the question that was not asked in 2010. The forecast attached to the June 2010 budget didn’t look too bad. GDP growth was between 2% and 3% each year from 2011 to 2015 - not great given the depth of the recession, but nothing too awful. But suppose something unexpected and bad happened, and economic growth faltered. The question that should have been asked is what do we do then. The normal answer would be that monetary policy would come to the rescue, but monetary policy was severely compromised because interest rates were at 0.5%. So 2010 was a gamble - there was no insurance policy if things went wrong. And of course that is exactly what came to pass.

As I have always said, there was an excuse for this mistake. In 2010 there was another risk that appeared to many to be equally serious, and that was that the bond vigilantes would move on from the Eurozone periphery to the UK. This was a misreading of events, but an understandable confusion. By 2011, as interest rates on government debt outside the Eurozone continued to fall, it was clear it was a mistake. Policy should have changed at that point, but it did not - instead we had to wait another year, and then we just got a pause in deficit reduction rather than stimulus.

Today, there is no excuse. There are no bond vigilantes anywhere to be seen. No one, just no one, thinks the UK government will default. This means we are free to choose how quickly we stabilise government debt. However what is very similar to 2010 is monetary conditions. Interest rates may have begun to rise by 2015, but any increase is expected to be slow and modest. So there will again be little scope in the first few years for monetary policy to come to the rescue if things go wrong. A negative demand shock, like another Eurozone recession, will quickly send interest rates to their zero lower bound again, and we will have little defence against this deflationary shock. The tighter fiscal policy is after 2015, the greater the chance that this will happen. In that sense, it is just like 2010.

So the right question to ask potential UK fiscal policymakers in 2015 is how will you avoid 2010 happening again? If their answer is to do exactly as we did in 2010 and keep our fingers crossed, you can draw your own conclusions.

Wednesday, 20 August 2014

The symmetry test

Two members of the Bank of England’s Monetary Policy Committee (MPC), Ian McCafferty and Martin Weale, voted to raise interest rates this month. This was the first time any member has voted for a rate rise since July 2011, when Martin Weale also voted for a rate increase. A key factor for those arguing to raise rates now is lags: “Since monetary policy …. operate[s] only with a lag, it was desirable to anticipate labour market pressures by raising bank rate in advance of them.”

The Bank of England’s latest forecast assumes interest rates rising gradually from 2015. It also shows inflation below target throughout. The implication would seem to be that the MPC members who voted for the rate increase do not believe the forecast. But it could also be that they are more worried about risks that inflation will go above target than risks that it will stay below, much as the ECB always appears to be.

I like to apply a symmetry test in these situations. Imagine the economy is just coming out of a sustained boom. Interest rates, as a result, are high. Growth has slowed down, but the output gap is still positive. Unemployment is rising, but is still low (say 4%) and below estimates of the natural rate. Wage inflation is high as a result, and real wages had been increasing quite rapidly for a number of years. Consumer price inflation is above target, and the forecast for inflation in two years' time is that it will still be above target.

In these circumstances, would you expect some MPC members to argue that now is the time to start reducing interest rates? Would you expect them to ignore the fact that price inflation is above target, wage inflation is high, the output gap is positive and unemployment is below the natural rate, and discount the forecast that inflation will still be above target in two years' time? There is always a chance that they might be right to do so, but can you imagine it happening?

You could? Now can you also imagine large numbers of financial sector economists and financial journalists cheering them on? 

Monday, 18 August 2014

Balanced-budget fundamentalism

Europeans, and particularly the European elite, find popular attitudes to science among many across the Atlantic both amusing and distressing. In Europe we do not have regular attempts to replace evolution with ‘intelligent design’ on school curriculums. Climate change denial is not mainstream politics in Europe as it is in the US (with the possible exception of the UK). Yet Europe, and particularly its governing elite, seems gripped by a belief that is as unscientific and more immediately dangerous. It is a belief that fiscal policy should be tightened in a liquidity trap.

In the UK economic growth is currently strong, but that cannot disguise the fact that this has been the slowest recovery from a recession for centuries. Austerity may not be the main cause of that, but it certainly played its part. Yet the government that undertook this austerity, instead of trying to distract attention from its mistake, is planning to do it all over again. Either this is a serious intention, or a ruse to help win an election, but either way it suggests events have not dulled its faith in this doctrine.

Europe suffered a second recession thanks to a combination of austerity and poor monetary policy. Yet its monetary policymakers, rather than take serious steps to address the fact that Eurozone GDP is stagnant and inflation is barely positive, choose to largely sit on their hands and instead to continue to extol the virtues of austerity. (Dear ECB. You seem very keen on structural reform. Given your performance, maybe you should try some yourself.) In major economies like France and the Netherlands, the absence of growth leads to deficit targets being missed, and the medieval fiscal rules of the Eurozone imply further austerity is required. As Wolfgang Munchau points out (August 15), German newspapers seem more concerned with the French budget deficit than with the prospect of deflation.

There is now almost universal agreement among economists that tightening fiscal policy tends to significantly reduce output and increase unemployment when interest rates are at their lower bound: the debate is by how much. A few argue that monetary policy could still rescue the situation even though interest rates are at their lower bound, but the chance of the ECB following their advice is zero. 

Paul De Grauwe puts it eloquently. 

“European policymakers are doing everything they can to stop recovery taking off, so they should not be surprised if there is in fact no take-off. It is balanced-budget fundamentalism, and it has become religious.”

They still teach Keynesian economics in Europe, so it is not as if the science is not taught. Nor do I find much difference between the views of junior and middle-ranking macroeconomists working for the ECB or Commission compared to, for example, those working for the IMF, apart from a natural recognition of political realities. Instead I think the problem is much the same as that encountered in the US, but just different in degree.

The mistake academics can often make is to believe that what they regard as received wisdom among themselves will be reflected in the policy debate, when these issues have a strong ideological element or where significant sectional financial interests are involved. In reality there is a policy advice community that lies between the expert and the politician, and while some in this community are genuinely interested in evidence, others are more attuned to a particular ideology, or the interests of money, or what ‘plays well’ with sections of the public. Some in this community might even be economists, but economists who - if they ever had macroeconomic expertise - seem happy to leave it behind.

So why does ‘balanced-budget fundamentalism’ appear to be more dominant in Europe than in the US? I do not think you will find the answer in any difference between the macro taught in the two continents. Some might point to the dominance of ordoliberalism in Germany, but this is not so very different to the dominance of neoliberalism within the policy advice community in the US. Perhaps there is something in the greater ability of academics in the US (and one in particular) to bypass the policy advice community through both conventional and more modern forms of media. However I suspect a big factor is just recent experience.

The US never had a debt funding crisis. The ‘bond vigilantes’ never turned up. In the Eurozone they did, and that had a scarring effect on European policymakers that large sections of the policy advice community can play to, and which leaves those who might oppose austerity powerless. That is not meant to excuse the motives of those that foster a belief in balanced budget fundamentalism, but simply to note that it makes it more difficult for science and evidence to get a look in. The difference between fundamentalism that denies the concept of evolution and fundamentalism that denies the principles of macroeconomics is that the latter is doing people immediate harm.  

Sunday, 17 August 2014

Why central banks use models to forecast

One of the things I really like about writing blogs is that it puts my views to the test. After I have written them of course, through comments and other bloggers. But also as I write them.

Take my earlier post on forecasting. When I began writing it I thought the conventional wisdom was that model based forecasts plus judgement did slightly better than intelligent guesswork. That view was based in part on a 1989 survey by Ken Wallis, which was about the time I stopped helping to produce forecasts. If that was true, then the justification for using model based forecasting in policy making institutions was simple: even quite small improvements in accuracy had benefits which easily exceeded the extra costs of using a model to forecast.

However, when ‘putting pen to paper’ I obviously needed to check that this was still the received wisdom. Reading a number of more recent papers suggested to me that it was not. I’m not quite sure if that is because the empirical evidence has changed, or just because studies have had a different focus, but it made me think about whether this was really the reason that policy makers tended to use model based forecasts anyway. And I decided it was probably not.

In a subsequent post I explained why policymakers will always tend to use macroeconomic models, because they need to do policy analysis, and models are much better at this than unconditional forecasting. Policy analysis is just one example of conditional forecasting: if X changes, how will Y change. To see why this helps to explain why they also tend to use these models to do unconditional forecasting (what will Y be), let’s imagine that they did not. Suppose instead they just used intelligent guesswork.

Take output for example. Output tends to go up each year, but this trend-like behaviour is spasmodic: sometimes growth is above trend, sometimes below. However output tends to gradually revert to this trend growth line, which is why we get booms and recessions: if the level of output is above the trend line this year, it is more likely to be above than below next year. Using this information can give you a pretty good forecast for output. Suppose someone at the central bank shows that this forecast is as good as those produced by the bank’s model, and so the bank reassigns its forecasters and uses this intelligent guess instead.
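That trend-reverting guesswork can be written down in a couple of lines. The persistence number here is illustrative, not an estimate:

```python
# A minimal version of 'intelligent guesswork': output reverts gradually
# towards a trend line, so next year's gap from trend is forecast as a
# fraction (the persistence) of this year's gap. Numbers are illustrative.

def guesswork_forecast(level, trend_level, trend_growth, persistence=0.7):
    """Forecast next year's (log) output using trend reversion alone."""
    gap = level - trend_level
    return trend_level + trend_growth + persistence * gap

# Output 2% above trend this year, trend growth of 2% a year:
f = guesswork_forecast(level=1.02, trend_level=1.00, trend_growth=0.02)
print(f"forecast: {f:.3f}")
```

The forecast stays above next year's trend line (1.02) but closer to it than this year's level is to this year's trend, which is exactly the reversion the text describes. Note that nothing in this forecast refers to oil prices or anything else, which is the source of the problem discussed below.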

This intelligent guesswork gives the bank a very limited story about why its forecast is what it is. Suppose now oil prices rise. Someone asks the central bank what impact will higher oil prices have on their forecast? The central bank says none. The questioner is puzzled. Surely, they respond, higher oil prices increase firms’ costs leading to lower output. Indeed, replies the central bank. In fact we have a model that tells us how big that effect might be. But we do not use that model to forecast, so our forecast has not changed. The questioner persists. So what oil price were you assuming when you made your forecast, they ask? We made no assumption about oil prices, comes the reply. We just looked at past output.

You can see the problem. By using an intelligent guess to forecast, the bank appears to be ignoring information, and it seems to be telling inconsistent stories. Central banks that are accountable do not want to be put in this position. From their point of view, it would be much easier if they used their main policy analysis model, plus judgement, to also make unconditional forecasts. They can always let the intelligent guesswork inform their judgement. If these forecasts are not worse than intelligent guesswork, then the cost to them of using the model to produce forecasts - a few extra economists - is trivial.


Saturday, 16 August 2014

Search in the goods market?

For economists

Imagine an economy made up of independent producers, who individually produce some good. Producers each have a fixed ‘capacity’ k of the output they produce. Producers are also consumers, but cannot consume their own good. Instead they search for other goods by visiting other producers. Agents as consumers have a certain demand for goods, which will depend on how much of their own good they sell, as well as some initial endowment of money and the price of goods in terms of money.

Traditionally we ignore the costs for consumers of visiting producers, and we assume that any visit will result in a purchase. As a result, for a given price level, we can have three situations. In the first, aggregate consumption demand is below aggregate capacity (the sum of all k), and producers end up with either unsold goods or idle capacity. In the second, aggregate demand is equal to supply. In the third, aggregate demand is above capacity. In this case we must have rationing of goods.

In this framework output is not always determined by aggregate demand, but only up to some limit. This is not how macroeconomic models typically work - they generally assume output is always equal to aggregate demand. The way New Keynesian models justify this is by assuming that producers can produce above ‘capacity’ (or that they prefer to always have some spare capacity), and that they will be happy to produce above capacity at a given price because they are monopolistic.

A recent paper by Pascal Michaillat and Emmanuel Saez applies the framework of search to the goods market. First, each visit by the consumer is costly (visiting costs) - some of the produced good is ‘lost’ (does not increase utility) as a result. So output (y, the sum of all trades) is greater than consumption (c) because of these visiting costs. Second, a visit may not lead to a trade. Whether it does depends on a matching function, which depends on the ‘tightness’ x of the goods market, defined as the ratio of visits to capacity. Here is a diagram from their paper.



The consumption demand line is downward sloping, because a larger number of visits raises the effective price of the produced good. The output line is upward sloping, because more visits result in more trade, but the matching function is such that it gets steeper with more visits. However if visiting costs are linear in visits, that implies what the paper calls ‘consumption supply’ has this rather odd shape. (Think about the constant capital line in the Ramsey model.) For a given price, the intersection of the consumption demand and supply lines defines equilibrium tightness. Perhaps a simpler way of putting it is that consumers plan the number of visits they need to make given their consumption demand schedule.

Now shift the consumption demand line outwards, by reducing the price. (In a New Keynesian framework, think about the price as the real interest rate.) The line pivots about the xm point, but output always stays below k. As tightness (number of visits) increases, more resources are used up in failed endeavours to make a trade, and consumption starts falling. Output is always ‘demand determined’, and there is no rationing.

It is still possible to think about different ‘regimes’, because the efficient level of tightness is where consumption is at a maximum. If tightness is below that point, we can say that demand is too low (the price level is too high), and vice versa.
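The mechanism can be illustrated with a toy version. The paper's matching function is a CES form; as a purely illustrative stand-in, suppose a share x/(1+x) of capacity is sold at tightness x, with a linear visiting cost of rho per visit:

```python
# Toy version of the goods-market matching mechanism. The concave trading
# share x/(1+x) is an assumed stand-in for the paper's CES matching
# function; rho is an assumed per-visit cost. Numbers are illustrative.

def consumption(x, k=1.0, rho=0.25):
    output = k * x / (1 + x)     # trades rise with visits, but concavely
    visiting_cost = rho * k * x  # linear visiting costs
    return output - visiting_cost

# Grid search for the efficient level of tightness (where consumption
# is at a maximum, as described above):
grid = [i / 1000 for i in range(1, 5000)]
x_star = max(grid, key=consumption)
print(f"efficient tightness ~ {x_star:.2f}, "
      f"consumption there {consumption(x_star):.3f}")
```

Below the efficient tightness, more visits raise consumption (demand is too low); above it, the extra resources used up in failed attempts to trade outweigh the extra trades, so consumption falls even though output keeps rising. Output is demand determined throughout and never reaches capacity k.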

Those familiar with matching models in the labour market will see the connections. Visits are equivalent to vacancies, for example. The key question is whether this transposition to the goods market makes sense, and what it achieves. To quote the authors: “casual observation suggests that a significant share of visits do not generate a trade. At a restaurant, a consumer sometimes need[s] to walk away because no tables are available or the queue is too long.” (What is it with economists and restaurants?!) We could add that this rarely means that consumption is rationed - instead the consumer attempts to make a similar trade at another restaurant. However this does have an opportunity cost, which this model captures.

In a subsequent post, I will look at their full model which has separate goods and labour markets, and the various types of unemployment that this can generate. Those that cannot wait can read their own account on Vox.

  

Friday, 15 August 2014

Conditional and Unconditional Forecasting

Sometimes I wonder how others manage to write short posts. In my earlier post about forecasting, I used an analogy with medicine to make the point that an inability to predict the future does not invalidate a science. This was not the focus of the post, so it was a single sentence, but some comments suggest I should have said more. So here is an extended version.

The level of output depends on a huge number of things: demand in the rest of the world, fiscal policy, oil prices etc. It also depends on interest rates. We can distinguish between a conditional and an unconditional forecast. An unconditional forecast says what output will be at some date. A conditional forecast says what will happen to output if interest rates, and only interest rates, change. An unconditional forecast is clearly much more difficult, because you need to get a whole host of things right. A conditional forecast is easier to get right.

Paul Krugman is rightly fond of saying that Keynesian economists got a number of things right following the recession: additional debt did not lead to higher interest rates, Quantitative Easing did not lead to hyperinflation, and austerity did reduce output. These are all conditional forecasts. If X changes, how will Y change? An unconditional forecast says what Y will be, which depends on forecasts of all the X variables that can influence Y.

We can immediately see why the failure of unconditional forecasts tells us very little about how good a model is at conditional forecasting. A macroeconomic model may be reasonably good at saying how a change in interest rates will influence output, but it can still be pretty poor at predicting what output growth will be next year because it is bad at predicting oil prices, technological progress or whatever.
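This can be made concrete with a small simulation using made-up numbers. The 'model' here has a good (though slightly imperfect) estimate of how interest rates move output, but output also depends on shocks it cannot predict:

```python
# Sketch of conditional vs unconditional forecasting. All parameters are
# hypothetical: beta_true is the true effect of a 1-point rate change on
# output, beta_model the model's close-but-imperfect estimate.

import random

random.seed(0)
beta_true = 0.5
beta_model = 0.45

cond_errors, uncond_errors = [], []
for _ in range(1000):
    rate_change = random.uniform(-1, 1)
    shock = random.gauss(0, 1)  # oil prices, technology etc. - unforecastable
    outcome = -beta_true * rate_change + shock

    forecast = -beta_model * rate_change
    # Conditional question: how does output change if rates change?
    cond_errors.append(abs(forecast - (-beta_true * rate_change)))
    # Unconditional question: what will output actually be?
    uncond_errors.append(abs(forecast - outcome))

print(f"mean conditional error:   {sum(cond_errors) / 1000:.3f}")
print(f"mean unconditional error: {sum(uncond_errors) / 1000:.3f}")
```

The conditional forecasts are close to exact while the unconditional errors are dominated by the unforecastable shock, even though both come from the same model: failure at the second task says nothing about competence at the first.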

This is why I use the analogy with medicine. Medicine can tell us that if we eat our 5 (or 7) a day our health will tend to be better, just as macroeconomists now believe explicit inflation targets (or something similar) help stabilise the economy. Medicine can in many cases tell us what we can do to recover more quickly from illness, just as macroeconomics can tell us we need to cut interest rates in a recession. Medicine is not a precise enough science to tell each of us how our health will change year to year, yet no one says that because it cannot make these unconditional predictions it is not a science.

This tells us why central banks will use macroeconomic models even if they did not forecast, because they want to know what impact their policy changes will have, and models give them a reasonable idea about this. This is just one reason why Lars Syll, in a post inevitably disagreeing with me, is talking nonsense when he says: “These forecasting models and the organization and persons around them do cost society billions of pounds, euros and dollars every year.” If central banks would have models anyway, then the cost of using them to forecast is probably no more than half a dozen economists at most, maybe less. Even if you double that to allow for the part time involvement of others, and also allow for the fact that economists in central banks are much better paid than most academics, you cannot get to billions!   

This also helps tell us why policymakers like to use macroeconomic models to do unconditional forecasting, even if they are no better than intelligent guesswork, but I’ll elaborate on that in a later post.


Thursday, 14 August 2014

The risks to the UK recovery are fiscal not monetary

So this is how it is going to go. As the UK recovery proceeds, and rapid employment growth continues, at some point firms will begin to find it difficult to fill jobs. There are few signs (pdf, section 3) of that yet, but it is likely to happen sometime in 2015 or 2016. At that point, real wages will start to rise. Labour scarcity, and the recovery in investment that has already begun, will mean that at some point in the next year or two UK productivity growth will also recover to more normal levels.

What happens to interest rates will depend crucially on the relative timing of these two changes. If productivity increases when real wage growth resumes, wise heads on the MPC will note that cost pressures remain weak. If there are no other inflationary pressures, the case for raising interest rates also remains weak. However if real wages start rising before productivity growth picks up, such that unit labour costs rise, then the MPC will raise rates.
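The timing argument rests on simple unit labour cost arithmetic: costs only rise when wages outrun productivity. A one-line sketch, with illustrative numbers:

```python
# Unit labour cost growth is (approximately) wage growth minus
# productivity growth. The growth rates below are illustrative.

def ulc_growth(wage_growth, productivity_growth):
    return wage_growth - productivity_growth

# Real wages recover alongside productivity: no cost pressure.
print(f"{ulc_growth(0.02, 0.02):.3f}")
# Wages recover before productivity: unit costs rise, and rates follow.
print(f"{ulc_growth(0.02, 0.005):.3f}")
```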

Which will happen is I think anyone’s guess, given the uncertainties associated with the UK productivity puzzle. It may come down to measurement errors in the data. However I also suspect it will not matter a great deal either way. This is because I take the MPC seriously when they say rate increases, when they come, will be small and gradual.

We can speculate about the impact of one or two quarter point increases in interest rates, but I think this would be ignoring the elephant in the room. That is fiscal policy, where it's 2010 all over again. We have two austerity programmes: for simplicity call them Labour and Conservative. One is tough, the other is - well let’s just say very tough. Here is a picture.

Alternative Austerity Paths for cyclically adjusted net borrowing (excluding Royal Mail and APF transfers): source OBR and my estimates for Labour

We see the sharp fiscal contraction in 2010 and 2011. Thereafter it eases off. (If we look at the primary balance, which excludes interest payments, the easing off is even more noticeable - see here). Under the current government’s plans, fiscal tightening resumes again in earnest after the election. My guesses for what would happen under Labour are based on their (somewhat vague) statements so far.

In the past I have been a bit dismissive of these government plans, saying they represent a political gambit by Osborne to make Labour look relatively profligate. However that may have been politically naive. After all if the Conservatives win the 2015 election (or are part of a new governing coalition) this will have been achieved having followed a strategy of frontloading austerity. So why change a winning strategy? They might therefore keep to these plans, cut spending and welfare sharply in the first two or three years (more hits on the poor and disabled), and then again ease off, perhaps with tax cuts in the second half of the five year term.

Maybe the UK economy will be luckier than it was after 2010. Perhaps the recovery will be strong enough to shrug off this fiscal contraction, as the US economy has been able to. (Although many will correctly claim that the US recovery has been slower than it might have been as a result.) But the key similarity with 2010 is that UK interest rates will be at or close to their lower bound, so there is no insurance policy if things do go wrong. Just as in 2010, the government will be taking a huge gamble by embarking on a sharp fiscal contraction. The one difference from 2010 is that this time there is no pretext to take such a risk.  

Wednesday, 13 August 2014

Inequality and the common pool problem

One observation from looking at the comments on my post on maximum wages was how many people just considered the impact of this idea on those with high wages, rather than seeing this as involving a redistribution of income.

The classic common pool problem in economics is about how the impact of just one fisherman extracting more fish on the amount of fish in the lake is small, but if there are lots of fishermen doing the same we have a problem. Those thinking about fiscal policy use it to describe the temptation a politician has to give tax breaks to specific groups. Those groups are very grateful, but these tax breaks are paid for (either immediately or eventually) by everyone else paying more tax. However the impact of any specific tax break on the tax of other people is generally so small that it is ignored by these people. As a result, a politician can win votes by giving lots of individual tax breaks, as long as each one is considered in isolation.
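The fiscal version of this logic can be put in numbers (all hypothetical):

```python
# Common pool arithmetic with made-up numbers: each tax break is worth a
# lot to its small recipient group but costs each taxpayer a trivial
# amount - until you add the breaks up.

taxpayers = 30_000_000            # hypothetical number of taxpayers
break_cost = 2_000_000_000        # hypothetical £2bn cost per tax break
n_breaks = 50

per_person_per_break = break_cost / taxpayers
per_person_total = n_breaks * per_person_per_break

print(f"one break costs each taxpayer about £{per_person_per_break:.0f}")
print(f"fifty such breaks cost each taxpayer about £{per_person_total:.0f}")
```

Each individual break is small enough to be ignored by those paying for it, which is precisely why the politician can win votes giving them out one at a time.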

Discussion of the minimum wage often focuses on whether the measure is good for the low paid worker (e.g. will they lose their job as a result?). If distributional issues are considered, it generally involves the employer and employee (see for example the case of the Agricultural Wages Board discussed here). Sometimes discussion might stretch to firms doing something that could impact on other workers, like raising prices. However, if changes in the minimum wage have no impact on the overall level of GDP, higher real wages for low paid workers must imply lower real incomes for someone else.

The same logic can be applied to high executive pay, but it is often ignored. Here is part of one comment on my original post that was left at the FT: “the rise in incomes at the very top ... may be a worry in the dining halls of Oxford but in many decades not one person has mentioned such a worry to me. What worries people here, especially those at the bottom of the income distribution, is the decline in real wages …” But if higher executive pay has not led to higher aggregate GDP that pay has to come from somewhere.

Perhaps there is a tendency to think about this in a common pool type way. The impact of high wages for any particular CEO on my own wage is negligible. But that is not true for the pay of the top 1% as a whole. Pessoa and Van Reenen look at the gap between median real wages and productivity growth over the last 40 years in the US and UK. They have a simple chart (page 5) for the UK which is reproduced below. (The legend goes in the opposite direction to the blocks.) I’ll explain this first and then how the US differs.



In the UK over this period median real wages grew by 42% less than productivity. None of that was due to a fall in labour’s share compared to profits - called net decoupling in Figure 1. Most of it was due to higher non-wage benefits - mainly pension contributions in the UK - and rising inequality. There are two obvious differences in explaining the larger (63%) gap between median real wages and productivity in the US: the non-wage benefits were mainly health insurance, and in the US there is some decline in the labour share. However in both countries rising inequality explains a large part of the failure of median real wages to track productivity gains.

Unfortunately the paper does not tell us how much of this increase in inequality is down to the increasing share of the 1%, but a good proportion is likely to be. For example, Bell and Van Reenen find that, in the 2000s in the UK, increases in inequality were primarily driven by pay increases (including bonus payments) for the top few percent. “By the end of the decade to 2008, the top tenth of earners received £20bn more purely due to the increase in their share ... and £12bn of this went to workers in the financial sector (almost all of which was bonus payments).” If that £20bn had been equally redistributed to every UK household, they would have each received a cheque for around £750.

More generally, we can do some simple maths. In the US the share of the 1% has increased from about 8% at the end of the 70s to nearly 20% today. If that has had no impact on aggregate GDP but is just a pure redistribution, this means that the average incomes of the 99% are around 13% lower as a result. The equivalent 1% numbers for the UK are 6% and 13% (although as the graph shows, that 13% looks like a temporary downward blip from something above 15%), implying a 7.5% decline in the average income of the remaining 99%.
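That arithmetic is easy to check. Note that the answer depends on the base: measured against the counterfactual (pre-shift) incomes of the 99%, which is the convention used here, the US figure comes out at roughly 13%:

```python
# Pure-redistribution arithmetic: with GDP fixed, if the top 1%'s share
# rises, the 99%'s income falls by (change in share) / (original 99% share).

def pct_fall_in_99(share_before, share_after):
    before, after = 1 - share_before, 1 - share_after
    return (before - after) / before

us = pct_fall_in_99(0.08, 0.20)  # 1% share: ~8% -> ~20%
uk = pct_fall_in_99(0.06, 0.13)  # 1% share: ~6% -> ~13%
print(f"US: 99% incomes {us:.1%} lower; UK: {uk:.1%} lower")
```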

So there is a clear connection between the rise in incomes at the very top and lower real wages for everyone else. Arguments that try and suggest that any particular CEO’s pay increase does no one any harm may be appealing to a common pool type of logic, and are just as fallacious as arguments that some tax break does not leave anyone else worse off. It is an indication of the scale of the rise in incomes of the 1% over the last few decades that this has had a significant effect on the incomes of the remaining 99%.   

Tuesday, 12 August 2014

Policy Based Evidence Making

I had not heard this corruption of ‘evidence based policy making’ until I read this post by John Springford discussing the Gerard Lyons (economic advisor to London Mayor Boris Johnson) report on the costs and benefits of the UK leaving the EU. The idea is very simple. Policy makers know a policy is right, not because of any evidence, but because they just know it is right. However they feel that they need to create the impression that their policy is evidence based, if only because those who oppose the policy keep quoting evidence. So they go about concocting some evidence that supports their policy.

So how do people (including journalists) who are not experts tell whether evidence is genuine or manufactured? There is no foolproof way of doing this, but here are some indicators that should make you at least suspicious that you are looking at policy based evidence making.

1) Who commissioned the research? The reasons for suspicion here are obvious, but this - like all the indicators discussed here - is not always decisive on its own. For example the UK government in 2003 commissioned extensive research on its 5 tests for joining the EU, but that evidence showed no sign of bias in favour of the eventual decision. In that particular case none of the following indicators were present.

2) Who did the research? I know I’ll get it in the neck for saying this, but if the analysis is done by academics you can be relatively confident that the analysis is of a reasonable quality and not overtly biased. In contrast, work commissioned from, say, an economic consultancy is less trustworthy. This follows from the incentives either group faces. 

What about work done in house by a ‘think-tank’? Not all think tanks are the same, of course. Some that are sometimes called this are really more like branches of academia: in economics UK examples are the Institute for Fiscal Studies (IFS) or the National Institute (NIESR), and Brookings is the obvious US example. They have longstanding reputations for producing unbiased and objective analysis. There are others that are more political, with clear sympathies to the left or right (or for a stance on a particular issue), but that alone does not preclude quality analysis that can be fairly objective. An indicator that I have found useful in practice is whether the think tank is open about its funding sources (i.e. a variant of (1).) If it is not, what are they trying to hide?

3) Where do key numbers come from? If numbers come from some model or analysis that is not included in the report or is unpublished you should be suspicious. See, for example, modelling the revenue raised by the bedroom tax that I discussed here. Be even more suspicious if numbers seem to have no connection to evidence of any kind, as in the case of some of the benefits assumed for Scottish independence that I discussed here.

4) Is the analysis comprehensive, or does it only consider the policy’s strong points? For example, does the analysis of a cut in taxes on petrol ignore the additional pollution, congestion and carbon costs caused by extra car usage (see this study)? If analysis is partial, are there good reasons for this (apart from getting the answer you want), and how clearly do the conclusions of the study point out the consequential bias?

A variant of this is where analysis is made to appear comprehensive by either assuming something clearly unrealistic, or by simply making up numbers. For example, a study may assume that the revenue lost from cutting a particular tax is made up by raising a lump sum tax, even though lump sum taxes do not exist. Alternatively tax cuts may be financed by unspecified spending cuts - sometimes called a ‘magic asterisk budget’.

5) What is the counterfactual? By which I mean, what is the policy compared to? Is the counterfactual realistic? An example might be an analysis of the macroeconomic impact of austerity. It would be unrealistic to compare austerity with a policy where the path for debt was unsustainable. Equally it would be pointless to look at the costs and benefits of delaying austerity if constraints on monetary policy are ignored. (Delaying austerity until after the liquidity trap is over is useful because its impact on output can be offset by easier monetary policy.)

Any further suggestions on how to spot policy based evidence making?