Thursday, 31 July 2014

What are academics good for?

A survey of US academic economists, which found that 36 thought the Obama fiscal stimulus reduced unemployment and only one thought otherwise, led to this cri de coeur from Paul Krugman. What is the point of having academic research if it is ignored, he asked? At the same time I was involved in a conversation on Twitter, where the person I was tweeting with asked

“What I have never understood is what is so great about academic economists? Certainly not more objective.”

They also wrote

“Surely, rather a dangerous assumption to think that an academic whose subject is X > a non academic whose subject is X”

In other words, why should we take any more notice of what academic economists say about economics than, well, City economists or economic journalists?

Here is a very good example of why. The statement that the 2013 recovery vindicates 2010 austerity has a superficial plausibility because of the dates (one is before the other) and both involve macroeconomics. However just a little knowledge, or reflection, shows that the statement is nonsense. It is like saying taking regular cold showers is good for curing colds, because everyone who takes them eventually gets better. But the thing is George Osborne says the statement is true, so this is a test of objectivity as well as expertise.

In the Christmas 2013 FT survey of various economists, one question was “Has George Osborne’s “plan A” been vindicated by the recovery?”. Among the academic economists asked, ten said No, and two said Yes. So two gave the wrong answer, but if you knew who they were you would not be surprised. Among City economists surveyed, the split was about 50/50, with at least a dozen giving the wrong answer. Worth remembering that the next time someone says these guys must know what they are talking about because people pay for their advice. (Some do, some do not.)

And journalists? Well, there are some very good ones, particularly those working for newspapers like the Financial Times. Which is why I found the FT leader with the headline “Osborne wins the battle on austerity” so outrageous. If I also tell you the tweets above came from a well known economic journalist, you can see why I found them revealing.

This goes back to the question Paul asked. If we don’t think that academic economists’ opinions about economics are worth any more than other people’s opinions, why do we bother to have academics in the first place? Now of course for some questions an academic economist’s opinions are indeed worth little more than those of anyone else: questions like what economic growth will be in two years’ time, for example. In fact academic research using models tells us that answering questions like that is almost all guesswork. (Some people find that puzzling, but can a doctor tell you the date on which you will have a heart attack? Yet if you do have one, you would want a doctor nearby.) And if you want to know what is wrong with your car, you ask a car mechanic not an economist.

And yes of course academic economists cannot all be trusted, and we do make mistakes. (Not all car mechanics can be trusted, and they also make mistakes. But would anyone tweet what is so great about car mechanics when it comes to cars?) But as Paul Krugman quite rightly keeps reminding us, academic macroeconomists have also got some important things right recently: inflation did not take off following Quantitative Easing, interest rates have stayed low despite bigger deficits, and our models said that Eurozone austerity could cause a second recession.

This post so far has seemed far too self-serving, but I think this devaluing of academic expertise is not just confined to economics. The obvious comparison is the science of climate change, where the media often appears to give as much weight to paid-up apologists for the carbon extraction industry as they do to scientists. When a UK MP who sits on both the House of Commons Health Committee and the Science and Technology Committee has “spent 20 years studying astrology and healthcare and was convinced it could work”, it is maybe time to get seriously worried. What is so great about doctors anyway?


Wednesday, 30 July 2014

Methodological seduction

Mainly for macroeconomists or those interested in economic methodology. I first summarise my discussion in two earlier posts (here and here), and then address why this matters.

If there is such a thing as the standard account of scientific revolutions, it goes like this:

1) Theory A explains body of evidence X

2) Important additional evidence Y comes to light (or just happens)

3) Theory A cannot explain Y, or can only explain it by means which seem contrived or ‘degenerate’. (All swans are white, and the black swans you saw in New Zealand are just white swans after a mud bath.)

4) Theory B can explain X and Y

5) After a struggle, theory B replaces A.

For a more detailed schema due to Lakatos, which talks about a theory’s ‘core’ and ‘protective belt’ and tries to distinguish between theoretical evolution and revolution, see this paper by Zinn which also considers the New Classical counterrevolution.

The Keynesian revolution fits this standard account: ‘A’ is classical theory, Y is the Great Depression, ‘B’ is Keynesian theory. Does the New Classical counterrevolution (NCCR) also fit, with Y being stagflation?

My argument is that it does not. Arnold Kling makes the point clearly. In his stage one, Keynesian/Monetarist theory adapts to stagflation, using the Friedman/Phelps accelerationist Phillips curve. Stage two involves rational expectations, the Lucas supply curve and other New Classical ideas. As Kling says, “there was no empirical event that drove the stage two conversion.” I think from this that Paul Krugman also agrees, although perhaps with an odd quibble.

Now of course the counter revolutionaries do talk about the stagflation failure, and there is no dispute that stagflation left the Keynesian/Monetarist framework vulnerable. The key question, however, is whether points (3) and (4) are correct. On (3) Zinn argues that changes to Keynesian theory to account for stagflation were progressive rather than contrived, and I agree. I also agree with John Cochrane that this adaptation was still empirically inadequate, and that further progress needed rational expectations (see this separate thread), but as I note below the old methodology could (and did) incorporate this particular New Classical innovation.

More critically, (4) did not happen: New Classical models were not able to explain the behaviour of output and inflation in the 1970s and 1980s, or in my view the Great Depression either. Yet the NCCR was successful. So why did (5) happen, without (3) and (4)?

The new theoretical ideas New Classical economists brought to the table were impressive, particularly to those just schooled in graduate micro. Rational expectations is the clearest example. Ironically the innovation that had allowed conventional macro to explain stagflation, the accelerationist Phillips curve, also made it appear unable to adapt to rational expectations. But if that was all, then you need to ask why New Classical ideas could not simply have been assimilated gradually into the mainstream. Many of the counter revolutionaries did not want this (as this note from Judy Klein via Mark Thoma makes clear), because they had an (ideological?) agenda which required the destruction of Keynesian ideas. However, once the basics of New Keynesian theory had been established, it was quite possible to incorporate concepts like rational expectations or Ricardian Equivalence into a traditional structural econometric model (SEM), which is what I spent a lot of time in the 1990s doing.

The real problem with any attempt at synthesis is that a SEM is always going to be vulnerable to the key criticism in Lucas and Sargent, 1979: without a completely consistent microfounded theoretical base, there was the near certainty of inconsistency brought about by inappropriate identification restrictions. How serious this problem was, relative to the alternative of being theoretically consistent but empirically wide of the mark, was seldom asked.   

So why does this matter? For those who are critical of the total dominance of current macro microfoundations methodology, it is important to understand its appeal. I do not think this comes from macroeconomics being dominated by a ‘self-perpetuating clique that cared very little about evidence and regarded the assumption of perfect rationality as sacrosanct’, although I do think that the ideological preoccupations of many New Classical economists have an impact on what is regarded as de rigueur in model building even today. Nor do I think most macroeconomists are ‘seduced by the vision of a perfect, frictionless market system.’ As with economics more generally, the game is to explore imperfections rather than ignore them. The more critical question is whether the starting point of a ‘frictionless’ world constrains realistic model building in practice.

If mainstream academic macroeconomists were seduced by anything, it was a methodology - a way of doing the subject which appeared closer to what at least some of their microeconomic colleagues were doing at the time, and which was very different to the methodology of macroeconomics before the NCCR. The old methodology was eclectic and messy, juggling the competing claims of data and theory. The new methodology was rigorous! 

Noah Smith, who does believe stagflation was important in the NCCR, says at the end of his post: “this raises the question of how the 2008 crisis and Great Recession are going to affect the field”. However, if you think as I do that stagflation was not critical to the success of the NCCR, the question you might ask instead is whether there is anything in the Great Recession that challenges the methodology established by that revolution. The answer that I, and most academics, would give is absolutely not – instead it has provided the motivation for a burgeoning literature on financial frictions. To speak in the language of Lakatos, the paradigm is far from degenerate.  

Is there a chance of the older methodology making a comeback? I suspect the place to look is not in academia but in central banks. John Cochrane says that after the New Classical revolution there was a split, with the old style way of doing things surviving among policymakers. I think this was initially true, but over the last decade or so DSGE models have become standard in many central banks. At the Bank of England, their main model used to be a SEM, was replaced by a hybrid DSGE/SEM, and was replaced in turn by a DSGE model. The Fed operates both a DSGE model and a more old-fashioned SEM. It is in central banks that the limitations of DSGE analysis may be felt most acutely, as I suggested here. But central bank economists are trained by academics. Perhaps those that are seduced are bound to remain smitten.


Tuesday, 29 July 2014

UK Fiscal Policy from 2015 with shocks

One indirect comment I have received on the numbers set out in this post is that they ignore the possibility of major negative shocks hitting the economy. That is not really fair, because a major reason for aiming for such historically low levels of debt to GDP in the long term was to allow for such shocks. However it seems reasonable to ask what sort of shocks these plans might accommodate, so here is an illustration.

A key idea in my paper with Jonathan Portes is that if interest rates are expected to hit the Zero Lower Bound (ZLB), the central bank and fiscal council should cooperate to produce a fiscal stimulus package designed to allow interest rates to rise above that bound. So the key questions become how often such ZLB episodes might occur, and what size of stimulus packages might be required.

The chart below assumes that the next ZLB episode will occur in 2040. Thereafter they occur every 40 years. This is all complete guesswork of course. Each ZLB episode requires a fiscal stimulus package which increases the budget deficit by 10% of GDP in the first year, 10% of GDP in the second, and 5% in the third. For comparison, the Obama stimulus package was worth a little over 5% of GDP. So this is much bigger, but that package was clearly too small, and I’ve also allowed something extra for the automatic stabilisers.
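To make the arithmetic concrete, here is a minimal sketch of this kind of exercise in Python. To be clear, the starting ratio, consolidation rate and interest/growth assumptions are placeholder guesses of my own, not the numbers behind the actual chart; the point is only the mechanics of periodic stimulus shocks hitting a declining debt ratio.

```python
# Illustrative simulation of a debt-to-GDP path with ZLB episodes every
# 40 years from 2040, each adding 10, 10 and 5 per cent of GDP to the
# deficit over three years. All parameters are placeholder assumptions.

def debt_path(b0=0.80, start=2015, end=2200, r=0.04, g=0.04,
              consolidation=0.012, first_zlb=2040, gap=40, floor=0.30):
    """Return {year: debt/GDP ratio}. Outside ZLB episodes policy runs a
    primary surplus that cuts the ratio by `consolidation` per year,
    until it reaches `floor` (here 30% of GDP)."""
    stimulus = {0: 0.10, 1: 0.10, 2: 0.05}  # deficit add-on in ZLB years
    b, path = b0, {}
    for year in range(start, end + 1):
        k = year - first_zlb
        extra = stimulus.get(k % gap, 0.0) if k >= 0 else 0.0
        # Debt dynamics with interest rate r and nominal growth g
        # (set equal here, so only the primary balance moves the ratio).
        b = b * (1 + r) / (1 + g) - consolidation + extra
        b = max(b, floor)
        path[year] = b
    return path

path = debt_path()
print(path[2039])                               # ratio just before the 2040 episode
print(max(path[y] for y in range(2040, 2060)))  # peak after the first episode
```

Playing with the consolidation rate and episode size shows how the qualitative story in the chart survives quite different assumptions: the ratio jumps with each episode and then drifts back towards its floor.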

These shocks are superimposed on the ‘medium’ adjustment path that I gave in the previous post. This involves much less austerity than George Osborne’s plans. Whether it is less draconian than the other political parties’ plans is less clear. For example with Labour, there is a commitment to achieve current balance by 2020. To get the total deficit we need to add public investment. Current plans have public investment at around 1.5% of GDP, but if investment was raised to 2.5% of GDP, this would be consistent with the path shown here.

Medium debt reduction path with shocks

So the 2040 crisis starts with debt to GDP at just under 50%, and sends it back up to levels close to, but below, current levels. In the next crisis the debt to GDP ratio peaks at 50% of GDP. By the turn of the next century we settle down to an average of around 30% of GDP, with the ratio never rising above 45%.

With a chart that ends in 2200, many will feel that this is all rather unreal. So perhaps we can compartmentalise discussion into two questions: is this long run average of 30% about right, and are we prepared for the next crisis? Although the 30% figure seems quite prudent by historical standards, our paper does give some reasons why you might want a lower long run average. However this debate really is for the future - it should have no impact on what happens before 2020.

Are we prepared for the next crisis? For the size and timing of the crisis I have chosen, my answer would be a clear yes. In this recession UK debt to GDP has risen to higher levels, and there has been no market panic. Political leaders became obsessed with debt for two reasons: misunderstanding the Eurozone crisis (where OMT has clearly demonstrated the nature of the misunderstanding), and because austerity suited other agendas. I am a sufficient optimist to think that another 25 years is long enough to allow most people to figure that out.


Monday, 28 July 2014

If minimum wages, why not maximum wages?

I was in a gathering of academics the other day, and we were discussing minimum wages. The debate moved on to increasing inequality, and the difficulty of doing anything about it. I said why not have a maximum wage? To say that the idea was greeted with incredulity would be an understatement. “So you want to bring back price controls” was one response. “How could you possibly decide on what a maximum wage should be” was another.

So why the asymmetry? Why is the idea of setting a maximum wage considered outlandish among economists?

The problem is clear enough. All the evidence, in the US and UK, points to the income of the top 1% rising much faster than the average. Although the share of income going to the top 1% in the UK fell sharply in 2010, the more up-to-date evidence from the US suggests this may be a temporary blip caused by the recession. The latest report from the High Pay Centre in the UK says:

“Typical annual pay for a FTSE 100 CEO has risen from around £100-£200,000 in the early 1980s to just over £1 million at the turn of the 21st century to £4.3 million in 2012. This represented a leap from around 20 times the pay of the average UK worker in the 1980s to 60 times in 1998, to 160 times in 2012 (the most recent year for which full figures are available).”

I find the attempts of some economists and journalists to divert attention away from this problem very revealing. The most common tactic is to talk about some other measure of inequality, whereas what is really extraordinary, and what worries many people, is the rise in incomes at the very top. The suggestion that we should not worry about national inequality because global inequality has fallen is even more bizarre.

What lies behind this huge increase in inequality at the top? The problem with the argument that it just represents higher productivity of CEOs and the like is that this increase in inequality is much more noticeable in the UK and US than in other countries, yet there is no evidence that CEOs in UK and US based firms have been substantially outperforming their overseas rivals. I discussed in this post a paper by Piketty, Saez and Stantcheva which set out a bargaining model, where the CEO can put more or less effort into exploiting their monopoly power within a company. According to this model, CEOs in the UK and US have since 1980 been putting in more bargaining effort than their overseas counterparts. Why? According to Piketty et al, one answer may be that top tax rates fell in the 1980s in both countries, making the returns to effort much greater.

If you believe this particular story, then one solution is to put top tax rates back up again. Even if you do not buy this story, the suspicion must be that this increase in inequality represents some form of market failure. Even David Cameron agrees. The solution the UK government has tried is to give more power to the shareholders of the firm. The High Pay Centre notes that “Thus far, shareholders have not used their new powers to vote down executive pay proposals at a single FTSE 100 company”, although as the FT reports, shareholder ‘revolts’ are becoming more common. My colleague Brian Bell and John Van Reenen do note in a recent study “that firms with a large institutional investor base provide a symmetric pay-performance schedule while those with weak institutional ownership protect pay on the downside.” However they also note that “a specific group of workers that account for the majority of the gains at the top over the last decade [are] financial sector workers .. [and] .. the financial crisis and Great Recession have left bankers largely unaffected.”

So increasing shareholder power may only have a small effect on the problem. Why not, then, consider a maximum wage? One possibility is to cap top pay as some multiple of the lowest paid, as a recent Swiss referendum proposed. That referendum was quite draconian, suggesting a multiple of 12, yet it received a significant measure of popular support (35% in favour, 65% against). The Swiss did vote to ban ‘golden hellos and goodbyes’. One neat idea is to link the maximum wage to the minimum wage, which would give CEOs an incentive to argue for higher minimum wages! Note that these proposals would have no disincentive effect on the self-employed entrepreneur.
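The arithmetic of such a link is trivial, which is part of its appeal. A hypothetical sketch in Python (`pay_cap`, the wage figures and the hours are all made up for illustration, not taken from any actual proposal):

```python
# Hypothetical pay cap defined as a multiple of full-time minimum wage
# earnings, in the spirit of the Swiss proposal's multiple of 12.
# The wage figures and hours below are illustrative, not actual UK rates.

def pay_cap(min_hourly_wage, multiple=12, hours_per_year=2000):
    """Annual maximum pay implied by linking the cap to the minimum wage."""
    return min_hourly_wage * hours_per_year * multiple

# Linking the cap to the floor gives those at the top a direct stake
# in the level of the minimum wage:
print(pay_cap(6.50))   # cap with a £6.50 minimum
print(pay_cap(8.00))   # raising the minimum raises the cap proportionally
```

The incentive effect is mechanical: any rise in the minimum raises the cap in proportion.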

If economists have examined these various possibilities, I have missed it. One possible reason why many economists seem to baulk at this idea is that it reminds them too much of the ‘bad old days’ of incomes policies and attempts by governments to fix ‘fair wages’. But this is an overreaction, as a maximum wage would just be the counterpart to the minimum wage. I would be interested in any other thoughts about why the idea of a maximum wage seems not to be part of economists’ Overton window.

Sunday, 27 July 2014

Understanding fiscal stimulus can be easy

There seems to be a bit of confusion about fiscal stimulus. I think most people understand what is going on in undergraduate textbook models, but some seem less sure of what might be different in more modern New Keynesian models. This seems to revolve around three issues:

1) In Traditional Keynesian (TK) models any fiscal giveaway seems to work, whereas in New Keynesian (NK) analysis the type of fiscal policy seems to matter much more.

2) Are the dynamics of how policy works different in TK and NK models?

3) In TK models fiscal and monetary policy seem interchangeable, but NK models imply fiscal policy is a second best tool. Why is that?

In this post I will just cover the first two issues.

The best way to answer these questions is to ask how NK models differ from TK models, and where this matters. To keep things simple, let’s just think about a closed economy. I’ll also assume real interest rates are fixed, which switches off monetary policy. This is not quite the same as fiscal policy in a liquidity trap, because expected inflation may change, but that is a complication I want to avoid for now.

First, a difference that does not matter much for (1) and (2). The most basic NK model assumes the labour market clears, while the TK model does not. I tried to explain why that was not critical here.

The difference that really matters is consumption. In TK models consumption just depends on current post tax income, while in the most basic NK model consumption depends on expectations of discounted future income, and expectations are rational. This makes NK models dynamic, whereas in the textbook TK model we do not need to worry about what happens next.

This immediately gives us the best known difference between NK and TK: Ricardian Equivalence. A tax cut today to be financed by tax increases in the future leaves discounted labour income unchanged, and so consumption remains unchanged. However this is only a statement about tax changes. Changes in government spending have much the same impact as they do in TK models.

In particular, if we have a demand gap of X that lasts for Y years, we can fill it by raising government spending by X for Y years, and pay for it by reducing government spending in later years. A practical example of what I call a pure government spending stimulus would be bringing forward public investment. As taxes do not change, then for given real interest rates consumption need not change.
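A toy two-period example makes this logic concrete. It is a sketch of the textbook consumption-smoothing argument under the assumptions above (fixed real interest rate, rational expectations), not a full New Keynesian model, and all the numbers are made up.

```python
# Toy two-period illustration of Ricardian Equivalence versus a pure
# government spending stimulus. A sketch of the textbook logic only.

r = 0.05  # fixed real interest rate, as assumed in the text

def consumption_today(income, taxes):
    """NK-style consumer with rational expectations: consumption today
    is a fixed share of the present value of lifetime after-tax income
    (log utility with discount factor 1/(1+r) smooths spending)."""
    pv = (income[0] - taxes[0]) + (income[1] - taxes[1]) / (1 + r)
    return pv / (1 + 1 / (1 + r))

# Baseline: income of 100 and taxes of 20 in each period.
base = consumption_today((100, 100), (20, 20))

# Debt-financed tax cut of 10 today, repaid with interest tomorrow:
# the present value of taxes is unchanged, so consumption is unchanged.
tax_cut = consumption_today((100, 100), (10, 20 + 10 * (1 + r)))

# Pure spending stimulus: government demand (and hence pre-tax income)
# is brought forward with unchanged present value, and taxes never move,
# so consumption is again unchanged -- the extra demand today is G, not C.
spend_shift = consumption_today((110, 100 - 10 * (1 + r)), (20, 20))

print(base, tax_cut, spend_shift)  # all three are equal
```

The tax cut line is Ricardian Equivalence; the last line is why a pure government spending stimulus fills the demand gap one-for-one, with consumption left alone.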

Nick Rowe sets up a slightly different problem, where there is a wedge shaped gap to fill. In that case government spending can initially rise, but then gradually fall back, filling the wedge. Same logic. Nick says that a policy that would work equally well in theory is to initially leave government spending unchanged, but then let it gradually fall, so that it ends up permanently lower. This is not nearly as paradoxical as Nick suggests. By lowering government spending in the long run, taxes will be lower in the long run. Consumers respond by raising consumption now and forever, so it is consumption that fills the gap. It works in theory, but may not in practice because consumers cannot be certain government spending will be lower forever. It is also an odd experiment that combines demand stabilisation with permanently changing the size of the state. So much simpler to do the obvious thing, and raise government spending to fill the demand gap. As fiscal stimulus in a liquidity trap does not require fine tuning, implementation lags are unlikely to be critical.  

So if we restrict ourselves to fiscal changes that just involve changing the timing of government spending, fiscal demand management in NK models works in much the same way as in TK models, which is simple and intuitive. It really is just a matter of filling the gap.


Saturday, 26 July 2014

Why strong UK employment growth could be really bad news

Some of the better reporting and interviews with George Osborne yesterday did try and put the strongish 2014Q2 output growth in context. Yet the much stronger growth in UK employment continues to be greeted by many as unqualified good news - even by some who should know better. So, rather than trying to be satirical, let me attempt to be as clear as I can. Those who already understand the problem can skip the next three paragraphs.

By identity, strong employment growth relative to output growth means a reduction in labour productivity. In the short term when unemployment is above its ‘natural’ (non-inflationary) level, falling labour productivity is good news. It means that a given level of output is being produced by more people, so there are fewer people unemployed. This is good news because our evidence is that the costs of being unemployed are very high. Of course if more workers are producing the same amount of stuff, their real wages will fall, but that just means that the cost of a recession is being evenly spread rather than being concentrated among the unemployed.
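The identity in the first sentence is simple enough to check with made-up round numbers (a sketch, not actual UK data):

```python
# The identity: output = labour productivity * employment, so
# (approximately) productivity growth = output growth - employment growth.
# The numbers below are round illustrative figures, not actual UK data.

output_growth = 0.01      # output up 1%
employment_growth = 0.02  # employment up 2%

# Exact version of the identity:
productivity_growth = (1 + output_growth) / (1 + employment_growth) - 1
print(f"labour productivity change: {productivity_growth:.1%}")
```

Whenever employment grows faster than output, productivity must be falling; there is no further economics in that step.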

Now let’s move on until unemployment has fallen to its natural rate. It is what happens next that is crucial. If labour productivity starts increasing rapidly, such that we make up all or nearly all of the ground lost over the last five years, that will be fantastic. Rapid productivity growth will bring rapid growth in real wages, meaning that much of the unprecedented fall in real wages we have seen in recent years is reversed. After a decade or so, UK living standards will end up somewhere around where they would have been if there had been no recession. The UK ‘productivity puzzle’ will have been a short term affair that economists can mull over at their leisure. Analysis will not look kindly on the policies that allowed output to be so low for so long, but - hysteresis effects aside - that will be history.

The alternative is that labour productivity does not make up lost ground. If this happens, the average UK citizen will be 15-20% poorer forever following the Great Recession. Living standards in the UK, which before the recession appeared to be growing at least as fast as those in other major established economies, will have fallen back substantially relative to citizens in the US and Europe. This is the alternative that most forecasters, including the OBR (see chart reproduced here), are assuming will happen. 

So the absence of labour productivity growth is good in the short term, but is potentially disastrous in the long term. The problem is that the absence of growth in labour productivity since the recession is unprecedented (see chart below): nothing like this has happened in living memory. The reason to be concerned is that the rapid growth in productivity required to catch up the ground already lost is also unprecedented for the UK, which is why most economists assume it will not happen. Which brings me to another puzzle.

As long as I can remember, UK governments have been obsessed by long term productivity growth, and its level relative to the US, France and Germany. They have put considerable effort into understanding what influences this growth, and what policies can help increase it. This was true when UK labour productivity was steadily increasing at a slightly slower rate than in other countries, or increasing at a slightly faster rate. Given this, you would imagine that the UK government would be frantic to know what was currently going on. Why has UK productivity stalled, and why are we falling behind our competitors at such a fast rate?

GDP per hour worked: source OECD

Instead this government seems strangely indifferent. If they have an explanation for the absence of UK productivity growth, I have not seen it. You generally need to understand something before you know what to do about it. Instead the Prime Minister and Chancellor would seem to prefer not to talk about it, because it ‘feeds into’ the opposition’s complaints about low wages. This really is irresponsible. Is it simple arrogance? - they know what is good for the economy, even if they do not understand it. Or is it indifference? - we do not care too much about long term UK prosperity, as long as you keep voting for us. Or is it just too embarrassing to admit that the most calamitous period for UK living standards since World War II has happened on their watch?

Thursday, 24 July 2014

Synthesis!? David Beckworth's Insurance Policy

Could it be that New Keynesians and Market Monetarists can converge on a common policy proposal? I really like David Beckworth’s Insurance proposal against ‘incompetent’ monetary policy. Here it is.

1) Target the level of nominal GDP (NGDP)

2) “the Fed and Treasury sign an agreement that should a liquidity trap emerge anyhow [say due to central bank incompetence] and knock NGDP off its targeted path, they would then quickly work together to implement a helicopter drop. The Fed would provide the funding and the Treasury Department would provide the logistical support to deliver the funds to households. Once NGDP returned to its targeted path the helicopter drop would end and the Fed would implement policy using normal open market operations. If the public understood this plan, it would further stabilize NGDP expectations and make it unlikely a helicopter drop would ever be needed.”
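The mechanics of the proposal, as I read them, can be caricatured in a few lines. This is my own stylised sketch with made-up numbers, not anything taken from Beckworth’s post:

```python
# Stylised sketch of the insurance rule: target a level path for NGDP;
# use conventional policy away from the ZLB, and a helicopter drop
# (Fed-funded, Treasury-delivered transfers) whenever the ZLB binds
# while NGDP is below its targeted path. Numbers are illustrative.

def policy(ngdp, target, rate):
    """Return the policy response for one period."""
    if ngdp < target and rate <= 0.0:
        # Liquidity trap: size the drop to the NGDP shortfall, and end
        # it once NGDP is back on its targeted path.
        return ("helicopter drop", target - ngdp)
    if ngdp < target:
        return ("cut rates / OMO purchases", 0.0)
    return ("normal policy", 0.0)

print(policy(ngdp=98.0, target=100.0, rate=2.0))   # conventional easing
print(policy(ngdp=95.0, target=100.0, rate=0.0))   # drop sized to the shortfall
print(policy(ngdp=100.0, target=100.0, rate=0.0))  # back on path: drop ends
```

The point of the last case is the one Beckworth emphasises: if the public understands the rule, the drop should rarely if ever be triggered.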

In fact I like it so much that Jonathan Portes and I proposed something very like it in our recent paper. There we acknowledge that outside the Zero Lower Bound (ZLB), monetary policy does the stabilisation. But we also suggest that if the central bank thinks there is more than a 50% probability that they will hit the ZLB, they get together with the national fiscal council (in the US case, the CBO) to propose to the government a fiscal package that is designed to allow interest rates to rise above the ZLB.

There we did not specify what monetary policy should be, but speaking just for myself I have endorsed using the level of NGDP as an intermediate target for monetary policy, so there is no real disagreement there. A helicopter drop is a fiscal stimulus involving tax cuts plus Quantitative Easing (QE). Again we did not specify that the central bank had to undertake QE as part of its proposed package, but I think we both assumed that it would (outside the Eurozone, where for the moment we can just say it should). I think a central bank could suggest that an income tax cut might not be the most effective form of fiscal stimulus (compared to public investment, for example), but let’s not spoil the party by arguing over that.

Now this does not mean that Market Monetarists and New Keynesians suddenly agree about everything. A key difference is that for David this is an insurance against incompetence by the central bank, whereas Keynesians are as likely to view hitting the ZLB as unavoidable if the shock is big enough. However this difference is not critical, as New Keynesians are more than happy to try and improve how monetary policy works. The reason I wrote this post was not because of these differences in how we understand the world. It was because I thought New Keynesians and Market Monetarists could be much closer on policy than at least some let on. I now think this even more. 



Wednesday, 23 July 2014

Macroeconomic innumeracy

Anthony Seldon is perhaps best known for his biographies of recent UK Prime Ministers. He had a column in the FT recently, which suggested that the Prime Minister’s team had done rather better than popular perception might suggest. Two sentences caught my attention: “Credit for sticking to the so-called Plan A on deficit reduction must be tempered by the government’s reluctance to cut more vigorously” and “Downing Street insiders can claim to have managed to steer…..the recovery of a very battered economy”.

The first sentence suggests that the government stuck to its original 2010 deficit reduction plan, but it should have cut spending by more than this plan. I disagree with the opinion in the second part of the sentence, but that is not the issue here. The problem is that the factual statement in the first part of the sentence is very hard to justify. The numbers suggest otherwise, as Steven Toft sets out here. The second sentence also indicates no acquaintance with the numbers. As the well known (I thought) NIESR chart shows, this has been the slowest UK recovery this century - including those in the 1920s and 1930s. The financial crisis certainly battered the UK, but it hit the US pretty hard too! Yet average growth 2011-13 in the US was 2.2%, in the UK 1%. The idea that macroeconomic mismanagement left the UK economy in a peculiar mess before the financial crisis is a politically generated myth which is also divorced from the data, as I have argued on a number of occasions.

In one sense it is unfair to single Anthony Seldon out in this respect, because I hear similar mistakes all the time from UK political commentators who profess to be, and may honestly believe they are, objective when it comes to macroeconomic reporting. I suspect the problem is threefold. First, the common feature of these mistakes is that they are repeated endlessly by the government and its supporters. Second, there is group self-affirmation - what Krugman calls ‘Very Serious People’ talk to each other more often than they talk to people acquainted with the data. Third, when some of this group do look for economic expertise, they often talk to ‘experts’ in the City or read the Financial Times. Unfortunately, both sources can and do have their own agendas.

Yet in another sense it is not unfair, because Seldon is a historian, and historians stress the importance of accessing primary sources. The main positive point I want to make is that political commentators need to check the data if they want to avoid making macroeconomic statements that are factually incorrect.      

Tuesday, 22 July 2014

Is economics jargon distortionary?

Employees are already beset by red tape if they try to improve their working conditions. Now the UK government wants to increase the regulatory burden on them further, by proposing that employee organisations need a majority of all their members to vote for strike action before a strike becomes legal, even though those voting against strike action can still free ride on their colleagues by going to work during any strike and benefiting from any improvement in conditions obtained. Shouldn’t we instead be going back to a free market where employees are able to collectively withhold their labour as they wish?

I doubt if you have ever read a paragraph that applies language in this way. Yet why should laws that apply to employers be regarded as a regulatory burden, but not laws that apply to employees? Labour markets, alongside financial markets, are areas where the concept of a ‘free’ market uncluttered by regulations is a myth. Here, as elsewhere, language has been distorted to suit a neoliberal agenda.

Is this also true with terminology used in academic economics? That is the argument put forward by Charles Manski in this Vox piece in the context of economists’ discussion of taxation and lump-sum taxes. He writes:

“Students of economics learn that the formal usage of the concepts 'inefficiency', 'deadweight loss', and 'distortion' in normative public finance refer to a theoretical setting where a private economy is in competitive equilibrium and a government can use lump-sum taxes to modify the endowments of individuals. In this setting, classical theorems of welfare economics show that any Pareto efficient social outcome can be achieved by having the government use lump-sum taxes to redistribute endowments and otherwise not intervene in the economy. Income taxes and other commonly used taxes logically cannot yield better social outcomes than optimal lump sum taxes but they may do worse. Deadweight loss measures the degree to which they do worse.”

The big problem with this terminology and associated research agenda, he argues, is that it presumes lump sum taxes are a feasible option, whereas in reality they are not.

“The research aims to measure the social cost of the income tax relative to the utterly implausible alternative of a lump-sum tax. It focuses attention entirely on the social cost of financing government spending, with no regard to the potential social benefits.”

Indeed, lump sum taxes (a.k.a. a poll tax) are not a feasible option precisely because they achieve non-distortion at the cost of being unfair, and in the real world taxation is as much about fairness as allocative efficiency.
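To make the terminology concrete, here is a minimal textbook sketch in Python (my own illustration with made-up numbers, not Manski’s example): a per-unit tax on a good with linear demand costs consumers more than it raises in revenue, while a lump-sum tax raising the same amount would leave quantities, and hence allocative efficiency, untouched.

```python
# Textbook Harberger-triangle sketch (illustrative, linear demand).
# Demand: quantity = 100 - price; constant marginal cost of 20.

def quantity(price):
    return 100.0 - price

cost = 20.0
tax = 10.0

q_before = quantity(cost)          # 80 units at the untaxed price
q_after = quantity(cost + tax)     # 70 units once the tax is added

revenue = tax * q_after                                      # 700
consumer_loss = tax * q_after + 0.5 * tax * (q_before - q_after)  # 750
# Consumers lose more than the government collects:
deadweight_loss = consumer_loss - revenue                    # 50

# A lump-sum tax raising the same 700 would leave quantity at 80
# and create no such loss - hence its role as the benchmark.
```

The triangle of 50 is the ‘deadweight loss’: the cost of the distortionary tax relative to the lump-sum benchmark that Manski argues is infeasible in practice.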

The counterargument is that the idea of a lump sum tax is just a useful analytical device, which allows research to focus on the taxation side of the balance sheet, without having to worry about what taxes are spent on. It would be equally possible to look at the benefits of different types of government spending, all of which were financed by a lump sum tax. Equally the competitive equilibrium against which real world taxes are distortionary is an imaginary but analytically useful reference point - everyone knows the real world is not like this competitive equilibrium.

It is not our fault, the counter argument would go, that non-academics abuse these analytical devices. No serious economist would talk about the costs and benefits of a policy to cut a particular tax in isolation, when that cut has been financed using a lump sum tax. Governments that do that have clear ulterior motives. Equally no serious economist would talk about the benefits of reducing a tax designed in part as Pigouvian (i.e. a tax designed to offset some market externality), within the context of a model that ignores that externality. (For a recent example where the UK Treasury published a study that managed to do both of these things, see here.)

I think the key here is to clearly differentiate analysis from policy advice. I have used lump sum taxes in my research, and I often talk about taxes being distortionary. I think both general and partial equilibrium analysis is useful, and devices that allow abstraction are invaluable in economics. (I have less sympathy for the concept of Pareto optimality, for reasons discussed here. See also the excellent series of discussions by Steve Randy Waldman.) However these devices can often allow those with an agenda (including the occasional economist) to mislead, which is why economists need to be very careful when presenting their analysis to policy makers, and why they also need to have the means to alert the public when this kind of deception happens. 


Monday, 21 July 2014

Fiscal deceit

Vince Cable, the LibDem minister whose remit includes UK student finance, is apparently having cold feet about the plan to privatise the student loan book. Which is good news, because if ever there was an example of a policy designed to lose money for the public sector (or, as they say in the media, cost the taxpayer more), it was this.

As I explained in this post, if a public asset that generates income is privatised, the public gains the sale value, but loses a stream of future income. The ‘debt burden’ need not be reduced, because although future taxes will fall because there is less debt to pay interest on, they will rise because the government has also lost a future income stream.

With assets like the Royal Mail, we can debate endlessly whether the asset will become more or less efficient under private ownership. If it is more efficient, and therefore profitable, under private ownership, the private sector might be prepared to pay more for it, and so the public sector (and society) is better off selling it – unless of course the government sells it at below its market price! However in the case of the student loan book, it is pretty clear that privatisation is a bad deal for the public sector for two reasons.

First, as Martin Wolf has pointed out, the revenues from student loan repayments are very long term, and pretty uncertain. Any private sector firm that might buy this book is likely to discount these revenues quite highly, and so will not be prepared to pay the government enough to compensate the government for the lost revenue. Second, as Alasdair Smith points out, the main efficiency issue is collecting the loan repayments. Here the government has clear advantages over the private sector, because loan repayments are linked to income, and the government has all the information on people’s income, and an existing system for collecting money based on income.
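Martin Wolf’s discounting point can be illustrated with a back-of-the-envelope calculation (the repayment stream and discount rates below are purely hypothetical, chosen only to show the mechanism):

```python
# Value of the same long repayment stream at two discount rates.
# A private buyer of uncertain 30-year revenues will apply a much
# higher rate than the government's own cost of borrowing.

def present_value(cashflows, rate):
    """Discounted value of a stream of future cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

repayments = [1.0] * 30                          # say, 1bn a year for 30 years

public_value = present_value(repayments, 0.03)   # government discount rate
private_bid = present_value(repayments, 0.08)    # risk-averse private buyer

# The gap is the loss to the public sector from selling the book
# at the price a private buyer is willing to pay.
loss = public_value - private_bid
```

On these invented numbers the government gives up a stream worth about 19.6bn to it for a price nearer 11.3bn, which is why a sale can raise cash now while still increasing the ‘debt burden’.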

So selling the student loan book is an almost certain way of increasing the ‘debt burden’ on current and future generations. As Alasdair Smith reports, George Osborne justified the sale by saying that it helped the government with a ‘cash flow issue’. As Alasdair rightly says, the government does not have cash flow issues. This kind of ludicrous policy either comes from ideological fundamentalism (the government shouldn’t own assets) or the need to meet ridiculously tough deficit targets. Whichever it is, every UK citizen loses money as a result.

George Osborne is hardly the first finance minister to play tricks like this, so how do we stop future governments from doing the same? I’m glad to see more journalists, like Chris Cook, making the points I make here. However it would be better still if an independent body, set up by the government to calculate its future fiscal position, was charged with a statutory duty to make these points. At present the OBR does not have that duty, and it feels naturally reluctant to go beyond its remit and pick fights with the government. However, if it became more of a public watchdog, with a remit to flag government proposals that appeared to lose money for the public sector in the long term, that might just stop future governments doing this kind of thing.

Sunday, 20 July 2014

What annoys me about market monetarists

I missed this little contretemps between Nick Rowe and Paul Krugman. Actually this appears to be a fuss over nothing. The main point Paul was trying to make, it seemed to me, was about how far the Republican base was from anything reasonable on monetary policy, and so what he called the neomonetarist movement did not have much chance with this group. By implication, neomonetarism was something more reasonable, although he had well known problems with its ideas. So a sort of backhanded compliment, if anything.

Nick responded by pointing out that what he called neofiscalists (those, like me, who argue for fiscal stimulus at the Zero Lower Bound (ZLB)) hadn’t done too well at finding a political home recently either. Which, alas, is all too true, but I think we kind of knew that.

What interests me is how annoyed each side gets with each other. Following my earlier posts, I will use the label market monetarist (MM) rather than neomonetarist. It seems to me that I understand a little why those in the MM camp get so annoyed with those like me who go on about fiscal policy. Let me quote Nick:

“We don't like fiscal patches that cover up that underlying problem. Because fiscal policy has other objectives and you can't always kill two birds with the same fiscal stone. Because we can't always rely on fiscal policymakers being able and willing to do the right thing. And because if your car has alternator trouble you fix the alternator; you don't just keep on doing bodge-jobs like replacing the battery every 100kms.”

I’ll come back to the car analogy, but let me focus on the patches idea for now. In their view, the proper way to do stabilisation policy outside a fixed exchange rate regime is, without qualification, to use monetary policy. So the first best policy is to try every monetary means possible, which may in fact turn out to be quite easy if only policymakers adopt the right rule. Fiscal policy is a second best bodge. MM just hates bodgers.

As I explained in this post, the situation is not symmetric. I do not get annoyed with MM because I think monetary policy is a bodge. I have spent much time discussing what monetary policy can do at the ZLB, and I have written favourably about nominal GDP targets. But, speaking for just myself, I do get annoyed by at least some advocates of MM.

Before I say why, let me dismiss two possible reasons. First, some find MM difficult because there does not seem to be a clear theoretical model behind their advocacy (see this post from Tony Yates for example). I can live with that, because I suspect I can see the principles behind their reasoning, and principles can be more general than models (although they can also be wrong). Second, I personally would have every right to be annoyed with some MMs (but certainly not Nick) because of their debating style and lack of homework, but I see that as a symptom rather than fundamental.

To understand why I do get annoyed with MM, let me use another car analogy. We are going downhill, and the brakes do not seem to be working properly. I’m sitting in the backseat with a representative of MM. I suggest to the driver that they should keep trying the brake pedal, but they should also put the handbrake on. The person sitting next to me says “That is a terrible idea. The brake pedal should work. Maybe try pressing it in a different way. But do not put on the handbrake. The smell of burning rubber will be terrible. The brake pedal should work, that is what it is designed for, and to do anything else just lets the car manufacturer off the hook. Have you tried pressing on the accelerator after trying the brake?”

OK, that last one is unfair, but you get my point. When you have a macroeconomic disaster, with policymakers who are confused, conflicted and unreliable, you do not obsess over the optimal way of getting out of the disaster. There will be a time and place for that later. Instead you try and convince all the actors involved to do things that will avoid disaster. If both monetary and fiscal policymakers are doing the wrong thing given each other’s actions, and your influence on either will be minimal, you encourage both to change their ways.

MM agrees that fiscal stimulus will work unless it is actively counteracted by monetary policy. Nick says we can't always rely on fiscal policymakers being able and willing to do the right thing. But since at least 2011 we have not been able to rely on monetary policymakers in the Eurozone to do either the right thing, or consistently the wrong thing. So why is anyone with any sense saying that austerity is not a major factor behind the second Eurozone recession? That is just encouraging fiscal policymakers to carry on doing exactly the wrong thing, in the real world where monetary policy is set by the ECB rather than some MM devotee.


Saturday, 19 July 2014

A short note on tobacco packaging

About a year ago I published a post that was off my macro beat, about whether banning advertising was paternalistic or freedom enhancing. It was prompted by the UK government appearing to kick the idea of enforcing ‘plain packaging’ of cigarettes into the long grass. Subsequently the government seemed to change its mind, and asked paediatrician Sir Cyril Chantler to review the Australian experience, where plain packaging had been introduced more than a year earlier. In April this year the UK government announced that it would go ahead with plain packaging, after a ‘short consultation’.

The standard argument against actions of this kind is that they are paternalistic. Most economists are instinctively non-paternalistic, although personally I think paternalism can be justified in a small number of cases, like the compulsory wearing of seat belts. Furthermore, I think as behavioural economics progresses, economists are going to find themselves becoming more and more paternalistic whether they like it or not.

However my argument on advertising was rather different. Most advertising is not ‘on-demand’: we have to go out of our way to avoid it. Examples would be television advertising, magazine advertising or billboard advertising. A lot of advertising also has no informational content, but instead tries to associate some brand with various positive emotions - a mild form of brainwashing. A ban on this kind of advertising therefore enhances rather than detracts from our freedom: it spares us unwanted intrusion by advertising companies, and makes it less costly to avoid being brainwashed. Of course it restricts the freedom of companies, but companies are not people.

What appears on a packet of cigarettes is different, because it is ‘on-demand’ - only those buying the product view it. However it is almost invariably of the non-informative kind. In contrast, ‘plain packaging’ is actually informative, about the health risks being faced by the smoker. So in this case the smoker receives more information under plain packaging, and will be better off as a result. Arguments by the industry that this represents a ‘nanny state’ are nonsense, and are akin to potential muggers arguing that policemen represent a gross violation by the state of the rights of the mugger.

The UK decided in April to adopt plain packaging because the evidence from Australia was that it was having a positive impact. More recently, the Financial Times reports that the latest National Drugs Strategy Household Survey shows not only a sharp decline in the number of cigarettes smoked per week, but also a large rise in the age at which young people smoke their first cigarette. (The cigarette industry and their apologists argue that smoking has in fact increased as a result of the ban, so strangely they are against it!)

This shows how in at least one respect Australia is helping lead a global improvement in people’s lives. Alas the new Australian government has also just abolished their carbon tax, which means we need to be selective in following the Australian example!
  

Friday, 18 July 2014

Further thoughts on Phillips curves

In a post from a few days ago I looked at some recent evidence on Phillips curves, treating the Great Recession as a test case. I cast the discussion as a debate between rational and adaptive expectations. Neither is likely to be 100% right of course, but I suggested the evidence implied rational expectations were more right than adaptive. In this post I want to relate this to some other people’s work and discussion. (See also this post from Mark Thoma.)

The first issue is why look at just half a dozen years, in only a few countries. As I noted in the original post, when looking at CPI inflation there are many short term factors that may mislead. Another reason for excluding European countries which I did not mention is the impact of austerity driven higher VAT rates (and other similar taxes or administered prices), nicely documented by Klitgaard and Peck. Surely all this ‘noise’ is an excellent reason to look over a much longer time horizon?

One answer is given in this recent JEL paper by Mavroeidis, Plagborg-Møller and Stock. As Plagborg-Moller notes in an email to Mark Thoma: “Our meta-analysis finds that essentially any desired parameter estimates can be generated by some reasonable-sounding specification. That is, estimation of the NKPC is subject to enormous specification uncertainty. This is consistent with the range of estimates reported in the literature….traditional aggregate time series analysis is just not very informative about the nature of inflation dynamics.” This had been my reading based on work I’d seen.

This is often going to be the case with time series econometrics, particularly when key variables appear in the form of expectations. Faced with this, what economists often look for is some decisive and hopefully large event, where all the issues involving specification uncertainty can be sidelined or become second order. The Great Recession, for countries that did not suffer a second recession, might be just such an event. In earlier, milder recessions it was also much less clear what the monetary authority’s inflation target was (if it had one at all), and how credible it was.

How does what I did relate to recent discussions by Paul Krugman? Paul observes that recent observations look like a Phillips curve without any expected inflation term at all. He mentions various possible explanations for this, but of those the most obvious to me is that expectations have become anchored because of inflation targeting. This was one of the cases I considered in my earlier post: that agents always believed inflation would return to target next year. So in that sense Paul and I are talking about the same evidence.

Before discussing interpretation further, let me bring in a paper by Ball and Mazumder. This appears to come to completely the opposite conclusion to mine. They say “we show that the Great Recession provides fresh evidence against the New Keynesian Phillips curve with rational expectations”. I do not want to discuss the specific section of their paper where they draw that conclusion, because it involves just the kind of specification uncertainties that Mavroeidis et al discuss. Instead I will simply note that the Ball and Mazumder study had data up to 2010. We now have data up to 2013. In its most basic form, the contest between the two Phillips curves is whether underlying inflation is now higher or lower than in 2009 (see maths below). It is higher. So to rescue the adaptive expectations view, you have to argue that underlying inflation is actually lower now than in 2009. Maybe it is possible to do that, but I have not seen that done.

However it would be a big mistake to think that the Ball and Mazumder paper finds support for the adaptive expectations Friedman/Phelps Phillips curve. They too find clear evidence that expectations have become more and more anchored. So in this sense the evidence is all pointing in the same way.

So I suspect the main differences here come from interpretation. I’m happy to interpret anchoring as agents acting rationally as inflation targets have become established and credible, although I also agree that it is not the only possible interpretation (see Thomas Palley and this paper in particular). My interpretation suggests that the New Keynesian Phillips curve is a more sensible place to start from than the adaptive expectations Friedman/Phelps version. As this is the view implicitly taken by most mainstream academic macroeconomics, but using a methodology that does not ensure congruence with the data, I think it is useful to point out when the mainstream does have empirical support.


Some maths

Suppose the Phillips curve has the following form:

p(t) = E[p(t+1)] + a.y(t) + u(t)

where ‘p’ is inflation, E[..] is the expectations operator, ‘a’ is a positive parameter on the output gap ‘y’, and ‘u’ is an error term. We have two reference cases:

Static expectations: E[p(t+1)] = p(t-1)

Rational expectations: E[p(t+1)] = p(t+1) + e(t+1)

where ‘e’ is the error on expectations of future inflation and is random. Some simple maths shows that under static expectations, negative output gaps are associated with falling inflation, while under rational expectations they are associated with rising inflation. If we agree that between 2009 and today we have had a series of negative output gaps, we just need to ask whether underlying inflation is now higher or lower than in 2009. 
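To spell out the ‘simple maths’ (it is just substitution into the Phillips curve above):

Static: p(t) = p(t-1) + a.y(t) + u(t), so p(t) - p(t-1) = a.y(t) + u(t)

Rational: p(t) = p(t+1) + e(t+1) + a.y(t) + u(t), so p(t+1) - p(t) = -a.y(t) - u(t) - e(t+1)

With ‘u’ and ‘e’ averaging zero, a negative output gap therefore means inflation below last year’s value in the first case, and inflation expected to rise in the second.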



Thursday, 17 July 2014

Public Investment and Borrowing Targets

Often fiscal rules, designed to keep a lid on public deficits or debt, exclude borrowing for public investment from any deficit target. This is true of the UK government’s fiscal mandate, which seeks to achieve a cyclically-adjusted current budget balance within five years. The idea, in simple language, is to only borrow to invest. What could be wrong with that?

Most of the time public investment is not like private investment. A successful private investment will generate future income which can pay back any borrowing. A successful public investment project may raise future output, and this may increase future taxes, but there is no sense in which we would only undertake the project if we could be sure of paying off the borrowing with these extra taxes. A public investment project should be undertaken if discounted future social benefits exceed its costs. This cost has to be paid for by higher taxes at some point, so the question is simply when taxes will increase to do so.

In thinking about when to raise taxes, the obvious principle is tax smoothing. If taxes are distortionary, it is better to spread the pain. So if we need some additional public spending for just this year, one way to pay for it is to borrow, and use higher taxes just to pay the interest on that borrowing. That smooths the distortion over time. This is true whether the public spending involves consumption or investment. In contrast, if we are planning to raise public spending permanently, taxes should be raised by the amount of the increase in spending, and no borrowing should take place. Again this is true whatever the form of the additional expenditure. Now it is true that public investment projects tend to be temporary, while additional public consumption can be permanent, but the principle here concerns how the tax burden is distributed over time, rather than the nature of the spending.
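The smoothing argument can be sketched numerically (hypothetical figures, and a standard convex distortion cost of my own choosing; discounting of future costs is ignored for simplicity):

```python
# A minimal sketch of the tax-smoothing argument (hypothetical numbers).
# Assume the distortionary cost of raising an extra T in tax in a year
# is proportional to T**2 - the convexity that justifies smoothing.

def distortion_cost(extra_tax):
    return extra_tax ** 2

spending = 10.0   # one-off extra public spending, in bn
r = 0.05          # interest rate on government borrowing
years = 50        # horizon over which interest is paid

# Option 1: pay for it all out of this year's taxes.
lump = distortion_cost(spending)

# Option 2: borrow, and raise taxes each year only to cover the interest.
smoothed = sum(distortion_cost(spending * r) for _ in range(years))

# Convexity means many small distortions cost less than one big one.
```

Because the cost is convex, fifty small annual tax rises to cover interest (12.5 units of distortion in total here) beat one large rise (100 units), whichever kind of spending is being financed.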

This simple application of tax smoothing takes no account of distributional issues. If we believe that government consumption only benefits those paying taxes at that time, we might want taxes to rise with a temporary increase in government consumption rather than being smoothed. Why should future generations pay for the consumption enjoyed by the current generation? Here public investment would be different if it benefits both current and future generations. So from a distributional point of view, it might make sense to treat government consumption and investment separately. There are two problems here though. The first is that the distinction between public investment and consumption in the statistics does not necessarily follow this distributional logic. Education is classed as consumption. Second, how in practical terms do you allocate taxes paid to benefits received from public investment? (I touch on this here.)

One of the key points that Jonathan Portes and I stress in our discussion of fiscal rules is that rules have to balance optimality when governments are benevolent against effectiveness when they are not. One feature of periods of austerity is that public investment often gets hit hard. The reason this happens may also reflect intergenerational issues. To the extent that public investment benefits future generations, they are unable to complain when it is cut.

This can be one reason why rules sometimes use current balance targets rather than targets for the overall deficit. If public investment does not influence the target, it need not be cut. (This does not seem to have worked with George Osborne, as the victims of flooding found out!) However such rules are inevitably incomplete, because they say nothing about the overall level of public debt. In the case of the last Labour government, there were two rules: one involving the current balance over the cycle (only borrow to invest), and one specifying a total debt ceiling. There was an implicit target for public investment implied by the conjunction of the two rules, but it is unclear how sensible that implicit target was.

Jonathan and I suggest that the simpler and perhaps most effective way of preventing public investment being squeezed in times of austerity is to have a specific target for the share of public investment in GDP. Of course this target should also influence any overall deficit target, but if you want to protect public investment, it seems best to do so explicitly. If you do that, then it makes more sense to have just one target for the overall deficit (primary or total) that includes borrowing to invest, rather than a target for just the current balance.


Wednesday, 16 July 2014

French macroeconomic policy improvisation

I’m confused about macroeconomic policy under François Hollande. When he came to power in 2012 he made deficit reduction a priority. The chance to lead some opposition to the dominant policy of austerity was lost. However where French policy did seem to differ from some other Eurozone countries was that tax increases rather than spending cuts would play a prominent role in deficit reduction. As I noted in this post, the Commission’s austerity enforcer, Olli Rehn, was not pleased.

However policy in France now seems to have taken a rather different turn. In January Hollande announced cuts to social charges paid by business. Many outside commentators declared that this was a move ‘to the centre’. His speech also seemed to imply that he had become a convert to Say’s Law. But maybe there was a more modern logic to this policy: by reducing employment costs, perhaps the government was trying to engineer an ‘internal devaluation’.

Yet more recently, Hollande has appeared to pledge tax cuts to middle class voters. With non-existent growth and a rising budget deficit, the macroeconomic logic behind this policy escapes me. Many taxpayers will quite reasonably assume that any tax cuts will turn out to be temporary and will therefore save a good proportion of them, so the impact on demand will be weak compared to the cuts in public spending required to pay for them. A deflationary balanced-budget cut in spending is the last thing you want with an estimated negative output gap of 3% or more. On a more positive note, he also appears to be trying to form alliances to loosen the eurozone fiscal straitjacket, although what success he will have remains to be seen.
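The demand arithmetic can be made explicit with a toy first-round calculation (numbers invented; ‘mpc’ is the fraction of the tax cut households actually spend):

```python
# A minimal sketch (my illustration) of why a tax cut financed by spending
# cuts is contractionary: the spending cut reduces demand one-for-one,
# while households spend only part of the tax cut - less still if they
# expect the cut to be temporary.

def demand_effect(tax_cut, spending_cut, mpc):
    """First-round change in demand: spending falls in full,
    only the fraction mpc of the tax cut is spent."""
    return mpc * tax_cut - spending_cut

# Balanced-budget package: a 10bn tax cut paid for by 10bn of spending cuts.
# If taxpayers save half of a cut they believe is temporary (mpc = 0.5):
net = demand_effect(10.0, 10.0, 0.5)
```

If households save half of a tax cut they suspect is temporary, this 10bn balanced-budget package cuts first-round demand by 5bn - contractionary, exactly when the output gap is already negative.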


The latest OECD forecast predicts a gradual pickup in growth, despite a sharp fiscal contraction, although this fiscal contraction is not enough to stabilise the debt to GDP ratio by 2015. The danger is the by now familiar one: that fiscal contraction will inhibit growth by more than forecasters expect, which will generate pressure to undertake additional fiscal contraction. Is there a clear strategy to avoid this outcome, or is Thomas Piketty correct when he says: "What saddens me is the ongoing improvisation of François Hollande.”