

Sunday 30 September 2012

Active and Passive at the ECB


I’ve been away and busy at the IMF, so did not respond immediately to this speech from ECB Executive Board member Benoît Cœuré on the new OMT policy. In econblog terms, the speech would be described as wonkish, but I think the ideas I want to focus on are reasonably intuitive. They are worth exploring, because they illuminate the key issue of conditionality.

In a classic paper, Eric Leeper distinguished between active and passive monetary and fiscal policies, within the context of simple policy rules. The concept of an active monetary policy is by now familiar: monetary policy should ensure that real interest rates rise following an increase in inflation, so that higher real interest rates deflate demand and put downward pressure on inflation. Leeper’s use of active and passive for fiscal policy is a little counterintuitive. A passive fiscal policy is where, following an increase in debt, taxes rise or spending falls by enough to bring debt back to some target level. If neither taxes nor spending respond to excess debt, debt would gradually explode as the government borrowed to pay the interest on the extra debt. This is the extreme case of what Leeper calls an active fiscal policy.
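To make the distinction concrete, here is a minimal sketch of the kind of simple rules Leeper has in mind (the notation and simplifications are mine, not a reproduction of his paper):

```latex
% Monetary rule: active if the nominal rate responds more than one-for-one to inflation
i_t = \phi_\pi \pi_t, \qquad \phi_\pi > 1 \ \text{(active)}, \quad \phi_\pi < 1 \ \text{(passive)}

% Government budget identity: real debt grows with the real interest rate, less the primary surplus
b_{t+1} = (1 + r)\, b_t - s_t

% Fiscal rule: the primary surplus responds to outstanding debt
s_t = \gamma\, b_t, \qquad \gamma > r \ \text{(passive)}, \quad \gamma \approx 0 \ \text{(active)}
```

Substituting the fiscal rule into the budget identity gives b_{t+1} = (1 + r − γ)b_t, so a debt response coefficient γ greater than r is what stops debt exploding and makes fiscal policy passive in Leeper’s sense; with γ close to zero the government is simply borrowing to pay the interest.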

Now you might be forgiven for thinking that the only policy combination that would bring stability to the economy was an active monetary policy (to control inflation) and a passive fiscal policy (to control debt). This would correspond to what I have called the consensus assignment. However Leeper showed that there was another: an active fiscal policy combined with a passive monetary policy. A simplified way of thinking about this is that it represents the opposite of the consensus assignment: fiscal policy determines inflation and monetary policy controls debt, because debt becomes sustainable by being reduced through inflation. This idea, which became known as the Fiscal Theory of the Price Level (FTPL)[1], is very controversial. (For once, divisions cut across ‘party’ lines, with John Cochrane and Mike Woodford both contributing to the FTPL.) However for current purposes you can think of the FTPL policy combination as being a form of fiscal dominance. You can also think of this combination as being inferior to the consensus assignment from a social welfare perspective (see this post).

So why did Cœuré invoke Leeper’s definitions of active and passive in his speech? To quote:

“central bank independence and a clear focus on price stability are necessary but not sufficient to ensure that the central bank can provide a regime of low and stable inflation under all circumstances – in the economic jargon, ensuring “monetary dominance”. Maintaining price stability also requires appropriate fiscal policy. To borrow from Leeper’s terminology, this means that an “active” monetary policy – namely a monetary policy that actively engages in the setting of its policy interest rate instrument independently and in the exclusive pursuit of its objective of price stability – must be accompanied by “passive” fiscal policy.”

Now OMT involves the ECB being prepared to buy government debt in order to force down interest rates so that fiscal policy becomes sustainable. To some that seems like fiscal dominance: monetary policy is being used in a similar way to the FTPL, in order to make debt sustainable. Cœuré wants to argue that with OMT we can get back to the consensus assignment, because OMT will allow fiscal policy to become passive again.

Now current fiscal policy in the Eurozone can hardly be described as ignoring government debt, as in the polar case of active fiscal policy outlined above. However, for fiscal policy to be passive it has to counteract the tendency for debt to explode because of debt interest payments. If interest rates are very high, because of default risk, this may require destructive and perhaps politically impossible rates of fiscal correction. In other words, default risk forces fiscal policy to be active. Although this problem is confined to just one part of the Eurozone, as Campbell Leith and I showed here, that is sufficient to force monetary policy to become passive if stability is to be preserved.
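Some purely illustrative arithmetic shows why (the numbers here are mine, chosen only to make the point):

```latex
% Stabilising the debt to GDP ratio b requires a primary surplus s of at least (r - g) b,
% where r is the interest rate on debt and g the growth rate of GDP (in the same terms)
s \ \ge\ (r - g)\, b

% At 'normal' rates:        r - g = 0.01,\ b = 1.2 \ \Rightarrow\ s \approx 1.2\% \text{ of GDP}
% With a default premium:   r - g = 0.05,\ b = 1.2 \ \Rightarrow\ s \approx 6\% \text{ of GDP}
```

And that larger surplus is only what is needed to stop debt rising; bringing debt back down towards some target level requires more still.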

I think this is a very clever way of describing OMT to those who believe this policy goes beyond the ECB’s remit. OMT is necessary to allow fiscal policy to become passive in countries subject to significant default risk, and therefore for monetary policy to ensure price stability. The argument, like the FTPL, is controversial: many of those who dislike the FTPL would argue that an active monetary policy is sufficient to ensure price stability. This analysis also ignores the problem of the Zero Lower Bound (ZLB) for nominal rates, which one could reasonably argue forces monetary policy to become passive.  For both reasons I did not use this argument in my post on conditionality, but without length constraints I would have.

I want to make two final points which Cœuré does not. First, a feature of passive fiscal policy at ‘normal’ (largely default risk free) levels of real interest rates is that debt correction does not have to be very rapid, and as Tanya Kirsanova and I show here, it should not be very rapid. Almost certainly the speed of debt correction currently being undertaken in periphery countries is more rapid than it needs to be to ensure a passive fiscal policy at normal interest rates. The current Eurozone fiscal rules also probably imply adjustment that is faster than necessary. As a result, no additional conditionality is required before the ECB invokes OMT. Second, this analysis ignores the problem of the ZLB, which is as acute for the Eurozone as it is elsewhere. Cœuré says that OMT is not Quantitative Easing (QE), but does not explain why the ECB is not pursuing QE. It has taken the ECB about two years too long to recognise the need for OMT – let’s hope that it does not take another two before it realises that for monetary policy to stay active in the sense described above, it also needs QE.  



[1] The Wikipedia entry on the FTPL is poor.

Sunday 23 September 2012

Macroeconomists: Scientists or Engineers?


Some comments on my earlier posts discussing microfoundations have mentioned Mankiw’s well known paper on the macroeconomist as scientist or engineer. This is something I’ve also wondered about, so I went back to reread the paper. Mankiw writes:

“My premise is that the field has evolved through the efforts of two types of macroeconomist—those who understand the field as a type of engineering and those who would like it to be more of a science. Engineers are, first and foremost, problem-solvers. By contrast, the goal of scientists is to understand how the world works. The research emphasis of macroeconomists has varied over time between these two motives.”

Mankiw tries to relate this distinction to the debate between Keynesian and New Classical economists. When I first read the paper I found this link ultimately unconvincing, and I find it even more so now. The attempt to cast Keynesians as engineers and New Classicals as scientists requires too many qualifications: there is probably something there, but it does not seem to be the critical distinction. His argument that New Keynesian theory has had little influence on policy turned out to be premature, as he himself anticipated in his conclusions.

But suppose we use the engineer/scientist dichotomy and apply it to microfoundations, rather than particular schools of thought – will that work? Microfoundations macro is the science, while those pursuing different approaches are the engineers. Is the problem with academic macro at the moment that we have too many who think they have to be scientists, and too few who try to be engineers?

I really have not made my mind up about this. One of the motivations I give in my earlier post listing reasons for departing from microfoundations focuses on policy. We have some empirical finding that at present has no clear microfoundation, but policy cannot wait for theory to develop that microfoundation. We need to explore its macroeconomic implications now. I give price rigidity as a retrospective example – it took at least a decade to develop New Keynesian theories. So we can imagine those involved with policy as the engineers, busy looking at the implications of some empirical finding, while those exploring its microfoundations are the scientists.

Certainly I have found the distinction between scientist and engineer often has resonance with macroeconomists in policy making institutions. They feel the urgency of the problem, and probably have less concern about some of the seemingly more esoteric issues that might be raised in an academic seminar. My post suggesting that the core models used by central banks should not be DSGE models ties in with this. In that context I like the scientist/engineer dichotomy, because I don’t think central banks have the resources to develop alternative modelling frameworks alone. Although we find plenty of engineers outside academic departments, we also need engineers who are academics.

But then, would it make sense to have distinct departments, of pure and applied macroeconomics? Obviously not, because there are not enough of us to go round, but isn’t there something more fundamental in that observation? I write papers that build and simulate DSGE models, but I have in the past built models that are not fully microfounded, and the only things stopping me doing both at the same time are time itself and the fact that a major part of my job is to publish in academic journals. More importantly, in which department would you put Michael Woodford (at least when he writes Jackson Hole papers)?

This isn’t really about people so much as about ideas. I certainly think that anyone building non-microfounded models needs to have a thorough knowledge of microfounded models to do it well. (That would go for heterodox economists as well.) But equally, I think it is difficult to build good microfounded models without having a good knowledge of the empirical evidence. This is not just a matter of selecting an appropriate puzzle: I think a good deal of the microfoundations game is about selecting particular ‘tricks’ that allow these models to get closer to the real world, as I suggested here. While I can think of some macroeconomists whose productivity is largely unrelated to what is happening in the real world, and others – particularly in policy institutions – who do good stuff without ever seriously thinking about microfoundations, I think the majority of academic macroeconomists need to do both. In other words, to be good scientists we need to be engineers, and to be good engineers we also need to be scientists.

Friday 14 September 2012

Denial, and Bernanke the Brave


Someone teased me about my recent post on Zero Lower Bound denial – can any disagreement on theory or evidence now be labelled by either side as denial? I admit that one of the negative aspects of blogging (for me at least) is that it can lead to an indecent attraction to neat catchphrases and rhetorical flourishes. However there was some method in my madness.

What I take as the characteristic of denial, whether it’s about climate change or macroeconomics, is that a strong position is taken not on the merits of the case, but because of a dislike of some implication of accepting the proposition being denied. So climate change denial would come not from a consideration of the evidence, but from distaste for the idea that it is an externality that requires government intervention to correct. Evidence that this form of denial exists comes from the clear correlation between climate change denial and a pro-market ideology (see Oreskes and Conway, 2010). In macroeconomics, demand denial may also come from dislike of the idea that state intervention (monetary policy) is required to ensure the aggregate economy stays on its efficient path, which is why those who keep suggesting our current problems are not about deficient demand also tend to come from the political right. We could call this bias or wishful thinking rather than denial, but the key point is that evidence on the issue appears to have no impact because the source of the belief lies elsewhere.

In the case of central banks, zero lower bound (ZLB) denial would be a belief that ‘unconventional’ monetary policy is almost as effective or reliable as conventional policy, held not on the merits of the case but because the central bank must be seen to be in control. Now no one wants to hear a pilot tell passengers that they are no longer in control of the plane. However a better analogy here would be the pilot not telling the co-pilot, because fiscal policy can also be used to stabilise the economy. In the case of some economists, ZLB denial may perhaps be due to an aversion to the use of fiscal demand stabilisation, and hence the need to talk up the effectiveness of actual or potential monetary policy actions or regimes. But it was the central bank case that I really had in mind: hence my use of the quote from the Woodford paper about central bank wishful thinking.

To take the UK example, I have previously discussed (towards the end of this post) a speech made by George Osborne that stated that monetary policy was all you needed for stabilisation and appeared to ignore the ZLB, at the very moment that UK interest rates hit 0.5%. Now if I had been Governor of the Bank of England at the time, I would have said, in private at least, that the world had changed, and I was no longer sure I could achieve the Bank’s mandate. Perhaps this was said, and maybe one day we will find out. It was not the stance the Bank took in public.

Of course, my use of the term denial also signifies that I believe the idea that unconventional monetary policy is somehow on a par with fiscal policy in its reliability and effectiveness is fairly weak. I want to make two points here. First, I think it is important to distinguish between targets and instruments. My previous post asserted that nominal GDP targets in a world where unconventional monetary policy was ineffective would only reduce the impact of the ZLB, and not eliminate it. I think this is a useful case to consider because it helps clarify ideas: I still read people saying that nominal GDP targets in themselves would mean that large negative demand shocks would be less likely to take us to the ZLB, and I do not understand why this is.
               
Second, the key issue for me is not whether QE will or will not have some significant effect. My suspicion is that it can, and Woodford’s baseline case takes too idealised a view of financial markets.[1] However our lack of knowledge about the extent of the deviation from Woodford’s idealised view means that these effects appear of an order more uncertain than the impact of fiscal policy, which is why you have to use the latter, and why austerity during a recession is foolish.

Which brings me to the recent Bernanke decision. If the FOMC were ever in denial, they are no longer. What I think is significant about this decision is that it combines more QE with a suggestion that the Fed may at some point in the future allow inflation above target. I am often asked how we know that QE will be temporary – that central banks will claw back the reserves they have created when the economy recovers – and the easy answer is that they remain committed to their inflation targets. For central banks that have used QE, being seen to be totally committed to inflation targets is their way of showing that they are not permanently monetising government debt. Ironically I wonder whether QE has made central banks less willing to consider changes in the targeting regime, because doing so at the same time as printing lots of money would look too much like the prelude to significant monetisation. In that sense Bernanke and the FOMC have crossed an important psychological barrier, and hopefully the world will be better for it.


[1] See for example Gagnon on markets in general, and Julian Janssen in a direct comment, and David Glasner here in relation to the FOREX market – see also this evidence. On selling currency to engineer depreciation there is of course the obvious point that this would not help a global economy at the ZLB, and as a result would not be received well by other countries.

Wednesday 12 September 2012

Why not finance fiscal stimulus by printing money?


This is a friendlier version of an earlier post that was marked for economists.

Q: Everyone keeps saying that the government cannot boost the economy by increasing its spending because we need to reduce the amount of government debt. Now I know we have talked a lot about why this view is mistaken, but what is stopping the government paying for additional spending by just printing money?

A: Do you know the technical name for that idea?

Q: Money financed fiscal expansion rather than bond financed fiscal expansion? And didn’t Friedman have this idea involving a helicopter?

A: Good. And what is meant by Quantitative Easing, or QE for short?

Q: The central bank buying government debt by creating bank reserves?

A: That will do. And what does the term ‘monetary base’ or ‘high powered money’ mean?

Q: That is the amount of money created directly by the central bank, which is either cash held by the public, or reserves held by banks.

A: So if we have a policy that involves both bond financed fiscal expansion, and QE, what does that amount to?

Q: Ah! Is it the same as money financed fiscal expansion?

A: It looks that way. But to properly answer that question, we need to consider another. Now what happens if a government spends like there is no tomorrow, and finances it all by printing money?

Q: We get inflation of course. But that is only because the government would be increasing demand even when we have got to full employment. Today when there is deficient demand and high unemployment, adding to demand should not cause an inflation problem.

A:  I agree. But suppose money financed fiscal expansion, or bond financed expansion plus QE, works, and we get back to what you call full employment. What happens to all that money the central bank has created?

Q: Well now there would be too much money chasing too few goods, so the central bank would have to put QE into reverse. Otherwise we would get inflation.

A: OK. Would the world look any different after QE had been reversed, compared to a policy that had just involved bond financed fiscal expansion in the first place?

Q: So what you are saying is that bond financed fiscal expansion with or without QE looks just the same in the long run, as long as QE is reversed.

A: Well you have just said it, having gone down the path I have subtly led you down (there is a sketch of the accounting at the end of this post, for anyone who wants to see it spelt out). Now what have I always told you about the time frame involved with issues involving government debt?

Q: I know I asked the first question, but now I’m doing all the answers. You really should be using different letters from Q and A, like maybe S and T.

A: Ah, the famous dialogues between Socrates and Theaetetus. How nice it is to teach Students in Tutorials who also study philosophy. But what is the answer to my question?

Q: That issues involving government debt are long term, not short term. So if using money to finance extra government spending just involves a temporary increase in money, and no permanent reduction in debt, what is the point?

A: Indeed. Now there might be some point if the government or central bank wanted to signal with QE that it did intend to raise inflation above the target level for some time, which is the same as saying that it would not reverse all of the QE.

Q: But haven’t both the Bank of England and Federal Reserve said they remain totally committed to their inflation targets?

A: Effectively yes, and recently they seem to be content to see inflation below target, so it would be very odd if they were using QE to signal the opposite.

Q: So does that make QE a complete waste of time? And why do you say there would be some point in having higher inflation in the future?

A: I think this is an excellent point at which to end things with some reading for next week. I suggest this paper that Michael Woodford has just written. After that have a look at these blogs (Bruegel, Kimball and DeLong, Serlin, Hamilton, Gagnon) on the effectiveness of QE, and this on nominal GDP targets and fiscal policy. (Try and keep the issues involving instruments, mechanisms and targets separate if you can.)

Q: Not another paper by Woodford! The last time you made us read one of his papers it went into maths just when it was getting interesting.

A: I’m tempted to say that without maths there can be no true knowledge, and this is just the exception that proves that rule.  
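For anyone who wants the accounting behind the equivalence in the dialogue above, here is a minimal sketch of the consolidated government (treasury plus central bank) budget constraint; the notation is mine and everything is deliberately simplified:

```latex
% Consolidated government budget constraint (no interest paid on money, for simplicity)
% G = spending, T = taxes, B = bonds held by the public, M = base money, i = interest rate on bonds
G_t - T_t + i_t B_{t-1} \;=\; \Delta B_t + \Delta M_t

% Bond financed fiscal expansion:  the extra (G - T) is matched by \Delta B > 0
% QE:                              the central bank swaps money for bonds, \Delta M > 0 and \Delta B < 0
%
% Bond finance plus QE therefore leaves the public holding more money and no extra bonds,
% which is exactly what a money financed expansion does -- for as long as the QE lasts.
% If the QE is later reversed, the public ends up holding the extra bonds after all,
% and both policies converge on plain bond finance in the long run.
```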

Monday 10 September 2012

Democracy in the northern Eurozone: you can choose austerity, or austerity.


Imagine that before the US election the Congressional Budget Office published a detailed analysis of the economic implications of each candidate’s policies for different categories of government spending and taxation, and their impact on GDP, unemployment and much more. These were based on detailed and comprehensive accounts provided by both parties, and not unspecified aggregate numbers with no policies attached such as in the Ryan plan. You would have to agree that this would give voters a more informed choice, but you might also say that it was politically impossible in any country. Well have a look at the Netherlands, which will hold elections on 12th September. That is exactly what happens there, with the analysis provided by their fiscal council, the Bureau for Economic Policy Analysis (CPB). (If you are at all interested, the detail provided in the CPB’s analysis is extraordinary (pdf) – for example, it predicts what impact each party programme will have on greenhouse emissions.)  

That is the positive news. The not so good news is that unemployment is expected by the CPB to rise by 1% over the next two years, and almost none of the major parties are planning to do anything to try and stop this. How could they stop it? Being part of the Eurozone means that fiscal policy is the only aggregate policy tool available. As a result political parties should be planning to raise budget deficits – increasing government spending or cutting taxes – on a temporary basis to keep demand rising in line with supply. Yet only one major party is planning to do this: the far right ‘Party for Freedom’, whose refusal to vote for the deficit reduction plans of the previous coalition brought down the government and sparked this election.

Now increases of 1% in unemployment may seem small beer compared to what is happening in Spain, for example. But unlike Spain, there is no market pressure in the Netherlands to reduce budget deficits. Instead the pressure comes from the Eurozone’s fiscal rules. And it matters because if countries like the Netherlands and Germany are reducing output and increasing unemployment by trying to cut budget deficits, then this makes the task for countries like Spain much more difficult.

General developments in the Eurozone are proceeding as I thought they might when I recklessly forecast that the Euro would survive. Because the process involves a power struggle between different economic ideologies, and countries, it is slow and painful and full of potential hazards and uncertainties: will the conditionality imposed on Spain and Italy to obtain ECB help be light enough to be politically acceptable to these countries, for example? (Paul Collier has an interesting post on this here.) I still worry that Germany might demand Greek exit as a token victory, but I’m relying on wise heads, and the IMF, to make sure that does not happen. However survival will still come at the price of a prolonged Eurozone recession, and here the fiscal rules are the central problem, as the Netherlands illustrates all too clearly.

The voters of the Netherlands are being given some choice, as Matthew Dalton points out. The Liberals (right of centre) want to cut the deficit by reducing spending, while the socialists want to raise taxes on high earners, and would increase the deficit compared to baseline in 2013 (but not 2014). However according to the CPB: “For almost all parties, unemployment will increase, compared to the baseline.” The exception is the right wing Freedom Party: it has the only programme that raises growth (slightly), and it is the only party that plans to increase the deficit in both 2013 and 2014 (all relative to baseline). So voters can vote against what I have previously called budget madness, but only by voting for a party that wants to abolish the minimum wage and halt immigration from non-Western countries.

I started this post with the CPB, so let me finish with them as well. The CPB is in a delicate position, as it wants to retain the trust of all the political parties for being impartial. However, in the FT (£) in February (also available here), the Director of the CPB, Coen Teulings, wrote an article entitled “Eurozone countries must not be forced to meet deficit targets” (jointly written with Jean Pisani-Ferry). The Dutch central bank, on the other hand, has been calling for the urgent ‘rationalisation’ of the public finances. (Its head was appointed by the previous coalition that proposed deficit cuts.) Which goes to show that fiscal councils tend to be wise, but central bankers talking about fiscal policy can be – well – not so wise.

Sunday 9 September 2012

Judgement Calls and Microfoundation Tricks


Noah Smith has a really nice piece about when a microfounded model does or does not violate the Lucas critique. (See also this useful post from Bruegel.) Noah suggests that this comes down to a judgement call, which in turn introduces a potential ideological bias. I want to elaborate, and suggest another bias that may result: a bias towards simplicity. However I also want to suggest a further bias that potentially undercuts the methodological rationale behind microfoundations.

The key idea behind the Lucas critique was that models should be derived from ‘deep’ parameters, like agents’ preferences or technological parameters. These were parameters that could reasonably be described as independent of the way monetary policy was conducted. The target of the Lucas critique was models where expectations formation was implicit in the model’s equations: even if you only half believed in rational expectations, changes in how monetary policy was done would change how expectations were formed, and therefore change those equations.

Noah argues that whether a parameter is independent of policy is essentially a judgement – our evidence base is not good enough to show us one way or another. Where you have judgement, various biases, including ideological views, can get in. I think this is right, but I also suspect the point will not bother most macroeconomists too much. They are – rightly or wrongly – fairly happy with treating preference parameters as exogenous, whereas treating expectations processes as independent of policy seems clearly problematic. (I appeal here to what most macroeconomists will think, and not what is right. In a recent post, for example, I argue that people’s preferences over which party to vote for are pretty malleable.)

However, once you go beyond the very simple RBC type models, the range of deep parameters extends beyond preferences and technology. To take the obvious example, if you want to have something useful to say about monetary policy, you need sticky prices, and these are usually microfounded in terms of Calvo contracts. The deep parameter in Calvo contracts is the probability that a firm’s price will change each period. Is this parameter independent of monetary policy?
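To see why the question matters, recall (in simplified textbook form) how that parameter enters the New Keynesian Phillips curve:

```latex
% New Keynesian Phillips curve, standard textbook form
\pi_t \;=\; \beta\, E_t \pi_{t+1} \;+\; \kappa\, y_t,
\qquad \kappa \;=\; \frac{(1-\theta)(1-\beta\theta)}{\theta}\,\lambda

% theta  = Calvo probability that a firm's price stays fixed in any given period
%          (one minus the probability of a price change referred to above)
% lambda = a composite of preference and technology parameters
% If theta itself shifts with the monetary regime (with trend inflation, say, or the
% volatility of policy), then the slope kappa is not invariant to policy, and the
% Lucas critique applies to any model that treats it as fixed.
```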

The paper by Chari et al to which Noah refers puts the same point in a slightly different way. If the parameters of the model are not deep (independent), then the implied shocks to the model will not be ‘structural’ i.e. identifiable and independent of policy. They look at the shock processes typically included in New Keynesian models, and split them into two groups: potentially structural shocks, which include technology shocks, and dubiously structural shocks, which include mark-up shocks.

How do Chari et al decide into which of these two categories a shock should be classified? Noah would say judgement, whereas the authors would say microeconomic evidence. However this is not a debate I want to get into, interesting though it is. Instead I want to agree with Chari et al: the shocks in New Keynesian models are pretty dubious, and their deep parameters, like the Calvo parameter, are not obviously invariant to policy.

So why do New Keynesian models contain problematic features like Calvo contracts? Calvo contracts are a ‘trick’, by which I mean a device that allows you to get sticky prices into a model in a reasonably tractable way. Doing this job ‘properly’ might involve adding menu costs into the model, but this quickly gets intractable. So Calvo contracts are a trick that acts ‘as if’ firms were faced by menu costs. But whether this trick works – whether Calvo contracts really do mimic what an otherwise intractable model with menu costs would show – is inevitably a judgement call.

Because these judgement calls are problematic, there is a bias towards avoiding them by keeping the model simple. Here Chari et al are explicit. “One tradition, which we prefer, is to keep the model very simple, keep the number of parameters small and well-motivated by micro facts, and put up with the reality that such a model neither can nor should fit most aspects of the data. Such a model can still be very useful in clarifying how to think about policy.”

Suppose we do not follow this tradition, and instead attempt to explain more aspects of the data by building models that incorporate dubious judgement calls. I think we then have to recognise that these judgement calls will be influenced not just by the microeconomic evidence, or ideology as Noah suggests, but also by the need to have models that explain the real world. That is I believe a quite reasonable thing to do, but as Chari et al point out it does mean potentially compromising the internal consistency of the model (and therefore its immunity from the Lucas critique). As I have argued at length elsewhere (article, working paper), microfounded models have become dominant because they have let the evidence influence model structure through a back door. Individual equations may no longer be selected by directly confronting the data, but the data has influenced the judgement calls involved in the microfoundations.

Friday 7 September 2012

Republican voters and the BBC


From a European perspective, the US election should be no contest, as these results from an opinion survey by Pew indicate.


Now if you are a Republican, this may just confirm your view that Europeans are forced to live in just the kind of socialist state that Obama wants to turn the US into, and that we have been brainwashed into not knowing any better. However I think it has rather more to do with things like:

1) Many Republican Party supporters campaign to have intelligent design taught on an equal footing with Darwinian evolution in schools.

2) Climate change denial appears to govern Party policy on what may well be the greatest threat mankind currently faces.

3) Party leaders appear to at best tolerate, if not promulgate, the idea that tax cuts (particularly for the rich) will increase tax revenue, despite all the evidence to the contrary.

4) Veracity seems to be in short supply in Party speeches and propaganda in the current election.

In other words significant sections of the Republican Party seem to have a problem with reality, or as Paul Krugman says, facts have a liberal bias.

Of course the two major parties in the UK have their more extreme elements, but these are either much less extreme, or much more marginal, than in the Republican Party. Indeed, it is generally thought that when this ceased to be the case for the Labour Party in the 1980s, it did Labour great harm in terms of the popular vote. Labour leaders learnt that attacking these extreme elements within their own Party won them votes.

There appears to be another puzzle, and that is that many Republican voters seem to be voting against their own economic interest. Residents of red states tend to receive more government transfers than blue states. More generally, how can the poor possibly vote for a party which is so devoted to tax cuts for the (very) rich and reducing aid to the poor?

Now one possible answer to this puzzle is that facts, veracity and economic benefits are not at the top of many voters’ lists of what they look for in politicians. It’s all about values. ‘Culture Wars’, traditionalists versus progressives, that kind of thing. Perhaps voters in Kansas just value individualism and religious belief more than European voters, and creationism, climate change denial and tax cuts convey these values.

I’m well outside my comfort zone here, but I want to suggest an alternative answer. In the decades around the 1970s, there was much discussion in the UK about the apparent enigma of the Conservative working class voter.  Previously it was argued that voting, and the two political parties in the UK, had been split along class lines – the Labour Party could be said to represent the working class. How then to explain that a growing number of the working class were apparently voting against their own party? In a paper written in 1967, the sociologist Frank Parkin turned that question on its head. The dominant culture, he suggested, was conservative with a small ‘c’. The relevant enigma was therefore not the working class Conservative, but the Labour voter of any class. The latter could be understood by considering the strength of working class networks (e.g. trade unions) that could resist the influence of the dominant conservative culture, and the changing strength of institutions promulgating that dominant culture. Of particular importance was the growing influence of radio and television as conveyors of information and values.

In the UK the state played a central role in the development of radio and television, through the BBC. It appears as if most European countries followed a similar model. Now this can have severe disadvantages if the state decides to take too much control, but in many European countries there are various safeguards designed to minimise the influence of the particular party in power over what is broadcast. I want to suggest that this set up has some important implications. State controlled media will tend to be centrist in its outlook, and dismissive of extremes. It will also try and reflect establishment views and opinions, which include academic scientific opinion and mainstream religious views. Partly as a result, a political party which appeared to tolerate the views listed above would be given a hard time. I’m also pretty sure that party leaders would not be able to get away with being as ‘economical with the truth’ as Ryan’s convention speech was.

In the US public sector broadcasting is not a major force. Now there is a chicken and egg problem here: perhaps the US media model may reflect different values, like a greater aversion to state power. However it could also just be a consequence of the political power of corporations in the US at a particular point in time. As Robert McChesney documents, it was not inevitable that the private sector model that now dominates in the US should have emerged, although once it was established it became easier to sustain this position.  My key point is that this lack of a major state presence in TV and radio makes it easier for those with money to try and control the information and social values promulgated by media.

In the UK we are used to newspaper barons manipulating news and opinion to further their own or a political party’s views on particular issues. However the BBC, together with a legal requirement on other TV channels to be politically balanced, limits the scope of newspapers to manipulate information or change values. Surveys repeatedly show much higher levels of trust in the BBC compared to the media in general. Undoubtedly the press does have considerable influence in the UK, but few would argue that it alone could fundamentally change the political landscape. However, if both TV and radio were able to work to the same model as newspapers, such manipulation would become a distinct possibility.

Just as Frank Parkin argued that the working class Conservative voter was not an enigma in the UK once we thought about the dissemination of information and values, so the steady drift to the right of the Republican Party could be explained in similar terms. Paul Krugman argues that pundits who describe America as a fundamentally conservative country are wrong. What I am suggesting is that the drift to the right of the Republican Party may be a function of the ownership structure of the media in the US. If true, this raises two questions. First, has this process in the US been there since the invention of radio and TV, or is it more recent, and if so why? (Simon Johnson amongst others suggests it started with Reagan.) [Postscript 2: Mark Sadowski provides the answer below, which I should have known about: the repeal of the Fairness Doctrine.] Second, does the two party system in the US provide a limit to the power that money can have over the media, or does the trend have further to go?

Postscript 1:  To the extent that there is a section in the US media that likes to position itself as centrist, it often appears to deliberately avoid recognising what is going on: see here, for example (HT, as ever, MT) 

Wednesday 5 September 2012

Zero Lower Bound Denial


There is a great deal in Mike Woodford’s Jackson Hole paper. What was new to me was his comprehensive discussion of Quantitative Easing (QE). I hope I’m being objective in reading his account as being very sceptical of what QE can do, if it is not signalling intentions about future interest rate policy. The meat here is in the discussion of portfolio balance effects in section 3, but this paragraph from the conclusion is worth quoting in full.

“Central bankers confronting the problem of the interest-rate lower bound have tended to be especially attracted to proposals that offer the prospect of additional monetary stimulus while (i) not requiring the central bank to commit itself with regard to future policy decisions, and (ii) purporting to alter general financial conditions in a way that should affect all parts of the economy relatively uniformly, so that the central bank can avoid involving itself in decisions about the allocation of credit. Unfortunately, the belief that methods exist that can be effective while satisfying these two desiderata seems to depend to a great extent on wishful thinking.”

While I believe macroeconomists practising demand denial represent a minority (albeit still a distressingly important minority), I think what I might call Zero Lower Bound (ZLB) denial is far more prevalent. What I mean by this is a belief that somehow monetary policy alone can overcome the problem of the ZLB. It is in many ways a perfectly understandable belief, reflecting what I have called the consensus assignment developed and implemented during the Great Moderation, which was (rightly) seen as an advance on the bad old days when fiscal policy was routinely used for demand stabilisation. Nevertheless the belief is incorrect, and damaging.

It is incorrect for two reasons. The first is summed up in the quote above. Monetary policy that involves temporarily creating money to buy financial assets is of an order less effective and reliable than conventional monetary policy, or fiscal policy. The second is not addressed in Mike’s paper, but is just as important. Even if you follow the Krugman/Woodford idea of using commitments about future interest rate setting to mitigate the recession today (which is equivalent to permanently creating more money), this does not mean that you can forget about fiscal policy. To put it another way, fiscal policy would still be a vital stabilisation tool at the ZLB even if the central bank targeted nominal GDP (NGDP). It is reluctance to accept this last point which is a particular characteristic of ZLB denial.

I have elaborated on this second point before, but let me sketch the reasoning in a different way. Suppose we only think about demand shocks, and suppose the central bank is all powerful and prescient, so absent the ZLB any demand shock can be completely offset through monetary policy. Inflation and output are related through a Phillips curve with no cost-push shocks, so keeping output at its natural rate keeps inflation on track, and NGDP on track. What the central bank actually targets does not matter here, because inflation, output, price level or NGDP targeting would all be perfectly successful in terms of desired levels of output and inflation.

Now suppose we have a large negative demand shock so that we hit the ZLB. We will hit the ZLB whether we have an inflation target or NGDP target. In both cases output falls below the natural rate and current inflation is too low (below its desired level). We will miss today’s NGDP target. What the NGDP target tomorrow does is reduce the impact of the shock on current output and inflation, because inflation that is too low today means the central bank will aim for inflation and output above their normally desired levels tomorrow (to hit the NGDP target tomorrow), which supports output today through expectations effects.[1]  
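A back-of-the-envelope way to see the expectations channel is to write down the two standard New Keynesian relationships for ‘today’ and ‘tomorrow’ (this is my simplification for illustration, not a formal model):

```latex
% IS curve and Phillips curve, with i_t the nominal rate and r^n_t the natural real rate
y_t   \;=\; E_t y_{t+1} \;-\; \sigma \left( i_t - E_t \pi_{t+1} - r^n_t \right)
\pi_t \;=\; \beta\, E_t \pi_{t+1} \;+\; \kappa\, y_t

% A large negative demand shock makes r^n_t very negative; with i_t stuck at zero,
% y_t and pi_t fall below their desired levels whatever the target.
% An NGDP (level) target implies the shortfall in nominal GDP today must be made up,
% so expected inflation and output tomorrow rise above their normally desired levels,
% which supports y_t today through both terms of the IS curve -- reducing, but not
% eliminating, the cost of the ZLB.
% A fiscal stimulus today instead adds directly to demand (in effect raising r^n_t),
% and can in principle remove the constraint altogether.
```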

We can now make two key points. First, the ZLB still matters. NGDP targets help reduce (but not eliminate) the cost of the ZLB today, but only by incurring costs in the future. Second, fiscal policy could in principle eliminate all of these costs. A fiscal stimulus today could eliminate the ZLB constraint, allowing desired output and inflation to be achieved today and tomorrow. Equally, fiscal austerity today moves inflation and output further away from desired levels both today and tomorrow, even if we have NGDP targets.

This is why I keep irritating some by going on about fiscal policy when commenting on the current conjuncture. Apart from a few academic caveats, I’m a fully paid up member of the ‘monetary policy is all you need’ club outside of a potential ZLB or monetary union.  My views on monetary policy targets are very similar to, and have been heavily influenced by, those of Michael Woodford. But the ZLB does make a difference. It is a feature of the real world, not a consequence of any particular monetary policy strategy. ZLB denial, particularly in the hands of independent central banks, leads not just to wishful thinking, but encourages governments to make bad fiscal policy decisions.


[1] Having a NGDP target tomorrow only becomes useful (and different from an inflation target) if we miss the NGDP target today. We could eliminate the ZLB constraint today by having a much more expansionary monetary policy tomorrow (which exceeded any NGDP target), but that has greater costs tomorrow (and is also less credible today). 

Monday 3 September 2012

What type of model should central banks use?


As a follow up to my recent post on alternatives to microfounded models, I thought it might be useful to give an example of where I think an alternative to the DSGE approach is preferable. I’ve talked about central bank models before, but that post was partly descriptive, and raised questions rather than gave opinions. I come off the fence towards the end of this post.

As I have noted before, some central banks have followed academic macroeconomics by developing often elaborate DSGE models for use in both forecasting and policy analysis. Now we can all probably agree it is a good idea for central banks to look at a range of model types: DSGE models, VARs, and anything else in between. (See, for example, this recent advert from Ireland.) But if the models disagree, how do you judge between them? For understandable reasons, central banks like to have a ‘core’ model, which collects their best guesses about various issues. Other models can inform these guesses, but it is good to collect them all within one framework. Trivially you need to make sure your forecasts for the components of GDP are consistent with the aggregate, but more generally you want to be able to tell a story that is reasonably consistent in macroeconomic terms.

Most central banks I know use structural models as their core model, by which I mean models that contain equations that make use of much more economic theory than a structural VAR. They want to tell stories that go beyond past statistical correlations. Twenty years ago, you could describe these models as Structural Econometric Models (SEMs). These used a combination of theory and time series econometrics, where the econometrics was generally at the single equation level. However in the last few years a number of central banks, including the Bank of England, have moved towards making their core model an estimated DSGE model. (In my earlier post I described the Bank of England’s first attempt, BEQM, which I was involved with, but they have since replaced this with a model without that core/periphery design, more like the canonical ECB model of Smets-Wouters.)

How does an estimated DSGE model differ from a SEM? In the former, the theory should be internally consistent, and the data is not allowed to compromise that consistency. As a result, data has much less influence over the structure of individual equations. Suppose, for example, you took a consumption function from a DSGE model, and looked at its errors in predicting the data. Suppose I could show you that these errors were correlated with asset prices: when house prices went down, people saved more. I could also give you a good theoretical reason why this happened: when asset prices were high, people were able to borrow more because the value of their collateral increased. Would I be allowed to add asset prices into the consumption function of the DSGE model? No, I would not. I would instead have to incorporate the liquidity constraints that gave rise to these effects into the theoretical model, and examine what implications it had for not just consumption, but also other equations like labour supply or wages. If the theory involved the concept of precautionary saving, then as I indicated here, that is a non-trivial task. Only when that had been done could I adjust my model.

In a SEM, things could move much more quickly. You could just re-estimate the consumption function with an additional term in asset prices, and start using that. However, that consumption function might well now be inconsistent with the labour supply or wage equation. For the price of getting something nearer the data, you lose the knowledge that your model is internally consistent. (The Bank’s previous model, BEQM, tried to have it both ways by adding variables like asset prices to the periphery equation for consumption, but not to the core DSGE model.)
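To make the contrast concrete, here is a stylised example of the two kinds of consumption equation (both are illustrative forms written down for this post, not the Bank’s actual equations):

```latex
% DSGE: consumption comes from an Euler equation; asset prices do not enter directly
C_t^{-\sigma} \;=\; \beta\, E_t \left[ (1 + r_{t+1})\, C_{t+1}^{-\sigma} \right]

% SEM: an estimated equation where theory suggests the variables and the data picks the
% weights, so a house price term (HP) can simply be added and the equation re-estimated
\Delta \ln C_t \;=\; \alpha_0 + \alpha_1 \Delta \ln Y_t + \alpha_2 \Delta \ln HP_t
               + \alpha_3 \left( \ln C_{t-1} - \ln Y_{t-1} \right) + \varepsilon_t

% The second equation is quick to adjust and closer to the data, but nothing guarantees
% it is consistent with the labour supply or wage equations elsewhere in the model.
```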

Now at this point many people think Lucas critique, and make a distinction between policy analysis and forecasting. I have explained elsewhere why I do not put it this way, but the dilemma I raise here still applies if you are just interested in policy analysis, and think internal consistency is just about the Lucas critique. A model can satisfy the Lucas critique (be internally consistent), and give hopeless policy advice because it is consistently wrong. A model that does not satisfy the Lucas critique can give better (albeit not perfectly robust) policy advice, because it is closer to the data.

So are central banks doing the right thing if they make their core models estimated DSGE models, rather than SEMs? Here is my argument against this development. Our macroeconomic knowledge is much richer than any DSGE model I have ever seen. When we try and forecast, or look at policy analysis, we want to use as much of that knowledge as we can, particularly if that knowledge seems critical to the current situation. With a SEM we can come quite close to doing that. We can hypothesise that people are currently saving a lot because they are trying to rebuild their assets. We can look at the data to try and see how long that process may last. All this will be rough and ready, but we can incorporate what ideas we have into the forecast, and into any policy analysis around that forecast. If something else in the forecast, or policy, changes the value of personal sector net assets, the model will then adjust our consumption forecast. This is what I mean about making reasonably consistent judgements.

With a DSGE model without precautionary saving or some other balance sheet recession type idea influencing consumption, all we see are ‘shocks’:  errors in explaining the past. We cannot put any structure on those shocks in terms of endogenous variables in the model. So we lose this ability to be reasonably consistent. We are of course completely internally consistent with our model, but because our model is an incomplete representation of the real world we are consistently wrong. We have lost the ability to do our second best.

Now I cannot prove that this argument against using estimated DSGE models as the core central bank model is right. It could be that, by adding asset prices into the consumption function – even if we are right to do so – we make larger mistakes than we would by ignoring them completely, because we have not properly thought through the theory. The data provides some check against that, but it is far from foolproof. But equally you cannot prove the opposite either. This is another one of those judgement calls.

So what do I base my judgement on? Well how about this thought experiment. It is sometime in 2005/6. Consumption is very strong, and savings are low, and asset prices are high. You have good reason to think asset prices may be following a bubble. Your DSGE model has a consumption function based on an Euler equation, in which asset prices do not appear. It says a bursting house price bubble will have minimal effect. You ask your DSGE modellers if they are sure about this, and they admit they are not, and promise to come back in three years’ time with a model incorporating collateral effects. Your SEM modeller has a quick look at the data, and says there does seem to be some link between house prices and consumption, and promises to adjust the model equation and redo the forecast within a week. Now choose, as a policy maker, which type of model you would rather rely on.