Winner of the New Statesman SPERI Prize in Political Economy 2016


Wednesday, 15 August 2012

House prices, consumption and aggregation

A simplistic view of the link between house prices and consumption is that lower house prices reduce consumers’ wealth, and wealth determines consumption, so consumption falls. But think about a closed economy, where the physical housing stock is fixed. Housing does not provide a financial return. So if house prices fall, but aggregate labour income is unchanged, then if aggregate consumption falls permanently the personal sector will start running a perpetual surplus. This does not make sense.

The mistake is that although an individual can ‘cash in’ the benefits of higher house prices by downgrading their house, if the housing stock is fixed that individual’s gain is a loss for the person buying their house. Higher house prices are great for the old, and bad for the young, but there is no aggregate wealth effect.

As a result, a good deal of current analysis looks at the impact house prices may have on collateral, and therefore on house owners’ ability to borrow. Higher house prices in effect relax a liquidity or credit constraint. Agents who are credit constrained borrow and spend more when they become less constrained. There is no matching reduction in consumption elsewhere, so aggregate consumption rises. If it turns out that this was a house price bubble, the process goes into reverse, and we have a balance sheet recession[1]. In this story, it is variations in the supply of credit caused by house prices that are the driving force behind consumption changes. Let’s call this a credit effect.

There is clear US evidence that house price movements were related to changes in borrowing and consumption. That would be consistent with a wealth effect as well as a credit constraint story, but as we have noted, in aggregate the wealth effect should wash out.

Or should it? Let’s go back to thinking about winners and losers. Suppose you are an elderly individual, who is about to go into some form of residential home. You have no interest in the financial position of your children, and the feeling is mutual. You intend to finance the residential home fees and additional consumption in your final years from the proceeds of selling your house. If house prices unexpectedly fall, you have less to consume, so the impact of lower house prices on your consumption will be both large and fairly immediate. Now think about the person the house is going to be sold to. They will be younger, and clearly better off as a result of having to fork out much less for the house. If they are the archetypal (albeit non-altruistic) intertemporal consumer, they will smooth their additional wealth over the rest of their life, which is longer than the house seller’s. So their consumption rises by less than the house seller’s consumption falls, which means aggregate consumption declines for some time. This is a pure distributional effect, generated by life-cycle differences in consumption.
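The arithmetic of this distributional effect can be sketched with invented numbers: a seller with five years of life remaining and a buyer with forty, each smoothing a wealth transfer of 100 evenly over their own horizon.

```python
# Illustrative sketch with made-up numbers: a house price fall of 100
# redistributes wealth from an old seller to a young buyer.
delta_w = 100.0
t_old, t_young = 5, 40  # assumed remaining years of seller and buyer

dc_old = -delta_w / t_old      # seller consumes 20 less per year
dc_young = delta_w / t_young   # buyer consumes 2.5 more per year

# Aggregate consumption change, year by year
path = [dc_old + dc_young if year < t_old else dc_young
        for year in range(t_young)]

print(path[0])       # -17.5 while the seller is still alive
print(path[t_old])   # +2.5 once only the buyer remains
print(sum(path))     # 0.0: the effect washes out in the long run
```

The aggregate fall is front-loaded, followed by a long tail of slightly higher spending, exactly the surplus-then-deficit pattern described below.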

In aggregate, following a fall in house prices, the personal sector initially moves into surplus (as the elderly consume less), and then it moves into deficit (as the elderly disappear and the young continue to spend their capital gains). In the very long run we go back to balance. This reasoning assumes that the house buyer is able to adjust to any capital gains/losses over their entire life. But house buyers tend to be borrowers, and are therefore more likely to be credit constrained. So credit effects could reverse the sign of distributional effects.

This is a clear case where micro to macro modelling, of the kind surveyed in the paper by Heathcote, Storesletten and Violante, is useful in understanding what might happen. An example related to UK experience is a paper by Attanasio, Leicester and Wakefield (earlier pdf here). This tries to capture a great deal of disaggregation, and allows for credit constraints, limited (compared to the Barro ideal) bequests and much more, in a partial equilibrium setting where house price and income processes are exogenous. The analysis is only as good as its parts, of course, and I do not think it allows for the kind of irrationality discussed here. In addition, as housing markets differ significantly between countries, some of their findings could be country specific.

Perhaps the most important result of their analysis is that house prices are potentially very important in determining aggregate consumption. According to the model, most movements in UK consumption since the mid-1980s are caused by house price shocks rather than income shocks. In terms of the particular mechanism outlined above, their model suggests that the impact of house prices on the old dominates the impact on the young, despite credit constraints influencing the latter more. In other words the distributional effect of lower house prices on consumption is negative. Add in a collateral credit effect, and the model predicts lower house prices will significantly reduce aggregate consumption, which is the aggregate correlation we tend to observe.

But there remains an important puzzle which the paper discusses but does not resolve. In the data, in contrast to the model, consumption of the young is more responsive to house price changes than consumption of the old. The old appear not to adjust their consumption following house price changes as much as theory suggests they should, even when theory allows a partial bequest motive. So there remain important unresolved issues about how house prices influence consumption in the real world.



[1] This is like the mechanism in the Eggertsson and Krugman paper, although that paper is agnostic about why borrowing limits fall. They could fall as a result of greater risk aversion by banks, for example.

Monday, 13 August 2012

ECB conditionality exceeds their mandate


To get a variety of views on this issue, read this post from Bruegel. Here is my view.

We can think of the governments of Ireland or Spain facing a multiple equilibria problem when trying to sell their debt. There is a good equilibrium, where interest rates on this debt are low and fiscal policy is sustainable. There is a bad equilibrium, where interest rates are high, and because of this default is possible at some stage. Because default is possible, a high interest rate makes sense – hence the term equilibrium.

Countries with their own central bank and sustainable fiscal policy can avoid the bad equilibrium, because the central bank would buy sufficient government debt to move from the bad to the good. (See this pdf by Paul De Grauwe.) The threat that they would do this means they may not need to buy anything. Anyone who speculates that interest rates will rise will lose money, so the interest rate immediately drops to the low equilibrium.
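A back-of-the-envelope sketch of the two equilibria, with all parameters invented for illustration: risk-neutral pricing implies (1 + r)(1 − p) = 1 + r_safe, so investors require r = (r_safe + p)/(1 − p), while the default probability p itself rises with the rate the government must pay. Iterating this map from different starting rates lands on different equilibria.

```python
R_SAFE = 0.02  # assumed risk-free rate

def default_prob(r):
    # Assumed: default risk appears once rates exceed 3%, capped at 10%
    return min(0.10, max(0.0, r - 0.03))

def market_rate(r):
    # Rate risk-neutral investors require given the default probability
    p = default_prob(r)
    return (R_SAFE + p) / (1 - p)

def equilibrium(r0, steps=200):
    r = r0
    for _ in range(steps):
        r = market_rate(r)
    return r

print(round(equilibrium(0.02), 4))  # good equilibrium: 0.02
print(round(equilibrium(0.12), 4))  # bad equilibrium: ~0.1333
```

Start from low rates and default risk never materialises; start from high rates and the expected default premium validates itself. A central bank that credibly caps the rate removes the second fixed point entirely.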

How do markets know the central bank will do this, if that central bank is independent? They might reason that independence would be taken away by the government if the central bank refused. But suppose independence was somehow guaranteed. Well, they might look at what the central bank is doing. If it is already buying government debt as part of a Quantitative Easing (QE) programme, then as long as the same conditions remain the high interest outcome would not be an equilibrium.

Suppose instead that the central bank does not have a QE programme, and announces that it will only undertake one if the country concerned agrees to sell some of its debt to other countries under certain onerous conditions, and agreement is uncertain. We are of course talking about the ECB. Now the bad equilibrium becomes a possibility again. Perhaps the country will not agree to these onerous conditions. As Kevin O’Rourke points out, this possibility is quite conceivable for a country like Italy. Equally, based on past experience, the lenders may only agree if there is partial default. Neither of these things needs to be inevitable, just moderately possible – after all, interest rates are high only because there is a non-negligible chance of default. The ECB also says that even if the country and its potential creditors agree, it may still choose not to buy that country’s bonds. This throws another lifeline to the existence of a bad equilibrium.

So, we have moved from a situation where the bad equilibrium does not exist, to one where it can. As the good equilibrium is clearly better than the bad one, there must be some very good reason for the ECB to impose this kind of conditionality. What could it be?

The ECB’s mandate is price stability. So without conditionality, would there be an increased risk of inflation? One concern is that printing more money to buy government debt will raise inflation. But that does not appear to be a concern in the UK and US, for two very good reasons. First, the economy is in recession, or experiencing a pretty weak recovery. Second, central bank purchases of government debt are reversible, if inflation did look like it was becoming a serious problem.

What about the danger that by buying bonds now, when there is no inflation risk, governments will be encouraged to follow imprudent fiscal policies at other times when inflation is an issue? But why would the ECB buy government bonds in that situation? Buying bonds now does not commit the ECB to do so in the future. No one thinks the Fed will be doing QE in a boom. OK, what about all those ‘structural reforms’ that might not occur if the bad equilibria disappeared? Well, quite simply, that is none of the ECB’s business. It has nothing to do with price stability. If the ECB is worrying about structural reforms, it is exceeding its mandate.

Cannot the same argument – that an issue is not germane to price stability – be used about choosing between the good and bad equilibria? No. The bad equilibrium, because it forces countries like Ireland and Spain to undertake excessive austerity (and because it may influence the provision of private sector credit in those countries), is reducing output and will therefore eventually reduce inflation below target. The only ‘conditionality’ the ECB needs to avoid moral hazard is that intervention will take place only if the country in the bad equilibrium is suffering an unnecessarily severe recession. The ECB can decide itself whether this is the case by just looking at the data.

So, in my view, to embark on unconditional and selective QE in the current situation is within the price stability mandate of the ECB. To impose conditionality in the way it is doing is not within its mandate. Unfortunately, as Karl Whelan points out, this is not the first time the ECB has exceeded its mandate. As he also says, if the Fed or Bank of England made QE conditional on their governments undertaking certain ‘structural reforms’ or fiscal actions, there would be outrage. So why do so many people write as if it is acceptable for the ECB to do this?

Saturday, 11 August 2012

Handling complexity within microfoundations macro


In a previous post I looked at a paper by Carroll which suggested that the aggregate consumption function proposed by Friedman looked rather better than more modern intertemporal consumption theory might suggest, once you took the issue of precautionary saving seriously. The trouble was that to show this you had to run computer simulations, because the problem of income uncertainty was mathematically intractable. So how do you put the results of this finding into a microfounded model?

While I want to use the consumption and income uncertainty issue as an example of a more general problem, the example itself is very important. For a start, income uncertainty can change, and we have some evidence that its impact could be large. In addition, allowing for precautionary savings could make it a lot easier to understand important issues, like the role of liquidity constraints or balance sheet recessions.

I want to look at three responses to this kind of complexity, which I will call denial, computation and tricks. Denial is straightforward, but it is hardly a solution. I mention it only because I think that it is what often happens in practice when similar issues of complexity arise. I have called this elsewhere the streetlight problem, and suggested why it might have had unfortunate consequences in advancing our understanding of consumption and the recent recession.

Computation involves embracing not only the implications of the precautionary savings results, but also the methods used to obtain them as well. Instead of using computer simulations to investigate a particular partial equilibrium problem (how to optimally plan for income uncertainty), we put lots of similar problems together and use the same techniques to investigate general equilibrium macro issues, like optimal monetary policy.

This preserves the internal consistency of microfounded analysis. For example, we could obtain the optimal consumption plan for the consumer facing a particular parameterisation of income uncertainty. The central bank would then do its thing, which might include altering that income uncertainty. We then recompute the optimal consumption plan, and so on, until we get to a consistent solution.
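The iteration can be sketched abstractly. In this toy example (all functional forms and parameters are invented), precautionary saving makes consumption fall with income uncertainty, while the uncertainty households face depends in turn on how weak consumption is; iterating between the two blocks converges on a mutually consistent solution.

```python
def consumption(sigma):
    # Assumed: precautionary saving lowers consumption as income risk rises
    return 1.0 - 0.5 * sigma

def uncertainty(c):
    # Assumed: weaker consumption means deeper downturns and more income
    # risk, with the policy response only partly offsetting it
    return 0.1 + 0.2 * (1.0 - c)

# Fixed-point iteration: solve each block, feed the result to the other
c, sigma = 1.0, 0.1
for _ in range(100):
    c = consumption(sigma)
    sigma = uncertainty(c)

print(round(c, 4), round(sigma, 4))  # converges to 0.9444 0.1111
```

The real exercise replaces these one-line functions with numerically solved dynamic programmes, but the logic of iterating to internal consistency is the same.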

We already have plenty of papers where optimal policy is not derived analytically but through simulation.(1) However these papers typically include microfounded equations for the model of the economy (the consumption function etc). The extension I am talking about here, in its purest form, is where nothing is analytically derived. Instead the ingredients are set out (objectives, constraints etc), and (aside from any technical details about computation) the numerical results are presented – there are no equations representing the behaviour of the aggregate economy.

I have no doubt that this approach represents a useful exercise, if robustness is investigated appropriately. Some of the very interesting comments to my earlier post did raise the question of verification, but while that is certainly an issue, I do not see it as a critical problem. But could this ever become the main way we do macroeconomics? In particular, if results from these kinds of black box exercises were not understandable in terms of simpler models or basic intuition, would we be prepared to accept them? I suspect they would be a complement to other forms of modelling rather than a replacement, and I think Nick Rowe agrees, but I may be wrong. It would be interesting to look at the experience in other fields, like Computable General Equilibrium models in international trade for example.

The third way forward is to find a microfoundations 'trick'. By this I mean a set up which can be solved analytically, but at the cost of realism or generality. Recently Carroll has done just that for precautionary saving, in a paper with Patrick Toche. In that model a representative consumer works, has some probability of becoming unemployed (the income uncertainty), and once unemployed can never be employed again until they die. The authors suggest that this set-up can capture a good deal of the behaviour that comes out of the computer simulations that Carroll discussed in his earlier paper.

I think Calvo contracts are a similar kind of trick. No one believes that firms plan on the basis that the probability of their prices changing is immutable, just as everyone knows that one spell of unemployment does not mean that you will never work again. In both cases they are a device that allows you to capture a feature of the real world in a tractable way.

However, these tricks do come at a cost, which is how certain we can be of their internal consistency. If we derive a labour supply and consumption function from the same intertemporal optimisation problem, we know these two equations are consistent with each other. We can mathematically prove it. Furthermore, we are content that the underlying parameters of that problem (impatience, the utility function) are independent of other parts of the model, like monetary policy. Now Noah Smith is right that this contentment is a judgement call, but it is a familiar call. With tricks like Calvo contracts, we cannot be that confident. This is something I hope to elaborate on in a subsequent post. 

This is not to suggest that these tricks are not useful – I have used Calvo contracts countless times. I think the model in Carroll and Toche is neat. It is instead to suggest that the methodological ground on which these models stand is rather shakier as a result of these tricks. We can never write ‘I can prove the model is internally consistent’, but just ‘I have some reasons for believing the model may be internally consistent’. Invariance to the Lucas critique becomes a much bigger judgement call.

There is another option that is implicit in Carroll’s original paper, but perhaps not a microfoundations option. We use computer simulations of the kind he presents to justify an aggregate consumption function of the kind Friedman suggested. Aggregate equations would be microfounded in this sense (there need be no reference to aggregate data), but they would not be formally (mathematically) derived. Now the big disadvantage of this approach is that there is no procedure to ensure the aggregate model is internally consistent. However, it might be much more understandable than the computation approach (we could see and potentially manipulate the equations of the aggregate model), and it could be much more realistic than using some trick. I would like to add it as a fourth possible justification for starting macro analysis with an aggregate model, where aggregate equations were justified by references to papers that simulated optimal consumer behaviour.  

(1) Simulation analysis can make use of mathematically derived first order conditions, so the distinction here is not black and white. There are probably two aspects to the distinction that are important for the point at hand, generality and transparency of analysis, with perhaps the latter being more important. My own thoughts on this are not as clear as I would like.

Thursday, 9 August 2012

Giving Economics a Bad Name


Greg Mankiw is known to every economist and economics student, if only because of his best selling textbook. John Taylor is known to every macroeconomist, if only because of the large number of bits of macro with his name on them (Taylor rule, Taylor contracts etc). Both are respected by other academics because of the quality and influence of their academic work.

With two others, they recently wrote this about the Obama administration’s attempts to stimulate the economy through fiscal policy after the recession: “The negative effect of the administration’s ‘stimulus’ policies has been documented in a number of empirical studies.” They then quote from two studies. The first looks at a minor aspect of the stimulus packages, the Cash for Clunkers attempt to bring forward car purchases. There are other studies of this programme which are more favourable. The second study is co-authored by John Taylor, and others have interpreted his findings differently.

No other studies are directly referred to. That might just be because the overwhelming majority suggest that the stimulus package worked. Dylan Matthews on Ezra Klein's blog documents them here. As I wrote in a recent post, the evidence is about as clear as it ever is in macro. Which is not too surprising, as it is what Mankiw’s textbook suggests, and it is what the New Keynesian theory both authors have contributed to suggests.

Now the quote comes from a paper prepared for the Romney presidential campaign. It is clearly political in tone and intent. As both academics are Republican supporters, it may therefore seem par for the course. But it should not be. The Romney campaign publicised this paper because it was written by academics – experts in their field. It allows those who oppose fiscal stimulus to continue to claim that the evidence is on their side – look, these distinguished academics say so.

It is one thing for economists to disagree about policy. It would also be fine to say: I know the evidence is mixed, but I think some evidence is more reliable. It is not fine to imply that the evidence points in one direction when it points in the other. I say ‘imply’ here, because the authors do not explicitly say that the majority of studies suggest stimulus is ineffective. If they chose their words carefully, then you have to ask whether ‘intending to mislead’ is any better than ‘misrepresenting the facts’. Was that the intent, or just an isolated unfortunate piece of bad phrasing? All I can say is read the paper and judge for yourself, or this post from Brad DeLong.

This is sad, because it tells us as much about economics as an academic discipline as it does about the individuals concerned. In the past I have imagined something similar happening in physics. It actually stretches the imagination to do so, but if it did, the academics concerned would immediately lose their academic reputation. The credibility of their work would be questioned.  Responding to evidence rather than ignoring it is what distinguishes real science from pseudo science, and doctors from snake oil salesmen.

What can economics as a discipline do about this sad state of affairs? The answer is pretty obvious, to economists in particular, and that is changing the incentives where we can. However we cannot do much about the incentives provided by politics and the media. I have been pretty pessimistic about this in the past, but in a future post I will try and be more positive and talk about one possible way forward. 

Wednesday, 8 August 2012

One more time – good policy takes account of risks, and what happens if they materialise


From the Guardian’s report of Mervyn King’s press conference today, where the Bank of England lowered its forecast for UK growth this year to zero.

Paul Mason of Newsnight suggests that the Bank of England should stop trying to use monetary policy to offset the impact of chancellor George Osborne's fiscal tightening, and call for a Plan B instead.
King rejects the idea, saying that Osborne's plan looked "pretty sensible" back in 2010. Overseas factors have undermined it, he argues.
Now Mervyn King had little choice but to say this, but he is wrong (and probably knows he is wrong) for a simple reason. Even if the post-2010 Budget forecast of 2.8% growth in 2012 had been pretty sensible, there were risks either side. There always are, although the nature of the recession probably made these risks greater than normal. It is what you can do if those risks materialise that matters.

Now if growth had appeared to be stronger than 2.8%, and inflation had become excessive, the solution was obvious, well tested and effective – the Bank of England raises interest rates. But if growth looked like falling well short of 2.8%, the solution – more Quantitative Easing – was untested and very unclear in its effectiveness. (And before anyone comments, the government knows it has no intention of telling the Bank to abandon inflation targets.) With this basic asymmetry, you do not cross your fingers and hope your forecasts are correct. Instead you bias policy towards trying as far as possible to avoid the bad outcome. You go for 3.5% or 4% growth, knowing that if this produced undesirable inflation you could do something about it. That in turn meant not undertaking the Plan A of severe austerity.

So all the talk about how much austerity, or the Eurozone, or anything else, caused the current UK recession is beside the point when it comes to assessing the wisdom of 2010 austerity. Criticising the Bank of England for underestimating inflation in the past is even more pointless – do those making the criticism really think interest rates should have been higher two or three years ago? Even if the Euro crisis has been unforeseeable bad luck for the government (although I think excessive austerity is having its predictable effect there too), the government should not have put us in a position where we seem powerless to do anything about it.

If you are sailing a ship near land, you keep well clear of the coast, even if it means the journey may take longer.  So the fact that the economy has run aground does not mean the government was just unlucky. You do not embark on austerity when interest rates are near zero. Keynes taught us that, it is in all the textbooks, and a government bears responsibility when it ignores this wisdom. To the extent that the government was encouraged to pursue this course by the Governor of the Bank of England, that responsibility is shared.

Saturday, 4 August 2012

Watching the ECB play chess


Watching Mario Draghi trying to gradually outmanoeuvre some of his colleagues in order to rescue the Eurozone has a certain intellectual fascination, as long as you forget the stakes involved. I’m not an expert on the rules of this game, so I’m happy to leave the blow-by-blow account to others, such as Storbeck, Fatas, Varoufakis and Whelan.

What I cannot help reflecting on is the intellectual weakness of the position adopted by Draghi’s opponents. These opponents appear obsessed with a particular form of moral hazard: if the ECB intervenes to reduce the interest rates paid by certain governments, this will reduce the pressure on these governments to cut their debt and undertake certain structural reforms. (Alas this concern is often repeated in otherwise more reasonable analysis.) Now one, quite valid, response is to say that in a crisis you have to put moral hazard concerns to one side, as every central bank should know when it comes to a financial crisis. But a difficulty with this line is that it implicitly concedes a false diagnosis of the major problem faced by the Eurozone.

For most Eurozone countries, the crisis was not caused by their governments spending in an unsustainable way, but by their private sectors doing so (for example, Martin Wolf here). The politics are such that the government ends up picking up the tab for imprudent lending by banks. If you want to avoid this happening again, you focus on making sure governments do what they can to prevent excess private sector spending, which means countercyclical fiscal policy, and perhaps breaking the political power that banks have over local politicians.

Trying to do either of these things by forcing excessive austerity on governments is completely counterproductive. You do not encourage countercyclical fiscal policy by making it more pro-cyclical. In addition, creating major recessions in these countries makes it more, not less, likely that banks will be bailed out. Forcing excessive austerity, as well as doing nothing to deal with the underlying causes of the crisis, may even have made the short term problem of default risk worse. Not only has the size of any bank bailout increased because of domestic recession, but in the case of Greece excessive austerity has generated political instability which also increases default risk.

In a monetary union, a ‘punishment’ for allowing excessive private sector spending (and therefore the incentive to avoid it) is automatic: the economy becomes uncompetitive and must deflate relative to its partners to bring its prices back into line. Adjustment should be painful for creditors and debtors alike. However there are two clear cut reasons why this deflation should be gradual rather than sharp. The first is the Phillips curve: gradual deflation to adjust the price level is much more efficient than rapid deflation. The second is aversion to nominal wage cuts, which makes getting significant negative inflation very costly.

It is in this context that the game of chess being played at the ECB seems so divorced from macroeconomic reality. By delaying intervention, and insisting on conditionality, the ECB is complicit in creating unnecessarily severe recessions in many Eurozone countries, and may even be making the problem of high interest rates on government debt worse. As the interest rate the ECB sets is close to the zero lower bound, it is almost powerless to deal with the consequences for aggregate Eurozone activity, so the Eurozone as a whole enters an unnecessary recession. The OECD is forecasting a -4% output gap for the Euro area in 2013, and only an inflation nutter would call that a success for the ECB.

It gets worse. By not using its power (which no one doubts) to lower interest rates on government debt, it has allowed a crisis of market confidence to become a distributional struggle between Eurozone countries. So in effect one set of governments started financing another, on terms that make it very difficult for debtors to pay, and so the crisis becomes one that could threaten the cohesion of the Eurozone itself.  The ‘you will have to leave’ threats to Greece are just a particularly nasty manifestation of this.

There is a line that some people take that the current crisis shows that a partial economic union, where fiscal policy remains under the control of nation states, is inevitably flawed, and that the only long term solution for the Euro area is fiscal as well as monetary union. I think that case is unproven. If the ECB had undertaken a programme of Quantitative Easing, directed (as any such programme should be) at markets where high interest rates were damaging the economy, then economies would have been able to focus on restoring competitiveness in a controlled and efficient manner. That was never going to be easy or painless, but it need not have led to the scale of recession, and the political discord, that we are now seeing.

The current crisis certainly reveals shortcomings in the original design of the Euro. In my view these shortcomings could have been (and still could be) solved, if those in charge had looked at what was actually happening and applied basic macroeconomic principles and ideas. We have perpetual crisis today because too many European policymakers (and, with politicians’ encouragement, perhaps also voters) are looking at events through a kind of Ordoliberal and anti-Keynesian prism. If the current crisis reveals anything, it is how misguided this ideological perspective is.  

Thursday, 2 August 2012

Currency Misalignments and Current Accounts




One of my favourite journal paper titles is Xavier Sala-i-Martin’s AER paper ‘I just ran two million regressions’. The problem that paper tries to deal with is that there are too many potential variables that you could conceivably put in an equation explaining differences in economic growth rates among countries. There is then a serious danger of (intentional or otherwise) data mining. A researcher may want to establish that their pet new variable is important in determining growth, so they try lots of different regressions. When one set of additional variables is included the pet new variable is significant, but when another set is used it is not. Only the first group of regressions are published. Sala-i-Martin’s paper uses techniques that involve looking at all possible permutations of variables, in order to try and assess which are robust, in the sense of tending to be significant whatever else is in the regression.
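The flavour of the exercise can be sketched in a few lines, with simulated data and an invented ‘pet’ variable: run the growth regression for every subset of the controls and see how often the pet variable survives.

```python
# Toy robustness check over all control-variable subsets (data invented)
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 200
controls = {name: rng.normal(size=n) for name in "abcd"}
pet = rng.normal(size=n)
growth = 0.5 * pet + controls["a"] + rng.normal(size=n)  # pet truly matters

def pet_t_stat(y, regressors):
    # OLS with a constant; return the t-statistic on the pet variable,
    # which is always the second column
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

t_stats = []
for k in range(len(controls) + 1):
    for subset in itertools.combinations(controls, k):
        t_stats.append(pet_t_stat(growth, [pet] + [controls[v] for v in subset]))

share = np.mean(np.abs(np.array(t_stats)) > 2)
# A genuinely robust variable should be significant in (almost) all
# 2^4 = 16 regressions
print(f"significant in {share:.0%} of {len(t_stats)} regressions")
```

The published version is far more sophisticated (weighting models, distributions of estimates), but this is the basic permutation idea.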

A recent ECB working paper by Ca’Zorzi, Chudik and Dieppe does something similar with models of the medium term current account. Why is this important? In my view it’s a key ingredient in being able to say something about exchange rate misalignments. This idea is associated in particular with the work of John Williamson, who christened the approach Fundamental Equilibrium Exchange Rates, or FEER for short. (That led to probably the best title of any of the papers I have co-authored – ‘Are Our FEERs justified’ – where we test the FEER approach against PPP[1].) John’s most recent analysis, co-authored with William Cline, can be found here. This or very similar approaches often go by different names: in Peter Isard’s nice survey it is called the macroeconomic balance approach, and it continues to be used (along with other methods) by the IMF.

The idea behind the FEER approach is to model trade flows as a function of the real exchange rate and activity levels. In the medium term activity levels will be determined from the supply side i.e. the output gap will tend to zero. So if we think we know about this supply side, and we know what the current account will be in the medium term, we can back out the medium term real exchange rate. We can then form a view about the extent to which current exchange rates are misaligned (or, more precisely, what expected interest rate differentials would have to be to justify current exchange rates). I’ve used this approach on a number of occasions in the past: perhaps most notably, to try and assess what Euro/Sterling exchange rate the UK should have entered the EuroZone at if it had decided to join in 2003.
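As a stylised example with made-up numbers: if the medium-term current account (at a zero output gap) responds to the real exchange rate as ca = a − b·reer, then given a view about the sustainable medium-term current account we can back out the FEER and compare it with the actual rate.

```python
# All parameters below are invented for illustration
a, b = 4.0, 0.05        # assumed trade-equation intercept and slope
ca_target = -1.0        # assumed sustainable current account, % of GDP
reer_actual = 110.0     # assumed current real exchange rate index

# Invert ca = a - b * reer at the target current account
reer_feer = (a - ca_target) / b
misalignment = (reer_actual / reer_feer - 1) * 100

print(reer_feer)               # 100.0
print(round(misalignment, 1))  # 10.0 -> currency roughly 10% overvalued
```

In practice the trade equations are richer (separate export and import volumes and prices, partner-country activity), but the inversion step is exactly this.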

The main problem with this approach is working out what the medium term current account should be. Actual current accounts are a poor guide, because they are influenced by both noise and short term factors, like the economic cycle and currency misalignment. In long term equilibrium it is reasonable to assume that the current account should be zero, because the current account is the change in national wealth. However we know that current accounts can show persistent surpluses or deficits over many years. Intertemporal consumption theory gives us some ideas, but on its own it is not that helpful. Many other factors may matter, such as countries having different demographic profiles.  With no clear encompassing theory to use, empirical studies of the kind cited above may be our best guide.

Incidentally, the New Open Economy Macro (NOEM) approach, which is currently the most widely used microfounded open economy framework, essentially uses the same idea as the FEER: see for example this study by Obstfeld and Rogoff. It is more concerned with microfoundations, and less with data, but it shares with the FEER approach a focus on imperfectly competitive markets for internationally traded goods. As far as I know these authors have never acknowledged Williamson as a precursor, and I’m not sure why. As a result, many macroeconomists think NOEM invented this way of thinking about medium term exchange rates.

The details of which variables the authors of the ECB study find are important in determining medium term current accounts are probably not of wide enough interest to discuss in this post. What is more topical is that they use their robust models to estimate what underlying current accounts currently are for the US, UK, Japan and China. Perhaps unsurprisingly they find that, although the US would be in deficit and China in surplus, the numbers are much smaller than the deficits and surpluses observed in the recent past. More controversial, perhaps, is that they find Japan should also be running a deficit. In the past I and others have tended to assume surpluses for Japan, but this was always partly based on demographic features which were coming to an end, which is maybe what has now happened.

One slightly disappointing aspect of the study is that they did not look at Germany. There is some debate about the extent to which German surpluses represent a temporary misalignment of real exchange rates within the Eurozone, or whether they may be partly structural. The answer is rather important in assessing the extent to which deflation is required outside Germany, and it would have been very interesting to know what this study had to say on this issue.                   



[1] I should add that I take no credit for the title - I think it came from Rebecca.