Mark Thoma has a reflective post on the ability of evidence to move us forward in macro. Noah Smith also has interesting things to say. I just want to add the following thought.
If you think about some of the recent disputed empirical results (the 90% debt to GDP threshold, expansionary austerity, cutting spending rather than taxes, multiplier sizes), they all involve relating policy variables directly to outcomes. And if we think about some of the reasons these apparent relationships turned out not to be empirically robust, it was because the studies failed to control for other things that might matter for outcomes.
Let's be specific. Fiscal multipliers are bound to depend on what monetary policy is doing. In principle monetary policy can offset the impact of fiscal changes on output, but if monetary policy is constrained in some way, it cannot. So any empirical study of the impact of fiscal policy must control for what is happening to monetary policy. I have often written about why high debt may be damaging to growth, but these effects work through raising real interest rates, or discouraging labour supply. It just seems foolish to apply them to a situation where real interest rates are unusually low, and output is hardly constrained by a shortage of labour.
These are simple, obvious points, but it's amazing how often they are ignored. It is as if some in the profession are desperate to find universal (and perhaps convenient) simple truths, in the face of the obvious fact that the macroeconomy is complex. This is not a new phenomenon. I'm afraid what follows is a personal anecdote, but it is topical.
Monetarism was the centrepiece of Mrs Thatcher’s first government. Following Friedman, policy was based around the idea that there was a predictable causal relationship from the money supply to prices. Lags might be long and variable, but an x% change in the money supply would within a year or two lead to an x% change in prices. Parliament asked the new government to come up with evidence for this assertion. They agreed to, but for some reason I cannot remember, they promised to produce a working paper by a named Treasury economist, rather than some anonymous Treasury document.
At the time I was working in the Treasury, and my job was to help forecast prices. So they chose me to produce this paper. I was to report each week to Terry Burns on progress. Terry Burns had recently been appointed as Chief Economic Advisor and he was one of the architects of the government's new macroeconomic strategy. The first meeting went fine: I reported that if you regressed prices on the government's chosen monetary aggregate, you got exactly the relationship they were looking for. However I had remembered some of the econometrics I had been taught. I was worried about omitted variables, and the fact that the two time series were dominated by one particular episode. To cut a long story short, the relationship fell apart if you either took that episode out, or added other explanatory variables like oil prices. Despite Terry's and my best efforts, we could not rescue the relationship once you went beyond that first simple regression. To be honest I was not that surprised or unhappy about this, but for the government it was rather embarrassing.
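The fragility of that first regression is easy to reproduce with invented data. The sketch below is purely illustrative (the series, coefficients and the single dominant episode are all made up, not the Treasury data): a regression of prices on money looks impressive when both series are driven by an omitted variable concentrated in one episode, and the money coefficient collapses once that variable is added.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # invented quarterly observations

# One dominant episode: a burst of oil-driven inflation during which
# money growth also happened to be rapid.
oil = np.zeros(n)
oil[15:20] = 5.0                           # the single dominant episode
money = 0.9 * oil + rng.normal(0, 1, n)    # money growth tracks the episode
prices = 1.0 * oil + rng.normal(0, 1, n)   # inflation is driven by oil, not money

def ols(y, regressors):
    """OLS with an intercept; returns coefficients [const, b1, b2, ...]."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_simple = ols(prices, [money])        # prices on money alone: looks strong
b_full = ols(prices, [money, oil])     # control for oil: money effect vanishes

print(f"money coefficient, simple regression:   {b_simple[1]:.2f}")
print(f"money coefficient, controlling for oil: {b_full[1]:.2f}")
```

The same collapse happens if you simply drop the episode (observations 15-19) from the sample, which is the other check described above.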
I learnt two things from that episode. The first was to be always extremely distrustful of simple correlations between policy instruments and outcomes. The second occurred after my paper was published. As I was the named author, I was free to write what I thought was an unbiased but purely factual account of my findings, with (to Terry Burns' credit) no pressure to spin the results to suit the government. Yet despite it being obvious to any objective reader that the results gave no support to the government's policy, at least one well known city economist cherry-picked the results to suggest that they did.
Pretty much all the econometric work I did subsequently involved more structural relationships rather than these simple reduced forms. I think we have learnt a great deal from estimating equations that at least try to get close to underlying behavioural relationships, whether using cross section, time series or panel regressions. A carefully structured VAR may also tell us something. Perhaps an exhaustive robustness analysis running countless single equation regressions can reveal insights - as for example in Xavier Sala-i-Martin's AER paper 'I Just Ran Two Million Regressions' trying to explain economic growth. But if the empirical evidence involves little more than a regression of outcome x on instrument y, be very, very wary.
Just in case anyone is interested, the episode was the expansion in M3 caused by the Competition and Credit Control reforms in 1973, and the increase in inflation associated with higher oil prices in 1975.
 What happened at that point is a story that I will tell publicly one day. It is probably of no interest except to those who were involved in UK policy at that time, but it reminds me of one of the nicest and most interesting acquaintances I made during my time at the Treasury who is greatly missed.
Wasn't the 'monetarism' of the Thatcher government different from Friedman's because the latter would have required the technocratic 'nationalisation' of the entire banking sector, so the Thatcher Cabinet instead went for central bank definitions of the money supply?
Friedman disowned the experiment, I think, and had the Cabinet and the nationalised BoE followed his plan then levels of UK unemployment would have been even more severe.
Slightly off point, but did you have occasion to look at Nicholas Kaldor's work on this (published in The Scourge of Monetarism and The Economic Consequences of Mrs. Thatcher)?