

Showing posts with label Reduced form.

Thursday, 4 June 2015

Multipliers and evidence

I should be more careful with titles. The title of this post may have misled some (including Paul Krugman) to characterise what I was saying as favouring a priori beliefs over evidence. What I was in fact talking about was different kinds of evidence.

One kind of evidence on multipliers comes from directly relating output to some fiscal variable like government spending. In much the same way, we could base monetary policy on attempts to relate output or inflation directly to changes in interest rates. This is sometimes called ‘reduced form’ estimation. As I said in my post, these studies are valuable, and if they repeatedly showed something different from other evidence that would be worrying. However, in my experience they have proved less reliable than evidence based on looking at the structure of the economy. I discuss a personal example here, but two more recent examples where reduced form evidence has not proved robust concern the impact of debt on growth and evidence supporting expansionary austerity.

As I wrote in the recent post: “My priors come from thinking about models, or perhaps more accurately mechanisms, that have a solid empirical foundation.” Again perhaps I was remiss in not emphasising that last clause, but it is critical. Robert Waldmann did interpret what I wrote correctly, but has a more worrying (for me) charge - that my view of what specific structural empirical evidence says is tempered by modern microfounded modelling.

A good example concerns how consumers might react to a temporary increase in income. I wrote that my prior is that consumers will largely discount temporary income changes. But what exactly do I mean by ‘largely’ here? Is a marginal propensity to consume out of temporary income of, say, 0.3 large or not? As I have noted elsewhere, there is good empirical evidence to support a number of that size, and it is possible to explain this in terms of consumers optimising in the face of uncertainty.

But the plain vanilla intertemporal consumption model implies a marginal propensity to consume out of temporary income close to (or identical to) zero. So when I wrote “largely discount”, did I in fact temper my knowledge of the empirical evidence because of this basic (and in macro ubiquitous) theory? Or was I attempting to use a form of words ambiguous enough not to upset those with a strong attachment to the plain vanilla model? That would have been just as bad.
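To be concrete about ‘close to zero’, here is the textbook arithmetic behind that claim (a stylised sketch of my own, assuming an infinitely lived consumer who smooths consumption perfectly at a constant real interest rate r): a one-off windfall W is spread over the entire future, so consumption rises only by its annuity value,

\Delta C = \frac{r}{1+r}\,W \approx 0.03\,W \quad \text{at } r = 3\%,

an order of magnitude below the mpc of around 0.3 suggested by the empirical studies.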

It would be ironic if I had been tempering the evidence in that way. On a number of occasions I have argued that it was unfortunate that the microfoundations revolution has completely killed (in the academic literature, if not in all central banks) the alternative of analysing aggregate models where relationships are partly justified by empirical evidence. One of my reasons for believing this to be unfortunate is that it tends to put too much weight on simple theory relative to evidence. When I wrote ‘largely discount’, was I providing an example of just this kind of thing?

If it was, it may only have been a temporary lapse. Paul thinks a multiplier of around 1.5 is reasonable (I assume at the Zero Lower Bound when there will be little or no monetary policy offset), and when I wrote this I also assumed a multiplier of 1.5. However I think the point that Robert was making is a very important one: in macro we seem generally happier falling back on what standard theory says than on what the majority of empirical evidence suggests.     


Monday, 14 April 2014

The Fed’s macroeconomic model

There has been some comment on the decision of the US central bank (the Fed) to publish its main econometric model in full. In terms of openness I agree with Tony Yates that this is a great move, and that the Bank of England should follow. The Bank publishes some details of its model (somewhat belatedly, as I noted here), but as Tony argues this falls some way short of what is now provided by the Fed.

However I think Noah Smith makes the most interesting point: unlike the Bank's model, the model published by the Fed is not a DSGE model. Instead, it is what is often called a Structural Econometric Model (SEM): a pretty ad hoc mixture of theory and econometric estimation that would not please either a macro theorist or a time series econometrician. As Noah notes, they use this model for forecasting and policy analysis. Noah speculates that the Fed’s move to publish a model of this kind indicates that they are perhaps less embarrassed about using a SEM than they once were. I’ve no idea if this is true, but for most academic macroeconomists it raises a puzzling question - why are they still using this type of model? If the Bank of England can use a DSGE model as their core model, why doesn’t the Fed?

I have discussed the question of what type of model a central bank should use before. In addition, I have written many posts (most recently here) advocating the advantages of augmenting DSGE models and VARs with this kind of middle way approach. For various reasons, this middle way approach will be particularly attractive to a policy making organisation like a central bank, but I also think that a SEM can play a role in academic analysis. For the moment, though, let me just focus on policy analysis by policy makers.

Consider a particular question: what is the impact of a temporary cut in income taxes? What kind of methods should an economist employ to answer this question? We could estimate reduced forms/VARs relating variables of interest (output, inflation etc) to changes in income taxes in the past. However there are serious problems with this approach. The most obvious is that the impact of past changes in taxes will depend on the reaction of monetary policy at the time, and whether monetary policy will act in a similar way today. Results will also depend on how permanent past changes in taxes were expected to be. I would not want to suggest that these issues make reduced form estimation a waste of time, but they do indicate how difficult it will be to get a good answer using this approach. Similar problems arise if we relate growth to debt, money to prices (a personal reflection here) and so on. Macro reduced form analysis relating policy variables to outcomes is very fragile.
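To illustrate how fragile this can be, here is a toy simulation in Python (entirely made-up coefficients, purely illustrative, and nothing to do with any actual study): output responds to a fiscal stimulus with a ‘true’ multiplier of 1.5, but most of the time monetary policy leans against the fiscal shock, so a reduced-form regression of output on the fiscal variable recovers neither the offset-regime nor the constrained-regime answer.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Stylised data-generating process (illustrative numbers, not estimates).
# Output y responds to a fiscal shock g with a "true" multiplier of 1.5
# (holding the interest rate fixed), to the interest rate r with -1.0,
# plus other demand shocks u.
g = rng.normal(0, 1, n)
u = rng.normal(0, 1, n)

# Most of the time the central bank leans against fiscal shocks (r moves with g);
# in a "constrained" regime (think lower bound) it does not respond at all.
constrained = rng.random(n) < 0.3
r = np.where(constrained, 0.0, 1.0 * g) + rng.normal(0, 0.2, n)
y = 1.5 * g - 1.0 * r + u

# Reduced-form regression of output on the fiscal variable alone
b_pooled = np.polyfit(g, y, 1)[0]
print(f"pooled reduced-form 'multiplier': {b_pooled:.2f}  (direct effect with r fixed: 1.5)")

# Splitting the sample by monetary regime gives very different answers
for label, mask in [("offset regime", ~constrained), ("constrained regime", constrained)]:
    print(f"{label}: {np.polyfit(g[mask], y[mask], 1)[0]:.2f}")
```

The pooled estimate is an average over the monetary regimes that happen to be in the sample, which is exactly why it may tell us little about the effect of a tax cut today.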

An alternative would be for the economist to build a DSGE model, and simulate that. This has a number of advantages over the reduced form estimation approach. The nature of the experiment can be precisely controlled: the fact that the tax cut is temporary, how it is financed, what monetary policy is doing etc. But any answer is only going to be as good as the model used to obtain it. A prerequisite for a DSGE model is that all relationships have to be microfounded in an internally consistent way, and there should be nothing ad hoc in the model. In practice that can preclude including things that we suspect are important, but that we do not know exactly how to model in a microfounded manner. We model what we can microfound, not what we can see.

A specific example that is likely to be critical to the impact of a temporary income tax cut is how the consumption function treats income discounting. If future income is discounted at the rate of interest, we get Ricardian Equivalence. This same theory tells us that the marginal propensity to consume (mpc) out of windfall gains in income is very small, yet there is a great deal of evidence to suggest the mpc lies somewhere around a third or more. (Here is a post discussing one study from today’s Mark Thoma links.) DSGE models can try to capture this by assuming a proportion of ‘income constrained’ consumers, but is that all that is going on? Another explanation is that unconstrained consumers discount future labour income at a much greater rate than the rate of interest. This could be because of income uncertainty and precautionary savings, but these are difficult to microfound, so DSGE models typically ignore them.

The Fed model does not. To quote: “future labor and transfer income is discounted at a rate substantially higher than the discount rate on future income from non-human wealth, reflecting uninsurable individual income risk.” My own SEM that I built 20+ years ago, Compact, did something similar. My colleague, John Muellbauer, has persistently pursued estimating consumption functions that use an eclectic mix of data and theory, and as a result was incorporating the impact of financial frictions in his work long before it became fashionable.
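To see roughly how that higher discount rate changes behaviour, here is a minimal sketch of my own (not the FRB/US equations or Compact): suppose consumption is the annuity value of human wealth, with future labour income discounted at some rate rho. Then the mpc out of a one-period windfall is rho/(1+rho), so moving rho from a riskless 3% up to the 25-40% range takes the mpc from near zero to around the third found in the data.

```python
# Minimal sketch, not the FRB/US or Compact consumption equations.
# Assume C = [rho / (1 + rho)] * H, where H is expected future labour income
# discounted at rate rho per period. A one-period windfall raises H one-for-one,
# so the mpc out of temporary income is just the annuity factor rho / (1 + rho).

def mpc_temporary(rho: float) -> float:
    """mpc out of a purely temporary income change when future labour
    income is discounted at rate rho."""
    return rho / (1.0 + rho)

for rho in (0.03, 0.10, 0.25, 0.40):
    print(f"discount rate {rho:.0%}: mpc out of temporary income ≈ {mpc_temporary(rho):.2f}")
```

In this formulation a permanent income change still moves consumption roughly one-for-one, so the heavier discounting only changes how temporary income is treated, which is exactly the margin that matters for a temporary tax cut.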

So I suspect the Fed uses a SEM rather than a DSGE model not because they are old fashioned and out of date, but because they find it more useful. (Actually this is a little more than a suspicion.) Now that does not mean that academics should be using models of this type, but it should at least give pause to those academics who continue to suggest that SEMs are a thing of the past.


Wednesday, 17 April 2013

Reduced form macro


Mark Thoma has a reflective post on the ability of evidence to move us forward in macro. Noah Smith also has interesting things to say. I just want to add the following thought.

If you think about some of the recent disputed empirical results (the 90% debt-to-GDP threshold, expansionary austerity, cutting spending rather than taxes, multiplier sizes), they all involve relating policy variables directly to outcomes. And if we ask why these apparent relationships turned out not to be empirically robust, it was often because the studies failed to think about other things that might matter for outcomes.

Let’s be specific. Fiscal multipliers are bound to depend on what monetary policy is doing. In principle monetary policy can offset the impact of fiscal changes on output, but if monetary policy is constrained in some way, it cannot. So any empirical study of the impact of fiscal policy must control for what is happening to monetary policy. I have often written about why high debt may be damaging to growth, but these effects work through raising real interest rates or discouraging labour supply. It just seems foolish to apply them to a situation where real interest rates are unusually low, and output is hardly constrained by a shortage of labour.
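On the multiplier point, even the simplest textbook arithmetic makes the dependence explicit. Take a static income-expenditure sketch of my own (nothing to do with any particular study), Y = c(Y - T) + I - br + G, with consumption c(Y - T), interest-sensitive spending -br, and exogenous investment I. The multiplier then depends entirely on the monetary regime:

\frac{dY}{dG} = 0 \quad \text{(interest rate moved to keep Y on target)}, \qquad \frac{dY}{dG} = \frac{1}{1-c} \quad \text{(interest rate fixed, e.g. at the lower bound)}.

A sample that mixes the two regimes will deliver an average multiplier that applies to neither.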

These are simple, obvious points, but it’s amazing how often they are ignored. It is as if some in the profession are desperate to find universal (and perhaps convenient) simple truths, in the face of the obvious fact that the macroeconomy is complex. This is not a new phenomenon. I’m afraid what follows is a personal anecdote, but it is topical.

Monetarism was the centrepiece of Mrs Thatcher’s first government. Following Friedman, policy was based around the idea that there was a predictable causal relationship from the money supply to prices. Lags might be long and variable, but an x% change in the money supply would within a year or two lead to an x% change in prices. Parliament asked the new government to come up with evidence for this assertion. The government agreed but, for some reason I cannot remember, promised to produce a working paper by a named Treasury economist rather than some anonymous Treasury document.

At the time I was working in the Treasury, and my job was to help forecast prices. So they chose me to produce this paper. I was to report on progress each week to Terry Burns, who had recently been appointed Chief Economic Adviser and was one of the architects of the government’s new macroeconomic strategy. The first meeting went fine: I reported that if you regressed prices on the government’s chosen monetary aggregate, you got exactly the relationship they were looking for. However I had remembered some of the econometrics I had been taught. I was worried about omitted variables, and the fact that the two time series were dominated by one particular episode. [1] To cut a long story short, the relationship fell apart if you either took that episode out, or added other explanatory variables like oil prices. Despite Terry’s best efforts and mine, we could not rescue the relationship once you went beyond that first simple regression. To be honest I was not that surprised or unhappy about this, but for the government it was rather embarrassing.
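The mechanics are easy to reproduce with made-up data. Here is a toy Python simulation (illustrative numbers only, nothing to do with the actual Treasury series): both money growth and inflation are dominated by one common episode, the true link between them is weak, and the apparent relationship vanishes once you drop the episode or control for the omitted variable.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 60  # illustrative sample length

# Toy data: money growth and inflation are both dominated by one episode
# (a large common shock), with only a weak direct link otherwise.
episode = np.zeros(T)
episode[20:28] = 1.0                            # the dominant episode
oil = 3.0 * episode + rng.normal(0, 0.3, T)     # "oil price" style shock
money = 2.0 * episode + rng.normal(0, 0.5, T)
prices = 0.1 * money + 1.5 * oil + rng.normal(0, 0.5, T)

def ols(y, X):
    """Return OLS slope coefficients (constant included, then dropped)."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

print("prices on money alone:        ", ols(prices, [money]))
print("prices on money, episode out: ", ols(prices[episode == 0], [money[episode == 0]]))
print("prices on money and oil:      ", ols(prices, [money, oil]))
```

The first regression looks like the relationship the government wanted; the other two are the checks that made it fall apart.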

I learnt two things from that episode. The first was to be always extremely distrustful of simple correlations between policy instruments and outcomes. The second occurred after my paper was published. As I was the named author, I was free to write what I thought was an unbiased and purely factual account of my findings, with (to Terry Burns’ credit) no pressure to spin the results to suit the government. Yet despite it being obvious to any objective reader that the results gave no support to the government’s policy, at least one well-known City economist cherry-picked the results to suggest they did. [2]

Pretty much all the econometric work I did subsequently involved structural relationships rather than these simple reduced forms. I think we have learnt a great deal from estimating equations that at least try to get close to underlying behavioural relationships, whether using cross-section, time series or panel regressions. A carefully structured VAR may also tell us something. Perhaps an exhaustive robustness analysis running countless single equation regressions can reveal insights - as for example in Xavier Sala-i-Martin's AER paper 'I Just Ran Two Million Regressions' trying to explain economic growth. But if the empirical evidence involves little more than a regression of outcome x on instrument y, be very, very wary.

[1] Just in case anyone is interested, the expansion in M3 caused by the Competition and Credit Control reforms in 1973, and the increase in inflation associated with higher oil prices in 1975.

[2] What happened at that point is a story that I will tell publicly one day. It is probably of no interest except to those who were involved in UK policy at that time, but it reminds me of one of the nicest and most interesting acquaintances I made during my time at the Treasury who is greatly missed.