Winner of the New Statesman SPERI Prize in Political Economy 2016


Showing posts with label precautionary saving. Show all posts

Tuesday, 28 February 2017

The Budget and Health Care

The reasons for substantially increasing current spending on the NHS and social care are obvious. Here is some data. The first chart, from the OECD, shows UK spending on health as a share of GDP over a long period (source).
This reveals an important truth which talk of ‘protecting the NHS’ is deliberately designed to ignore: health spending increases as a share of total GDP over time. The two noticeable points beyond that are the increase in spending under Labour, and the slight decrease in spending under the Coalition government.

One area of health spending that has been particularly hit in the recent past has been spending on social care by local authorities (source).

It is in areas like this that I get so frustrated with TV journalism. I have seen countless segments or interviews on what is causing the current crisis in the NHS and health care, but I do not remember ever seeing graphs like this. Is there an unwritten understanding in the TV networks that people cannot read graphs?

The outlook for the next five years for total health spending is further falls relative to total GDP (source).

The red bars are the projected growth in GDP, and the blue bars the projected growth in health spending. Unless something is done the current crisis in social care and the NHS will get worse and worse.

This increase in spending should be permanent and financed by a permanent increase in taxes. As such a specific tax-funded increase in spending would be popular, it seems sensible to do it that way. Given the current crisis in the NHS, if this is not done in the budget we either have to downgrade our assessment of the morality of our current rulers still further, or assume they really do have an ulterior motive in running the NHS into the ground.

What would be the macroeconomic effect of such a policy change? You might expect a permanent tax-financed increase in spending to have no effect. Taxes would rise by an amount equal to the extra government spending, and knowing this was permanent, consumers would reduce their spending by the full amount of the tax increase. So private spending falls to offset additional public spending.

There are two reasons for thinking that would not be the full story. First, consumers initially appear not to fully adjust consumption to a tax change, even when that tax change is perceived as permanent. This is quite rational if they hold precautionary savings, and wish those savings to adjust to be a constant share of post-tax income. As a result there might be a short-term boost to activity from a tax-financed spending increase. This could be amplified, of course, if the tax increase was delayed for a year or two. As interest rates are still at their lower bound, such a boost to activity would be welcome.
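A stylised sketch of that mechanism (entirely my own illustration, with made-up numbers, not anything from the post): households target precautionary wealth at a constant share of post-tax income, closing part of the gap to target each period. A balanced-budget rise in spending then raises total demand in the short run, because households run down their now-excess savings rather than cutting consumption by the full tax increase.

```python
# Hypothetical illustration: households target wealth at a share k of
# post-tax income and close a fraction lam of the gap each period.
def simulate(T=30, y=100.0, tax=5.0, k=0.5, lam=0.3):
    w = k * y                  # start at target under zero tax
    path = []
    for t in range(T):
        y_post = y - tax
        w_target = k * y_post
        saving = lam * (w_target - w)  # negative: run down excess savings
        c = y_post - saving            # consumption falls by less than tax
        g = tax                        # balanced-budget spending increase
        path.append(c + g)             # total demand
        w += saving
    return path

demand = simulate()
```

With these assumed numbers, demand starts above its pre-policy level of 100 (the short-term boost) and converges back to 100 as wealth reaches its new, lower target.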

Second, spending on health care is likely to be less import-intensive than the private consumption spending it replaces. This would give a permanent boost to GDP and permanently reduce the current account deficit. Now both these effects might lead to an offsetting exchange rate appreciation, but the consequent reduction in inflation and boost to real incomes that this appreciation brings would not be unwelcome given the impact of Brexit.

Three final points that I hope are obvious. First, these beneficial macro effects are incidental, in the sense that they are not required to justify the spending increase. The case for additional spending on health care financed by higher taxes is overwhelming on its own terms. Second, this is additional to the large increase in public investment, financed by borrowing, that should be underway right now. The changes in this direction in the Autumn Statement were an order of magnitude too small. Third, the second biggest threat to the NHS right now, after lack of money, is staff shortages. As an important source of staff is the EU, the government seems to be doing everything it can to make things more difficult.

Saturday, 2 August 2014

US savings behaviour, and empirical research strategies

In this post I want to look at a paper by Chris Carroll, Jiri Slacalek and Martin Sommer, for two reasons. The first is what the paper tells us about US consumption behaviour, and potentially consumption behaviour in any advanced economy. The second is to use it as an example of different ways of doing empirical research in a microfoundations world.

The mainstay of modern macroeconomics is the consumption Euler equation, under which consumption is proportional to the sum of financial wealth and human wealth, where human wealth is the discounted present value of future labour income. This model implies that consumption aims to smooth out erratic movements in income through borrowing and saving. In this model, periods of high saving can reflect periods of temporarily higher income, or temporarily high real interest rates. Two commonplace adaptations of this model are to assume that some proportion of consumers are liquidity constrained, and therefore consume all their income, or that consumption is subject to ‘habits’, which generate additional inertia. This model, with or without these adaptations, is not very helpful in explaining why savings rose sharply in the Great Recession.
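For concreteness, here is a stripped-down sketch of that workhorse model (my own toy version, not from any particular paper): consume the annuity value of financial wealth plus human wealth, which makes consumption almost insensitive to one-off changes in resources.

```python
# Toy version of the intertemporal model: consume the annuity value of
# financial wealth A plus human wealth H (the discounted present value
# of future labour income), spread evenly over the remaining horizon.
def smooth_consumption(A, incomes, r):
    H = sum(y / (1 + r) ** t for t, y in enumerate(incomes))
    pv_factor = sum(1 / (1 + r) ** t for t in range(len(incomes)))
    return (A + H) / pv_factor  # constant consumption exhausting A + H

# Flat income of 100 over 40 years at r = 3%: consume exactly 100.
smooth_consumption(0, [100] * 40, 0.03)
# A one-off windfall of 10 raises annual consumption by only ~0.4.
smooth_consumption(10, [100] * 40, 0.03)
```

The second call illustrates the point made below: in this model the marginal propensity to consume out of a windfall is tiny, around 4% a year with these assumed parameters.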

Rather more worrying is that this model is not very good at explaining US savings behaviour before the Great Recession either. As I noted here, US savings rates fell steadily for about twenty years from the early 1980s. You might think that explaining such a large and important trend would be a sine qua non of any consumption function routinely used in macromodels, but you would be wrong. Consistency with the data is not the admissibility criterion for a microfounded macromodel.

The Carroll et al paper finds two explanations for the pre-recession trend and the increase in savings during the recession. The first is easier credit conditions, and the second is employment uncertainty. The mechanism through which both work is precautionary savings. If the risk increases that your income will fall sharply because you will lose your job, you need to build up some capital to act as a buffer. The easier credit is to obtain, the less precautionary savings you need.

The reason why precautionary saving represents a significant departure from the basic Euler equation model is intuitive. If you want to hold a certain amount of precautionary savings, you have a target for wealth. A wealth target pulls in the opposite direction to consumption smoothing. If you have a one-off increase in income, consumption smoothing says you should consume it very gradually, perhaps only consuming the interest. The marginal propensity to consume that extra income is tiny. But this leaves wealth higher for a very long time. If you have a wealth target, your marginal propensity to consume that additional income will be larger, perhaps a lot larger.
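To put rough numbers on that intuition (mine, purely illustrative):

```python
# Purely illustrative contrast between the two motives described above.
def mpc_smoothing(r=0.03, n=40):
    # consumption smoothing: a windfall is spread as an annuity over
    # the remaining n-period horizon, so the first-period mpc is tiny
    return 1 / sum(1 / (1 + r) ** t for t in range(n))

def mpc_wealth_target(lam=0.35):
    # wealth target: a fraction lam of any excess wealth is consumed
    # each period, so the first-period mpc is simply lam
    return lam
```

With these assumed parameters, pure smoothing gives an mpc of about 0.04 a year, while a wealth target closed at 35% a year gives an mpc of 0.35, nearly ten times larger.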

Now for the methodology part. These empirical results are in sections 3 and 4 of their paper. They call their empirical results in section 4 ‘reduced form’, because they come from a regression relating saving to wealth, credit constraints and expected unemployment. However, the authors feel that this is not enough. In section 2 they discuss a structural theoretical model. Because modelling labour income uncertainty is very difficult, their microfounded model assumes that once someone becomes unemployed, they remain unemployed forever. Section 5 then estimates this structural model.

The authors describe a number of reasons why directly estimating the structural model may be better than estimating the reduced form. But in order to get their structural model they have to make the highly unrealistic assumption noted above. The reduced form, on the other hand, does not have this assumption imposed on it. So I do not think we can say that the results in Section 5 are any more interesting than those in Section 4, which is why both are included in the paper. There does not seem to be any compelling reason to elevate one above the other.

OK, a last (perhaps wild) pair of questions. Is it the case that, compared to a few decades ago, there are far fewer papers in the top journals that simply try to explain the historical time series for a single key macro aggregate (like consumption or saving)? If that is the case, is this due to the difficulties in getting microfounded models to fit, or something else?

Monday, 14 April 2014

The Fed’s macroeconomic model

There has been some comment on the decision of the US central bank (the Fed) to publish its main econometric model in full. In terms of openness I agree with Tony Yates that this is a great move, and that the Bank of England should follow. The Bank publishes some details of its model (somewhat belatedly, as I noted here), but as Tony argues this falls some way short of what is now provided by the Fed.

However I think Noah Smith makes the most interesting point: unlike the Bank's model, the model published by the Fed is not a DSGE model. Instead, it is what is often called a Structural Econometric Model (SEM): a pretty ad hoc mixture of theory and econometric estimation that would not please either a macro theorist or a time series econometrician. As Noah notes, they use this model for forecasting and policy analysis. Noah speculates that the Fed’s move to publish a model of this kind indicates that they are perhaps less embarrassed about using a SEM than they once were. I’ve no idea if this is true, but for most academic macroeconomists it raises a puzzling question - why are they still using this type of model? If the Bank of England can use a DSGE model as their core model, why doesn’t the Fed?

I have discussed the question of what type of model a central bank should use before. In addition, I have written many posts (most recently here) advocating the advantages of augmenting DSGE models and VARs with this kind of middle way approach. For various reasons, this middle way approach will be particularly attractive to a policy making organisation like a central bank, but I also think that a SEM can play a role in academic analysis. For the moment, though, let me just focus on policy analysis by policy makers.

Consider a particular question: what is the impact of a temporary cut in income taxes? What kind of methods should an economist employ to answer this question? We could estimate reduced forms/VARs relating variables of interest (output, inflation etc) to changes in income taxes in the past. However there are serious problems with this approach. The most obvious is that the impact of past changes in taxes will depend on the reaction of monetary policy at the time, and whether monetary policy will act in a similar way today. Results will also depend on how permanent past changes in taxes were expected to be. I would not want to suggest that these issues make reduced form estimation a waste of time, but they do indicate how difficult it will be to get a good answer using this approach. Similar problems arise if we relate growth to debt, money to prices (a personal reflection here) and so on. Macro reduced form analysis relating policy variables to outcomes is very fragile.

An alternative would be for the economist to build a DSGE model, and simulate that. This has a number of advantages over the reduced form estimation approach. The nature of the experiment can be precisely controlled: the fact that the tax cut is temporary, how it is financed, what monetary policy is doing etc. But any answer is only going to be as good as the model used to obtain it. A prerequisite for a DSGE model is that all relationships have to be microfounded in an internally consistent way, and there should be nothing ad hoc in the model. In practice that can preclude including things that we suspect are important, but that we do not know exactly how to model in a microfounded manner. We model what we can microfound, not what we can see.

A specific example that is likely to be critical to the impact of a temporary income tax cut is how the consumption function treats income discounting. If future income is discounted at the rate of interest, we get Ricardian Equivalence. Yet this same theory tells us that the marginal propensity to consume (mpc) out of windfall gains in income is very small, whereas there is a great deal of evidence to suggest the mpc lies somewhere around a third or more. (Here is a post discussing one study from today’s Mark Thoma links.) DSGE models can try to capture this by assuming a proportion of ‘income constrained’ consumers, but is that all that is going on? Another explanation is that unconstrained consumers discount future labour income at a much greater rate than the rate of interest. This could be because of income uncertainty and precautionary savings, but these are difficult to microfound, so DSGE models typically ignore them.
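One back-of-the-envelope way to see why the discount rate matters so much (my own sketch, not the Fed's actual equation): if consumers behave as if they discount the future at rate d, a windfall annuitised over an infinite horizon is consumed at roughly d/(1+d) per period.

```python
# Back-of-the-envelope: annuitising a windfall over an infinite horizon
# at discount rate d implies an mpc of d / (1 + d), since the annuity
# factor sum_{t>=0} (1+d)**-t equals (1+d)/d.
def mpc(d):
    return d / (1 + d)

mpc(0.03)  # discounting at a 3% interest rate: mpc of about 0.03
mpc(0.5)   # much heavier discounting: mpc of about a third
```

So an mpc near the empirical third requires an effective discount rate far above any plausible interest rate, which is exactly the kind of heavy discounting of labour income the Fed model allows.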

The Fed model does not. To quote: “future labor and transfer income is discounted at a rate substantially higher than the discount rate on future income from non-human wealth, reflecting uninsurable individual income risk.” My own SEM that I built 20+ years ago, Compact, did something similar. My colleague, John Muellbauer, has persistently pursued estimating consumption functions that use an eclectic mix of data and theory, and as a result has been incorporating the impact of financial frictions in his work long before it became fashionable.

So I suspect the Fed uses a SEM rather than a DSGE model not because they are old fashioned and out of date, but because they find it more useful. (Actually this is a little more than a suspicion.) Now that does not mean that academics should be using models of this type, but it should at least give pause to those academics who continue to suggest that SEMs are a thing of the past.


Wednesday, 25 July 2012

Consumption and Complexity – limits to microfoundations?


One of my favourite papers is by Christopher D. Carroll: "A Theory of the Consumption Function, with and without Liquidity Constraints." Journal of Economic Perspectives, 15(3): 23–45. This post will mainly be a brief summary of the paper, but I want to raise two methodological questions at the end. One is his, and the other is mine.

Here are some quotes from the introduction which present the basic idea:

“Fifteen years ago, Milton Friedman’s 1957 treatise A Theory of the Consumption Function seemed badly dated. Dynamic optimization theory had not been employed much in economics when Friedman wrote, and utility theory was still comparatively primitive, so his statement of the “permanent income hypothesis” never actually specified a formal mathematical model of behavior derived explicitly from utility maximization. Instead, Friedman relied at crucial points on intuition and verbal descriptions of behavior. Although these descriptions sounded plausible, when other economists subsequently found multiperiod maximizing models that could be solved explicitly, the implications of those models differed sharply from Friedman’s intuitive description of his ‘model.’...”

“Today, with the benefit of a further round of mathematical (and computational) advances, Friedman’s (1957) original analysis looks more prescient than primitive. It turns out that when there is meaningful uncertainty in future labor income, the optimal behavior of moderately impatient consumers is much better described by Friedman’s original statement of the permanent income hypothesis than by the later explicit maximizing versions.”

The basic point is this. Our workhorse intertemporal consumption (IC) model has two features that appear to contradict Friedman’s theory:

1) The marginal propensity to consume (mpc) out of transitory income is a lot smaller than the ‘about one third’ suggested by Friedman.

2) Future labour income is discounted at the real rate of interest, whereas Friedman suggested that permanent income was discounted at a much higher rate.

However Friedman stressed the role of precautionary savings, which are ruled out by assumption in the IC model. Within the intertemporal optimisation framework, it is almost impossible to derive analytical results, let alone a nice simple consumption function, if you allow for labour income uncertainty and also a reasonable utility function.

What you can now do is run lots of computer simulations in which you search for the optimal consumption plan, which is exactly what the papers Carroll discusses have done. The consumer has the usual set of characteristics, but with the important addition that there are no bequests, and no support from children. This means that in the last period of their life agents consume all their remaining resources. But what if, through bad luck, income is zero in that year? As death is imminent, there is no one to borrow money from. It therefore makes sense to hold some precautionary savings to cover this eventuality. Basically, death is like an unavoidable liquidity constraint. If we simulate this problem using trial and error with a computer, what does the implied ‘consumption function’ look like?

To cut a long (and interesting) story short, it looks much more like Friedman’s model. In effect, future labour income is discounted at a rate much greater than the real interest rate, and the mpc from transitory income is more like a third than almost zero. The intuition for the latter result is as follows. If your current income changes, you can either adjust consumption or your wealth. In the intertemporal model you smooth the utility gain as much as you can, so consumption hardly adjusts and wealth takes nearly all the hit. But if, in contrast, what you really cared about was wealth, you would do the opposite, implying an mpc near one. With precautionary saving, you do care about your wealth, but you also want to smooth consumption. The balance between these two motives gives you the mpc.
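To make that concrete, here is a very crude version of the kind of simulation Carroll describes (entirely my own construction, far simpler than the papers he surveys, with made-up parameters): a finite life, CRRA utility, a small probability of zero income, no borrowing and no bequest, solved by backward induction on a grid.

```python
import numpy as np

# Crude buffer-stock solver (all parameters made up): finite life, CRRA
# utility, small risk of zero labour income, no borrowing, no bequest.
T, beta, R, rho, p_zero = 50, 0.96, 1.03, 2.0, 0.01
grid = np.linspace(0.01, 10, 400)        # cash on hand m

c_next = grid.copy()                     # last period: consume everything
for t in range(T - 1):
    policy = np.empty_like(grid)
    for i, m in enumerate(grid):
        cs = np.linspace(0.001, m, 200)  # candidate consumption levels
        a = m - cs                       # implied end-of-period assets
        emu = np.zeros_like(cs)          # expected marginal utility next period
        for y, p in ((1.0, 1 - p_zero), (0.0, p_zero)):
            cn = np.interp(R * a + y, grid, c_next)
            emu += p * cn ** (-rho)
        # pick the c closest to satisfying the Euler condition; if even
        # c = m leaves marginal utility too high, the constraint binds
        policy[i] = cs[np.argmin(np.abs(cs ** (-rho) - beta * R * emu))]
    c_next = policy

# mpc out of a small transitory windfall at moderate cash on hand
m0 = 1.2
mpc = (np.interp(m0 + 0.1, grid, c_next) - np.interp(m0, grid, c_next)) / 0.1
```

Even this crude version produces an mpc far above the near-zero value the perfect-foresight model implies, for exactly the reason given above: the zero-income risk creates a wealth target, and consumption responds strongly to resources around it.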

There is a fascinating methodological issue that Carroll raises following all this. As we have only just got the hardware to do these kinds of calculation, we cannot even pretend that consumers do the same when making choices. More critically, the famous Friedman analogy about pool players and the laws of physics will not work here, because you only get to play one game of life. Now perhaps, as Akerlof suggests, social norms might embody the results of historical trial and error across society. But what then happens when the social environment suddenly changes? In particular, what happens if credit suddenly becomes much easier to get?

The question I want to raise is rather different, and I’m afraid a bit more nerdy. Suppose we put learning issues aside, and assume these computer simulations do give us a better guide to consumption behaviour than the perfect foresight model. After all, the basics of the problem are not mysterious, and holding some level of precautionary saving does make sense. My point is that the resulting consumption function (i.e. something like Friedman’s) is not microfounded in the conventional sense. We cannot derive it analytically.

I think the implications of this for microfounded macro are profound. The whole point about a microfounded model is that you can mathematically check that one relationship is consistent with another. To take a very simple example, we can check that the consumption function is consistent with the labour supply equation. But if the former comes from thousands of computer simulations, how can we do this?

Note that this problem is not due to two of the usual suspects used to criticise microfounded models: aggregation or immeasurable uncertainty. We are talking about deriving the optimal consumption plan for a single agent here, and the probability distributions of the uncertainty involved are known. Instead the source of the problem is simply complexity. I will discuss how you might handle this problem, including a solution proposed by Carroll, in a later post.