Winner of the New Statesman SPERI Prize in Political Economy 2016



Saturday, 29 April 2017

The Brexit slowdown begins (probably)

When the Bank of England, after the Brexit vote, forecast 0.8% GDP growth in 2017, they expected consumption growth to decline to just 1%, with only a small fall in the savings ratio. But consumption growth proved much stronger in the second half of 2016 than the Bank had expected. As this chart from the Resolution Foundation shows, pretty well all the GDP growth through 2016 was down to consumption growth, something they rightly describe as unsustainable. (If consumption is growing but the other components of GDP are not, that implies consumers are eating into their savings. That cannot go on forever.)


This strong growth in consumption in 2016 led the Bank to change its forecast. By February their forecast for 2017 involved 2% growth in consumption and GDP, and a substantial fall in the savings ratio.

What was going on here? In August, the Bank reasoned that consumers would recognise that Brexit would lead to a significant fall in future income growth, and that they would quickly start reducing their consumption as a result. When that didn’t happen the Bank appeared to adopt something close to the opposite assumption, which is that consumers would assume that Brexit would have little impact on expected income growth. As a result, in the Bank’s February forecast, the savings ratio was expected to decline further in 2018 and 2019, as I noted here. Consumers, in this new forecast, would continually be surprised that income growth was less than they had expected.

The first estimate for 2017 Q1 GDP that came out yesterday showed growth of only 0.3%, about half what the Bank had expected in February. This low growth figure appeared to be mainly down to weakness in sectors associated with consumption (although we will not get the consumption growth figure until the second GDP estimate comes out). So what is going on?

There are three possible explanations. The first, which is the least likely, is that 2017 Q1 is just a blip. The second is that many more consumers are starting to realise that Brexit will indeed mean they are worse off (I noted some polling evidence suggesting that here), and are now adjusting their spending accordingly. The third is that consumption was strong at the end of 2016 because people were buying overseas goods before prices went up as a result of the Brexit depreciation.

If you have followed me so far, you can get an idea of how difficult this kind of forecasting is, and why the huge fuss the Brexiteers made about the August to February revision to the Bank’s forecast was both completely overblown and also probably premature. All Philip Hammond could manage to say about the latest disappointing growth data was how it showed that we needed ‘strong and stable’ government! I suspect, however, that we might be hearing a little less about our strong economy in the next few weeks.

Of course growth could easily pick up in subsequent quarters, particularly if firms take advantage of the temporary ‘sweet spot’ created by the depreciation preceding us actually leaving the EU. Forecasts are almost always wrong. But even if this happens, what I do not think most journalists have realised yet is just how inappropriate it is to use GDP as a measure of economic health after a large depreciation. Because that depreciation makes overseas goods more expensive to buy, people in the UK can see a deterioration in their real income and therefore well-being even if GDP growth is reasonable. As I pointed out here, that is why real earnings have fallen since 2010 even though we have had positive (although low) growth in real GDP per head, and as I pointed out here, that is why Brexit will make the average UK citizen worse off even if GDP growth does not decline. If it does decline, that just makes things worse.

Wednesday, 19 August 2015

Reform and revolution in macroeconomics

Mainly for economists

Paul Romer has a few recent posts (start here, most recent here) where he tries to examine why the saltwater/freshwater divide in macroeconomics happened. A theme is that this cannot all be put down to New Classical economists wanting a revolution, and that a defensive/dismissive attitude from the traditional Keynesian status quo also had a lot to do with it.

I will leave others to discuss what Solow said or intended (see for example Robert Waldmann). However I have no doubt that many among the then Keynesian status quo did react in a defensive and dismissive way. They were, after all, on incredibly weak ground. That ground was not large econometric macromodels, but one single equation: the traditional Phillips curve. This had inflation at time t depending on expectations of inflation at time t, and the deviation of unemployment/output from its natural rate. Add rational expectations to that and you show that deviations from the natural rate are random, and Keynesian economics becomes irrelevant. As a result, too many Keynesian macroeconomists saw rational expectations (and therefore all things New Classical) as an existential threat, and reacted to that threat by attempting to rubbish rational expectations, rather than questioning the traditional Phillips curve. As a result, the status quo lost. [1]
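To see why that ground was so weak, it helps to write the argument down (the notation here is mine). The traditional Phillips curve is

\[
\pi_t = E_{t-1}\pi_t + \alpha\,(y_t - y^n_t) + \varepsilon_t
\]

where π is inflation, y output and y^n its natural level. Impose rational expectations and take expectations dated t-1 of both sides: the inflation terms cancel, leaving E_{t-1}(y_t - y^n_t) = 0. Deviations of output from its natural rate are then just unforecastable surprises, so systematic (anticipated) policy cannot affect them, and Keynesian stabilisation policy appears to have nothing left to do.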

We now know this defeat was temporary, because New Keynesians came along with their version of the Phillips curve and we got a new ‘synthesis’. But that took time, and you can describe what happened in the time in between in two ways. You could say that the New Classicals always had the goal of overthrowing (rather than improving) Keynesian economics, thought that they had succeeded, and simply ignored New Keynesian economics as a result. Or you could say that the initially unyielding reaction of traditional Keynesians created an adversarial way of doing things whose persistence Paul both deplores and is trying to explain. (I have no particular expertise on which story is nearer the truth. I went with the first in this post, but I’m happy to be persuaded by Paul and others that I was wrong.) In either case the idea is that if there had been more reform rather than revolution, things might have gone better for macroeconomics.

The point I want to discuss here is not about Keynesian economics, but about even more fundamental things: how evidence is treated in macroeconomics. You can think of the New Classical counter revolution as having two strands. The first involves Keynesian economics, and is the one everyone likes to talk about. But the second was perhaps even more important, at least to how academic macroeconomics is done. This was the microfoundations revolution, that brought us first RBC models and then DSGE models. As Paul writes:

“Lucas and Sargent were right in 1978 when they said that there was something wrong, fatally wrong, with large macro simulation models. Academic work on these models collapsed.”

The question I want to raise is whether for this strand as well, reform rather than revolution might have been better for macroeconomics.

First, two points on the quote above from Paul. Of course not many academics worked directly on large macro simulation models at the time, but what a large number did do was either time series econometric work on individual equations that could be fed into these models, or analyse small aggregate models whose equations were not microfounded, but instead justified by an eclectic mix of theory and empirics. That work within academia did largely come to a halt, and was replaced by microfounded modelling.

Second, Lucas and Sargent’s critique was fatal in the sense of what academics subsequently did (and how they regarded these econometric simulation models), although they got a lot of help from Sims (1980). But it was not fatal in a more general sense. As Brad DeLong points out, these econometric simulation models survived both in the private and public sectors (in the US Fed, for example, or the UK OBR). In the UK they survived within the academic sector until the late 1990s, when academics helped kill them off.

I am not suggesting for one minute that these models are an adequate substitute for DSGE modelling. There is no doubt in my mind that DSGE modelling is a good way of doing macro theory, and I have learnt a lot from doing it myself. It is also obvious that there was a lot wrong with large econometric models in the 1970s. My question is whether it was right for academics to reject them completely, and much more importantly avoid the econometric work that academics once did that fed into them.

It is hard to get academic macroeconomists trained since the 1980s to address this question, because they have been taught that these models and techniques are fatally flawed because of the Lucas critique and identification problems. But DSGE models as a guide for policy are also fatally flawed because they are too simple. The unique property that DSGE models have is internal consistency. Take a DSGE model, and alter a few equations so that they fit the data much better, and you have what could be called a structural econometric model. It is internally inconsistent, but because it fits the data better it may be a better guide for policy.

What happened in the UK in the 1980s and 1990s is that structural econometric models evolved to minimise Lucas critique problems by incorporating rational expectations (and other New Classical ideas as well), and time series econometrics improved to deal with identification issues. If you like, you can say that structural econometric models became more like DSGE models, but where internal consistency was sacrificed when it proved clearly incompatible with the data.

These points are very difficult to get across to those brought up to believe that structural econometric models of the old fashioned kind are obsolete, and fatally flawed in a more fundamental sense. You will often be told that to forecast you can either use a DSGE model or some kind of (virtually) atheoretical VAR, or that policymakers have no alternative when doing policy analysis than to use a DSGE model. Both statements are simply wrong.

There is a deep irony here. At a time when academics doing other kinds of economics have done less theory and become more empirical, macroeconomics has gone in the opposite direction, adopting wholesale a methodology that prioritised the internal theoretical consistency of models above their ability to track the data. An alternative - where DSGE modelling informed and was informed by more traditional ways of doing macroeconomics - was possible, but the New Classical and microfoundations revolution cast that possibility aside.

Did this matter? Were there costs to this strand of the New Classical revolution?

Here is one answer. While it is nonsense to suggest that DSGE models cannot incorporate the financial sector or a financial crisis, academics tend to avoid addressing why some of the multitude of work now going on did not occur before the financial crisis. It is sometimes suggested that before the crisis there was no cause to do so. This is not true. Take consumption for example. Looking at the (non-filtered) time series for UK and US consumption, it is difficult to avoid attaching significant importance to the gradual evolution of credit conditions over the last two or three decades (see the references to work by Carroll and Muellbauer I give in this post). If this kind of work had received greater attention (which structural econometric modellers would almost certainly have done), that would have focused minds on why credit conditions changed, which in turn would have addressed issues involving the interaction between the real and financial sectors. If that had been done, macroeconomics might have been better prepared to examine the impact of the financial crisis.

It is not just Keynesian economics where reform rather than revolution might have been more productive as a consequence of Lucas and Sargent, 1979.


[1] The point is not whether expectations are generally rational or not. It is that any business cycle theory that depends on irrational inflation expectations appears improbable. Do we really believe business cycles would disappear if only inflation expectations were rational? PhDs of the 1970s and 1980s understood that, which is why most of them rejected the traditional Keynesian position. Also, as Paul Krugman points out, many Keynesian economists were happy to incorporate New Classical ideas. 

Monday, 14 April 2014

The Fed’s macroeconomic model

There has been some comment on the decision of the US central bank (the Fed) to publish its main econometric model in full. In terms of openness I agree with Tony Yates that this is a great move, and that the Bank of England should follow. The Bank publishes some details of its model (somewhat belatedly, as I noted here), but as Tony argues this falls some way short of what is now provided by the Fed.

However I think Noah Smith makes the most interesting point: unlike the Bank's model, the model published by the Fed is not a DSGE model. Instead, it is what is often called a Structural Econometric Model (SEM): a pretty ad hoc mixture of theory and econometric estimation that would not please either a macro theorist or a time series econometrician. As Noah notes, they use this model for forecasting and policy analysis. Noah speculates that the Fed’s move to publish a model of this kind indicates that they are perhaps less embarrassed about using a SEM than they once were. I’ve no idea if this is true, but for most academic macroeconomists it raises a puzzling question - why are they still using this type of model? If the Bank of England can use a DSGE model as their core model, why doesn’t the Fed?

I have discussed the question of what type of model a central bank should use before. In addition, I have written many posts (most recently here) advocating the advantages of augmenting DSGE models and VARs with this kind of middle way approach. For various reasons, this middle way approach will be particularly attractive to a policy making organisation like a central bank, but I also think that a SEM can play a role in academic analysis. For the moment, though, let me just focus on policy analysis by policy makers.

Consider a particular question: what is the impact of a temporary cut in income taxes? What kind of methods should an economist employ to answer this question? We could estimate reduced forms/VARs relating variables of interest (output, inflation etc) to changes in income taxes in the past. However there are serious problems with this approach. The most obvious is that the impact of past changes in taxes will depend on the reaction of monetary policy at the time, and whether monetary policy will act in a similar way today. Results will also depend on how permanent past changes in taxes were expected to be. I would not want to suggest that these issues make reduced form estimation a waste of time, but they do indicate how difficult it will be to get a good answer using this approach. Similar problems arise if we relate growth to debt, money to prices (a personal reflection here) and so on. Macro reduced form analysis relating policy variables to outcomes is very fragile.

An alternative would be for the economist to build a DSGE model, and simulate that. This has a number of advantages over the reduced form estimation approach. The nature of the experiment can be precisely controlled: the fact that the tax cut is temporary, how it is financed, what monetary policy is doing etc. But any answer is only going to be as good as the model used to obtain it. A prerequisite for a DSGE model is that all relationships have to be microfounded in an internally consistent way, and there should be nothing ad hoc in the model. In practice that can preclude including things that we suspect are important, but that we do not know exactly how to model in a microfounded manner. We model what we can microfound, not what we can see.

A specific example that is likely to be critical to the impact of a temporary income tax cut is how the consumption function treats income discounting. If future income is discounted at the rate of interest, we get Ricardian Equivalence. This same theory tells us that the marginal propensity to consume (mpc) out of windfall gains in income is very small, yet there is a great deal of evidence to suggest the mpc lies somewhere around a third or more. (Here is a post discussing one study from today’s Mark Thoma links.) DSGE models can try and capture this by assuming a proportion of ‘income constrained’ consumers, but is that all that is going on? Another explanation is that unconstrained consumers discount future labour income at a much greater rate than the rate of interest. This could be because of income uncertainty and precautionary savings, but these are difficult to microfound, so DSGE models typically ignore this.
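To get a sense of just how small ‘very small’ is, here is a rough illustrative calculation (the numbers are mine, not the Fed’s). A consumer who smooths consumption over a long horizon and discounts at the real interest rate r will spread a one-off windfall W over the rest of their life, raising consumption each period by roughly its annuity value:

\[
\Delta c \approx \frac{r}{1+r}\,W
\]

With r around 3%, that implies an mpc of about 0.03, an order of magnitude below the one third or more found in the evidence.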

The Fed model does not. To quote: “future labor and transfer income is discounted at a rate substantially higher than the discount rate on future income from non-human wealth, reflecting uninsurable individual income risk.” My own SEM that I built 20+ years ago, Compact, did something similar. My colleague, John Muellbauer, has persistently pursued estimating consumption functions that use an eclectic mix of data and theory, and as a result has been incorporating the impact of financial frictions in his work long before it became fashionable.

So I suspect the Fed uses a SEM rather than a DSGE model not because they are old fashioned and out of date, but because they find it more useful. (Actually this is a little more than a suspicion.) Now that does not mean that academics should be using models of this type, but it should at least give pause to those academics who continue to suggest that SEMs are a thing of the past.


Tuesday, 8 April 2014

When the definition of a recession matters

The official definition of a recession in nearly all developed economies except the US is two consecutive quarters of negative growth. In the US a recession is ‘called’ by the NBER. Economists, of course, just look at the numbers. This is obviously the sensible thing to do, because a fall in GDP of 3% followed by positive growth of 0.1% is clearly worse than two periods of -0.1% growth, but only the latter is an official recession. The media on the other hand behaves differently, so we had the silly situation in the UK before 2013 when tiny revisions to GDP led to headlines like ‘UK avoids double dip recession’.

Yet this minor annoyance for people like me has been turned into an opportunity in a recent paper (pdf) by two political scientists at the LSE (HT David Rueda). Andrew Eggers and Alexander Fouirnaies look at the data to see if the announcement of a recession causes any additional impact on macroeconomic aggregates compared to what you might expect from the GDP data itself. In other words, does the announcement of a recession reduce consumption or investment in OECD countries, conditional on actual economic fundamentals? For ease I’ll call this an announcement effect.

For investment they get the answer that economists would hope for - there is no announcement effect. Firms are well informed, and just look at the numbers. However for consumption they do find a significant announcement effect, both in terms of the actual data (and the size of the impact can be non-trivial) and in terms of consumer confidence indicators. One final result they emphasise, which makes clear sense from a macro point of view, is that the impact of recession announcements on consumer spending is smaller in countries with more robust social safety nets.

There are many reasons why this is interesting, but let me focus on one that I have discussed before. In this post I pointed to a potential paradox. On the one hand I believe that for most macroeconomic problems, rational expectations rather than naive expectations is the right place to start. On the other hand I also think that media reporting can have a strong influence on the average person’s view on certain highly politicised issues, like whether man-made climate change is a serious problem, or how important the cost of welfare fraud is. I discussed this paradox here, and argued that it could easily be resolved by thinking about the costs and benefits of obtaining information. In particular, the costs of researching climate change are significant, whereas the cost to the individual of getting their own view wrong is almost zero. (This is just a variation on the paradox of voting.)

In the example from this paper, we have a standard macroeconomic problem, which is trying to assess what level of consumption to choose. The importance of the announcement effect suggests that for consumers the costs of ‘looking at the numbers’ (and, of course, interpreting them) to some extent exceed the benefits of going beyond media headlines. If the media can have an influence on something that clearly has a significant financial pay-off for individuals, then it is bound to influence attitudes when the personal costs of making mistakes are almost zero.

Sunday, 22 December 2013

Some notes on the UK recovery

The latest national accounts data we have is for 2013 Q3. Between 2012Q4 and 2013Q3 real GDP increased by 2.1% (actual, not annual rate). Not a great number, but it represented three continuous quarters of solid growth, which we had not seen since 2007. So where did this growth come from? The good news is that investment over that same period rose by 4%. (This and all subsequent figures are the actual 2013Q3/2012Q4 percentage growth rate.) Business investment increased (2.7%), public investment did not (0.5%), but dwellings investment rose by 8%. The bad news is that exports rose by only 0.1%. Government consumption increased by 1.0%.


Over half of the increase in GDP was down to a 1.8% rise in consumption. Not huge, but significant because it represented a large fall in the savings ratio, as this chart shows.
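As a rough check on that claim (assuming a household consumption share of GDP of roughly 60%, which is about right for the UK):

\[
0.6 \times 1.8\% \approx 1.1 \text{ percentage points}
\]

of the 2.1% rise in GDP, which is indeed over half.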


The large increase in saving since 2009 is a major factor behind the recession. The recovery this year is in large part because the savings ratio has begun to fall. We should be cautious here, because data on the savings ratio is notoriously subject to revision. However if we look at the main component of income, compensation of employees, this rose by 3.4%, while nominal consumption rose by 4.4%, again indicating a reduction in savings.  

So the recovery so far is essentially down to less saving/more borrowing, with a minor contribution from investment in dwellings (house building). As Duncan Weldon suggests, the Funding for Lending scheme may be an important factor here. However it may also just be the coming to an end of a balance sheet adjustment, with consumers getting their debts and savings nearer a place they want them to be following the financial crash.

I cannot help but repeat an observation that I have made before at this point. Macro gets blamed for not foreseeing the financial crisis, although I suspect if most macroeconomists had seen this data before the crash they would have become pretty worried. But what macro can certainly be blamed for is not having much clue about the proportion of consumers who are subject to credit constraints, and for those who are not, what determines their precautionary savings: see this post for more. This is why no one really knew when the savings ratio would start coming down, and no one really knows when this will stop.

Some people have argued that we should be suspicious about this recovery, because it involves consumers saving less and borrowing more. Some of the fears behind this are real. One fear is that, encouraged by Help to Buy, the housing market will see a new bubble, and many people will get burned as a result. Another is that some households will erroneously believe that ultra low interest rates are here forever, and will not be able to cope when they rise. But although these are legitimate concerns, which macroprudential policy should try and tackle, the truth is that one of the key ways that monetary policy expands the economy is by getting people to spend more and save less. So if we want a recovery, and the government does not allow itself fiscal stimulus, and Europe remains depressed because of austerity, this was always going to be how it happens. [1]

However there is a legitimate point about a recovery driven by a falling savings ratio: the savings ratio cannot go on falling forever. The moment it stops falling, consumption growth will match income growth. The hope must be that it will continue for long enough to get business investment rising more rapidly, and for the Eurozone to start growing again so that exports can start increasing. But the big unknown remains productivity. So far, the upturn in growth does not seem to have been accompanied by an upturn in productivity. In the short term that is good because it reduces unemployment, but if it continues it will mean real wages will not increase by much, which in turn will mean at some point consumption growth will slow.
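The arithmetic behind that point is simple. With household income Y, consumption C and savings ratio s:

\[
C = (1-s)\,Y \quad\Rightarrow\quad \Delta \ln C \approx \Delta \ln Y - \frac{\Delta s}{1-s}
\]

so consumption can only outpace income while s is falling; once the savings ratio levels off, consumption growth is tied to income growth, and hence to real wages and productivity.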

There is a great set of graphs in this post at Flip Chart Fairy Tales which illustrate the scale of the productivity problem. (Rick - apologies for not discovering your blog earlier.) For example the OBR, in November 2010, were expecting real wages in 2015 to be 10% higher than in their recent Autumn Statement forecast. We will not recover the ground lost as a result of the recession until productivity growth starts exceeding pre-recession averages. As Martin Wolf and I suggest, the Chancellor should be focusing on the reasons for the UK’s productivity slowdown rather than obsessing about the government’s budget deficit.

[1] In theory it could have happened through a large increase in investment. However the experience of the recession itself, and more general evidence, suggests that investment is strongly influenced by output growth. That is why investment has not forged ahead as a result of low interest rates, and why firms continue to say that a shortage of finance is not holding them back. Having said that, I would have preferred the government to try fiscal incentives to bring forward investment rather than implement measures aimed at raising house prices.


Wednesday, 13 November 2013

How to be a New Keynesian and an Old Keynesian at the same time

A recurring theme in economics blogs, particularly those that tend to be disparaging of mainstream Keynesian theory, is that Keynesians like to be New Keynesian (NK) when talking about theory, but Old Keynesian (OK) when talking about policy. John Cochrane has recently made a similar observation, which is picked up by Megan McArdle. To take just one example of this alleged sin, in the basic New Keynesian theory Ricardian Equivalence holds (see below), so a tax financed stimulus should be as effective as a debt financed stimulus, yet Keynesians always seem to prefer debt financed stimulus.

The difference between Old and New that Cochrane focuses on relates to models of consumption. In the first year textbook OK model, consumption just depends on current income. The coefficient on current income is something like 0.7, which gives rise to a significant multiplier: give these consumers more to spend, and the additional spending will itself generate more output, which leads to yet more income, and so the impact of any stimulus gets multiplied up.
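In that textbook model the arithmetic is familiar. With an mpc of 0.7 out of current income, an extra pound of autonomous spending eventually raises output by

\[
1 + 0.7 + 0.7^2 + \dots = \frac{1}{1-0.7} \approx 3.3
\]

pounds (ignoring taxes and imports, which in practice make the multiplier considerably smaller).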

Basic NK models employ the construct of the (possibly infinitely lived) intertemporal consumer. To explain, these consumers look at the present value of their expected lifetime income, and the income of their descendants if they care about them (hence infinitely lived). This has two implications. First, temporary shocks to current income will have very little impact on NK consumption (it is a drop in the ocean of lifetime income). The marginal propensity to consume out of that temporary income (mpc) is near zero, so no multiplier on that account. Second, a tax cut today means tax increases tomorrow, leaving the present value of lifetime post-tax income unchanged, so NK consumers just save a tax cut (Ricardian Equivalence), whereas OK consumers spend most of it. However NK consumers are sensitive to the real interest rate, so if higher output today leads to higher inflation but the nominal interest rate remains unchanged, then you get a multiplier of sorts because NK consumers react to lower real interest rates by spending more.

So far, so different. But the NK consumption model assumes that agents can borrow whatever they need to borrow. There are good theoretical reasons why that is unlikely to be true (e.g. asymmetric information), and even better empirical evidence that it is not. Empirical studies that look for ‘natural experiments’, where agents obtain an unexpected increase in post-tax income which is likely to be temporary, typically find a mpc of around a third (even for non-durables), rather than almost zero as the basic intertemporal model would predict. (For just one recent example: Consumer Spending and the Economic Stimulus Payments of 2008, by Parker, Souleles, Johnson, and McClelland, American Economic Review 2013, 103(6): 2530–2553.)

So if mainstream Keynesian theory wants a more realistic model of consumption, it often uses the (admittedly crude) device of assuming the economy contains two types of consumer: the unconstrained intertemporal type and the credit constrained type. A credit constrained consumer that receives additional income could consume all of that additional income, so their mpc out of current income is one. [1] That credit constrained consumer is therefore rather Old Keynesian in character. But there are also plenty of unconstrained consumers around (e.g. savers) who are able to behave like intertemporal maximisers, so by including both types of consumer in one model you get a hybrid OK/NK economy.
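In its simplest form the hybrid looks something like this (the notation is mine, and published versions add much more detail). If a fraction λ of income goes to credit constrained consumers who spend all of it, aggregate consumption is

\[
C_t = \lambda Y_t + (1-\lambda)\,C^{u}_t
\]

where C^u_t is the consumption of the unconstrained households, chosen by intertemporal optimisation and so sensitive to real interest rates and expected future income. With λ = 0 you are back to the basic NK model; the larger is λ, the more Old Keynesian the economy behaves.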

So it is perfectly possible to be an Old Keynesian and a New Keynesian at the same time, using this hybrid model. It may not be a particularly elegant model, and the microfoundations can be a bit rough, but plenty of papers have been published along these lines. It is a lot more realistic than either the simple NK or OK alternatives. It explains why you might favour a bond financed stimulus over the tax financed alternative, because there are plenty of credit constrained consumers around who are the opposite of Ricardian.[2] 

You can make the same point about one of the other key differences between OK and NK: the Phillips curve. The New Keynesian Phillips curve relates inflation to expected inflation next period, and assumes rational expectations, while a more traditional Phillips curve combined with adaptive expectations relates current inflation to past inflation. While I do not think you will find many economists using the OK Phillips curve on its own nowadays, you will find many (including this lot) using a hybrid that combines the two. The theoretical reasons for doing so are not that clear, but there is plenty of evidence that seems to support this hybrid structure. So once again it makes sense to be both OK and NK when giving policy advice.
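Written out, with π for inflation and y for the output gap, the two polar cases and the hybrid are roughly:

\[
\text{NK:}\quad \pi_t = \beta E_t \pi_{t+1} + \kappa y_t
\qquad
\text{OK:}\quad \pi_t = \pi_{t-1} + \kappa y_t
\]
\[
\text{Hybrid:}\quad \pi_t = \gamma_f E_t \pi_{t+1} + \gamma_b \pi_{t-1} + \kappa y_t
\]

The estimated weights γ_f and γ_b vary across studies; the point is simply that both the forward looking and backward looking terms seem to matter in the data.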

Neither story is as exciting as the idea that New Keynesians are really closet Old Keynesians, who only pay lip service to New Keynesian theory to gain academic respectability. Instead it’s a story of how mainstream Keynesian economists try to adapt their models to be more consistent with the real world. How dull, boring and inelegant is that!



[1] I say ‘could’ here, because if the increase in income lasts for less time than the expected credit constraint, then smoothing still applies, and the mpc will be less than one.


[2] My own view is that the mpc out of temporary income is also significant because of precautionary savings: see the paper by Carroll described here.

Wednesday, 15 August 2012

House prices, consumption and aggregation

A simplistic view of the link between house prices and consumption is that lower house prices reduce consumers’ wealth, and wealth determines consumption, so consumption falls. But think about a closed economy, where the physical housing stock is fixed. Housing does not provide a financial return. So if house prices fall, but aggregate labour income is unchanged, then if aggregate consumption falls permanently the personal sector will start running a perpetual surplus. This does not make sense.

The mistake is that although an individual can ‘cash in’ the benefits of higher house prices by downgrading their house, if the housing stock is fixed that individual’s gain is a loss for the person buying their house. Higher house prices are great for the old, and bad for the young, but there is no aggregate wealth effect.

As a result, a good deal of current analysis looks at the impact house prices may have on collateral, and therefore on house owners’ ability to borrow. Higher house prices in effect relax a liquidity or credit constraint. Agents who are credit constrained borrow and spend more when they become less constrained. There is no matching reduction in consumption elsewhere, so aggregate consumption rises. If it turns out that this was a house price bubble, the process goes into reverse, and we have a balance sheet recession[1]. In this story, it is variations in the supply of credit caused by house prices that are the driving force behind consumption changes. Let’s call this a credit effect.

There is clear US evidence that house price movements were related to changes in borrowing and consumption. That would also be consistent with a wealth effect as well as a credit constraint story, but as we have noted, in aggregate the wealth effect should wash out.

Or should it? Let’s go back to thinking about winners and losers. Suppose you are an elderly individual, who is about to go into some form of residential home. You have no interest in the financial position of your children, and the feeling is mutual. You intend to finance the residential home fees and additional consumption in your final years from the proceeds of selling your house. If house prices unexpectedly fall, you have less to consume, so the impact of lower house prices on your consumption will be both large and fairly immediate. Now think about the person the house is going to be sold to. They will be younger, and clearly better off as a result of having to fork out much less for the house. If they are the archetypal (albeit non-altruistic) intertemporal consumer, they will smooth their additional wealth over the rest of their life, which is longer than the house seller. So their consumption rises by less than the house seller’s consumption falls, which means aggregate consumption declines for some time. This is a pure distributional effect, generated by life-cycle differences in consumption.
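A stylised example makes the timing clear (the numbers are purely illustrative and ignore interest). Suppose house prices fall and the elderly seller loses £50,000. With, say, five years of consumption left, they cut spending by about £10,000 a year. The young buyer gains the same £50,000, but smoothing it over, say, forty years raises their spending by only about £1,250 a year. For the next five years aggregate consumption is therefore lower by roughly £8,750 a year; only after the seller is gone does the buyer’s extra spending show up as a small net addition.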

In aggregate, following a fall in house prices, the personal sector initially moves into surplus (as the elderly consume less), and then it moves into deficit (as the elderly disappear and the young continue to spend their capital gains). In the very long run we go back to balance. This reasoning assumes that the house buyer is able to adjust to any capital gains/losses over their entire life. But house buyers tend to be borrowers, and are therefore more likely to be credit constrained. So credit effects could reverse the sign of distributional effects.

This is a clear case where micro to macro modelling, of the kind surveyed in the paper by Heathcote, Storesletten and Violante, is useful in understanding what might happen. An example related to UK experience is a paper by Attanasio, Leicester and Wakefield (earlier pdf here). This tries to capture a great deal of disaggregation, and allows for credit constraints, limited (compared to the Barro ideal) bequests and much more, in a partial equilibrium setting where house price and income processes are exogenous. The analysis is only as good as its parts, of course, and I do not think it allows for the kind of irrationality discussed here. In addition, as housing markets differ significantly between countries, some of their findings could be country specific.

Perhaps the most important result of their analysis is that house prices are potentially very important in determining aggregate consumption. According to the model, most movements in UK consumption since the mid-1980s are caused by house price shocks rather than income shocks. In terms of the particular mechanism outlined above, their model suggests that the impact of house prices on the old dominates that on the young, despite credit constraints influencing the latter more. In other words the distributional effect of lower house prices on consumption is negative. Add in a collateral credit effect, and the model predicts lower house prices will significantly reduce aggregate consumption, which is the aggregate correlation we tend to observe.

But there remains an important puzzle which the paper discusses but does not resolve. In the data, in contrast to the model, consumption of the young is more responsive to house price changes than consumption of the old. The old appear not to adjust their consumption following house price changes as much as theory suggests they should, even when theory allows a partial bequest motive. So there remain important unresolved issues about how house prices influence consumption in the real world.



[1] This is like the mechanism in the Eggertsson and Krugman paper, although that paper is agnostic about why borrowing limits fall. They could fall as a result of greater risk aversion by banks, for example.

Saturday, 11 August 2012

Handling complexity within microfoundations macro


In a previous post I looked at a paper by Carroll which suggested that the aggregate consumption function proposed by Friedman looked rather better than more modern intertemporal consumption theory might suggest, once you took the issue of precautionary saving seriously. The trouble was that to show this you had to run computer simulations, because the problem of income uncertainty was mathematically intractable. So how do you put the results of this finding into a microfounded model?

While I want to use the consumption and income uncertainty issue as an example of a more general problem, the example itself is very important. For a start, income uncertainty can change, and we have some evidence that its impact could be large. In addition, allowing for precautionary savings could make it a lot easier to understand important issues, like the role of liquidity constraints or balance sheet recessions.

I want to look at three responses to this kind of complexity, which I will call denial, computation and tricks. Denial is straightforward, but it is hardly a solution. I mention it only because I think that it is what often happens in practice when similar issues of complexity arise. I have called this elsewhere the streetlight problem, and suggested why it might have had unfortunate consequences in advancing our understanding of consumption and the recent recession.

Computation involves embracing not only the implications of the precautionary savings results, but also the methods used to obtain them as well. Instead of using computer simulations to investigate a particular partial equilibrium problem (how to optimally plan for income uncertainty), we put lots of similar problems together and use the same techniques to investigate general equilibrium macro issues, like optimal monetary policy.

This preserves the internal consistency of microfounded analysis. For example, we could obtain the optimal consumption plan for the consumer facing a particular parameterisation of income uncertainty. The central bank would then do its thing, which might include altering that income uncertainty. We then recompute the optimal consumption plan, and so on, until we get to a consistent solution.

We already have plenty of papers where optimal policy is not derived analytically but through simulation.(1) However these papers typically include microfounded equations for the model of the economy (the consumption function etc). The extension I am talking about here, in its purest form, is where nothing is analytically derived. Instead the ingredients are set out (objectives, constraints etc), and (aside from any technical details about computation) the numerical results are presented – there are no equations representing the behaviour of the aggregate economy.

I have no doubt that this approach represents a useful exercise, if robustness is investigated appropriately. Some of the very interesting comments to my earlier post did raise the question of verification, but while that is certainly an issue, I do not see it as a critical problem. But could this ever become the main way we do macroeconomics? In particular, if results from these kinds of black box exercises were not understandable in terms of simpler models or basic intuition, would we be prepared to accept them? I suspect they would be a complement to other forms of modelling rather than a replacement, and I think Nick Rowe agrees, but I may be wrong. It would be interesting to look at the experience in other fields, like Computable General Equilibrium models in international trade for example.

The third way forward is to find a microfoundations 'trick'. By this I mean a set up which can be solved analytically, but at the cost of realism or generality. Recently Carroll has done just that for precautionary saving, in a paper with Patrick Toche. In that model a representative consumer works, has some probability of becoming unemployed (the income uncertainty), and once unemployed can never be employed again until they die. The authors suggest that this set-up can capture a good deal of the behaviour that comes out of the computer simulations that Carroll discussed in his earlier paper.

I think Calvo contracts are a similar kind of trick. No one believes that firms plan on the basis that the probability of their prices changing is immutable, just as everyone knows that one spell of unemployment does not mean that you will never work again. In both cases they are a device that allows you to capture a feature of the real world in a tractable way.
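For what it is worth, the Calvo device can be stated in one line: each period a firm gets to reset its price with some fixed probability 1−θ, so the expected duration of any given price is 1/(1−θ). A quarterly calibration of θ = 0.75, for example, implies prices last a year on average.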

However, these tricks do come at a cost, which is how certain we can be of their internal consistency. If we derive a labour supply and consumption function from the same intertemporal optimisation problem, we know these two equations are consistent with each other. We can mathematically prove it. Furthermore, we are content that the underlying parameters of that problem (impatience, the utility function) are independent of other parts of the model, like monetary policy. Now Noah Smith is right that this contentment is a judgement call, but it is a familiar call. With tricks like Calvo contracts, we cannot be that confident. This is something I hope to elaborate on in a subsequent post. 

This is not to suggest that these tricks are not useful – I have used Calvo contracts countless times. I think the model in Carroll and Toche is neat. It is instead to suggest that the methodological ground on which these models stand is rather shakier as a result of these tricks. We can never write ‘I can prove the model is internally consistent’, but just ‘I have some reasons for believing the model may be internally consistent’. Invariance to the Lucas critique becomes a much bigger judgement call.

There is another option that is implicit in Carroll’s original paper, but perhaps not a microfoundations option. We use computer simulations of the kind he presents to justify an aggregate consumption function of the kind Friedman suggested. Aggregate equations would be microfounded in this sense (there need be no reference to aggregate data), but they would not be formally (mathematically) derived. Now the big disadvantage of this approach is that there is no procedure to ensure the aggregate model is internally consistent. However, it might be much more understandable than the computation approach (we could see and potentially manipulate the equations of the aggregate model), and it could be much more realistic than using some trick. I would like to add it as a fourth possible justification for starting macro analysis with an aggregate model, where aggregate equations were justified by references to papers that simulated optimal consumer behaviour.  

(1) Simulation analysis can make use of mathematically derived first order conditions, so the distinction here is not black and white. There are probably two aspects to the distinction that are important for the point at hand, generality and transparency of analysis, with perhaps the latter being more important. My own thoughts on this are not as clear as I would like.

Wednesday, 25 July 2012

Consumption and Complexity – limits to microfoundations?


One of my favourite papers is by Christopher D. Carroll: "A Theory of the Consumption Function, with and without Liquidity Constraints." Journal of Economic Perspectives, 15(3): 23–45. This post will mainly be a brief summary of the paper, but I want to raise two methodological questions at the end. One is his, and the other is mine.

Here are some quotes from the introduction which present the basic idea:

“Fifteen years ago, Milton Friedman’s 1957 treatise A Theory of the Consumption Function seemed badly dated. Dynamic optimization theory had not been employed much in economics when Friedman wrote, and utility theory was still comparatively primitive, so his statement of the “permanent income hypothesis” never actually specified a formal mathematical model of behavior derived explicitly from utility maximization. Instead, Friedman relied at crucial points on intuition and verbal descriptions of behavior. Although these descriptions sounded plausible, when other economists subsequently found multiperiod maximizing models that could be solved explicitly, the implications of those models differed sharply from Friedman’s intuitive description of his ‘model.’...”

“Today, with the benefit of a further round of mathematical (and computational) advances, Friedman’s (1957) original analysis looks more prescient than primitive. It turns out that when there is meaningful uncertainty in future labor income, the optimal behavior of moderately impatient consumers is much better described by Friedman’s original statement of the permanent income hypothesis than by the later explicit maximizing versions.”

The basic point is this. Our workhorse intertemporal consumption (IC) model has two features that appear to contradict Friedman’s theory:

1) The marginal propensity to consume (mpc) out of transitory income is a lot smaller than the ‘about one third’ suggested by Friedman.

2) Friedman suggested that permanent income was discounted at a much higher rate than the real rate of interest.

However Friedman stressed the role of precautionary savings, which are ruled out by assumption in the IC model. Within the intertemporal optimisation framework, it is almost impossible to derive analytical results, let alone a nice simple consumption function, if you allow for labour income uncertainty and also a reasonable utility function.

What you can now do is run lots of computer simulations where you search for the optimal consumption plan, which is exactly what the papers Carroll discusses have done. The consumer has the usual set of characteristics, but with the important addition that there are no bequests, and no support from children. This means that in the last period of their life agents consume all their remaining resources. But what if, through bad luck, income is zero in that year? As death is imminent, there is no one to borrow money from. So it makes sense to hold some precautionary savings to cover this eventuality. Basically death is like an unavoidable liquidity constraint. If we simulate this problem using trial and error with a computer, what does the implied ‘consumption function’ look like?

To cut a long (and interesting) story short, it looks much more like Friedman’s model. In effect, future labour income is discounted at a rate much greater than the real interest rate, and the mpc from transitory income is more like a third than almost zero. The intuition for the latter result is as follows. If your current income changes, you can either adjust consumption or your wealth. In the intertemporal model you smooth the utility gain as much as you can, so consumption hardly adjusts and wealth takes nearly all the hit. But if, in contrast, what you really cared about was wealth, you would do the opposite, implying an mpc near one. With precautionary saving, you do care about your wealth, but you also want to consumption smooth. The balance between these two motives gives you the mpc.
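For readers curious what ‘trial and error with a computer’ looks like in practice, here is a minimal sketch (mine, not Carroll’s, with made-up parameter values) of the standard backward induction approach: a consumer lives T periods, faces uncertain income including a small chance of receiving nothing, cannot die in debt, and with no bequest motive consumes everything in the final period.

import numpy as np

# Illustrative parameters (not taken from any paper)
T = 60            # periods of life
beta = 0.96       # discount factor (impatience)
R = 1.03          # gross real interest rate
gamma = 2.0       # coefficient of relative risk aversion
income = np.array([0.0, 0.7, 1.3])      # possible income draws
prob = np.array([0.05, 0.475, 0.475])   # probabilities (small chance of zero income)

grid = np.linspace(1e-3, 10.0, 400)     # grid for cash on hand (assets plus income)

def u(c):
    return c ** (1.0 - gamma) / (1.0 - gamma)

V = np.empty((T, grid.size))   # value function at each age
C = np.empty((T, grid.size))   # consumption policy at each age

# Final period: no bequests, so consume whatever is left.
V[T - 1] = u(grid)
C[T - 1] = grid

# Work backwards: at each age and cash-on-hand level, search over consumption,
# valuing what is carried forward with next period's value function.
for t in range(T - 2, -1, -1):
    for i, m in enumerate(grid):
        c = np.linspace(1e-3, m, 100)   # candidate consumption choices
        a = m - c                       # end-of-period assets (cannot be negative)
        EV = np.zeros_like(c)
        for y, p in zip(income, prob):  # expectation over next period's income
            EV += p * np.interp(R * a + y, grid, V[t + 1])
        values = u(c) + beta * EV
        j = np.argmax(values)
        V[t, i] = values[j]
        C[t, i] = c[j]

# Implied mpc out of a small transitory windfall in mid-life
t0, m0, dm = 20, 2.0, 0.1
mpc = (np.interp(m0 + dm, grid, C[t0]) - np.interp(m0, grid, C[t0])) / dm
print(f"mpc out of a transitory windfall at age {t0}: {mpc:.2f}")

Whether the mpc this produces is close to Friedman’s one third depends entirely on the calibration (impatience, risk aversion, the income process), which is why the papers Carroll discusses take that calibration seriously; the sketch is only meant to show why there is no analytical consumption function to write down.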

There is a fascinating methodological issue that Carroll raises following all this. As we have only just got the hardware to do these kinds of calculation, we cannot even pretend that consumers do the same when making choices. More critically, the famous Friedman analogy about pool players and the laws of physics will not work here, because you only get to play one game of life. Now perhaps, as Akerlof suggests, social norms might embody the results of historical trial and error across society. But what then happens when the social environment suddenly changes? In particular, what happens if credit suddenly becomes much easier to get?

The question I want to raise is rather different, and I’m afraid a bit more nerdy. Suppose we put learning issues aside, and assume these computer simulations do give us a better guide to consumption behaviour than the perfect foresight model. After all, the basics of the problem are not mysterious, and holding some level of precautionary saving does make sense. My point is that the resulting consumption function (i.e. something like Friedman’s) is not microfounded in the conventional sense. We cannot derive it analytically.

I think the implications of this for microfounded macro are profound. The whole point about a microfounded model is that you can mathematically check that one relationship is consistent with another. To take a very simple example, we can check that the consumption function is consistent with the labour supply equation. But if the former comes from thousands of computer simulations, how can we do this?

Note that this problem is not due to two of the usual suspects used to criticise microfounded models: aggregation or immeasurable uncertainty. We are talking about deriving the optimal consumption plan for a single agent here, and the probability distributions of the uncertainty involved are known. Instead the source of the problem is simply complexity. I will discuss how you might handle this problem, including a solution proposed by Carroll, in a later post.

Monday, 9 April 2012

Microfoundations and Evidence (1): the street light problem

                One way of reading the microfoundations debate is as a clash between ‘high theory’ and ‘practical policy’. Greg Mankiw in a well known paper talks about scientists and engineers. Thomas Mayer in his book Truth versus Precision in Economics (1993) distinguishes between ‘formalist’ and ‘empirical science’. Similar ideas are perhaps behind my discussion of microfoundations and central bank models, and Mark Thoma’s discussion here.
                In these accounts, ‘high theory’ is potentially autonomous. The problem they focus on is that this theory has not yet produced the goods as far as policy is concerned, and they ask what economists who advise policy makers should do in the meantime. The presumption is generally that theory will get there as soon as it can. But will it do so of its own accord? Is it the case that academics are quite good at selecting what the important puzzles are, or do they need others more connected to the data to help them?
                There is a longstanding worry that some puzzles are selected because they are relatively easy to solve, and not because they are important. Like the proverbial person looking under the street light for their keys that they lost somewhere less well lit. This is the subject of this post. A later post will look at another concern, which is that there may be an ideological element in puzzle selection. In both cases these biases in puzzle selection can persist because the discipline exerted by external consistency is weak.
The example that reminded me about this came from this graph.

US Savings Rate


The role of the savings rate in contributing to the Great Recession in the US and elsewhere has been widely discussed. Some authors have speculated on the role that credit conditions might have played in this e.g. Eggertsson and Krugman here, or Hall here. But what about the steady fall in savings from the early 1980s until the recession?
                Given the importance of consumption in macroeconomics, you would imagine there would be a huge literature, both empirical and theoretical, on this. Whatever this literature concluded, you would also imagine that the key policy making institutions would incorporate the results of this research in their models. Finally you might expect any academic papers that used a consumption model which completely failed to address this trend might be treated with some scepticism. OK, maybe I’m overdoing it a bit, but you get the idea. (There has of course been academic work on trying to explain the chart above: a nice summary by Guidolin and Jeunesse is here. My claim that this literature is not as large as it should be is of course difficult to judge, let alone verify, but I’ll make it nonetheless.)
                It would be particularly ironic if it turned out that credit conditions were responsible for both the downward trend and its reversal in the Great Recession. However that is exactly the claim made in two recent papers, by Carroll et al here and Aron et al (published in Review of Income and Wealth (2011), earlier version here), with the latter looking at the UK and Japan as well as the US. Now if you think this is obvious nonsense, and there is an alternative and well understood explanation for these trends, then you can stop reading now. But otherwise, suppose these authors are right, why has it taken so long for this to be discovered, let alone be incorporated into mainstream macromodels?
                Well in the discovery sense it has not. John Muellbauer and Anthony Murphy have been exploring these ideas ever since the UK consumption boom of the late 1980s. As I explained in an earlier post, there was another explanation for this boom besides credit conditions that was more consistent with the standard intertemporal model, but the evidence for this was hardly compelling. The problem might be not so much evidence, as the difficulty in incorporating credit effects of this kind into standard DSGE models. Even writing down a tractable microfounded consumption function that incorporates these effects is difficult, although Carroll et al do present one. Incorporating it into a DSGE model would require endogenising credit conditions by modelling the banking sector, leverage etc . This is something that is now beginning to happen largely as a result of the Great Recession, but before that it was hardly a major area of research.
                So here is my concern. The behaviour of savings in the US, UK and elsewhere has represented a major ‘puzzle’ for at least two decades, but it has not been a major focus of academic research. The key reason for that has been the difficulty of modelling an obvious answer to the puzzle in terms of the microfoundations approach. John Muellbauer makes a similar claim in this paper. To quote: “While DSGE models are useful research tools for developing analytical insights, the highly simplified assumptions needed to obtain tractable general equilibrium solutions often undermine their usefulness. As we have seen, the data violate key assumptions made in these models, and the match to institutional realities, at both micro and macro levels, is often very poor.”
                I do think microfoundations methodology is progressive. The concern is that, as a project, it may tend to progress in directions of least resistance rather than in the areas that really matter – until perhaps a crisis occurs. This is not really mistaking beauty for truth: there are plenty of rather ugly DSGE macro papers out there, one or two of which I have helped write. It is about how puzzles are chosen. When a new PhD student comes to me with an idea, I will of course ask myself is this interesting and important, but my concern will also be whether the student is taking on something where they can get a clear and publishable result in the time available.
When I described the Bank of England’s macromodel BEQM, I talked about the microfounded core, and the periphery equations that helped fit the data better. If all macroeconomists worked for the Bank of England, then that construct would contain a mechanism to overcome this problem. The forecasters and policy analysts would know from their periphery equations where the priority work needed to be done, and this would set the agenda for those working on microfounded theory.
In the real world the incentive for most academics is to get publications, often within a limited time frame. When the focus of macroeconomic analysis is on internal consistency rather than external consistency, it is unclear whether this incentive mechanism is socially optimal. If it is not, then one solution is for all macroeconomists to work for central banks! A more realistic alternative might be to reprise within academic macroeconomics a modelling tradition which placed more emphasis on external consistency and less on internal consistency, to work alongside the microfoundations approach. (Justin Fox makes a similar point in relation to financial modelling.)

Sunday, 12 February 2012

What have Keynesians learnt since Keynes?

That is the question asked by Robert Waldmann (9th Feb) in a comment on my post, and also in a dialogue with Mark Thoma. I’ll not attempt a full answer – that would be much too long – and Mark makes a number of the important points. Instead let me just talk about one episode that convinced me that one part of New Keynesian analysis, the intertemporal consumer with rational expectations, was much more useful than the ‘Old Keynesian’ counterpart I learnt as an undergraduate.
In the mid 1980s I was working at NIESR (the National Institute of Economic and Social Research) in London, doing research and forecasting. UK forecasting models at the time had consumption equations which included current and lagged income, wealth and interest rates on the right hand side, using the theoretical ideas of Friedman mediated through the econometrics of DHSY (Davidson, J.E.H., D.F. Hendry, F. Srba and J.S. Yeo (1978), ‘Econometric modelling of the aggregate time-series relationship between consumers' expenditure and income in the United Kingdom’, Economic Journal, 88, 661-692). While the permanent income hypothesis appealed to intertemporal ideas, as implemented by DHSY and others – using lags on income to proxy permanent income – I think it can be described as ‘Old Keynesian’.
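For concreteness, the equations in question had roughly the DHSY error-correction form (this is a simplified, stylised version rather than the published specification):

\[
\Delta_4 \ln c_t = \beta_1\, \Delta_4 \ln y_t + \beta_2\, \Delta_1 \Delta_4 \ln y_t + \beta_3 \left( \ln c_{t-4} - \ln y_{t-4} \right) + \beta_4\, \Delta_4 \ln p_t + \varepsilon_t
\]

where \(c\) is real consumers’ expenditure, \(y\) real disposable income, \(p\) the price level and \(\Delta_4\) a four-quarter difference. The lagged \(c-y\) term pulls consumption back towards a long-run relationship with income, so ‘permanent income’ is in effect proxied by current and lagged income rather than by explicit forward-looking expectations.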
As the decade progressed, UK consumers started borrowing and spending much more than any of these equations suggested. Model based forecasts repeatedly underestimated consumption over this period. Three main explanations emerged for what might be going wrong. In my view, to think about any of them properly requires an intertemporal model of consumption.

1) House prices. The consumption boom coincided with a housing boom. Were consumers spending more because they felt wealthier, or was some third factor causing both booms? There was much macroeconometric work at the time trying to sort this out, but with little success. Yet thinking about an intertemporal consumer leads one to question why consumers in aggregate would spend more when house prices rise: higher prices raise the wealth of existing owners, but they also raise the cost of the housing services everyone plans to consume in the future, so for consumers as a whole the two effects largely offset. (I don’t recall anyone suggesting it changed output supply, but then the UK is not St. Louis.) Subsequent work (Attanasio, O. and Weber, G. (1994), ‘The UK Consumption Boom of the Late 1980s’, Economic Journal, Vol. 104, pp. 1269-1302) suggested that increased borrowing was not concentrated among home owners, casting doubt on this explanation.

2) Credit constraints. In the 1980s the degree of competition among banks and mortgage providers in the UK increased substantially, as building societies became banks and banks started providing mortgages. This led to a large relaxation of credit constraints. While such constraints represent a departure from the simple intertemporal model, I find it hard to think about how shifts in credit conditions like this would influence consumption without having the unconstrained case in mind.

3) There was also much talk at the time of the ‘Thatcher miracle’, whereby supply side changes (like reducing union power) had led to a permanent increase in the UK’s growth rate. If that perception had been common among consumers, an increase in borrowing today to enjoy these future gains would have been the natural response given an intertemporal perspective (see the sketch below). Furthermore, as long as the perception of higher growth continued, increased consumption would be quite persistent.
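Here is a back-of-the-envelope sketch of that intertemporal logic, under the simplifying assumptions of certainty equivalence, a constant real interest rate \(r\) and income expected to grow at a constant rate \(g < r\):

\[
c_t = \frac{r}{1+r} \left[ A_t + \sum_{i=0}^{\infty} \frac{E_t\, y_{t+i}}{(1+r)^i} \right]
    = \frac{r}{1+r}\, A_t + \frac{r}{r-g}\, y_t .
\]

With \(r = 5\%\), a rise in perceived trend growth from 2% to 3% raises the income component of consumption from about \(1.7\, y_t\) to \(2.5\, y_t\). The exact numbers should not be taken seriously, but they show how sensitive consumption, and hence borrowing, is to beliefs about future growth.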

Which of the latter two explanations is more applicable in this case remains controversial – see, for example, ‘Is the UK Balance of Payments Sustainable?’, John Muellbauer and Anthony Murphy (with discussion by Mervyn King and Marco Pagano), Economic Policy, Vol. 5, No. 11 (Oct. 1990), pp. 347-395. However, I would suggest that neither can be analysed properly without the intertemporal consumer. Why is this a lesson for Keynesian analysis? Well, in the late 1980s the boom led to rising UK inflation, and a subsequent crash. Underestimating consumption was not the only reason for this increase in inflation – Nigel Lawson wanted to cut taxes and peg to the DM – but it probably helped.
So this episode convinced me that it was vital to model consumption along intertemporal lines. This was a central part of the UK econometric model COMPACT that I built with Julia Darby and Jon Ireland after leaving NIESR in 1990. (The model allowed for variable credit constraint effects on consumption.) The model was New Keynesian in other respects: it was solved assuming rational expectations, and it incorporated nominal price and wage rigidities.
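One common way of capturing variable credit constraints in an aggregate consumption function, in the spirit of Campbell and Mankiw (and not necessarily the exact COMPACT specification), is to let a time-varying share of income go to constrained, ‘rule of thumb’ consumers:

\[
c_t = \lambda_t\, y_t + (1 - \lambda_t)\, c^{PIH}_t ,
\]

where \(c^{PIH}_t\) is the consumption of unconstrained intertemporal consumers and \(\lambda_t\) falls as credit conditions ease. The late-1980s liberalisation can then show up as a fall in \(\lambda_t\), boosting consumption relative to current income.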
As I hope this discussion shows, I do not believe the standard intertemporal consumption model on its own is adequate for many issues. Besides credit constraints, I think the absence of precautionary savings is a big omission. However I do think it is the right starting point for thinking about more complex situations, and a better starting point than more traditional approaches.
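For reference, the core of that starting point is the consumption Euler equation, here in its textbook form with a known real interest rate:

\[
u'(c_t) = \beta (1 + r_{t+1})\, E_t \left[ u'(c_{t+1}) \right] .
\]

Credit constraints appear as periods when this holds as an inequality (the consumer would like to borrow to raise \(c_t\) but cannot), and precautionary saving as the extra saving a prudent consumer (one with convex marginal utility) does in the face of income risk; both are extensions of, rather than alternatives to, the intertemporal framework.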
One fascinating fact is that Keynes himself was instrumental in encouraging Frank Ramsey to write “A Mathematical Theory of Saving” in 1928, which is often considered the first outline of the intertemporal model. Keynes described the article as “one of the most remarkable contributions to mathematical economics ever made, both in respect of the intrinsic importance and difficulty of its subject, the power and elegance of the technical methods employed, and the clear purity of illumination with which the writer's mind is felt by the reader to play about its subject.” (Keynes, 1933, “Frank Plumpton Ramsey”, in Essays in Biography, New York, NY.) I would love to know whether Keynes ever considered this as an alternative to his more basic consumption model of the General Theory, and if he did, on what grounds he rejected it.
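For the record, a modern statement of the problem Ramsey posed (a sketch using exponential discounting, which Ramsey himself avoided in favour of a ‘bliss’ formulation) is:

\[
\max_{c(t)} \int_0^{\infty} e^{-\rho t}\, u(c(t))\, dt \quad \text{s.t.} \quad \dot{k} = f(k) - c ,
\]

whose solution satisfies the Keynes–Ramsey rule \(\dot{c}/c = \big(f'(k) - \rho\big)/\sigma(c)\), with \(\sigma(c) = -c\, u''(c)/u'(c)\): consumption growth depends on the gap between the return to saving and the rate of time preference, which is exactly the intertemporal logic discussed above.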