

Wednesday 25 July 2012

Consumption and Complexity – limits to microfoundations?


One of my favourite papers is by Christopher D. Carroll: "A Theory of the Consumption Function, with and without Liquidity Constraints", Journal of Economic Perspectives, 15(3), 2001: 23–45. This post will mainly be a brief summary of the paper, but I want to raise two methodological questions at the end. One is his, and the other is mine.

Here are some quotes from the introduction which present the basic idea:

“Fifteen years ago, Milton Friedman’s 1957 treatise A Theory of the Consumption Function seemed badly dated. Dynamic optimization theory had not been employed much in economics when Friedman wrote, and utility theory was still comparatively primitive, so his statement of the “permanent income hypothesis” never actually specified a formal mathematical model of behavior derived explicitly from utility maximization. Instead, Friedman relied at crucial points on intuition and verbal descriptions of behavior. Although these descriptions sounded plausible, when other economists subsequently found multiperiod maximizing models that could be solved explicitly, the implications of those models differed sharply from Friedman’s intuitive description of his ‘model.’...”

“Today, with the benefit of a further round of mathematical (and computational) advances, Friedman’s (1957) original analysis looks more prescient than primitive. It turns out that when there is meaningful uncertainty in future labor income, the optimal behavior of moderately impatient consumers is much better described by Friedman’s original statement of the permanent income hypothesis than by the later explicit maximizing versions.”

The basic point is this. Our workhorse intertemporal consumption (IC) model has two features that appear to contradict Friedman’s theory:

1)      The marginal propensity to consume (mpc) out of transitory income is a lot smaller than the ‘about one third’ suggested by Friedman.

2)      Friedman suggested that permanent income was discounted at a much higher rate than the real rate of interest, whereas the IC model discounts future income at the real rate.

However, Friedman stressed the role of precautionary savings, which are ruled out by assumption in the IC model. Within the intertemporal optimisation framework it is almost impossible to derive analytical results, let alone a nice simple consumption function, if you allow for labour income uncertainty together with a reasonable utility function.
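
To fix ideas, here is the sort of problem we are talking about, in symbols (my notation, purely illustrative rather than Carroll's):

```latex
\max_{\{c_t\}} \; \mathbb{E}_0 \sum_{t=0}^{T} \beta^{t}\, \frac{c_t^{1-\rho}}{1-\rho}
\quad \text{subject to} \quad
a_{t+1} = (1+r)\,(a_t + y_t - c_t), \qquad a_{T+1} \ge 0,
\qquad y_t \ \text{stochastic.}
```

With uncertain labour income and a curved (e.g. CRRA) utility function there is in general no closed-form consumption rule; the tractable certainty-equivalent case (quadratic utility) is precisely the one that switches off the precautionary motive.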

What you can now do is run lots of computer simulations where you search for the optimal consumption plan, which is exactly what the papers Carroll discusses have done. The consumer has the usual set of characteristics, but with the important addition that there are no bequests, and no support from children. This means that in the last period of their life agents consume all their remaining resources. But what if, through bad luck, income is zero in that year? As death is imminent, there is no one to borrow money from. It therefore makes sense to hold some precautionary savings to cover this eventuality. Basically, death is like an unavoidable liquidity constraint. If we simulate this problem using trial and error with a computer, what does the implied ‘consumption function’ look like?

To cut a long (and interesting) story short, it looks much more like Friedman’s model. In effect, future labour income is discounted at a rate much greater than the real interest rate, and the mpc out of transitory income is more like a third than almost zero. The intuition for the latter result is as follows. If your current income changes, you can either adjust consumption or your wealth. In the intertemporal model you smooth the utility gain as much as you can, so consumption hardly adjusts and wealth takes nearly all the hit. If, in contrast, what you really cared about was wealth, you would do the opposite, implying an mpc near one. With precautionary saving you do care about your wealth, but you also want to smooth consumption. The balance between these two motives gives you the mpc.
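
For readers who like to see what ‘trial and error with a computer’ amounts to, here is a minimal sketch of the kind of calculation involved (not Carroll’s actual code): finite-horizon backward induction on a grid of cash-on-hand, with i.i.d. income draws (including a small chance of zero income) and CRRA utility. All parameter values are illustrative.

```python
import numpy as np

# Illustrative parameters only (not Carroll's calibration)
T      = 60                                 # periods of life
beta   = 0.96                               # discount factor
R      = 1.03                               # gross real interest rate
rho    = 2.0                                # CRRA coefficient
y_vals = np.array([0.0, 0.7, 1.0, 1.3])     # possible income draws (zero with small probability)
y_prob = np.array([0.01, 0.24, 0.50, 0.25])

grid = np.linspace(0.0, 10.0, 200)          # grid over cash-on-hand (wealth plus current income)

def u(c):
    return np.maximum(c, 1e-9) ** (1 - rho) / (1 - rho)

# Last period of life: no bequests, so consume everything you have
V = u(grid)
policy = [grid.copy()]

# Work backwards: today's best choice must be consistent with behaving optimally tomorrow
for t in range(T - 1):
    V_next = V
    c_star = np.empty_like(grid)
    V_new  = np.empty_like(grid)
    for i, x in enumerate(grid):
        c_grid = np.linspace(1e-9, max(x, 1e-9), 100)     # candidate consumption levels
        a_next = R * (x - c_grid)                          # assets carried into next period
        EV = sum(p * np.interp(a_next + y, grid, V_next)   # expected continuation value
                 for y, p in zip(y_vals, y_prob))
        vals = u(c_grid) + beta * EV
        j = np.argmax(vals)
        c_star[i], V_new[i] = c_grid[j], vals[j]
    V = V_new
    policy.append(c_star)

# policy[-1] is the numerical consumption function early in life; its slope with
# respect to cash-on-hand is the marginal propensity to consume at each wealth level.
```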

There is a fascinating methodological issue that Carroll raises following all this. As we have only just got the hardware to do these kinds of calculations, we cannot even pretend that consumers do the same when making choices. More critically, the famous Friedman analogy about pool players and the laws of physics will not work here, because you only get to play one game of life. Now perhaps, as Akerlof suggests, social norms might embody the results of historical trial and error across society. But what then happens when the social environment suddenly changes? In particular, what happens if credit suddenly becomes much easier to get?

The question I want to raise is rather different, and I’m afraid a bit more nerdy. Suppose we put learning issues aside, and assume these computer simulations do give us a better guide to consumption behaviour than the perfect foresight model. After all, the basics of the problem are not mysterious, and holding some level of precautionary saving does make sense. My point is that the resulting consumption function (i.e. something like Friedman’s) is not microfounded in the conventional sense. We cannot derive it analytically.

I think the implications of this for microfounded macro are profound. The whole point about a microfounded model is that you can mathematically check that one relationship is consistent with another. To take a very simple example, we can check that the consumption function is consistent with the labour supply equation. But if the former comes from thousands of computer simulations, how can we do this?

Note that this problem is not due to two of the usual suspects used to criticise microfounded models: aggregation or immeasurable uncertainty. We are talking about deriving the optimal consumption plan for a single agent here, and the probability distributions of the uncertainty involved are known. Instead the source of the problem is simply complexity. I will discuss how you might handle this problem, including a solution proposed by Carroll, in a later post.

18 comments:

  1. If economists could read graphs they would save themselves a lot of heartburn.

    See the savings rate go up during recessions in http://research.stlouisfed.org/fred2/series/PSAVERT?cid=112

  2. "But if the former comes from thousands of computer simulations, how can we do this?"

    Hmmm. An even bigger computer, which simulates consumption and labour supply decisions simultaneously??

    But then we need to add in all the other decisions too.

    (Ironically, IIRC, Friedman used to get criticised by the Keynesians for not laying out a formal macro model so they could check whether what he was saying was internally consistent. Not precisely "where are your microfoundations?", but close.)

  3. I don't think I follow you.

    I only vaguely know what I am talking about, and plan to learn more, so forgive me if I've muddled things up, but I thought that if you have a model that requires numerical simulation because analytical solutions are not available, that's where dynamic programming comes in. But I didn't think you needed to do anything like a Monte Carlo (which is what 1000s of simulations sounds like to me) - I just thought you solved the model once, via whatever method you want (say value function iteration).

    But I guess the "number" of simulations is not important - whatever the answer is, it's computationally intensive.

    More important is your point about checking consistency. Here's what I don't understand about that: if the model which you solve numerically includes both consumption and labour supply decisions, both of which are based upon so-called microfoundations, feasibility constraints and so forth, then, assuming you haven't screwed up the coding, the solutions (policy functions that show how consumption and labour supply are chosen in different states of the world) are going to be consistent with each other by construction, aren't they?

    Replies
    1. To solve for the consumption function numerically you must choose consumption and investment (tomorrow's capital stock) based on the current state of your wealth (today's capital stock). The level of investment you choose today will determine your consumption choices tomorrow. When your consumption/investment choice today maximises your utility AND is consistent with maximal utility tomorrow, the problem is solved. But to see this you need to pick the best level of consumption today and see what it implies for tomorrow's problem. You repeat (= "thousands of computer simulations") this process until the problem is solved. There is no Monte Carlo; you just loop over this problem until you find a fixed point in the value function (and the consumption function is the policy that comes with that fixed point).
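
      A rough sketch of that loop in Python, with made-up parameters and no income uncertainty, just to show where the fixed point comes in (this is not anyone's actual code):

```python
import numpy as np

# Made-up parameters, purely for illustration
beta, R, rho = 0.95, 1.03, 2.0
grid = np.linspace(0.01, 10.0, 200)        # today's wealth / capital stock

def u(c):
    return np.maximum(c, 1e-12) ** (1 - rho) / (1 - rho)

V = np.zeros_like(grid)                    # initial guess for the value function
for it in range(2000):                     # loop until the value function stops changing
    V_new = np.empty_like(grid)
    c_pol = np.empty_like(grid)
    for i, k in enumerate(grid):
        c = k - grid / R                   # consumption implied by each choice of tomorrow's capital
        vals = np.where(c > 0, u(c) + beta * V, -np.inf)
        j = np.argmax(vals)
        V_new[i], c_pol[i] = vals[j], c[j]
    if np.max(np.abs(V_new - V)) < 1e-8:   # fixed point reached (to numerical tolerance)
        break
    V = V_new

# c_pol is the numerical consumption function: consumption as a function of today's wealth
```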

  4. Chris,

    yep ... that's the 'iteration' in value function iteration - I just understood something else by the word simulation, which to me would mean doing what you just described 1000s of times. But as I said, not important, just semantics.

    Replies
    1. Luis,

      Computing the policy function in Figure 1 of Carroll's paper takes only a couple of seconds on a laptop. And as you notice, dynamic programming is the relevant method exploited here. However, as you can see from the picture, the MPC is very different at different wealth levels. Poor individuals have an MPC close to one, while wealthier ones have an MPC close to zero. So to calculate the "average MPC" we must know how many agents have low wealth, and how many have high wealth. Carroll accomplishes this by simulating a large number of agents, and using the "stationary distribution" as a measure of "how many" agents there are of each type. Then he calculates the average MPC. This takes a little bit longer than a few seconds (but not much more).

      To bring this idea into a macro model, one would have to use the tools developed by Krusell and Smith. It's definitely doable, but takes some more time.
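
      Schematically, the simulate-and-average step looks something like this (a made-up placeholder policy stands in for the solved one from Figure 1; nothing here is Carroll's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder consumption rule on a cash-on-hand grid. It only stands in for the
# solved policy (it is NOT the one in Carroll's Figure 1): MPC is ~1 for the poor
# (where consumption equals cash-on-hand) and ~0.35 for the wealthier.
x_grid = np.linspace(0.0, 20.0, 400)
c_pol  = np.minimum(x_grid, 0.4 + 0.35 * x_grid)

def consume(x):
    return np.interp(x, x_grid, c_pol)

R, N, T_sim = 1.03, 10_000, 500
x = np.ones(N)                                # initial cash-on-hand of each simulated agent
for t in range(T_sim):
    c = consume(x)
    y = rng.choice([0.7, 1.0, 1.3], size=N)   # i.i.d. income draws
    x = R * (x - c) + y                       # next period's cash-on-hand

# After many periods the cross-section of x approximates the stationary distribution.
# The "average MPC" is the slope of the policy, averaged over that distribution.
eps = 0.01
mpc = (consume(x + eps) - consume(x)) / eps
print("average MPC:", round(mpc.mean(), 3))
```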

    2. aha, thanks v much. That makes sense. Note to self: stop commenting on blogs before reading the paper under discussion.

      still don't get the consistency thing - for each single agent, if you were jointly modelling consumption and labor supply decisions, these decisions would be consistent. I think.

    3. I don't get the consistency thing either. If you want to, you could solve for joint consumption and labor decisions simultaneously. These will then, by definition, be consistent with each other. Then you can simulate your 10000 agents, and calculate average behavior. Of course, average consumption behavior might not be consistent with average labor behavior (if you get what I mean), but on the individual level it will be. And I don't see a problem with that.

    4. Maybe that's it. If you can't impose economy-wide feasibility constraints across simulated single agents you end up with a microfounded model that makes no sense aggregated up. So it's an aggregation problem?

    5. Maybe. But I wouldn't say that it "makes no sense aggregated up". This is a bunch of individuals acting perfectly rationally and consistently, and what you get is simply their aggregate behavior. This makes a lot of sense to me. And as I said, it's perfectly feasible to put these issues in a macro model. One simply has to go all Krusell and Smith (1998) on it (with some additional quirks). But I've done it many times myself, and it's not really a problem ... only a bit tedious.

      By the way, I just read a fascinating little report from the St Louis Fed. The authors take equity- and house prices as given, and ask: How would a rational agent decide on consumption given these movements in prices. The answer can be seen here:

      http://research.stlouisfed.org/publications/es/12/ES_2012-07-13.pdf

      There are very few details with respect to their computations, but to me it looks like they used perfect foresight. And the fit is extremely good.

  5. Delighted to read this! Complexity kills microfoundations. Glad you're getting it!

    Replies
    1. This doesn't kill microfoundations! Rather it explores the full extent of microfoundations! Quite the opposite.

  6. JB

    I don't understand. One of the alternatives to mainstream macro is computable agent-based modelling. Afaic, this has all sorts of heterodox-friendly characteristics like emergent behavior, no equilibrium assumptions, and, if I understand the word correctly, true "complexity". But your computable agent can be microfounded, at least as well as standard macro agents are.

  7. In a deterministic, finite horizon optimal control version of the intertemporal consumer's problem, the terminal transversality condition generally has you running the state variable down to zero at exactly time T. When you shift over to a stochastic control problem, you can think in terms of income uncertainty and survival uncertainty. In general survival uncertainty causes you to discount the future more heavily. The easiest way to think about income (or wealth) uncertainty is in terms of increasing the endpoint target for wealth. In many ways, the greatest value of computer simulations is that they let you plot out trajectories which are difficult to show using (qualitative) phase diagrams.
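
    In symbols, and only as a rough sketch in my own notation:

```latex
\text{Deterministic, finite horizon:}\quad A_T = 0
\qquad
\text{survival risk } s<1:\quad \tilde{\beta} = s\,\beta < \beta
\qquad
\text{income risk:}\quad A_T \approx \bar{A} > 0 .
```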

  8. "We are talking about deriving the optimal consumption plan for a single agent here, and the probability distributions of the uncertainty involved are known." I find it hard to accept the last phrase, i.e., "the probability distributions of uncertainty involved are known." What do you have in mind here?

  9. "My point is that the resulting consumption function (i.e. something like Friedman’s) is not microfounded in the conventional sense. We cannot derive it analytically."

    If they make a big deal about this, I really suspect it's for ulterior motives. You get a consumption function. Do 1 million runs, or often even 10,000, and you get a fantastic histogram; for all practical purposes, a completely precise function. It may not be simple and pretty, but it is the consumption function implied by the microfoundations (the utility function).

    What is an issue is the details of these simulations. From my experience and knowledge (simulation in finance and Bayesian econometrics), when you submit a paper that uses simulation, no one sees the computer programs, no one sees the initial values, the burn-in, etc. It's so easy to be way, way off due to problems with these, but with the current reward/punishment system of the journals, no referee (or team of referees) will ever spend the enormous time needed to study these well.

    When you get into simulation and global optimization this stuff is huge, but from what I've seen it is never verified, and there is never a reward for spending the gigantic time it takes to learn how to do it right and then do it right (as opposed to doing it much more simply, faster, and often way, way off). The numerical part is a whole career right there, not just some side thing that you dispatch as quickly as you can because the gatekeepers give no reward for it, and a severe penalty for spending a lot of time on it. Ideally economists would routinely team up with professional numerical mathematicians to help with this.

    And duplicating research for veracity? That's the scientific method in the hard sciences, and it pretty much never happens in economics – because the gatekeepers at the journals and department heads give pretty much zero reward for this very time-consuming work.

    Replies
    1. I should add that the complaint about verifying numerical accuracy is a lot more true for some work than others. Global optimization, for example, can be a huge problem with a complicated objective function.

  10. "To take a very simple example, we can check that the consumption function is consistent with the labour supply equation. But if the former comes from thousands of computer simulations, how can we do this?"

    I'm not sure what the problem is; the computer makes everything consistent with the microfoundations put in (if programmed correctly). If your consumption function is millions of points, the computer could discretize that, make it a step function, and one with very fine steps if you have hundreds of thousands or millions of points.
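
    Something like this, say, for turning a big cloud of simulated points into a fine step function (the array names and stand-in data are made up):

```python
import numpy as np

# Stand-in for model output: a large array of simulated (cash_on_hand, consumption) points
sims = np.random.default_rng(1).uniform(0.0, 10.0, size=(1_000_000, 2))

edges = np.linspace(0.0, 10.0, 2001)                        # 2000 very fine steps
bins  = np.clip(np.digitize(sims[:, 0], edges) - 1, 0, len(edges) - 2)

counts = np.bincount(bins, minlength=len(edges) - 1)
sums   = np.bincount(bins, weights=sims[:, 1], minlength=len(edges) - 1)
step_c = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

# step_c[b] is the step-function value of consumption on the interval [edges[b], edges[b+1])
```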

