Comments on mainly macro: "Consumption and Complexity – limits to microfoundations?"

Richard H. Serlin (2012-07-27 06:51):
"To take a very simple example, we can check that the consumption function is consistent with the labour supply equation. But if the former comes from thousands of computer simulations, how can we do this?"

I'm not sure what the problem is; the computer makes everything consistent with the microfoundations put in (if programmed correctly). If your consumption function is millions of points, the computer could discretize that and make it a step function, one with very fine steps if you have hundreds of thousands or millions of points.

Richard H. Serlin (2012-07-27 06:46):
I should add that the complaint about verifying numerical accuracy is much more true of some work than others. Global optimization, for example, can be a huge problem with a complicated objective function.

Richard H. Serlin (2012-07-27 06:40):
"My point is that the resulting consumption function (i.e. something like Friedman's) is not microfounded in the conventional sense. We cannot derive it analytically."

If they make a big deal about this, I really suspect it's for ulterior motives. You get a consumption function. Do 1 million runs, or often even 10,000, and you get a fantastic histogram: for all practical purposes, a completely precise function. It may not be simple and pretty, but it's the one implied by the microfounded utility function.

What is an issue is the detail of these simulations. From my experience and knowledge (simulation in finance and Bayesian econometrics), when you submit a paper that uses simulation, no one sees the computer programs, no one sees the initial values, the burn-in, etc. It's easy to be way off due to problems with these, but with the current reward/punishment system of the journals, no referee (or team of referees) will ever spend the enormous time needed to study them well.

When you get into simulation and global optimization this stuff is huge, but from what I've seen it is never verified, and there is never a reward for spending the gigantic time to learn how to do it right and then do it right (as opposed to doing it in a way that is much simpler, faster, and often way off). The numerical part is a whole career right there, not just some side thing you dispatch as quickly as you can because the gatekeepers give no reward for it, and a severe penalty for spending a lot of time on it. Ideally economists would routinely team up with professional numerical mathematicians to help with this.

And duplicating research for veracity? That's the scientific method in the hard sciences, but pretty much never done in economics, because the gatekeepers at the journals and department heads give essentially zero reward for this very time-consuming work.

Anonymous (2012-07-27 02:42):
"We are talking about deriving the optimal consumption plan for a single agent here, and the probability distributions of the uncertainty involved are known." I find it hard to accept the last phrase, i.e., "the probability distributions of the uncertainty involved are known." What do you have in mind here?

pontus (2012-07-26 20:08):
Maybe. But I wouldn't say that it "makes no sense aggregated up". This is a bunch of individuals acting perfectly rationally and consistently, and what you get is simply their aggregate behavior. That makes a lot of sense to me. And as I said, it's perfectly feasible to put these issues into a macro model; one simply has to go all Krusell and Smith (1998) on it (with some additional quirks). I've done it many times myself, and it's not really a problem, only a bit tedious.

By the way, I just read a fascinating little report from the St Louis Fed. The authors take equity and house prices as given and ask: how would a rational agent decide on consumption given these movements in prices? The answer can be seen here:

http://research.stlouisfed.org/publications/es/12/ES_2012-07-13.pdf

There are very few details with respect to their computations, but to me it looks like they used perfect foresight. And the fit is extremely good.

Luis Enrique (2012-07-26 17:22):
Maybe that's it. If you can't impose economy-wide feasibility constraints across simulated single agents, you end up with a microfounded model that makes no sense aggregated up. So it's an aggregation problem?

BSF (2012-07-26 16:28):
In a deterministic, finite-horizon optimal control version of the intertemporal consumer's problem, the terminal transversality condition generally has you running the state variable down to zero at exactly time T. When you shift over to a stochastic control problem, you can think in terms of income uncertainty and survival uncertainty. In general, survival uncertainty causes you to discount the future more heavily. The easiest way to think about income (or wealth) uncertainty is in terms of increasing the endpoint target for wealth. In many ways, the greatest value of computer simulations is that they let you plot out trajectories which are difficult to show using (qualitative) phase diagrams.

pontus (2012-07-26 14:18):
I don't get the consistency thing either. If you want to, you can solve for the consumption and labor decisions jointly. These will then, by definition, be consistent with each other. Then you can simulate your 10,000 agents and calculate average behavior. Of course, average consumption behavior might not be consistent with average labor behavior (if you get what I mean), but on the individual level it will be. And I don't see a problem with that.

Luis Enrique (2012-07-26 14:06):
Aha, thanks very much. That makes sense. Note to self: stop commenting on blogs before reading the paper under discussion.

I still don't get the consistency thing, though: for each single agent, if you were jointly modelling consumption and labor supply decisions, those decisions would be consistent. I think.

pontus (2012-07-26 13:50):
This doesn't kill microfoundations! Rather, it explores the full extent of microfoundations. Quite the opposite.

pontus (2012-07-26 13:48):
Luis,

Computing the policy function in Figure 1 of Carroll's paper takes only a couple of seconds on a laptop. And as you noticed, dynamic programming is the relevant method here. However, as you can see from the picture, the MPC is very different at different wealth levels: poor individuals have an MPC close to one, while wealthier ones have an MPC close to zero. So to calculate the "average MPC" we must know how many agents have low wealth and how many have high wealth. Carroll accomplishes this by simulating a large number of agents and using the stationary distribution as a measure of "how many" agents there are of each type. Then he calculates the average MPC. This takes a little longer than a few seconds (but not much more).

To bring this idea into a macro model, one would have to use the tools developed by Krusell and Smith. It's definitely doable, but takes some more time.

Luis Enrique (2012-07-26 13:10):
JB,

I don't understand. One of the alternatives to mainstream macro is computable agent-based modelling. Afaict, this has all sorts of heterodox-friendly characteristics, like emergent behavior, no equilibrium assumptions and, if I understand the word correctly, true "complexity". But your computable agent can be microfounded, at least as well as standard macro agents are.

JB (2012-07-26 12:57):
Delighted to read this! Complexity kills microfoundations. Glad you're getting it!

Luis Enrique (2012-07-26 12:15):
Chris,

Yep ... that's the "iteration" in value function iteration. I just understood something else by the word simulation, which to me would mean doing what you just described thousands of times. But as I said, not important, just semantics.

Chris (2012-07-26 12:12):
To solve for the consumption function numerically, you must choose consumption and investment (tomorrow's capital stock) based on the current state of your wealth (today's capital stock). The level of investment you choose today will determine your consumption choices tomorrow. When your consumption/investment choice today maximises your utility AND is consistent with maximal utility tomorrow, the problem is solved. But to see this you need to pick the best level of consumption today and see what it implies for tomorrow's problem. You repeat this process (the "thousands of computer simulations") until the problem is solved. There is no Monte Carlo; you just loop over this problem until you find a fixed point in the value function (and that fixed point gives you the consumption function).

Luis Enrique (2012-07-26 11:50):
I don't think I follow you.

I only vaguely know what I am talking about, and plan to learn more, so forgive me if I've muddled things up, but I thought that if you have a model that requires numerical solution because analytical solutions are not available, that's where dynamic programming comes in. But I didn't think you needed to do anything like a Monte Carlo (which is what thousands of simulations sounds like to me); I just thought you solved the model once, via whatever method you want (say, value function iteration).

But I guess the "number" of simulations is not important; whatever the answer is, it's computationally intensive.

More important is your point about checking consistency. Here's what I don't understand about that: if the model which you solve numerically includes both consumption and labour supply decisions, both of which are based upon so-called microfoundations, feasibility constraints and so forth, then assuming you haven't screwed up the coding, the solutions (policy functions that show how consumption and labour supply are chosen in different states of the world) are going to be consistent with each other by construction, aren't they?

Nick Rowe (2012-07-26 11:23):
"But if the former comes from thousands of computer simulations, how can we do this?"

Hmmm. An even bigger computer, which simulates consumption and labour supply decisions simultaneously??

But then we need to add in all the other decisions too.

(Ironically, IIRC, Friedman used to get criticised by the Keynesians for not laying out a formal macro model so they could check whether what he was saying was internally consistent. Not precisely "where are your microfoundations?", but close.)

Philip (2012-07-26 02:55):
If economists could read graphs they would save themselves a lot of heartburn.

See the savings rate go up during recessions at http://research.stlouisfed.org/fred2/series/PSAVERT?cid=112
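[Editorial sketches.] Chris's "loop over this problem until you find a fixed point in the value function" can be written in a few lines. This is a minimal deterministic consumption-savings example with log utility; the discount factor, interest rate, and grid sizes are illustrative assumptions, not taken from any paper in the thread.

```python
import numpy as np

# Minimal value function iteration: V(w) = max_c u(c) + beta * V(R*(w - c)),
# with log utility. The fixed point of this mapping pins down the
# consumption policy c(w), exactly as Chris describes.
beta, R = 0.95, 1.03
grid = np.linspace(0.1, 10.0, 200)       # wealth grid
V = np.zeros_like(grid)                  # initial guess for the value function

for _ in range(1000):                    # loop until a fixed point is found
    w = grid[:, None]                    # today's wealth (rows)
    s = grid[None, :]                    # tomorrow's wealth choices (columns)
    c = w - s / R                        # consumption implied by each choice
    payoff = np.where(c > 0,
                      np.log(np.maximum(c, 1e-12)) + beta * V[None, :],
                      -np.inf)           # infeasible choices get -inf
    V_new = payoff.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:  # fixed point (to tolerance) reached
        break
    V = V_new

# The consumption function recovered from the fixed point.
policy = grid - grid[payoff.argmax(axis=1)] / R
```

As Luis and Chris agree above, nothing here is Monte Carlo: the "thousands" of passes are just repeated applications of the same deterministic mapping.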
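pontus's point, that the "average MPC" depends on how many agents sit at each wealth level, can be illustrated by brute-force simulation. The consumption rule below is a made-up buffer-stock-style rule (MPC near one when poor, low when rich), not Carroll's actual policy function; the point is only the mechanics of simulating a large panel to an approximately stationary wealth distribution and then averaging the MPC over it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, T, R = 100_000, 300, 1.01

def consume(w):
    # Illustrative concave rule: MPC close to 1 when poor, close to 0.3 when rich.
    return w * (1 - 0.7 * w / (w + 1))

w = np.ones(n_agents)
for _ in range(T):                       # burn in toward the stationary distribution
    y = rng.lognormal(mean=0.0, sigma=0.2, size=n_agents)   # income shocks
    w = R * (w - consume(w)) + y

eps = 1e-4                               # numerical MPC = dc/dw at each agent's wealth
mpc = (consume(w + eps) - consume(w)) / eps
average_mpc = mpc.mean()                 # MPC weighted by the stationary distribution
```

The same code with a richer rule and a market-clearing step is roughly what "going all Krusell and Smith on it" involves.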
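Serlin's suggestion, that a cloud of millions of simulated points can be discretized into a step function with very fine steps, is easy to make concrete. The "simulated" points below come from a made-up rule plus noise; in practice they would be the output of the model simulations being checked.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for simulation output: a million (wealth, consumption) points.
w_sim = rng.uniform(0.0, 10.0, size=1_000_000)
c_sim = np.sqrt(w_sim) + rng.normal(0.0, 0.05, size=w_sim.size)

# Bin wealth into 1000 fine steps; each step's height is the mean
# simulated consumption within that bin.
edges = np.linspace(0.0, 10.0, 1001)
which = np.clip(np.digitize(w_sim, edges) - 1, 0, 999)
step_c = (np.bincount(which, weights=c_sim, minlength=1000)
          / np.bincount(which, minlength=1000))

def consumption(w):
    """Evaluate the step-function consumption rule at wealth w."""
    i = np.clip(np.digitize(w, edges) - 1, 0, 999)
    return step_c[i]
```

The resulting `consumption` function can then be evaluated anywhere, e.g. to check consistency against a labour supply equation, which is the use case in the quoted post.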
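BSF's deterministic finite-horizon case has a clean closed form that a short script can verify: with log utility the Euler equation gives consumption growing by the factor beta*R each period, and the transversality condition pins down initial consumption so wealth hits exactly zero at time T. All parameter values here are illustrative.

```python
import numpy as np

beta, R, T, w0 = 0.95, 1.03, 40, 10.0

# Budget: w_{t+1} = R * (w_t - c_t). With c_t = c0 * (beta*R)^t, the
# present-value constraint sum_t c_t / R^t = w0 reduces to
# c0 * sum_t beta^t = w0, which pins down c0.
c0 = w0 / sum(beta**t for t in range(T))
c = c0 * (beta * R) ** np.arange(T)

# Simulate the wealth trajectory; it should end at (numerically) zero,
# which is the transversality condition BSF describes.
w = w0
for ct in c:
    w = R * (w - ct)
```

With survival or income uncertainty this exact endpoint is replaced by heavier discounting or a positive terminal wealth target, which is where the simulation-based methods above take over.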