Tuesday, 13 March 2012

Microfoundations – is there an alternative?

In previous posts I have given two arguments for looking at aggregate macroeconomic models without explicitly specifying their microfoundations. (I subsequently got distracted into defending microfoundations against attacks that I thought went too far – as I said here, I do not think seeing this as a two-sided debate is helpful.) In this post I want to examine a much more radical, and yet old-fashioned, idea: that aggregate models could use relationships which are justified empirically rather than through microfoundations. This argument mirrors similar points made in an excellent post by Richard Serlin in the context of finance. Richard also reflected on my earlier posts here. For a very good summary and commentary on recent posts on this issue, see the Bruegel blog.
Before doing this, let me recap the two previous arguments. The first was that an aggregate model might have a number of microfoundations, and so all that was required was a reference to at least one of those. Thanks to comments, I now know that a similar point was made by Ekkehart Schlicht in Isolation and Aggregation in Economics (1985), Berlin, Heidelberg: Springer Verlag. (I said at the time that this seemed to me a fairly weak claim, but Noah Smith was not impressed, I think because he felt you should be able to figure out which microfoundation represents reality. Unfortunately I think reality is often too complex to be well represented by just one microfoundation – think of the many good reasons for price rigidity, for example. In these circumstances robustness is important.)
The second is more controversial. Because working out microfoundations takes time, an aggregate relationship may not yet have a clear microfoundation, but it might in the future. If there is strong empirical evidence for it now, academic research should investigate its implications. So, for example, there is some evidence for ‘inflation inertia’: the presence of lagged as well as expected inflation in a Phillips curve. The theoretical reasons (microfoundation) for this are not that clear, but it is both important and interesting to investigate what the macroeconomic consequences of inflation inertia might be.
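To see what inertia means concretely, it is commonly written into a ‘hybrid’ Phillips curve along the following lines (an illustrative specification in my own notation, not taken from any particular paper):

    % pi_t: inflation; E_t pi_{t+1}: expected future inflation; y_t: output gap
    \pi_t = \gamma_b \, \pi_{t-1} + \gamma_f \, E_t \pi_{t+1} + \kappa \, y_t + u_t

The purely forward-looking New Keynesian Phillips curve is the special case \gamma_b = 0; estimates finding \gamma_b > 0 are what the evidence for inertia amounts to.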
This second argument could justify a very limited departure from microfoundations. A macro model might be entirely microfounded except for this one ‘ad hoc’ element. I can think of a few papers in good journals that take this approach. I have also heard macroeconomists object to papers of this kind: to quote one, ‘microfoundations must be respected’. It was reflecting on this that led me to use the term ‘microfoundations purist’.
Suppose we deny the microfoundations purist position, and agree that it is valid to explore ad hoc relationships within the context of an otherwise microfounded model. By valid, I mean that these papers should not automatically be disqualified from appearing in the top journals. If we take this position, then there seems to be no reason in principle why departures from microfoundations of this type should be so limited. Why not justify a large number of aggregate relationships using empirical evidence rather than microfoundations?
                This used to be done back in my youth. An aggregate model would be postulated relationship by relationship, and each equation would be justified by reference to both empirical and theoretical evidence in the literature. Let us call this an empirically based aggregate model. You do not find macroeconomic papers like this in the better journals nowadays. Even if papers like this were submitted, I suspect they would be rejected. Why has this style of macro analysis died out?
                I want to suggest two reasons, without implying that either is a sufficient justification. The first is that such models cannot claim to be internally consistent. Even if each aggregate relationship can be found in some theoretical paper in the literature, we have no reason to believe that these theoretical justifications are consistent with each other. The only way of ensuring consistency is to do the theory within the paper – as a microfounded model does. A second reason this style of modelling has disappeared is a loss of faith in time series econometrics. Sims (1980) argued that standard identification restrictions were ‘incredible’, and introduced us to the VAR. (For an earlier attempt of mine to apply a similar argument to the demise of what used to be called Structural Econometric Models, see here.)
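For readers unfamiliar with Sims’s point, here is a minimal sketch in my own notation (not reproduced from Sims’s paper). A structural model relates the variables contemporaneously through a matrix A_0, which can only be recovered from the data by imposing identifying restrictions – the restrictions Sims called ‘incredible’ – whereas the reduced-form VAR can be estimated directly:

    % Structural form: A_0 requires identifying restrictions
    A_0 y_t = A_1 y_{t-1} + \varepsilon_t
    % Reduced-form VAR: estimable without them
    y_t = B y_{t-1} + u_t , \quad B = A_0^{-1} A_1 , \quad u_t = A_0^{-1} \varepsilon_t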
In some ways I think this second attack was more damaging, because it undercut the obvious methodological defence of empirically based aggregate models. It is tempting to link microfounded models and empirically based aggregate models with two methodological approaches: a deductivist approach that Hausman ascribes to microeconomics, and a more inductive approach that Mark Blaug has advocated. Those familiar with these terms can skip the next two paragraphs.
Microeconomics is built up in a deductive manner from a small number of basic axioms of human behaviour. How these axioms are validated is controversial, as are the implications when they are rejected. Many economists act as if they are self-evident. We build up theory by adding certain primitives to these axioms (e.g. in trade, that there exist transport costs), and exploring their consequences. This body of theory will explain many features of the world, but not all. Those it does not explain are defined as puzzles. Puzzles are challenges for future theoretical work, but they are rarely enough to reject the existing body of theory. Under this methodology, the internal consistency of the model is all-important.
An inductivist methodology is generally associated with Karl Popper (although Popper himself, strictly speaking, rejected induction in favour of falsification). Here incompatibility with empirical evidence is fatal for a theory. Evidence can never prove a theory to be true (the ‘problem of induction’), but it can disprove it. Seeing one black swan disproves the theory that all swans are white, but seeing many white swans does nothing to prove the theory. This methodology was important in influencing the LSE econometric school, associated particularly with David Hendry. (Adrian Pagan has a nice comparative account.) Here evidence, which we can call external consistency, is all-important.
I think the deductivist methodology fits microfounded models. Internal consistency is the solid rock on which microfounded macromodels stand. That does not of course make them immune from criticism, but their practitioners know where they stand. There are clear rules by which their activities can be judged. To use a term due, I think, to Lakatos, the microfoundations research programme has a well-defined positive heuristic. Microfoundations researchers know what they are doing, and it does bring positive results.
The trouble with applying an inductivist methodology to empirically based aggregate macromodels is that the rock of external consistency looks more like sand. Evidence in macroeconomics is hardly ever of the black swan type, where one observation/regression is enough to disprove a theory. Philosophers of science have queried the validity of the Popperian ideal even in the context of the physical sciences, and these difficulties become much more acute in something as messy as macro.
So I end with a whole set of questions. Is it possible to construct a clear methodology for empirically based aggregate models in macro? If not, does this matter? If there is no single correct methodology (we cannot have complete internal and external consistency at the same time), should good models in fact be eclectic from a methodological point of view? Does the methodological clarity of microfounded macro help explain its total dominance in academia today, or are there other explanations? And if this dominance is not healthy, how might it change?

14 comments:

  1. Thank you for another helpful post.

    What is the justification for a microfoundation? That, when added to a complete model, it makes the model fit past data better? Perhaps economists don't think this explicitly, but it is surely implicit in the selection of microfoundations. So then, what is the difference between microfoundations and the epicycles of Ptolemaic cosmology?

  2. Thank you Simon for the cites.

    Really both induction and deduction are useful, as in the physical sciences, which use both. Macro observation and whole-scale modeling of aggregates is useful (with caution, and perhaps adding some smart micro-based structure), and so is microfoundations modeling (perhaps adding, with caution, some well-established aggregate behavior, and being careful to interpret the model to reality intelligently, not over-literally).

    With inflation, it's important to put the feedback effects (the Lucas critique) in your models, because those effects are strong for inflation (but how fast? how much in the short run?). Pretty much everyone has great knowledge of the prices of the things they buy regularly and understands basic percentage increases. But putting into a model perfect knowledge of government budgeting and politics, and the expertise to compensate for it to keep expected consumption smooth, is very unrealistic. That effect in reality will be weak (and slow), so it's very wrong to conclude literally from such a model that budget deficits don't matter.

    If a micro phenomenon, or axiom, is very strong and very well supported with evidence, it should be included in at least some models. You add important information to the model. This is why I think nonparametric statistics is not always best. Adding a parametric structure sometimes, if it's a good one, is just the same as adding important, highly consequential, well-evidenced information, rather than leaving it out. And it can just make it easier to interpret and use the data well.

    The crucial thing is, though, a model is only as good as its interpretation. And too often, perhaps for political reasons, you see models being interpreted very literally. Even an extremely unrealistic model can teach great lessons if it's interpreted intelligently, which may be far from literally.

    Replies
    1. Richard - thanks for this. I think you are saying that we should be methodologically eclectic: using micro-based theory where it seems realistic, but other more data-based approaches where it is not. However this leaves a lot to judgement, which may be why macroeconomists have turned their back on this approach and instead adopted microfoundations.
      I'm interested in similarities and differences between macro and finance here. Was behavioral finance a way out of a microfoundations straitjacket, and could something similar happen in macro? What impact has the financial crisis had on the discipline? Please point me to anything you have already written on this.

    2. Yes, I think we should be flexible, eclectic, and case-by-case to a large extent, otherwise you're just unduly constraining the field's ability to optimize its usefulness to society. Sometimes other approaches, or mixed approaches, can be valuable.

      It does leave a lot to judgment, but really this is not avoidable. If you make the vast, sweeping, extremely unrealistic assumptions you often see in microfounded models, that's a judgment, and a huge one. If you make the assumption that the only models that are valuable for understanding economics and for policy are pure microfounded models, that's a judgment too, and a huge one. You can't avoid judgments; you just want to make sure that the ones you make are well justified, and that your interpretations to reality are intelligent.

      It's like when something is difficult in economics, there's a big tendency to assume it doesn't exist – often, even in the final interpretations to reality. How often do we assume things are zero because they are hard to estimate? Unfortunately, that doesn't get you out of estimating. You've made an estimate; it's just the constant zero, and constants aren't very efficient, consistent, etc. econometric estimators. I talk about this here: http://newmonetarism.blogspot.com/2011/12/world-according-to-frank-part-ii.html?showComment=1323654166723#c840570658857883292

      Also, you hear that if you give economists leeway they can just make up anything. In a post on Noah's blog, I replied to this sentiment this way:

      "If a consumer's marginal benefit from consumption is something other than what is revealed by his or her demand I could just make up anything as a marginal benefit curve..."

      Now, see this is where so many economists get it so wrong. You can't just make up anything and have it pass muster. It not only has to fit the buying behavior, it has to fit so much other data and evidence we have on humans and their world – biology, psychology, sociology (have you read "The Darwin Economy" by Robert Frank?). It's only when you severely limit the data you're willing to admit that you have so little power to narrow down the possibilities. Here's one of my favorite quotes from the great growth economist Paul Romer of Stanford...

      At: http://noahpinionblog.blogspot.com/2012/01/welcome-economist-to-desert-of-real.html?showComment=1326349342594#c1825658116257763525

    3. – "Was behavioral finance a way out of a microfoundations straightjacket, and could something similar happen in macro?"

      I think so. Many people knew using these models alone was overly limiting, and that they were often being interpreted to reality over-literally. But, it was hard to just say, hey, the basic paradigm just assumes that people have way too much expertise, education, publicly available knowledge in their heads, and/or time to gather all of this knowledge and analyze it, and the self-discipline to do all of this even if the time existed and was costless. And even if all of this was true, the model would move slowly to the extent that it takes lots of time to do all of this learning.

      Saying, hey, people just don't have all this stuff, not anywhere close, perhaps we can put in parameters for their lack of expertise, lack of education, great lack of publicly available knowledge, all the time and effort it takes to get all this,... Just sounds too unfancy and unstructured, and too against market efficiency – a really dominant paradigm in economics and finance. It's hard to get that into fancy papers, and then published in top journals. But this behavioral stuff sounds fancy; it's in top psych journals, so it can sneak in.

      I wrote about this in a comment on Noah's blog:

      What about just ignorance and asymmetric information in a ridiculously complex world, with people worked to death and little time to learn the massive amounts these right-wing economists assume? You don't need fancy behavioral explanations. You've got two parents working 50-plus hours per week, coming home and having to then do housework, and all of the much more intense parenting today...

      At: http://noahpinionblog.blogspot.com/2012/02/why-rational-expectations-models-can-be.html?showComment=1328922909581#c6821547719803011531

      – "What impact has the financial crisis had on the discipline? Please point me to anything you have already written on this."

      Here, unfortunately, it's hard for me to comment. I never finished my dissertation, and for the last five years I have spent almost all of my time on our businesses, investments, and family, and I teach personal finance at the University of Arizona – but unfortunately personal finance is part of the department of Consumer and Family Sciences. These days, I only do academic economics and finance study and blogging during break time and occasional free time. I recommend contacting my old chairman and dissertation advisor, Chris Lamoureux. Very smart, and well respected. And he has a rare, broad, high-level understanding of the field of finance. His main specialization, though, is Bayesian econometrics applied to finance. He'd be happy to talk to you. His contact information is at: http://finance.eller.arizona.edu/faculty/clamoureux.asp

  3. I think “micro founded” models would be more useful if they were promoted with some minimum level of intellectual honesty.

    If it is a Robinson Crusoe economy - state that instead of claiming that there is some “representative agent”.

    If there is only one good – call it a one commodity economy.

    Etc.

    If someone then wants to take those results and apply them to a multi-person/commodity world with specialization, missing markets, dispersed knowledge, externalities, asymmetric information etc. etc. etc., they might do that in the same fashion as they now do – but do not pretend that you have derived results for such a world. Make it explicit that you go from e.g. the one-person/commodity world to the real one by faith and faith alone (unless you have some empirical support that it actually works).

  4. PS: @Simon
    I really like your posts, keep up the good work.

  5. Thanks for an interesting post, again, Simon.
    Microfoundations – and a fortiori rational expectations and representative agents – serve a particular theoretical purpose. And as the history of macroeconomics during the last thirty years has shown, this Lakatosian microfoundations programme for macroeconomics is only methodologically consistent within the framework of a (deterministic or stochastic) general equilibrium analysis. In no other context has it been possible to incorporate this kind of microfoundations, with its “forward-looking optimizing individuals,” into macroeconomic models.
    This is of course not by accident. General equilibrium theory is basically nothing other than an endeavour to consistently generalize the microeconomics of individuals and firms to the macroeconomic level of aggregates.
    But it obviously doesn’t work. The analogy between microeconomic behaviour and macroeconomic behaviour is misplaced. Empirically, science-theoretically and methodologically, neoclassical microfoundations for macroeconomics are defective. Tenable foundations for macroeconomics really have to be sought elsewhere.
    In your latest post on the subject, you rhetorically ask: "Microfoundations – is there an alternative?" Of course there are alternatives to neoclassical general equilibrium microfoundations, and I have tried to discuss one of them in particular on my blog: http://larspsyll.wordpress.com/2012/03/13/microfoundations-of-course-there-is-an-alternative/
    And as Keynes famously wrote in Treatise on Probability: "The atomic hypothesis which has worked so splendidly in physics breaks down in psychics. We are faced at every turn with the problems of organic unity ... - the whole is not equal to the sum of the parts."

    Replies
    1. Lars - thanks for this. My question was not so much are there alternatives - my post described a way of doing macro that was standard 40 years ago. The question is instead why those alternatives have died out, and why the microfoundations approach is now dominant. I'd be interested in your view on this.

    2. Simon - let me try, then, to give an (admittedly tentative) answer.
      (1) One could of course say that one reason why the microfoundations approach is so dominant is - as Krugman has it on his blog today - “trying to embed your ideas in a microfounded model can be a very useful exercise — not because the microfounded model is right, or even better than an ad hoc model, but because it forces you to think harder about your assumptions, and sometimes leads to clearer thinking”. But I don't really believe that is an especially important reason on the whole. I mean, if people put the enormous amount of time and energy that they do into constructing macroeconomic models, then those models really have to be substantially contributing to our understanding and ability to explain and grasp real macroeconomic processes. If not, they should – after perhaps somehow sharpening our thoughts – be thrown into the waste-paper basket (something the father of macroeconomics, Keynes, used to do), and not, as today, be allowed to overrun our economics journals and give their authors lots of academic prestige.
      (2) A more plausible reason is that microfoundations is in line with the reductionism inherent in the methodological individualism that almost all neoclassical economists subscribe to. And as argued by e.g. Johan Åkerman and Ekkehart Schlicht, this is deeply problematic for a macroeconomics trying to solve the "summation problem" without nullifying the possibility of emergence.
      (3) It is thought to give macroeconomists the means to fully predetermine their models and come up with definitive, robust, stable answers. In reality we know that the forecasts and expectations of individuals often differ systematically from what materializes in the aggregate, since knowledge is imperfect and uncertainty - rather than risk - rules the roost.
      (4) Microfoundations allegedly gets around the Lucas critique by focusing on "deep" structural, invariant parameters of optimizing individuals' preferences and tastes. As I have argued, this is an empty hope without solid empirical or methodological foundation.

      The kind of microfoundations that "new-Keynesian" and new-classical general equilibrium macroeconomists are basing their models on is not - at least from a realist point of view - plausible.
      As all students of economics know, time is limited. Given that, there have to be better ways to optimize its use than spending hours and hours working through or constructing irrelevant macroeconomic models founded on microfoundations chosen more for mathematical tractability than for applicability to reality. I would rather recommend that my students allocate their time to constructing better, real and relevant macroeconomic models – models that really help us to explain and understand reality.

  6. Aren't most of the large macro forecasting models still based, more or less, on the 40-year-old method you describe? And if firms and others are still subscribing to these models, what are the implications for the RE approach?

  7. 1. Economics deals with messy data and virtually no possibility of experimenting and isolating relationships, so you're never going to get anything like the sort of statistical relationship that makes Popper's theories useful. (Popper may be correct, but hardly falsifiable in this instance, ha ha.)

    2. There are two distinct issues that seem to get conflated all too often: one is "building from primitives" as a methodology, and the other is "selecting good primitives". While the elite of the economics profession is getting a lot of stick for the former, it seems to me that the latter is plausibly an equally important issue.

    3. Building from micro upwards may well be a useful methodology, but it doesn't mean that your agent needs to look indefinitely into the future and act rationally at all times; you could have an inattentive agent who half the time acts like the neighbours, or as it did in the past. Modern macro is moving that way, thanks to the likes of Robert Shiller and George Akerlof and others. Btw, the Akerlof-Shiller book may not be super well written, but it's a lot better than Amazon readers think and deserves to be read more than it seems to be.

    4. Using exotic preferences the way the finance literature has done may be commendable in macro, but even if it is, macro does not have the amount of data that finance has to plausibly identify the increased number of parameters that the more complicated/realistic preferences have. Combine three half-decent models that go one or two steps beyond the basic RBC model – can you falsify them? I don't think so.

    5. But isn't the biggest problem not so much the microfoundations (having some optimizing agent, and having agents optimize in an approximately realistic way) as the macroedifications (having macro relationships that can be used at all)? Just made this one up, couldn't think of a word. If the microfoundations are based on sound micro (something more complicated than the basic RBC model), properly aggregating the model will be a sorry mess, with more parameters than you have data points, and your microfounded model will be a giant but unsound macroedifice. The loglinearization is just not serious, for sure, but otherwise we quickly run into NP-hard problems...

    6. I mean, suppose for just a minute that a group of your agents don't care at all about the transversality condition and devote their energies to trying to run Ponzi schemes, then you'll need to introduce a police agent, a judge agent, a prison guard agent, etc. to monitor, catch and punish the rogue agents, occasionally letting one slip by. Imagine for a second what kind of model that would be. Do we have enough data to falsify that?

    7. It is implausible that every agent in the economy has a PhD in dynamic programming from Caltech, just as it is implausible that you need to run a Matlab program to monitor your family's finances. Yet, I say to myself, if Lucas is doing it, if Acemoglu is doing it, if Krugman is doing it (with some errors in the math too, in his "History versus Expectations" masterpiece), if Robert Waldmann is doing it (I'm thinking about his work on heterogeneity), if Chris Sims, if Tom Sargent, if Peter Diamond, etc., then I just humbly bow and say there probably is no other way for economic science to move forward, ever so slowly. It may be true that the destruction of the Phillips curve is economics' last great conquest. It may be true that the basic IS-LM with no microfoundations works as well in the midst of a terrible financial crisis as the fancier New Keynesian models, as Summers, Krugman and others have said. Still, I don't see an end to the business of DSGE. If we knew a better way forward it would have been followed, right? ... ahem

  8. How can everyone ignore the political implications of different approaches?

  9. Terrific post, Simon. I don't think there's any intellectually honest alternative to eclecticism in macro at the moment. I would take it even wider than you suggest and include some completely different modelling approaches as well, including network theory and complexity, with their own empirical methods. (See my post at http://www.enlightenmenteconomics.com/blog/index.php/2012/03/fish-pigeons-humans/)
    How on earth we get all your academic colleagues away from doing and teaching DSGE etc. is another matter. My son has recently attended your macro lectures and quite enjoyed them, but he still thinks macro is for the birds. Yet I doubt polite student skepticism is enough to engineer change.

