Thursday 1 March 2012

Microfounded and other Useful Models

                This title harks back to one of the books that have influenced me most: Blanchard and Fischer’s Lectures on Macroeconomics. That textbook was largely in the mould of modern microfounded macroeconomics, but chapter 10 was not, and it was entitled ‘Some Useful Models’. One of their useful models is IS-LM.
                The role of such models in an age where journal papers in macro theory are nearly always microfounded DSGE models is problematic. Paul Krugman has brought this issue to the forefront of debate, starting with his ‘How Did Economists Get It So Wrong?’ piece in 2009. His view has been recently stated as follows: “That doesn’t mean that you have to use Mike’s [Woodford] model or something like it every time you think about policy; by and large, ad hoc models like IS-LM are actually more useful, in my judgment. But you probably do want to double-check your logic using fancier optimization models.”
                This view appears controversial. If the accepted way of doing macroeconomics in academic journals is to almost always use a ‘fancier optimisation’ model, how can something more ad hoc be more useful? Coupled with remarks like ‘the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth’ (from the 2009 piece) this has got a lot of others, like Stephen Williamson, upset. I think there are a lot of strands here, many of which are interesting.
The issue I want to discuss now is very specific. What is the role of the ‘useful models’ that Blanchard and Fischer discuss in chapter 10? Can Krugman’s claim that they can be more useful than microfounded models ever be true? I will try to suggest that it could be, even if we accept the proposition (which I would not) that the microfoundations approach is the only valid way of doing macroeconomics. If you think this sounds like a contradiction in terms, read on. The justification I propose for useful models is not the only (and may not be the best) justification for them, but it is perhaps the one that is most easily seen from a microfoundations perspective.
First we must find a new name for these ‘useful’ models. They are sometimes described as ‘policy models’, but that is not a very good name because microfounded models are also used to analyse policy. Let me call them ‘aggregate models’. I think this term is useful, because the defining characteristic of the models I want to talk about is that they start with aggregate macro relationships. Like an IS curve. Microfounded models start with microeconomics, like an optimising representative consumer. I do not want to call aggregate models ‘ad hoc’, because the meaning of ad hoc is not well defined.
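To make the distinction concrete, here is a minimal sketch (my notation, purely for illustration rather than anything taken from Blanchard and Fischer) of the kind of aggregate relationships such a model starts from, in the textbook IS-LM case:

\[ Y = C(Y - T) + I(r) + G \qquad \text{(IS: goods market equilibrium)} \]
\[ M/P = L(Y, r) \qquad \text{(LM: money market equilibrium)} \]

Nothing here is derived from an individual optimisation problem: the consumption function C, the investment function I and the money demand function L are simply posited at the aggregate level.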
The typical structure of a microfounded model involves two stages. In the first stage the microfoundations are set out, individual optimisation problems solved, and aggregation assumptions made. We set out a particular world, perhaps a unique world, in all the detail required for the task in hand. This first stage may also include deriving an aggregate welfare function from individual utility. This leads to a set of aggregate relationships. The second stage involves using these aggregate relationships in some way – to find an optimum policy for example. In aggregate models we only have the second stage. A good paper of either type will often go further, and attempt to suggest (perhaps even show) what it is about the aggregate model that gives us the key results of the paper. Let us call these the critical features of the aggregate model.
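As a minimal illustration of the first stage (a standard textbook sketch, not a claim about any particular paper): a representative consumer maximising expected lifetime utility

\[ \max \; E_t \sum_{s=0}^{\infty} \beta^{s} u(C_{t+s}) \]

subject to a budget constraint delivers an Euler equation, \( u'(C_t) = \beta (1 + r_t) E_t u'(C_{t+1}) \), which with standard functional forms, log-linearisation and aggregation assumptions becomes something like the forward-looking IS relationship

\[ y_t = E_t y_{t+1} - \sigma (i_t - E_t \pi_{t+1}) + \varepsilon_t . \]

The second stage then works only with aggregate relationships of this kind, for example in deriving an optimal policy rule.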
Put this way, it looks as if microfounded models must be the superior tool – we get more information in the form of the model’s microfoundations. In particular, we establish that at least one microfounded support exists for the aggregate model we use in the second stage. If we start with an aggregate model, it is possible that no such microfounded support exists for that model. If that could be proved, the aggregate model would lose its usefulness entirely from a microfoundations perspective. A more realistic case is if we cannot for the moment find any potential microfoundation for such an aggregate model (this is what some people mean by ad hoc), or the only microfoundation we can find is a little odd. In that case the usefulness of the aggregate model is highly questionable.
But suppose there is in fact more than one valid microfoundation for a particular aggregate model. In other words, there is not just one, but perhaps a variety of particular worlds which would lead to this set of aggregate macro relationships. (We could use an analogy, and say that these microfoundations were observationally equivalent in aggregate terms.) Furthermore, suppose that more than one of these particular worlds was a reasonable representation of reality. (Among this set of worlds, we cannot claim that one particular model represents the real world and the others do not.) It would seem to me that in this case the aggregate model derived from these different worlds has some utility beyond just one of these microfounded models. It is robust to alternative microfoundations.
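A standard example of the sort of thing I have in mind (again purely illustrative): the New Keynesian Phillips curve

\[ \pi_t = \beta E_t \pi_{t+1} + \kappa x_t \]

can be derived from Calvo-style random price resetting or from Rotemberg-style quadratic costs of price adjustment. To a first-order approximation both worlds deliver the same aggregate relationship; only the mapping from the deep parameters into the slope \( \kappa \) differs, so at the aggregate level the two microfoundations are observationally equivalent.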
                In these circumstances, it would seem sensible to go straight to the aggregate model, and ignore microfoundations. Well, not quite – it would be good to have a sentence referring to the paper that shows at least one of these microfoundations. A classic example of what I have in mind here is Clarida, Gali and Gertler (1999) Journal of Economic Literature. If an aggregate model can be derived from a number of different microfoundations, then we actually appear to restrict the generality of what we are doing by choosing one derivation and then working with this particular microfounded model. I think this possibility is stronger still if we think about the critical features of an aggregate model – the things that generate the results we focus on.
                I suspect it is this robustness aspect of aggregate models that makes them attractive to some macroeconomists. Why restrict yourself to one particular microfoundation – and let’s be honest, waste time going through yet another derivation of Euler equations and the like? Why not go straight to the set of aggregate relationships that contain the critical features we need for the problem at hand?
Now there is a danger in this approach. That sentence referencing the paper where the aggregate model is derived from microfoundations may not be written, and then further down the line it turns out that the aggregate model being used misses out something important because microfoundations were being ignored. Krugman acknowledges that danger in the last sentence of the quote I started with. However, it seems a little strong to suggest that to avoid such mistakes we should always do the full microfoundations thing.
                So my claim is in many ways quite a weak one. Some aggregate models contain critical features that can be derived from a number of different microfoundations. In that situation, it is natural to want to work with these aggregate models, and to describe them as useful. We could even say that they are more useful, because they have a generality that would be missing if we focused on one particular microfoundation. 

23 comments:

  1. Microfoundations would be important if there were clear evidence that they represented the truth. For example, if there had been a series of experiments demonstrating that individuals are rational and make decisions so as to maximize some measurable quantity called utility, it would be important that macro models were consistent with this and the most direct way of ensuring that would be to incorporate rational utility-maximizing households into the model.

    The fact is that there is no such evidence. Microeconomics is not based on empirical evidence, and the approach used in microeconomics has no special claim to the truth. So, leaving aside the way macroeconomics uses micro (i.e., in a way that many microeconomists don't approve of, ignoring aggregation issues), there's no logical reason why macro even needs to be consistent with micro. All that matters is whether the model yields accurate insights and predictions.

    Given several models, each making the same predictions, Occam's razor suggests using the simplest model. Thus, IS/LM may be preferable to New Keynesian models. The fact that the former does not incorporate the fantastical assumptions of the latter should not be regarded as a drawback. Nor should the fact that IS/LM can be represented in a simple diagram and uses only elementary maths, whereas New Keynesian models require knowledge of advanced maths. Of course, this makes the NK models *appear* more sophisticated, but it also tends to obfuscate the underlying economics.

    Replies
    1. ecojon, I am not at the level of the people in here, but I believe that this view has problems because you are essentially working with hidden definitions of your variables, and the National Accounts seem to be an excellent example of this. I mean, society's consumption must be calculated according to some normative definition. There is simply no other way of doing it. The same happens with other variables like monetary aggregates, where you suppose a multiplier of the monetary base and intertemporal choice. The same applies to inflation - the composition of the indexes is microfounded (to say that people who earn up to X minimum salaries had their purchasing power change by Y% this month means that you have built a consumption basket and ...). Without these considerations the monetary aggregates sound like rubbish.
      The thing is that when you define parameters, you are, by definition, thinking microeconomics. In short, every piece of macro data is a micro result. Which means that when you set this aside and "go for what works" you are leaving the field of science and entering the field of sorcery. Non-parametric models work, OK. I even had a professor who compared one of these models (developed by himself) to the invention of the wheel. But they cannot be called science, because the very idea of science presumes an integrated system of knowledge. What do you guys think?

    2. Sorry, typing mistake. Here's the correct version ....

      ecojon, I am not at the level of the people in here, but I believe that this view has problems because you are essentially working with hidden definitions of your variables, and the National Accounts seem to be an excellent example of this. I mean, society's consumption must be calculated according to some normative definition. There is simply no other way of doing it. The same happens with other variables like monetary aggregates, where you suppose a multiplier of the monetary base and intertemporal choice. The same applies to inflation - the composition of the indexes is microfounded (to say that people who earn up to X minimum salaries had their purchasing power change by Y% this month means that you have built a consumption basket and ...). Without these considerations any macro variable sounds like rubbish.
      The thing is that when you define parameters, you are, by definition, thinking microeconomics. In short, every piece of macro data is a micro result. Which means that when you set this aside and "go for what works" you are leaving the field of science and entering the field of sorcery. Non-parametric models work, OK. I even had a professor who compared one of these models (developed by himself) to the invention of the wheel. But they cannot be called science, because the very idea of science presumes an integrated system of knowledge. What do you guys think?

    3. " this view has problems because you are essencially working with hidden definitions of your variables "

      Well, if you expose them and they, among other things, say that all people are identical copies of each other (or at least expand their consumption along parallel Engel curves) - why bother?

      I mean - no one believes that - it is just a (stupid) intellectual game that you are supposed to play before you are allowed to show the aggregate relationship that you wanted all along.

      If the aggregate relationship happens to correspond to reality, we still do not know exactly why - but we can definitely rule out the particular mechanism described by any current DSGE.

  2. I'm reading this from a physics perspective, and the notion that a concept 'must' be microfounded to be valid just seems absurd. Thermodynamics is an extremely powerful tool that doesn't rely on 'microfoundations', though you can construct them if you want. The theories of orbital mechanics don't ask you to work up from the molecular or subatomic structure of the orbiters. And the 'microfoundations' of most thermodynamic systems or molecular structures are far better understood than the microeconomic behaviour.

    In science you should work at whatever level of abstraction is most useful for description and prediction. If your microfounded model is worse at those than the macro model, then suck it up and go away until you have some better micro.

    I really can't stress enough how completely preposterous - and unscientific - a rule that sounds. Am I misunderstanding something?

    Replies
    1. I would note that there was an important reason why macroeconomics pushed further towards microfoundations - the fact that these models were often used for policy, and that the macro structural relationships we observed were in turn dependent on the policy regime.

      The microfoundations provided a way of understanding "why" a given macro result held - and that is something that needs to be understood before any policy recommendations can be made.

      One of the great points in this post is that there is a myriad of differing microfoundations that can "explain" a given macro phenomenon - in that sense we should be careful about arguing from one limited set of microfoundations, and in fact we may get a fuller understanding by starting from a higher level of aggregation, given how incomplete the potential microeconomic explanations are.

    2. derrida derider, 2 March 2012 at 22:35

      "In science you should work at whatever level of abstraction is most useful for description and prediction."

      Yes, but that's exactly the problem - the move towards microfoundations was generated by large-scale predictive failure of the more aggregate old Keynesian models. And the source of that failure was traced to a stability problem due to recursion - if you don't embody the deep structure (i.e. microfoundations) in it you cannot predict which relationships between variables will break down when used for control - i.e. it is not fit for policy purposes.

      None of this is to deny the point that the old modelling approach can still sometimes be useful, but the problems with it were and are real. We really do need something better - though I agree that the inherent failure of representative agent approaches in capturing heterogeneity really matters, so we have not really got something better yet.

    3. In my view, the microfoundations, far from providing an understanding of why a given macro result held, are essentially ways of deciding what result you want, and then building a proof based on that desired outcome, using untestable (because they are obviously inherently false) assumptions that you still assert, with no evidence, will reflect how the real world works.

      It is how we end up with people still plugging away calling for fiscal austerity. They have decided upon the result that they want (fiscal austerity is good, because that's what their gut says) and then they undertake to find the correct microfoundations that will prove that this is, in fact, the case. The fact that these underlying assumptions become harder and harder to come up with as the real world stubbornly refuses to cooperate doesn't seem to be having much effect...

    4. Peel, you are spot on.
      If you want to see what happens when physicists try to answer these questions, check out Ian Wright's "Implicit Microfoundations of Macroeconomics": http://ideas.repec.org/a/zbw/ifweej/7604.html for some rigorous thermodynamic thinking.

  3. I am not an economist, but I have formally studied the field some and have fairly extensive academic study in the social sciences and their guiding philosophies. (I've also been paying close attention to the U.S. economy for over 40 years.) It seems to me Eduardo Weisz has it backwards: the teleological approach is a model starting with a micro theory that is questionable at best -- and clearly wrong to many -- and a set of assumptions that includes the apparent notion that the whole can never be greater than the sum of its parts.

    In contrast, while macro certainly is guided by a general theory, it seems to be fundamentally grounded from the beginning in the best economic data we have -- i.e., it is the empirical, and scientific, method. If it doesn't fit the data, it gets tossed or revised. Micro modeling, on the other hand, seems congenitally unable to revise its assumptions about human economic behavior or the role of those assumptions in the modeling process.

    Marrying the two very different approaches seems to be a tall order, and it seems the terrible record of the micro-based models in the recent crisis demonstrates clearly that we are not there yet.

  4. Simon, I think there's a more important advantage that aggregate models can have over microfounded ones.

    This issue arises in finance too, where I am ABD. One of the biggest, and certainly loudest, critics of microfounded finance models is Robert Haugen. Haugen now runs an investment services firm, but he was previously a professor at UC Irvine, and is #17 on a list of finance's most prolific authors (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1355675).

    Haugen's criticism is that the aggregate of very complicated, highly interacting micro behavior can be better understood if you just observe the behavior of that aggregate, rather than trying to understand and predict it by modeling micro unit behavior and then aggregating up.

    In finance, then, you can understand and model the behavior of financial asset markets just by looking at how those markets behave over time, and creating a model to fit that observed behavior. This will get you a much more accurate, realistic, and useful model than if you make very strong simplifying assumptions about individual behavior and interaction so that it's tractable to aggregate up to the market as a whole.

    So, in other words, a model of aggregates can be much more realistic and accurate in describing the behavior of those aggregates because you aren't forced to make extremely unrealistic simplifying assumptions about micro units in order to make aggregating them tractable.

    In Haugen's own words:

    Chaos aficionados sometimes use the example of smoke from a cigarette rising from an ashtray. The smoke rises in an orderly and predictable fashion in the first few inches. Then the individual particles, each unique, begin to interact. The interactions become important. Order turns to complexity. Complexity turns to chaotic turbulence...( "The New Finance" (2004), 3rd Edition, page 122)

    How then to understand and predict the behavior of an interactive system of traders and their agents?

    Not by taking a micro approach, where you focus on the behaviors of individual agents, assume uniformity in their behaviors, and mathematically calculate the collective outcome of these behaviors.

    Aggregation will take you nowhere.

    Instead take a macro approach. Observe the outcomes of the interaction – market-pricing behaviors. Search for tendencies after the dynamics of the interactions play themselves out.

    View, understand, and then predict the behavior of the macro environment, rather than attempting to go from assumptions about micro to predictions about macro...(page 123)

    For more on this see: http://richardhserlin.blogspot.com/2009/04/induction-deduction-and-model-is-only.html
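    A minimal sketch of the contrast Haugen describes, in Python (entirely illustrative: simulated data and made-up parameters, not anything from Haugen): generate an aggregate series from many heterogeneous, interacting micro units, then model the aggregate directly with a simple AR(1) fitted by least squares, making no attempt to recover the micro structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Micro" world: many heterogeneous agents whose behavior interacts
# through last period's aggregate outcome.
n_agents, T = 500, 400
persistence = rng.uniform(0.2, 0.9, n_agents)  # every agent reacts differently
state = np.zeros(n_agents)
aggregate = np.zeros(T)
for t in range(1, T):
    shocks = rng.normal(0.0, 1.0, n_agents)
    state = persistence * state + 0.1 * aggregate[t - 1] + shocks
    aggregate[t] = state.mean()

# "Macro" approach: ignore the micro structure and fit the observed
# aggregate series directly with an AR(1) by ordinary least squares.
y, y_lag = aggregate[2:], aggregate[1:-1]
X = np.column_stack([np.ones_like(y_lag), y_lag])
(const, rho), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"fitted AR(1): y_t = {const:.3f} + {rho:.3f} * y_(t-1)")
```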

    Replies
    1. Aggregate models FTW!
      (I.e. empirics over fairy tales)

      And so what if they are not valid over the full range of policy space - you should always be careful when you boldly go where no one has gone before - and I bet that 9 times out of 10 you would end up in a better place than with DSGE anyway.

    2. derrida derider, 2 March 2012 at 22:54

      But the problem is precisely that the aggregate models cannot, by their very nature, tell you what policy space you're in. That's why they're quite good at telling you what you should have done in the past but extremely unreliable at telling you what you should do right now. That's the point of the Lucas critique that made them unfashionable, and (very, very regrettably) it is a valid one. Nature is not always kind.

      It's by no means as simple as "theory versus empirics".

    3. The Lucas critique in many cases relies on people having a ridiculous amount of knowledge, expertise, and free time in order to have much effect. Sometimes the Lucas effect may be very weak.

      Take a look at surveys of people's knowledge of the governments' budgets and then tell me that it's common for people to accurately, precisely, and regularly adjust their consumption to expected changes in government spending.

  5. Professor, have you considered the agent-based modeling approach to macro? Peter Howitt has been doing some work in this area. Unfortunately not enough people are using this methodology to give it some critical mass.

  6. In my 1985 book "Isolation and Aggregation in Economics" (Berlin, Springer) I made the point:

    "Furthermore, macro theories are more general than micro theories in the following sense: Typically the aggregation procedure will not be bijective, since different micro models might lead to the same macro model. Assume that all micro models out of a certain class C lead to the same macro model .... This macro model is more general than any micro model since it refers to the whole class C of micro models." (p. 95).

    The book is available on the Internet:

    http://epub.ub.uni-muenchen.de/3/

  7. As I said in my comment, I am not at your level. You guys know much more than I do on the subject and I see it as an opportunity to learn.

    This said, it seems to me that macro is fundamentally different from physics and finance because we cannot directly measure input data. I mean, heat or asset prices in stock markets are measured directly, while macro variables are measured based on micro assumptions. To determine the difference between aggregates (e.g. consumption and investment) people need to define consumption and investment at the micro level and only then proceed to measure economic activity. The same happens with inflation, where a basket of goods is defined as relevant and used as a proxy in the construction of inflation indexes.

    It is true that there are several different micro models that can serve as the base for one single macro model. I could not agree more with this. My point is that any macro data available is the fruit of micro definitions.

    I mean, any model must be internally consistent. If you define some variable "X" as equal to 3, you simply cannot define the same variable X as different from 3 in your assumptions. My feeling is that this is exactly what non-parametric models do, and the reason I am uncomfortable with them as science.

    What do you think about it? I have also heard about those ABMs, especially the work with them in finance being done at Jerusalem's Hebrew University, but am not really familiar with them. Can they be a solution to this issue?

    Replies
    1. Sorry, I went to re-read the post and found a mistake ....

      As I said in my comment, I am not at your level. You guys know much more than I do on the subject and I see it as an opportunity to learn.

      This said, it seems to me that macro is fundamentally different from physics and finance because we cannot directly measure input data. I mean, heat or asset prices in stock markets are measured directly, while macro variables are measured based on micro assumptions. To determine the difference between aggregates (e.g. consumption and investment) people need to define consumption and investment at the micro level and only then proceed to run models using those variables. If you take the national accounts, you will see that everything in there is defined in a formal way, and those definitions make sense only if you consider micro relations. The same happens with inflation, where a basket of goods is defined as relevant and used as a proxy in the construction of inflation indexes.

      It is true that there are several different micro models that can serve as the base for one single macro model. I could not agree more with this. My point is that any macro data available is the fruit of micro definitions.

      I mean, any model must be internally consistent. If you define some variable "X" as equal to 3, you simply cannot define the same variable X as different from 3 in your assumptions. My feeling is that this is exactly what non-parametric models do, and the reason I am uncomfortable with them as science.

      What do you think about it? I have also heard about those ABMs, especially the work with them in finance being done at Jerusalem's Hebrew University, but am not really familiar with them. Can they be a solution to this issue?

    2. Eduardo,

      regarding the consistency check I have written:

      "If we want to make sure that those aggregate assumptions involve no contradictions, it suffices to present one single example, which can very often be provided by the assumption that all individuals are alike. This seems to
      be the main justification for using the concept of a typical agent" (p. 97)

      http://epub.ub.uni-muenchen.de/3/

      My current view of the inverse aggregation problem (how to interpret the micro background of a given macro model) is explained there. That it works technically is demonstrated in

      http://epub.ub.uni-muenchen.de/2118/

  8. I agree with almost everything written here. There are two exceptions:
    1) It is not possible that there could be an aggregate model such that no microfounded support exists. The assumption that agents are rational utility maximizers is not a testable hypothesis as any conceivable behavior can be derived as utility maximizing. You describe this case as not realistic, but in fact it is impossible.

    2) You consider the case of "strange" microfoundations, but all standard models have strange microfoundations. The defence is that they are models. What is really going on is that some strange assumptions (e.g. managers of firms care only about shareholder value) are standard. The rule in practice is that it is better to base a model on the standard, very strong and agreed-to-be-false assumptions than to introduce agreed-to-be-realistic deviations. This is based on a strange preconception of Occam's razor. Including managers who care about workers as well as shareholders (not much about either) fits the micro data on wages much, much better, but it is penalised as adding another free parameter (free only because the fans of microfoundations ignore micro data). This is the problem -- not suspicion of strange microfoundations but a limpet-like attachment to a particular set of strange microfoundations.

    Finally, I think that the concession, for the sake of argument, that microfounded models are best has been made much too often. It is a reflex. People who have done it all their academic life can't let go of that which decades ago was conceded for the sake of argument.

  9. Sorry. I made a very definite claim without giving a proof.

    Proposition: there is no course of action which does not maximize some conceivable utility function.

    Proof: This is a proof by contradiction, so assume that the set S of courses of action inconsistent with utility maximization is not empty.

    Consider conceivable agent A, whose sole aim in life is to make economists who like rational utility maximization miserable. A's utility is 1 if he follows a course of action in S and zero otherwise. To maximize his utility function he chooses a course of action s which is an element of S. Thus action s is not utility maximizing for any conceivable agent and yet maximizes A's utility. This is a contradiction, so the initial assumption must be false.

    QED

    This is not just a joke. I claim it is also a rigorous proof and that the question has been settled.
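    Restated a little more formally (my notation, just rewriting the argument above): let X be the set of all courses of action and suppose \( S = \{ x \in X : x \text{ maximizes no conceivable utility function} \} \) is not empty. Define agent A's utility by \( u_A(x) = 1 \) if \( x \in S \) and \( u_A(x) = 0 \) otherwise. Because S is not empty, any maximizer of \( u_A \) lies in S; but then that course of action both maximizes a utility function (namely \( u_A \)) and, by the definition of S, maximizes none, which is the contradiction.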

  10. I think the author needs to clarify what is meant by micro foundations. If he means a general description of how a person might act in response to their economic situation, then it makes far more sense to start with an aggregate model that seems to describe the aggregate reality and derive micro foundations from that (ask the question, why did people do this?) than to assume you have a micro model of how everyone and the universe behaves because it is based upon quantum mechanics.

    It is absolutely absurd. You can do the analysis in reverse, starting with a macro model and deriving a model of the predominant actor in the game. Not all actors are the same, and so any micro foundational model that assumes people are all the same is outrageous.

    So here's the fundamental arrogance in all of this. Who the heck is anybody to say that any micro foundational model must be of the DSGE type? That is ludicrous.

    If, let's say, the LM line seems to describe how the predominant actors are behaving, and specifically what it is about their behavior that is causing suboptimal conditions (unemployment, strife, anger, political turmoil, warring tendencies), and also points out that they probably aren't going to change their behavior because of some psychological trick, then it seems useful. And if a group of people are saying that the goal is to get them to act differently using psychology, they have already been proven to be living in a fantasy.

    But if the goal is to remove the condition that is causing their normal behavior to be suboptimal, then you might have something.

    So it seems pretty clear that there's a completely arrogant assumption here that there is only one way to do micro in a macro context. The micro foundations that start from the bottom up are wrong. The ones that go from the aggregate back to the micro explanation are right.

  11. The elephant in the room is that microfounded macro isn't really microfounded. There is no "representative agent" in the economy. And the characteristics of the imaginary representative agent are throwbacks to a pre-Keynesian ideal. What a clever way to evade critique with self-serving "rigor"! What bullshit.

