

Monday 26 March 2012

Microfoundations and Central Bank models

When the need for internal theoretical consistency (microfoundations) and external empirical consistency (matching the data) conflict, what do you do? You might try to improve the theory, but this can take time, so what do policymakers do in the meantime?
There are two ways to go. The first is to stick with microfounded models, and just deal with their known inadequacies in an ad hoc manner. The second is to adapt the model to be consistent with the data, at the cost of losing internal consistency. Central banks used to follow the second approach, for the understandable reason that policymakers wanted to use models that were, as far as possible, consistent with the data.
However, in the last decade or two some central banks have made their core macromodels microfounded DSGE models. I have not done a proper survey on this, but I think the innovator here was the Bank of Canada, followed by the Reserve Bank of New Zealand. About ten years ago I became heavily involved as the main external consultant in the Bank of England’s successful attempt to do this, which led to the publication in 2004/5 of the Bank’s Quarterly Model (BEQM, pronounced like the well-known English footballer). I think it is interesting to see how this model operated, because it tells us something about macroeconomic methodology.
If we take a microfounded model to the data, what we invariably find is that the errors for any particular aggregate relationship are not just serially correlated (if the equation overpredicts today, we know something about the error it will make tomorrow) but also systematically related to model variables. If the central bank ignores this, it will be throwing away important and useful information. Take forecasting. If I know, say, that the errors in a microfounded model’s equation for consumption are systematically related to unemployment, then the central bank could use this knowledge to better predict future consumption.
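As a purely illustrative aside (synthetic data and hypothetical coefficients, not taken from any actual central bank model), the following Python sketch shows the kind of pattern being described: a ‘core’ consumption equation that omits a precautionary-saving channel produces errors that are serially correlated and correlated with unemployment, and a simple auxiliary regression on those errors improves one-step-ahead forecasts.

```python
# Purely illustrative sketch with synthetic data and hypothetical coefficients:
# a "core" consumption equation that misses a precautionary-saving channel
# produces errors that are serially correlated and related to unemployment,
# and a simple auxiliary regression on those errors improves the forecast.
import numpy as np

rng = np.random.default_rng(0)
T = 400

# Synthetic data: income follows a random walk with drift, unemployment an AR(1).
income = np.cumsum(rng.normal(0.2, 1.0, T)) + 100.0
unemp = np.empty(T)
unemp[0] = 6.0
for t in range(1, T):
    unemp[t] = 0.9 * unemp[t - 1] + 0.6 + rng.normal(0.0, 0.3)

# "True" consumption has a precautionary-saving channel the core model lacks.
cons = 0.8 * income - 1.5 * unemp + rng.normal(0.0, 0.5, T)

# Core (microfounded) prediction: consumption from income alone.
core_pred = 0.8 * income - 9.0   # hypothetical rule; intercept roughly centres the error
err = cons - core_pred           # the core model's equation errors

# The errors are serially correlated and systematically related to unemployment.
print("lag-1 autocorrelation of errors:   ", np.corrcoef(err[1:], err[:-1])[0, 1])
print("correlation of errors with unemp.: ", np.corrcoef(err, unemp)[0, 1])

# Exploiting that structure improves one-step-ahead consumption forecasts.
X = np.column_stack([np.ones(T - 1), err[:-1], unemp[:-1]])
beta, *_ = np.linalg.lstsq(X, err[1:], rcond=None)
rmse_core = np.sqrt(np.mean(err[1:] ** 2))              # ignore the error structure
rmse_adj = np.sqrt(np.mean((err[1:] - X @ beta) ** 2))  # use it
print(f"forecast RMSE, core only: {rmse_core:.2f}; with error equation: {rmse_adj:.2f}")
```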
BEQM addressed this problem by splitting the model into two: a microfounded ‘core’, and an ad hoc ‘periphery’. The periphery equation for consumption would have the microfounded model’s prediction for consumption on the right hand side, but other variables like unemployment (and lags) could be added to get the best fit with the data. However, this periphery equation for consumption would not feed back into the microfounded core. The microfounded core was entirely self-contained: to use a bit of jargon, the periphery was entirely recursive to the core.
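To make the recursive structure concrete, here is a schematic sketch (it is not BEQM; the functional forms are the made-up ones from the snippet above) of how such a core/periphery pair fits together: the periphery regression takes the core’s prediction as a right-hand-side input, but nothing estimated in the periphery ever feeds back into the core.

```python
# Schematic sketch of the core/periphery split described above (not BEQM itself;
# the functional forms and data are the made-up ones from the previous snippet).
import numpy as np

def solve_core(income):
    """Stand-in for the self-contained microfounded core: a simple
    hypothetical consumption rule driven by income alone."""
    return 0.8 * income - 9.0

def fit_periphery(cons, core_pred, unemp):
    """Periphery equation for consumption: actual consumption regressed on the
    core model's prediction, unemployment, and lagged consumption.
    Purely recursive: nothing estimated here feeds back into solve_core()."""
    X = np.column_stack([
        np.ones(len(cons) - 1),
        core_pred[1:],   # the core's prediction enters the right hand side
        unemp[1:],       # ad hoc extra regressor
        cons[:-1],       # lag of consumption
    ])
    beta, *_ = np.linalg.lstsq(X, cons[1:], rcond=None)
    return beta

# Usage (with the synthetic income, cons and unemp series from the snippet above):
#   core_pred = solve_core(income)   # core solved first, untouched by the periphery
#   beta = fit_periphery(cons, core_pred, unemp)
```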
Now at first sight this seems very odd. If the periphery equation for consumption was giving you your best prediction, surely you would want that to influence the core model’s predictions for other variables. However, to do this would destroy the internal consistency of the core model.
Let us take the hypothetical example of consumption and unemployment again. In the core model unemployment does not directly influence consumption over and above its influence on current and future income. We have found from our periphery equation that we can better explain consumption if it does. (Unemployment might be picking up uncertainty about future income and precautionary saving, for example.) However, we cannot simply add unemployment as an extra variable in the core model’s equation for consumption without having a completely worked out microfounded story for its inclusion. In addition, we cannot allow any influence of unemployment on consumption to enter the core model indirectly via a periphery equation, because that would destroy the theoretical integrity (the internal consistency) of the core model. So the core model has to be untouched by the ad hoc equations of the periphery.
So this core/periphery structure tries to keep our microfounded cake and eat it too: the periphery equations let us also use the additional knowledge provided by the data. Now the research goal is eventually to get rid of these periphery equations, by improving the microfounded model. But that takes time, so in the meantime we use the periphery equations as well. The periphery equations utilise the information provided by the statistical properties of the errors made by the microfounded model.
I think this core/periphery structure does nicely illustrate the dilemma faced by policy-making institutions. They want to follow current academic practice and use microfounded models, but they also want to use the information they have about the limitations of these models. The core/periphery structure described here can be criticised because, as I suggested, this information is not being used efficiently without feedback to the core. However, is there a better way of proceeding? Would it be better to compromise theory by changing the model so that it follows the data, which in BEQM’s case would merge core and periphery?
It is sometimes suggested that this is a conflict between forecasting and policy analysis. The example involving consumption and unemployment was chosen to show that this is not the case. The data suggests that the microfounded model is missing something, whether we are forecasting or analysing policy, and the question is what we do while we figure out exactly what is missing. Do we continue to use the wrong model, confident in the knowledge that the stories we tell will at least be consistent, albeit incomplete? Or do we try and patch up the model to take account of empirical evidence, in a way that will almost certainly be wrong once we do figure out properly what is going on?
What has this to do with academic macroeconomics? Perhaps not much for the macroeconomist who builds microfounded DSGE models and is not that involved in current policy. Microfounded model building is a very important and useful thing to do. For the profession as a whole it does matter, because the central banks that now use microfounded DSGE models do so because that is how policy is analysed in the better journals. Academic macroeconomists therefore have some responsibility for advising central banks on how to deal with the known empirical inadequacies of those models. When the data tells us the model is incomplete, how do you best use that information?

13 comments:

  1. Thank you for another excellent post.

    "Do we continue to use the wrong model, confident in the knowledge that the stories we tell will at least be consistent, albeit incomplete? Or do we try and patch up the model to take account of empirical evidence, in a way that will almost certainly be wrong once we do figure out properly what is going on?"

    The latter. Why would you try to make an actual real-world decision with a model that you know is not applicable to the question at issue? (Indeed, why not just use a random-number generator and save yourself the trouble of building a model?) The tendency of economists to do that was a cause of great obfuscation during the crisis. I suppose people will say "at least it tells you something." But false information is worse than no information because people will factor false information into their decisions even if they know it to be false.

    I see a parallel with technical analysis. I often encounter people who believe in technical analysis; when I tell them I have tested most of the quantifiable indicators and found that they don't work, and that the unquantifiable features they imagine to be predictive appear in randomly-generated price series, they still say "at least it tells you something." No, it doesn't. It doesn't tell you anything at all. There is no point whatsoever in adding non-predictive inputs to a process that is supposed to be predictive. My point above about using a random-number generator was quite serious. If something does not have an edge, you do not gain an edge by adding it to your process -- you might as well add a random element.

    "What has this to do with academic macroeconomics? Perhaps not much for the macroeconomist who builds microfounded DSGE models and is not that involved in current policy."

    This is a very dangerous way of thinking. Every person who uses an inappropriate model to generate false beliefs, or who does not have a good grasp of the justification for, and limits of, a model he is building is a danger to society. The only way we advance is to win as many people as possible over to true beliefs, and every person who is not won over is a potential soldier on the side of falsehood. No economist should see himself as a humble DSGE mechanic; every economist should take personal responsibility for the economic project as a whole.

    Otherwise, what will you be? Just another self-sustaining academic discipline (like literary criticism or the daft attempt to render ordinary language into formal logic that passes for modern philosophy) that everyone else regards with a mixture of amusement and contempt.

  2. But, of course, incongruence with data could also, in a more Popperian way, be interpreted as the models not only being INCOMPLETE, but outright WRONG.

    That was also what Nobel laureate Robert Solow basically told us already back in 2008 - in “The State of Macroeconomics” (Journal of Economic Perspectives 2008:243-249):

    "[When modern macroeconomists] speak of macroeconomics as being firmly grounded in economic theory, we know what they mean … They mean a macroeconomics that is deduced from a model in which a single immortal consumer-worker-owner maximizes a perfectly conventional time-additive utility function over an infinite horizon, under perfect foresight or rational expectations, and in an institutional and technological environment that favors universal price-taking behavior …

    No one would be driven to accept this story because of its obvious “rightness”. After all, a modern economy is populated by consumers, workers, pensioners, owners, managers, investors, entrepreneurs, bankers, and others, with different and sometimes conflicting desires, information, expectations, capacities, beliefs, and rules of behavior … To ignore all this in principle does not seem to qualify as mere abstraction – that is setting aside inessential details. It seems more like the arbitrary suppression of clues merely because they are inconvenient for cherished preconceptions …

    Friends have reminded me that much effort of ‘modern macro’ goes into the incorporation of important deviations from the Panglossian assumptions … [But] a story loses legitimacy and credibility when it is spliced to a simple, extreme, and on the face of it, irrelevant special case. This is the core of my objection: adding some realistic frictions does not make it any more plausible than an observed economy is acting out the desires of a single, consistent, forward-looking intelligence …

    It seems to me, therefore, that the claim that ‘modern macro’ somehow has the special virtue of following the principles of economic theory is tendentious and misleading … The other possible defense of modern macro is that, however special it may seem, it is justified empirically. This strikes me as a delusion …

    So I am left with a puzzle, or even a challenge. What accounts for the ability of ‘modern macro’ to win hearts and minds among bright and enterprising academic economists? … There has always been a purist streak in economics that wants everything to follow neatly from greed, rationality, and equilibrium, with no ifs, ands, or buts … The theory is neat, learnable, not terribly difficult, but just technical enough to feel like ‘science’. Moreover it is practically guaranteed to give laissez-faire-type advice, which happens to fit nicely with the general turn to the political right that began in the 1970s and may or may not be coming to an end."

    Having seen what these microfounded models have contributed in the way of understanding, explaining and thwarting the latest financial and economic crisis, it's hard not to agree with Solow in his condemnation of microfoundations.

    Replies
    1. Prof. Syll,

      If I understood you correctly, there is a circular component in your argument. You see, let's suppose we completely abandon the effort of microfounding macro models. We will have in our hands a set of models that will work in some real world situations and will not work in others. Do you agree?
      The problem is that whenever this is established, people will start studying why it happens, which means that they will make an attempt to understand why economic agents behave this way in this situation and that way in that situation.
      In this context, it seems likely that micro elements such as representative agents maximizing utility will appear as a part of the "hard core" knowledge. Which means that we will then be facing the very same problems we are facing now, without a solution at hand.
      Did I get you wrong?

    2. Eduardo,

      There actually are many reasons why new classical, real business cycle, dynamic stochastic general equilibrium and New Keynesian micro-founded macromodels are such bad substitutes for real macroeconomic analysis. And I don't think it has anything to do with "circularity".

      Contrary to the view you seem to hold, I'm strongly critical of the way these models try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent. That is, with something that has absolutely nothing to do with reality. And – worse still – something that is not even amenable to the kind of general equilibrium analysis that they are thought to give a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gerard Debreu (1974) unequivocally showed that there did not exist any condition by which assumptions on individuals would guarantee either stability or uniqueness of the equilibrium solution.

      Opting for cloned representative agents that are all identical is of course not a real solution to the fallacy of composition that the Sonnenschein-Mantel-Debreu theorem points to. Representative agent models are rather an evasion whereby issues of distribution, coordination, heterogeneity – everything that really defines macroeconomics – are swept under the rug.

      So, until microfounders have been able to tell me how they have coped with – not evaded – Sonnenschein-Mantel-Debreu, I can't see their approach as even theoretically consistent.

      Of course, most macroeconomists know that to use a representative agent is a flagrantly illegitimate method of ignoring real aggregation issues. They keep on with their business, nevertheless, just because it significantly simplifies what they are doing. It reminds one – not a little – of the drunkard who has lost his keys in some dark place and deliberately chooses to look for them under a neighbouring street light just because it is easier to see there!

      (For anyone interested in what a real microeconomist thinks of microfoundations, I recommend Alan Kirman's "Whom or What Does the Representative Individual Represent?" in Journal of Economic Perspectives 1992, pp. 117-136.)

  3. Part of the problem you address is institutional. For some reason the (macro-) economics profession lacks a properly institutionalized "applied research" or "policy" branch with its own goals, own rules of conduct and own professional pride.

    Today, every macroeconomist is supposed to be a "physicist", nobody an "engineer". Every macroeconomic question is supposed to be addressed from a "micro-founded" fundamental research perspective. But you do not ask a physicist to build a bridge or a ship or a car. You ask an engineer. Because building bridges, ships or cars requires different skills, training and experience than those the typical physicist has. In particular, it requires a broad "consumer's knowledge" (David Colander) of physics - which is what we require an engineer to have.

    Central banking is the macroeconomics equivalent of building bridges, ships or cars. It is the prototype "applied research" activity in macro. The ultimate goal of central bank macroeconomists must be to build models that are useful for controlling inflation and for assessing financial stability. Models that do not fit the data properly are not up to that task.

    Pure or fundamental research in macro is, of course, important. And I agree with your earlier posts that it has brought the field forward over the last three decades - although the advancement is probably smaller than the current mainstream is ready to accept. From a division of labour perspective, however, there is no reason why central bank staff should pursue this activity. Why not let them concentrate on the engineering questions while academics proper tackle the "pure" problems? But to have this division of labour, the profession would need to accept that there is an "applied" branch of macro with its own right to exist.

    Replies
    1. You also don't ask a physicist to answer closed-system questions from an open-system-founded perspective.

  4. William Peterson, 27 March 2012 at 18:30

    The description of the core/periphery approach leaves one unresolved puzzle. Suppose we have two distinct expenditure components of GDP, each with a 'core' equation which is micro-founded and a distinct 'periphery' equation which is data-coherent (i.e. errors which are not serially correlated and are orthogonal to other data variables). Presumably (since they are part of a consistent core model) the core predictions satisfy the relevant accounting identity. But there is no guarantee that the 'periphery' predictions will do so: and since they are post-recursive to the core model I don't see how they can incorporate the feedbacks which would ensure this.
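    (A toy numerical sketch of this point in Python, with entirely made-up series: two core equations satisfy Y = C + I by construction, while two separately fitted periphery regressions do not.)

    ```python
    # Toy illustration with made-up series: the core predictions satisfy Y = C + I
    # by construction, but two separately fitted periphery equations need not, so
    # their fitted values do not add up to GDP period by period.
    import numpy as np

    rng = np.random.default_rng(1)
    T = 200
    income = np.cumsum(rng.normal(0.2, 1.0, T)) + 100.0
    unemp = 6.0 + rng.normal(0.0, 0.5, T)

    C = 0.7 * income - 1.0 * unemp + rng.normal(0.0, 0.5, T)   # "actual" consumption
    I = 0.3 * income - 0.5 * unemp + rng.normal(0.0, 0.5, T)   # "actual" investment
    Y = C + I                                                  # identity holds in the data

    C_core = 0.7 * income - 6.0    # core predictions: the identity
    I_core = 0.3 * income - 3.0    # is imposed among them exactly
    Y_core = C_core + I_core

    def periphery_fit(actual, core_pred, extra):
        # each periphery equation fitted on its own, with no cross-equation constraint
        X = np.column_stack([np.ones(len(actual)), core_pred, extra])
        beta, *_ = np.linalg.lstsq(X, actual, rcond=None)
        return X @ beta

    C_per = periphery_fit(C, C_core, unemp)
    I_per = periphery_fit(I, I_core, unemp)

    print("core identity gap:     ", np.max(np.abs(Y_core - (C_core + I_core))))  # exactly zero
    print("periphery identity gap:", np.mean(np.abs((C_per + I_per) - Y)))        # not zero
    ```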

    On another point (replying to Carsten-Patrick Meier) I think the reason why macroeconomists want to be physicists rather than engineers is that micro-founded DSGE modelling is an internationally accepted intellectual activity, whereas building empirically relevant applied models is by its nature a more parochial activity. Hence it is unlikely to get published in top-rank journals.

  5. Strikes me as a way of creating policy to try and alter the world to fit the model - regardless of the effects that has on real people.

    The Aztecs ended up with human sacrifices trying to please their 'Gods' with that thought process.

    Replies
    1. Pretty humane compared to what we do, isn't it?

  6. As far as I'm concerned

    * We do have this fully estimated, consistent, stock/flow model of the macro economy called the National Accounts. DSGE models are a very far cry from the empirical wealth and internal consistency of this glorious system. That's modern, scientific economics.
    * The idea that a whole sector like households can be modelled using a 'well behaved' indifference curve seems, however, to be quite wrong, as for one reason the Arrow paradox applies to this endeavour. Even if (which isn't the case) the behaviour of individual households could be described by such a curve, you can't add them up without running into inconsistencies. You might state that it's an 'emergent property' of the household sector. But can somebody finally try to explain to me how to estimate this 'emergent utility', which, at best (!), only has a nominal and not even an ordinal scale?

    A real science uses, through a process of trial and error, sound concepts which are the basis of practical definitions of variables, which serve as the foundation of operationalisations, which are used to enable measurement. That's how, for instance, National Accounting works. Indifference curves (and yes, I've read Say, Becker, Arrow, Samuelson, Lucas, Varian and textbooks) lack a proper concept (Samuelson is right on that, read his Nobel Prize lecture and his seminal 1937 article), do not have a proper definition (stating that they can be described by a Cobb-Douglas function is not a proper definition) and proper operationalisations are lacking (how do you define 'one Bentham of utility'?). The guy to read is Viktor Lamme, a Dutch neurologist who uses MRI scans to estimate how people, after the choice (!), use a certain well identified part of the prefrontal neocortex he calls the chatterbox to tell themselves that their inconsistent choices were consistent after all, even if they weren't. The indifference curve clearly is a construct of this chatterbox - and not an empirical phenomenon, be it on the individual or on the aggregate level.

    Merijn Knibbe

  7. I think business cycle accounting (BCA) of Chari et al. and its extension by Harrison (from the BOE) and Caldara (now at the Fed), which links the wedges in BCA to other variables that are currently part of the non-core, is the best reconciliation of these tensions. The wedges can be linked to more detailed models of credit constraints, imperfect competition, imperfect insurance against idiosyncratic risk and even excessive optimism/pessimism. The problem is that we are currently incapable of writing down and solving a model that simultaneously contains realistic search and matching frictions in product and labour markets, imperfect insurance, credit constraints and imperfect information. As long as this remains the case, we must accept that shocks in our models are really proxies for other missing channels to which agents in the model should somehow react (unlike in the core/non-core approach, where we ignore the interactions).

    P.S.: while these ideas were initially applied to RBC models, they are perfectly applicable in interpreting the meaning of shocks in something like the Smets/Wouters model and its central bank offspring.

    P.S. no. 2: I'm sick and tired of hearing criticisms of representative agent modelling when the shocks in rep agent models can often be linked to mechanisms in heterogeneous agent DSGE models that are missing other elements, and when the amount of heterogeneity in your run-of-the-mill Cowles Commission style model is minimal at best (and I don't see any sign of heterogeneity in VARs and their CVAR and VECM cousins, except perhaps in the imagination of some of their users). A real science eventually must deal with the possibility that the most important things may not be observable or measurable except very imperfectly – hence the importance of state space models and filtering (for the record, I think economics and the other social "sciences" are more like a more coherent and systematic analysis of the issues – not sure the definition of scientific method from physics would ever apply...). As for the links between utility theory and neurology, I thought the go-to guy was Glimcher at NYU, who actually has a sketch of how the brain encodes information for decision making. I'm sure we'd all love to use that stuff in our models, but it may take some decades until we get something in a form that can be smoothly integrated into a macroeconomic model (again, simply writing down some behavioural relations in the old Cowles Commission style, or a VAR, wouldn't capture any of this anyway). And now to some brainless late night TV.
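    (A minimal Python sketch of that last point about state space models and filtering, with made-up parameters and no connection to any particular model: a latent wedge is never observed directly, but a Kalman filter recovers a much better estimate of it than the raw observations.)

    ```python
    # Minimal state-space sketch (made-up parameters, no connection to any actual
    # central bank model): a latent "wedge" mu_t is never observed directly, only
    # a noisy series y_t = mu_t + noise, and a Kalman filter recovers it far more
    # accurately than the raw observations do.
    import numpy as np

    rng = np.random.default_rng(2)
    T = 300
    q, r = 0.1, 1.0                     # state and observation noise variances

    mu = np.cumsum(rng.normal(0.0, np.sqrt(q), T))   # random-walk latent state
    y = mu + rng.normal(0.0, np.sqrt(r), T)          # noisy observations

    # Kalman filter for the local-level model.
    mu_hat = np.zeros(T)
    m, P = 0.0, 10.0                    # initial state mean and (diffuse-ish) variance
    for t in range(T):
        P_pred = P + q                  # predict
        K = P_pred / (P_pred + r)       # Kalman gain
        m = m + K * (y[t] - m)          # update with observation y[t]
        P = (1.0 - K) * P_pred
        mu_hat[t] = m

    print("RMSE, raw observations:  ", np.sqrt(np.mean((y - mu) ** 2)))
    print("RMSE, filtered estimate: ", np.sqrt(np.mean((mu_hat - mu) ** 2)))
    ```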

  8. This comment has been removed by a blog administrator.

  9. I note that you provide neither evidence nor reasoning in support of the claim that "Microfounded model building is a very important and useful thing to do." Notably, the core-periphery hybrid model is vulnerable to the Lucas critique. What does the core model add which is useful? What exactly is added by writing down a model which is not confronted with the data? Does the core-periphery model fit the data better than a model which is not based on adding inconsistent equations to a logically consistent core?

    You note someone's (King's, I assume) determination to include a logically consistent core, then add inconsistent equations before making predictions or guiding policy. As presented, this seems to be a matter of intellectual fashion. Evidence doesn't appear in that stage of your story (as it does in the stage of adding ad hoc corrections to the core model).

    I don't know where to put this, but Mankiw's use of the word "scientists" has no connection with practice in the natural sciences, in which, historically, theories bow to facts. Can he explain why he didn't write "mathematicians"?

    Finally, what basis is there for the claim that the "better journals" are better than other journals? They are presented as journals publishing social science, but empirical success is not required. You note correctly that they have huge status in the field, but do not address the question of whether this has anything to do with science, reality, understanding, insight or intellectual progress.

    Oh, I am getting rude as usual. So, before hitting publish, I want to thank you, on behalf of at least UK residents, for not sticking with an elegant model which does not fit the data. As I understand this post, you et al. rendered a false but fashionable model harmless. This made the UK a better place and I applaud your intellectual courage.

