In a recent post I suggested one microfoundations-based argument for what Blanchard and Fischer call useful models, and I call aggregate models. Both Mark Thoma and Paul Krugman picked up on it, and I want to respond to both. While this will mostly be of interest to economists, I have not tagged this ‘for economists’ because I know from comments that some non-economists who think about the philosophy of (social) science find this interesting. If you are not in either category, this post is probably not for you.
Paul Krugman first. He makes a number of relevant points, but the bit I like best in his post is where he says “Wren-Lewis is on my side, sort of”. I really like the ‘sort of’. Let me say why.
In one sense I am on his side. I do not believe that the one and only way to think about macroeconomics is to analyse microfounded macromodels. I think too many macroeconomists today think this is the only proper way to do analysis, and this leads to a certain microfoundations fetishism which can be unhelpful. Aggregate models without microfoundations attached can be useful. On the other hand I really do not want to take sides on this issue. Most of the work I have done in the last decade has involved building and analysing microfounded macromodels, and I’ve done this because I think it is a very useful thing to do. Taking sides could too easily degenerate into a ‘for and against’ microfoundations debate – in such a debate I would be on both sides. I certainly do not agree with this: “So as I see it, the whole microfoundations crusade is based on one predictive success some 35 years ago; there have been no significant payoffs since.” The justification for aggregate models that I gave in my previous post was deliberately foursquare within the microfoundations methodology because I wanted to convince, not antagonise. So ‘sort of’ suits me just fine.
What I am against is what I have called elsewhere the ‘microfoundations purist’ position. This is the view that if some macroeconomic behaviour does not have clear microfoundations, then any respectable academic macroeconomist cannot include it as part of a macromodel. Why do I think this is wrong? This brings me to Mark Thoma, who linked my piece with one he had written earlier on New Old Keynesians. Part of that piece describes why economists might, at least temporarily, forsake microfounded models in favour of a ‘useful’ (to use Blanchard and Fischer’s terminology) model from the past. To quote:
“The reason that many of us looked backward for a model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The models were largely constructed to analyze policy in the context of a Great Moderation....”
“So, if nothing in the present is adequate, you begin to look to the past. The Keynesian model was constructed to look at exactly the kinds of questions we needed to answer, and as long as you are aware of the limitations of this framework - the ones that modern theory has discovered - it does provide you with a means of thinking about how economies operate when they are running at less than full employment. This model had already worried about fiscal policy at the zero interest rate bound, it had already thought about Say's law, the paradox of thrift, monetary versus fiscal policy, changing interest and investment elasticities in a crisis, etc., etc., etc. We were in the middle of a crisis and didn't have time to wait for new theory to be developed, we needed answers, answers that the elegant models that had been constructed over the last few decades simply could not provide.”
I think this identifies a second reason why an aggregate model – a model without explicit microfoundations – might be preferred to microfounded alternatives, which Paul Krugman also covers in his point (3). This has to do with the speed at which microfoundations macro develops.
Developing new microfounded macromodels is hard. It is hard because these models need to be internally consistent. If we think that, say, consumption in the real world shows more inertia than in the baseline intertemporal model, we cannot just add some lags into the aggregate consumption function. Instead we need to think about what microeconomic phenomena might generate that inertia. We need to rework all relevant optimisation problems adding in this new ingredient. Many other aggregate relationships besides the consumption function could change as a result. When we do this, we might find that although our new idea does the trick for consumption, it leads to implausible behaviour elsewhere, and so we need to go back to the drawing board. This internal consistency criterion is partly what gives these models their strength.
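To make this concrete with a purely illustrative sketch (the notation is mine, not taken from any particular paper): suppose the proposed microfoundation for consumption inertia is habit formation, so the household's utility depends on consumption relative to its own past consumption.

```latex
% Household utility with internal habits (h = habit strength, 0 < h < 1):
\max_{\{c_t\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \, u(c_t - h\, c_{t-1})
% The shadow value of consumption now spans two periods:
\lambda_t = u'(c_t - h\, c_{t-1}) - \beta h \,\mathbb{E}_t\, u'(c_{t+1} - h\, c_t)
% so the Euler equation
\lambda_t = \beta (1 + r_t)\, \mathbb{E}_t\, \lambda_{t+1}
% makes lagged consumption matter for today's choice.
```

The same shadow value also appears in the household's labour-supply and asset-pricing conditions, so an ingredient added to fit consumption propagates everywhere: that is the reworking of all relevant optimisation problems described above, and why the new ingredient can produce implausible behaviour elsewhere in the model.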
It is very important to do all this, but it takes time. It takes even longer to convince others that this innovation makes sense. As a result, the development of microfounded macromodels is a slow affair. The most obvious example to me is New Keynesian theory. It took many years for macroeconomists to develop theories of price rigidity in which all agents maximised and expectations were rational, and still longer for them to convince each other that some of these theories were strong enough to provide a plausible basis for Keynesian type business cycles.
A more recent example, and one more directly relevant to Mark Thoma’s discussion, is the role of various financial market imperfections in generating the possibility of a financial crisis of the type we have recently experienced. There is a lot of important and fascinating work going on in this area: Stephen Williamson surveys some of it here. But it will take some time before we work out what matters and what does not. In the meantime, what do we do? How should policy respond today?
To answer those questions, we will have to fall back on models that contain elements that appear ad hoc, by which I mean that they do not as yet have clear and widely accepted microfoundations. Those models may contain elements discussed by past economists, like Keynes or Minsky, who worked at a time before the microfoundations project took hold. Now microfoundations purists would not (I would hope) go so far as to say that kind of ad hoc modelling should not be done. What they might well say is please keep it well away from the better economic journals. Do this ad hoc stuff in central banks, by all means, but keep it out of state of the art academic discourse. (I suspect this is why such models are sometimes called ‘policy models’.)
This microfoundations purist view is a mistake. It is a mistake because it confuses ‘currently has no clear microfoundations’ with ‘cannot ever be microfounded’. If you could prove the latter, then I would concede that – from a microfoundations perspective – you would not be interested in analysing this model. However our experience shows that postulated aggregate behaviour that does not have a generally accepted microeconomic explanation today may well have one tomorrow, when theoretical development has taken place. New Keynesian analysis is a case in point. Do the purists really want to suggest that, prior to 1990 say, no academic paper should have considered the implications of price stickiness?
So here, I would suggest, is a second argument for using aggregate (or useful, or ad hoc) models. Unlike my first, it allows these models not to have any clear microfoundations at present. Such analysis should be respected if there is empirical evidence supporting the ad hoc aggregate relationship, and if the implications of that relationship could be important. In these circumstances, it would be a mistake for academic analysis to have to wait for the microfoundations work to be done. (This idea is discussed in more detail in “Internal consistency, price rigidity and the microfoundations of macroeconomics”, Journal of Economic Methodology (2011), Vol. 18, pp. 129-146; earlier version here.)
As a visitor from Mars, may I raise the big problem many outsiders feel about the microfoundations project? This is simply that the psychological assumptions made for individual behaviour are so untrue that the cognitive gain for the macro model is negative. Friedman and Keynes were after all working with simple Durkheimian models of collective behaviour - the consumption function, permanent income - which had the enormous merits of being both intuitively plausible and testable. They are therefore far superior to strange and untrue stories about perfectly informed and rational agents.
Now I'm sure that things have been improving recently on the micro front, with Stiglitz's asymmetric information and Kahneman's cognitive biases. Have they improved enough, to the point where a kosher microfoundation is a selling point to those of us not heavily invested in the homo economicus myth?
I don't want to sound like a broken record and I do want to be at least 1% as polite to you as you have been to me, but you do not explain at all why you disagree with Krugman's claim that the one and only success of the advocates of microfoundations is the critique of the Phillips curve (a critique which can be found in "The General Theory ...", even though, writing more than two decades earlier, Keynes did not cite Phillips).
To go back where I managed to prevent myself from publicly going: when I asked for a case in which New Keynesian theory had developed an idea which is both empirically useful and not described by Keynes, you did not provide one. First you discussed ideas not embodied in "old Keynesian" models. Those models, being models, involved radical simplifications. They did not incorporate the vast majority of Keynes's thoughts. I think modelling is useful and applaud the old Keynesian effort to explain Keynes to a computer. But your answer did not address my question.
Second, you ended up not with a New Keynesian prediction which was confirmed but with a discussion of avenues for future research which you assert shall be fruitful and New Keynesian -- a prediction of future useful predictions, not an example of any empirical success in the past. One of the avenues -- considering how changes in the business strategies of banks have affected households' liquidity constraints -- has, you claimed, not been addressed at all with New Keynesian models. The other -- belief in a Thatcher boom -- did not prove (again, according to your assessment) to be easily reconciled with the rational expectations hypothesis.
I repeat my challenge.
What would you consider to be the archetypal "policy model"?
A few thoughts from an outsider who long ago learnt macro from Blanchard and Fischer, and found the talk of eclecticism in Ch 10 quite refreshing.
1. Would you say that the gas laws cannot exist because they are not the properties of a "representative molecule"? Or that the distinction between diamond and graphite is just a figment of the imagination, because in terms of the "representative carbon atom" they are identical? Obviously not. Yet this is an inevitable implication of the microfoundations project as launched by Lucas.
2. Even if you ignore the previous point, the fact remains that "In the aggregate, the hypothesis of rational behavior has in general no implications." (Kenneth Arrow) (Also Ch 17.E MWG).
How would you defend the representative consumer or the aggregate production function? These concepts are typically incoherent. Yet, you defend microfoundations on grounds of internal consistency.
3. The microfoundations project was all about trying to do macro in an Arrow-Debreu economy. There is no theoretical or empirical justification for this, as has been repeatedly pointed out by GE theorists.
"Although I never believed it when I was young and held scholars in great respect, it does seem to be the case that ideology plays a large role in economics. How else to explain Chicago’s acceptance of not only general equilibrium but a particularly simplified version of it as ‘true’ or as a good enough approximation to the truth? Or how to explain the belief that the only correct models are linear and that the von Neumann prices are those to which actual prices converge pretty smartly? This belief unites Chicago and the Classicals; both think that the ‘long-run’ is the appropriate period in which to carry out analysis. There is no empirical or theoretical proof of the correctness of this. But both camps want to make an ideological point. To my mind that is a pity since clearly it reduces the credibility of the subject and its practitioners."
[General Equilibrium: Problems and Prospects, edited by Fabio Petri and Frank Hahn]
4. And don't get me started on the Lucas critique, whose relevance, which is clearly an empirical issue, is treated as a matter of faith.
What I am questioning is *not* the attempt to explain high level aggregates in terms of lower level concepts - which I agree is essential - but the insistence on doing macro in a modified Arrow-Debreu framework as the One True Way of doing macro. Or indeed, the very possibility of doing macro this way.
Interesting discussion Simon, I have put up something linking back to your earlier post on this.
Are you familiar with the following sources on this:
Hartley, James E. (1996), 'The Origins of the Representative Agent', Journal of Economic Perspectives, Vol. 10, No. 2 (Spring), pp. 169-177.
Hartley, James E. (1997), The Representative Agent in Macroeconomics, Routledge, Frontiers of Political Economy series.
Kirman, Alan P. (1992), 'Whom or What Does the Representative Individual Represent?', Journal of Economic Perspectives, Vol. 6, No. 2 (Spring), pp. 117-136.
Hartley's book especially is a good read and, I think, offers a very relevant critique. I will read your paper recently published in the Journal of Econ Methodology.
Might one add a few more references?
* Frank Fisher (MIT) on the aggregate production function and the CCC: Felipe, Jesus and Fisher, Franklin M. (2003), 'Aggregation in Production Functions: What Applied Economists Should Know', Metroeconomica, Vol. 54, pp. 208-262.
* Kenneth Arrow's Ely lecture (1994): 'Methodological Individualism and Social Knowledge'.
* A practitioner's critique of microfounded DSGEs: 'The Emperor Has No Clothes' by James Morley (Macroeconomic Advisers).
Thanks Herman, I will have a look at these.
I am very much enjoying your blog.
"I know from comments that some non-economists who think about the philosophy of (social) science find this interesting." I do not just find this kind of thing interesting -- I find it necessary. I am a macro trader and I need to know which economists to listen to. During the crisis I quickly worked out that Krugman was worth listening to, and that many people were not (I eventually came to the rule of thumb: views based on DSGE model => ignore; I was pleased to find out later that Larry Summers had a similar view). I used this knowledge to make money. Policy-makers could have used it to make better decisions, but they didn't seem to know enough about economics to judge (yes, George Osborne, I'm talking about you). A thoughtful explanation of the background to, and basis of, various economists' opinions is immensely valuable to traders and policy-makers alike. It allows us to choose our teachers wisely.
For what it is worth, your article sparked a thought of my own: from this article it seems to me that economists are very much thinking within a microfoundations paradigm, and that is because the Lucas critique has got them very worried about epistemology. Perhaps this is about incentives: if you are paid for intellectual consistency then that is what you will deliver. I am paid to make money in the markets; I am not interested in the most epistemically-justifiable theory, but in what it is most reasonable to think given that I have to have a view. I like to tell people that this means I use a confidence level of 50% for my opinions (partly as a justification for why they change so easily!). Perhaps macroeconomics would be more socially useful if more macroeconomists thought in a similar way.
I see three potential reasons to prefer models without empirical validity:
1. Basic research that is solely done to create the most elegant models possible, without worrying about being applied,
2. A financial incentive is provided by the Cato Institute and others to push an ideological agenda, and so the scholars are driven by the personal pursuit of wealth, and
3. Ayn Rand cultism and a fundamentalist quasi-religious belief in the power of the free market.
Based on your post, I guess you believe in reason 1, that these economists just care about creating internally consistent models, without the need for real-world applicability. But I have always been interested in the role of 2 and 3. What role do you think these play in explaining this behavior?
"It is hard because these models need to be internally consistent. If we think that, say, consumption in the real world shows more inertia than in the baseline intertemporal model, we cannot just add some lags into the aggregate consumption function. Instead we need to think about what microeconomic phenomena might generate that inertia. We need to rework all relevant optimisation problems adding in this new ingredient. Many other aggregate relationships besides the consumption function could change as a result. When we do this, we might find that although our new idea does the trick for consumption, it leads to implausible behaviour elsewhere, and so we need to go back to the drawing board. This internal consistency criterion is partly what gives these models their strength."
It is hard then, in part, because you are trying to fit a square peg into a round hole.
You're trying to fit perfect optimizing behavior of individuals ("internal consistency") to the behavior of an aggregate that did NOT, in fact, result from perfect optimizing behavior of individuals. It resulted from very imperfect optimization of very imperfect individuals, with very limited expertise, information, time for analysis, and self-discipline, to name a few.
"It took many years for macroeconomists to develop theories of price rigidity in which all agents maximised and expectations were rational…"
Again, square peg, round hole. It's very hard to find a model in which every single person maximizes perfectly and holds fully rational expectations and still get the type of aggregate behavior we see in the real world, because that aggregate behavior is not generated by individuals who all maximize perfectly with rational expectations -- not even close.
The microfoundations project, like the search for a cancer cure, is not to be discouraged. Meanwhile, why is micro-based macro prescription not accorded the same respect as cancer quackery?
There is a middle ground: dynamic modeling of interacting sectors with algorithmic representations of how the sectors interact. cf. Steve Keen. Much more similar to how weather modeling is done. They don't model every butterfly.
Why is there only one person, to my knowledge, who is doing this? We need multiple such models, as in weather prediction, so you can look at them all, hold up your thumb, and squint.
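The "interacting sectors" approach this comment describes goes back to Goodwin's growth cycle, which Keen builds on. As a purely illustrative sketch (the parameter values and functional forms below are my own invention, chosen only to produce a visible cycle, not calibrated to anything), two aggregate laws of motion are enough to generate endogenous fluctuations without any optimising agents:

```python
# Illustrative Goodwin-style growth cycle: two aggregate variables
# (employment rate v, wage share u) stepped forward with simple Euler
# integration. All parameter values are invented for illustration.

def goodwin(v0=0.9, u0=0.9, sigma=3.0, alpha=0.02, beta=0.01,
            gamma=0.5, rho=0.6, dt=0.01, T=50.0):
    """Return the simulated path [(v, u), ...] of the two aggregates."""
    v, u = v0, u0
    path = [(v, u)]
    for _ in range(int(T / dt)):
        # Employment grows when the profit share (1 - u) funds accumulation
        # faster than productivity growth (alpha) plus labour-force growth (beta).
        dv = v * ((1.0 - u) / sigma - alpha - beta)
        # The wage share grows when tight labour markets push real wages
        # (a linear Phillips curve in v) faster than productivity.
        du = u * (rho * v - gamma - alpha)
        v, u = v + dt * dv, u + dt * du
        path.append((v, u))
    return path

if __name__ == "__main__":
    vs = [p[0] for p in goodwin()]
    print(f"employment rate cycles between {min(vs):.3f} and {max(vs):.3f}")
```

Running it shows the employment rate and wage share chasing each other in a persistent predator-prey cycle; richer Keen-type models add a banking sector and private debt in the same algorithmic fashion, which is the sense in which they resemble weather models rather than optimisation problems.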
There are some more people like Keen out there.
Check this link: