Tuesday, 15 October 2013

Microfoundations and Macro Wars

There have been two strands of reaction to my last post. One has been to interpret it as yet another salvo in the macro wars. The second has been to deny there is an issue here: to quote Tony Yates: “The pragmatic microfounders and empirical macro people have won out entirely”. If people are confused, perhaps some remarks by way of clarification might be helpful.

There are potentially three different debates going on here. The first is the familiar Keynesian/anti-Keynesian debate. The second is whether ‘proper’ policy analysis has to be done with microfounded models, or whether more eclectic (and data-based) aggregate models, like IS-LM, also have an important role to play in policy analysis. The third is about how far microfoundation modellers should be allowed to go in incorporating non-microfounded (or maybe behavioural) relationships in their models.

Although all three debates are important in their own right, in this post I want to explore the extent to which they are linked. But I want to say at the outset what, in my view, is not up for debate among mainstream macroeconomists: microfounded macromodels are likely to remain the mainstay of academic macro analysis for the foreseeable future. Many macroeconomists outside the mainstream, and some other economists, might wish it otherwise, but I think they are wrong to do so. DSGE models really do tell us a lot of interesting and important things.

For those who are not economists, let’s be clear what the microfoundations project in macro is all about. The idea is that a macro model should be built up from a formal analysis of the behaviour of individual agents in a consistent way. There may be just a single representative agent, or, increasingly, heterogeneous agents. So a typical journal paper in macro nowadays will involve lots of optimisation by individual agents as a way of deriving aggregate relationships.
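
To give non-economists a flavour of what that involves, here is a minimal textbook-style sketch (the functional forms and notation are illustrative assumptions, not taken from any particular paper discussed here): a representative consumer chooses consumption to maximise expected lifetime utility subject to a budget constraint,

```latex
\max_{\{C_t\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \, \frac{C_t^{1-\sigma}}{1-\sigma}
\quad \text{s.t.} \quad A_{t+1} = (1+r)\,(A_t + Y_t - C_t),
```

and the first-order condition delivers the consumption Euler equation

```latex
C_t^{-\sigma} = \beta\,(1+r)\,\mathbb{E}_t\big[\,C_{t+1}^{-\sigma}\,\big],
```

which the model then uses as its aggregate consumption relationship.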

Compare this to two alternative ways of ‘doing macro’. The first goes to the other extreme: choose a bunch of macro variables, and just look at the historic relationship between them (a VAR). This uses minimal theory, and the focus is all about the past empirical interaction between macro aggregates. The second would sit in between the two. It might start off with aggregate macro relationships, and justify them with some eclectic mix of theory and empirics. You can think of IS-LM as an example of this third way of doing macro. In reality there is probably a spectrum of alternatives here, with different mixes of theoretical consistency and consistency with the data (see this post).
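
For readers who want to see what the first alternative amounts to in practice, here is a minimal sketch (my own illustration: the variable names, lag choice and synthetic data are assumptions, not anything from the post) of fitting a reduced-form VAR with Python’s statsmodels:

```python
# A reduced-form VAR: minimal theory, just the historical relationships
# between a handful of macro aggregates.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)

# Synthetic stand-in data; in practice these would be observed series.
data = pd.DataFrame(
    rng.normal(size=(200, 3)),
    columns=["output_growth", "inflation", "interest_rate"],
)

results = VAR(data).fit(maxlags=4, ic="aic")  # lag length chosen by AIC
print(results.summary())

# Impulse responses: how each variable has historically moved after an
# innovation in another, with no structural story attached.
irf = results.irf(10)
irf.plot()
```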

In the 1960s and 1970s, a good deal of macro analysis in journals was of this third type. The trouble with this approach, as New Classical economists demonstrated, was that the theoretical rationale behind equations often turned out to be inadequate and inconsistent. The Lucas critique is the most widely quoted example of where this happens. So the microfoundations project said: let’s do the theory properly and rigorously, so we do not make these kinds of errors. In fact, let’s make theoretical (‘internal’) consistency the overriding aim, such that anything which fails on these grounds is rejected. There were two practical costs of this approach. First, doing this was hard, so for a time many real world complexities had to be set aside (like the importance of banks in rationing credit, for example, or the reluctance of firms to cut nominal wages). This led to a second cost, which was that less notice was taken of how well each aggregate macro relationship tracked the data (‘external’ consistency). To use a jargon phrase that sums it up quite well: internal rather than external consistency became the test of admissibility for these models.

The microfoundations project was so successful that it became generally accepted among most academics that all policy analysis should be done with microfounded models. However, I think macroeconomists are divided about how strict to be about microfoundations: this is the distinction between purists and pragmatists that I made here. Should every part of a model be microfounded, or are we allowed a bit of discretion occasionally? Plenty of ‘pragmatic’ papers exist, so just referencing a few tells us very little. Tony Yates thinks the pragmatists have won, and I think David Andolfatto, in a comment on my post, agrees. I would like to think they are right, but my own experience talking to other macroeconomists suggests they are not.

But let’s just explore what it might mean if they were right. Macroeconomists would be quite happy incorporating non-microfounded elements into their models when strong empirical evidence appeared to warrant this. Referees would not be concerned. But there is no logical reason to include only one non-microfounded element at a time: why not allow more than one aggregate equation to be data- rather than theory-based? In that case, ‘very pragmatic’ microfounded models could begin to look like the aggregate models of the past, which used a combination of theory and empirical evidence to justify particular equations.

I would have no problem with this, as I have argued that these more eclectic aggregate models have an important role to play alongside more traditional DSGE models in policy analysis, particularly in policy making institutions that require flexible and robust tools. Paul Krugman is fond of suggesting that IS-LM type models are more useful than microfounded models, with the latter being a check on the former, so I guess he wouldn’t worry about this either. But others do seem to want to argue that IS-LM type models should have no place in ‘proper’ policy analysis, at least in the pages of academic journals. If you take this view but want to be a microfoundations pragmatist, just where do you draw the line on pragmatism?

I have deliberately avoided mentioning the K word so far. This is because I think it is possible to imagine a world where Keynesian economics had not been invented, but where debates over microfoundations would still take place. For example, Heathcote et al. talk about ‘modelling what you can microfound’ versus ‘modelling what you can see’ in relation to the incompleteness of asset markets, and I think this is a very similar purist/pragmatist microfoundations debate, but one with no direct connection to sticky prices.

However, in the real world, where thankfully Keynesian economics does exist, I think it becomes problematic to be both a New Keynesian and a microfoundations purist. First, there is Paul Krugman’s basic point. Before New Keynesian theory, New Classical economists argued that because sticky wages and prices were not microfounded, they should not be in our models. (Some who are unconvinced by New Keynesian ideas still make that case.) Were they right at the time? I think a microfoundations purist would have to say yes, which is problematic because it seems an absurd position for a Keynesian to take. Second, in this paper I argued that the microfoundations project, in embracing sticky prices, actually had to make an important methodological compromise, one which a microfoundations purist should worry about. I think Chari, Kehoe and McGrattan are making similar kinds of points. Yet my own paper arose out of talking to New Keynesian economists who appeared to take a purist position, which was why I wrote it.

It is clear what the attraction of microfoundations purity was to those who wanted to banish Keynesian theory in the 1970s and 1980s. The argument of those who championed rational expectations and intertemporal consumption theory should have been: your existing [Keynesian] theory is full of holes, and you really need to do better - here are some ideas that might help, and let’s see how you get on. Instead for many it was: your theory is irredeemable, and the problems you are trying to explain (and alleviate) are not really problems at all. In taking that kind of position, it is quite helpful to follow a methodology where you get rather a lot of choice over which empirical facts you try to be consistent with.

So it is clear why the microfoundations debate is mixed up with the debate over Keynesian economics. It also seems clear to me that the microfoundations approach did reveal serious problems with the Keynesian analysis that had gone before, and that the New Keynesian analysis that has emerged as a result of the microfoundations project is a lot better for it. We now understand more about the dynamics of inflation and business cycles and so monetary policy is better. This shows that the microfoundations project is progressive.

But just because a methodology is progressive does not imply that it is the only proper way to proceed. When I wrote that focusing on microfoundations can distort the way macroeconomists think, I was talking about myself as much as anyone else. I feel I spend too much time thinking about microfoundations tricks, and give insufficient attention to empirical evidence that should have much more influence on modelling choices. I don’t think I can just blame anti-Keynesians for this: I would argue New Keynesians also need to be more pragmatic about what they do, and more tolerant of other ways of building macromodels.  


11 comments:

  1. Sir, I read your analysis with interest. It is something a lot of people have been asking about; I remember this subject coming up as far back as the 1980s. Strange alliances seem to be the result, as both Keynes and Friedman seem always to have maintained that what is important is forecasting accuracy - partial equilibrium models are fine for this purpose - while the New Classicals (Monetarism II) argue that what is important is consistency and micro-foundations. Personally I think models should in any case only be a reference point. They should not take up 80 per cent of the analysis in a paper, with perfunctory regard for evidence or historical and other context. If you are trying to find an Einsteinian equation that explains the universe you should be in a maths, physics, or philosophy department, not an economics one. Economics is about finding policy solutions - to extreme poverty, for example, which affects a large proportion of the world's population. Finding policy solutions is not a long-term abstract goal; it is urgent. This is the business of the social sciences. If this is not "pure" or "rigorous" or "formal" or "sophisticated" enough for you, you should be somewhere else. A model that works and can be usefully applied to policy should be the ultimate test in this business. How this discipline got hijacked by this crowd (I think it actually started around 1950 with the application of optimisation techniques) would be an interesting investigation - for political philosophers.

  2. When you talk of the 'Keynesian' position on inflation up to the 1970s, that is really the Irving Fisher money illusion position that Keynes adopted (and Akerlof synthesised the stagflation and the money illusion positions in 2000).

    I know that Shiller has pushed his 'baskets' idea for decades (computerized money index-linked to inflation), and it rarely gets a mention in trying to fuse this divide, much to his annoyance.

    I think uber-rationality models (Savage) came as much from the attempt to justify why the monopoly economies of the Soviet era persisted - trying to use rationality on the Holocaust was not part of this new generation's history (Michael Mann uses Weber's 'value rationality' in his Dark Side of Democracy, 2005, to get some explanation of Nazi genocide, in the sense that 'rationality' can be a collective monomania).

  3. Extremist positions (e.g. RBC or VAR) are always much easier to defend than more moderate ones, which is why you generally encounter a lot of extremism in, e.g., academia.

    E.g. almost everyone talking politics in the dorm rooms is an extremist. Yet very few people actually are extremists; instead almost everyone seems to be a very conservative social liberal - but that is a very hard position to defend, so they generally don't enter the discussion.

  4. Great stuff. Very clearly set out.

    I think it is questionable whether microfounded models can ever be usefully extended to encompass real world complexities that really matter. Even if techniques are developed to incorporate such things, more complexity often implies less generality and robustness.

    A far better approach is to combine the strengths of both styles - to use each as a check on the other. I often find it a highly illuminating process to think about why two different modelling styles are producing seemingly conflicting results.

  5. I have no problem with the idea of micro-foundations, even though the logical extension of the idea is absurd and misses the point. Models are supposed to be models, which are simplified representations. An ever more exact replica is in the end useless, impossible and circular, as it will just bring you back right where you are, wondering what will happen next.
    But clearly micro-foundations has a time and a place.
    What I would like to see is an interaction term between agents, and an interaction term between agents and expectations. One of the useful things economics can teach us is that 2+2 can equal 5. The Greeks knew this when they organized in a phalanx to beat armies with superior numbers. Agents interact and organize all the time, so their sum is greater than their parts. Agents do this through the organization of companies, soldiers still do this, and countries do this - we organize (it's called government...) so that the sum of all citizens is greater.
    Unfortunately, whether to make microfoundations tractable, or to support an ideological bent, or simply for lack of imagination, microfounders (on average...) have taken things backwards to a world where government is the problem and agents don't organize - they balkanize and achieve things through the market only.
    ughh - that ain't progress. It's always easier to deconstruct than to construct. Don't set too easy a task, guys. Tell us how to organize and interact to make great countries.
    Dan

  6. Simon,

    Have you read Wynne Godley and Marc Lavoie's 'Monetary Economics' (2007)?

    http://dl4a.org/uploads/pdf/Monetary+Economics+-+Lavoie+Godley.pdf

  7. Models usually develop in steps.
    Like here: from the very general to the more and more specific.

    They generally get more accurate in that process, though usually at the cost of becoming more and more complicated. Not only does it take more time to do things, but most people simply lose oversight of what they are doing, and in that process often make a lot of mistakes.

    You can approach things purely academically. If so, making mistakes is imho in general not bad: see it merely as a step on the road to wisdom. A bit like the Higgs boson: even if it turned out in the end to be wrong, it would still have been extremely useful, since it simply looks like a step that was necessary (in the whole process of things) to reach the right conclusion.
    If you have to work in the real world and base important decisions on your model, however, things are different. It also makes new approaches nearly always impossible to sell, btw.

    Seen from that angle, micro foundations in macro is just a step in that direction. One can imagine, or even expect, that a further breaking up of the different factors will in future start another similar process, and go another layer (or two) deeper: sectors of business, lifecycles of businesses, different social and economic classes of consumers, stuff like that.
    That would most likely make the tension between better results on one side, and greater time costs and loss of oversight on the other, even bigger.

    There is imho nothing fundamentally wrong with having a function behave differently depending on where you are on the trajectory. It is, however, simply too complicated (mathematically) for most people to grasp what is happening.
    With wages one could imagine that nominal zero is not the only point where the maths changes: zero real spending income is probably another, weaker, threshold, since one can imagine that employers face a lot of pressure when their employees face a drop in real spending power. People simply like to put the bill for, say, rising costs with somebody else first. And do not forget the legal side: you are simply not allowed to reduce wages unilaterally, and with strong labour protection there is no easy way out of that. Basically what you see in Spain now. A toy sketch of such a kinked rule is given below.
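
    To make that concrete, here is a purely illustrative toy rule (my own sketch: the function name, the resistance parameter and the thresholds are made-up assumptions, not from any model discussed in the post), in which the maths changes both where real pay starts to fall and at nominal zero:

```python
def nominal_wage_change(desired_real_change, inflation, resistance=0.5):
    """Toy kinked wage-adjustment rule (illustrative assumptions only).

    Two kinks: workers resist real pay cuts, and unilateral nominal
    wage cuts are blocked entirely (downward nominal wage rigidity).
    All arguments are fractional rates, e.g. 0.02 for 2%.
    """
    if desired_real_change >= 0:
        # no kink binds: the full desired change goes through
        return inflation + desired_real_change
    # real pay cut: only a fraction survives worker resistance
    resisted = inflation + resistance * desired_real_change
    # nominal floor: the nominal wage cannot be cut
    return max(resisted, 0.0)

# With 2% inflation, a desired 4% real cut is ground down to a nominal freeze:
print(nominal_wage_change(-0.04, 0.02))  # -> 0.0
```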

    Basically what you then do is use a formula instead of an assumption, where earlier it was a given (or an assumption, if you like).
    It is, however, good to realise the dynamics of this. Assumptions are often put in place because they are stable; if so, there is no use complicating things by getting into detail. Although doing so usually makes you look pretty clever in the eyes of many (no joke, btw).
    Another issue is that the more complex things get, the harder it is to simplify them again. People got their knowledge by going from step to step, and they have to move back from step to step as well. Going halfway (between two steps) back to make things easier simply does not work: most people lose the plot in that process, basically because they do not truly understand the maths (and so cannot play with it). They simply understand the trick and no more. Not even to mention getting programs adjusted.

    If decision-making at short notice is required, you often simply need simple stuff. Better for oversight, and for adjusting when there are last-minute changes in the facts.

    Also, input data can be so inaccurate that getting into detail is just a waste of time.

    Basically, though, a simple upfront check on whether the assumptions and formulae used are appropriate for the thing you want to look at would solve a lot of things. But again you are likely to bump into the problem that the large majority understands only the trick, and not really the underlying maths, or rather the formula itself.

  8. But for microfounded models to be useful, the foundations have to be correct. As Stiglitz and others have persuasively argued, they aren't!

  9. For the lay persons reading, I think it's important to try to convey WHY people want good microfoundations. It's not solely a desire for mathematical barriers to entry to the profession, or an urge to prove government intervention is always and everywhere a Bad Thing (though of course both motives exist).

    The Lucas Critique has force. If you are going to rely on past relations between aggregate variables as levers to try and alter the future then you have to understand WHY those past relations existed - otherwise you cannot predict what will actually happen when you pull a lever. That is because pulling the lever itself can give incentives to people to behave differently than they did in the past. You need to understand what drives that behaviour, and that's what "microfounded" means. That's the common explanation why both old Keynesian and monetarist approaches failed in the 1970s.
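
    A textbook illustration (my own sketch; the notation is an illustrative assumption, not the commenter's): take an expectations-augmented Phillips curve,

```latex
\pi_t = \pi^e_t + \alpha\,(u^* - u_t), \qquad \alpha > 0 .
```

    With backward-looking expectations (say \pi^e_t = \pi_{t-1}) the historical data show a stable trade-off between inflation and unemployment, and the trade-off looks exploitable. But once agents form \pi^e_t taking the policy rule itself into account, any systematic attempt to hold u_t below u^* simply raises expected inflation one-for-one, and the estimated relation between the aggregates breaks down the moment the lever is pulled.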

    I've tried to do justice to the microfounders here, but I'm actually less sympathetic than Simon. That's basically for the reason Anonymous, citing Stiglitz, says - there is no reason at all to think you CAN derive reality-fitting aggregate relationships from simple models of individual behaviour, however desirable that is. And in practice when this failure becomes apparent it has been the "reality-fitting" bit too many practitioners have discarded rather than the simple model.

  10. If you can't get to macro from micro-foundations, then you are simply wrong, but you have to get both your macro and micro right. The big debate about QED was that the mathematics suggests infinite sums, but the real world tells us the sums are finite. Renormalization was the path to reconciliation, but some, like Dirac, were never comfortable with the idea.

    Of course, right now, neither micro nor macro are particularly useful.

  11. I think you miss one point. There is a tendency not to look at deductive and inductive processes in a complementary way. VAR inference is useful when you want to test theoretical predictions made in a deductive way (purist or not). The deductive way is important in order to better understand the sources of the dynamics involved.

