

Tuesday, 13 March 2012

Microfoundations – is there an alternative?

                In previous posts I have given two arguments for looking at aggregate macroeconomic models without explicitly specifying their microfoundations. (I subsequently got distracted into defending microfoundations against attacks that I thought went too far – as I said here, I do not think seeing this as a two-sided debate is helpful.) In this post I want to examine a much more radical, and yet old-fashioned idea, which is that aggregate models could use relationships which are justified empirically rather than through microfoundations. This argument will mirror similar points made in an excellent post by Richard Serlin in the context of finance. Richard also reflected on my earlier posts here. For a very good summary and commentary on recent posts on this issue see the Bruegel blog.
                Before doing this, let me recap on the two previous arguments. The first was that an aggregate model might have a number of microfoundations, and so all that was required was a reference to at least one of those. Thanks to comments, I now know that a similar point was made by Ekkehart Schlicht in Isolation and Aggregation in Economics (1985), Berlin, Heidelberg: Springer Verlag.  (I said at the time that this seemed to me a fairly weak claim, but Noah Smith was not impressed, I think because he felt you should be able to figure out which microfoundation represents reality. Unfortunately I think reality is often too complex to be well represented by just one microfoundation – think of the many good reasons for price rigidity, for example. In these circumstances robustness is important.)
                The second is more controversial. Because developing microfoundations takes time, an aggregate relationship may not as yet have a clear microfoundation, but it might in the future. If there is strong empirical evidence for it now, academic research should investigate its implications. So, for example, there is some evidence for ‘inflation inertia’: the presence of lagged as well as expected inflation in a Phillips curve. The theoretical reasons (microfoundation) for this are not that clear, but it is both important and interesting to investigate what the macroeconomic consequences of inflation inertia might be.
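To fix ideas, the kind of ‘ad hoc’ element involved might be written as a hybrid Phillips curve (a sketch in my own notation, not taken from any particular paper):

\[
\pi_t = \gamma_b \pi_{t-1} + \gamma_f E_t \pi_{t+1} + \kappa y_t + u_t, \qquad \gamma_b > 0,
\]

where \(\pi_t\) is inflation, \(y_t\) the output gap and \(u_t\) a shock. The backward-looking term \(\gamma_b \pi_{t-1}\) is the inflation inertia: it has reasonable empirical support, but no single agreed microfoundation.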
                This second argument could justify a very limited departure from microfoundations. A macro model might be entirely microfounded except for this one ‘ad hoc’ element. I can think of a few papers in good journals that take this approach. I have also heard macroeconomists object to papers of this kind: to quote one ‘microfoundations must be respected’. It was reflecting on this that led me to use the term ‘microfoundations purist’.
Suppose we deny the microfoundations purist position, and agree that it is valid to explore ad hoc relationships within the context of an otherwise microfounded model. By valid, I mean that these papers should not automatically be disqualified from appearing in the top journals. If we take this position, then there seems to be no reason in principle why departures from microfoundations of this type should be so limited. Why not justify a large number of aggregate relationships using empirical evidence rather than microfoundations?
                This used to be done back in my youth. An aggregate model would be postulated relationship by relationship, and each equation would be justified by reference to both empirical and theoretical evidence in the literature. Let us call this an empirically based aggregate model. You do not find macroeconomic papers like this in the better journals nowadays. Even if papers like this were submitted, I suspect they would be rejected. Why has this style of macro analysis died out?
                I want to suggest two reasons, without implying that either is a sufficient justification. The first is that such models cannot claim to be internally consistent. Even if each aggregate relationship can be found in some theoretical paper in the literature, we have no reason to believe that these theoretical justifications are consistent with each other. The only way of ensuring consistency is to do the theory within the paper – as a microfounded model does. A second reason this style of modelling has disappeared is a loss of faith in time series econometrics. Sims (1980) argued that standard identification restrictions were ‘incredible’, and introduced us to the VAR. (For an earlier attempt of mine to apply a similar argument to the demise of what used to be called Structural Econometric Models, see here.)
                In some ways I think this second attack was more damaging, because it undercut the obvious methodological defence of empirically based aggregate models. It is tempting to link microfounded models and empirically based aggregate models with two methodological approaches: a deductivist approach that Hausman ascribes to microeconomics, and a more inductive approach that Marc Blaug has advocated. Those familiar with these terms can skip the next two paragraphs.
                Microeconomics is built up in a deductive manner from a small number of basic axioms of human behaviour. How these axioms are validated is controversial, as are the implications when they are rejected. Many economists act as if they are self evident. We build up theory by adding certain primitives to these axioms (e.g. in trade, that there exist transport costs), and exploring their consequences. This body of theory will explain many features of the world, but not all. Those it does not explain are defined as puzzles. Puzzles are challenges for future theoretical work, but they are rarely enough to reject the existing body of theory. Under this methodology, the internal consistency of the model is all important.
                An inductivist methodology is generally associated with Karl Popper. Here incompatibility with empirical evidence is fatal for a theory. Evidence can never prove a theory to be true (the ‘problem of induction’), but it can disprove it. Seeing one black swan disproves the theory that all swans are white, but seeing many white swans does nothing to prove the theory. This methodology was important in influencing the LSE econometric school, associated particularly with David Hendry. (Adrian Pagan has a nice comparative account.) Here evidence, which we can call external consistency, is all important.
I think the deductivist methodology fits for microfounded models. Internal consistency is the solid rock on which microfounded macromodels stand. That does not of course make it immune from criticism, but its practitioners know where they stand. There are clear rules by which their activities can be judged. To use a term due I think to Lakatos, the microfoundations research programme has a well defined positive heuristic. Microfoundations researchers know what they are doing, and it does bring positive results.
The trouble with applying an inductivist methodology to empirically based aggregate macromodels is that the rock of external consistency looks more like sand. Evidence in macroeconomics is hardly ever of the black swan type, where one observation/regression is enough to disprove a theory. Philosophers of science have queried the validity of the Popperian ideal even in the context of the physical sciences, and these difficulties become much more acute in something as messy as macro.
So I end with a whole set of questions. Is it possible to construct a clear methodology for empirically based aggregate models in macro? If not, does this matter? If there is no correct methodology (we cannot have both complete internal and external consistency at the same time), should good models in fact be eclectic from a methodological point of view? Does the methodological clarity of microfounded macro help explain its total dominance in academia today, or are there other explanations? If this dominance is not healthy, how does it change?

Sunday, 11 March 2012

Rational Expectations and Phillips Curves

              Two small points following up on my previous post on microfoundations.

1) Adopting rational expectations as the default expectations model has never meant (for me at least) ignoring the possibility of non-random expectations errors. As Lars Syll points out, the informational demands of rational expectations are very strong. However, we need to model expectations by some means. What rational expectations allows you to do is think about expectations errors in a structural way. We can think about deviations from rational expectations, just as we can think about shocks to behavioural relationships. The problem with what went before rational expectations (e.g. adaptive expectations) is that expectations errors were built in, and in most situations these built in errors were not terribly plausible.
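To illustrate what ‘built in’ errors means (my example rather than anything specific in the post), take adaptive expectations:

\[
E_t \pi_{t+1} = E_{t-1}\pi_t + \lambda \,(\pi_t - E_{t-1}\pi_t), \qquad 0 < \lambda < 1.
\]

If inflation trends steadily upwards, expectations are revised only gradually, so agents underpredict inflation period after period. That kind of systematic, exploitable error is ruled out by construction under rational expectations, where errors are unforecastable.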
Hopefully models of learning will eventually allow expectations errors to be analysed in a more plausible, systematic and routine way. I was interested to see that Michael Woodford, in his defence of microfoundations methodology here, nevertheless pinpoints rational expectations as a key weakness, and learning models as the way forward. As my next door neighbour in the department at Oxford works in this area (see for example Ellison, Martin & Pearlman, Joseph, 2011. "Saddlepath learning," Journal of Economic Theory, Elsevier, vol. 146(4), pages 1500-1519), I couldn’t possibly disagree. However I think it is highly unlikely that learning will negate the advances in understanding monetary policy that I referenced in my previous post.
For some variables in some situations, a baseline where expectations were formed in a naive way might be more appropriate. Some aspects of risk maybe? However inflation in a business cycle with an independent central bank is not one of these. David Glasner talks here about “the tyrannical methodology of rational expectations”. I just do not see it that way. Rational expectations do not prevent us understanding sustained periods of deficient demand when an inflation targeting central bank hits a lower bound. Indeed they help, because with rational expectations inflation targeting prevents inflation expectations delivering the real interest rate we need, as I have argued here.

2) I talked about both rational expectations and the New Keynesian Phillips curve (NKPC) in providing the theoretical impetus to inflation targeting by independent central banks. A comment asked why I put the two together. The latter goes with the former, because rational expectations with the more traditional Phillips curve imply deviations from the natural rate are random, which is totally destructive of Keynesian theory. (If inflation at time t depends on the output gap and expected inflation at time t - rather than t+1 as in the NKPC - and the difference between actual and expected inflation is a random error because expectations are rational, then the output gap is also a random error.)
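In symbols (a standard textbook argument rather than anything original to this post): write the traditional curve as

\[
\pi_t = E_{t-1}\pi_t + \kappa y_t + u_t.
\]

Under rational expectations the forecast error \(\pi_t - E_{t-1}\pi_t\) is unforecastable, so taking expectations at \(t-1\) (with \(u_t\) a mean-zero shock) gives \(E_{t-1} y_t = 0\): deviations of output from the natural rate are purely random. With the NKPC, \(\pi_t = \beta E_t \pi_{t+1} + \kappa y_t\), no such result follows, because the current output gap is tied to the whole expected future path of inflation rather than to a current inflation surprise.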
The traditional Phillips curve has always seemed to me to be an advertisement for the dangers of not doing microfoundations. It seems plausible enough, which is why it was used routinely before the rational expectations revolution. But it contains the serious flaw noted above, which almost destroyed Keynesian economics. I know this is not realistic, but imagine that Calvo (1983), ‘Staggered prices in a utility-maximizing framework’, Journal of Monetary Economics, Vol. 12, pp. 383-398, had been published a decade or more earlier, as a direct response to Friedman’s 1968 presidential address. Who knows what would have happened next, but it is difficult to imagine the history of macroeconomic thought being worse as a result.



Friday, 9 March 2012

Anti-Keynesian Germany

                In previous posts I have focused on the problems of Eurozone current account imbalances and misalignments (differences in competitiveness). In theory this problem need not lead to overall recession in the Euro area, if growth and inflation were to rise significantly in Germany. They will not, as I point out here, in part because of the impact of the familiar zero lower bound constraint on the ability of the ECB to stimulate growth in the Eurozone as a whole. (Although unfortunately we cannot be sure what the ECB would do in the absence of this constraint.) However, another factor is attitudes in Germany, which is what this post is about.
                Some of this just reflects national interest in the context of a relatively healthy macroeconomic position. In particular, unemployment is remarkably low. The chart below compares Germany to the OECD as a whole.

Unemployment in Germany and the OECD (%, Source OECD Economic Outlook)
The OECD story is all about the 2008/9 recession, of course. In the case of Germany you would be forgiven for asking – what recession? The reasons for this remarkable performance are not fully understood (see here for example – GDP fell by 5.1% in 2009), but the upshot is that the pressure from high unemployment felt in other countries is absent in Germany.
                GDP growth was slightly negative in the final quarter of last year, looks weak this year (OECD forecast 0.6%), and is still not great in 2013 (OECD forecast 1.9%). So from one point of view we might think there is scope for some (quick) stimulus. But inflation is projected to be only a little below 2%. The OECD thinks the 2011 output gap was negative but smaller than 1% in absolute terms, while the German Council of Economic Experts estimates that output is more than 1% above potential. In these circumstances the case for stimulus does not look strong.
                This is not the full story. The striking thing about Germany is that there appears to be no discussion of possible stimulus. In other countries the fourth-quarter data, the prospects this year, plus uncertainties about growth in the rest of the Eurozone, might be expected to lead to some discussion of the need for precautionary stimulus. Yet this seems almost completely absent in Germany. Arguably there is more discussion outside Germany than within (see Tyler Cowen vs Paul Krugman here for example).
                As long as I can remember, there has been an aversion to countercyclical fiscal policy within the German economic policy establishment. (Those already irritated by my personal anecdotes can skip the rest of this paragraph.) My first job when I worked in the UK Treasury was forecasting the European economies. It was just after the first oil price shock, and the UK and world were in recession. The Chancellor, Denis Healey, wanted to use fiscal policy to stimulate the economy. The Treasury’s Chief Economic Advisor was invited to meet his counterparts in a short trip to Germany, and I was selected as a note taker. I remember this occasion not so much for the hospitality (which was excellent), but because I cut my long hair just before the trip, and therefore stopped looking like (acting like?) a hippy. I think I thought German officialdom might be slightly shocked if I did not. Anyway, I also remember that my senior colleague’s attempts to sell the UK view on fiscal stimulus – both in that context and as a general concept – met with a pretty unfavourable response.
                All this is part of ‘Ordoliberalism’: see this excellent post by Henry Farrell, or this from Dullien and Guérot at the European Council on Foreign Relations (HT Philip Lane). It is not some simple hangover from the inflation of the Weimar Republic, or the Depression that followed. Christopher Allen suggests that Ordoliberalism was in part a reaction to the abuses of state power by the Nazis. Recall also that while money supply targeting was only fleetingly tried in the UK and US, it was the official policy of the Bundesbank from 1975 until the formation of the Eurozone. (Some argue its actual policy could be better described as ‘flexible monetarism’ and was not that different from inflation targeting).
                The chances of Germany assisting adjustment in the Eurozone by enacting a fiscal stimulus programme are therefore very slim indeed. Equally unfortunate may be the influence this anti-Keynesian view has on policy in the Euro area more generally. However, in the longer term I wonder if Ordoliberalism and Keynesian ideas are really that incompatible. Dullien and Guérot define the central tenet of Ordoliberalism as “governments should regulate markets in such a way that market outcome approximates the theoretical outcome in a perfectly competitive market”. The New Keynesian view of stabilisation policy is to bring the economy as close as possible to the market equilibrium that would prevail if prices were flexible. That does not sound so different.
With flexible exchange rates, stabilisation would normally be done by monetary policy (to get to the natural real interest rate), but in a monetary union it has to be done using fiscal policy. I have argued that the antagonism to Keynesian policy in sections of US academia may in part reflect an outdated reaction to old-fashioned Keynesian ideas and the ideology that was perceived to go with it. Perhaps the same may be true in Germany. As Henry Farrell notes, in the recent recession the German government was eventually persuaded to enact a fiscal stimulus package. [For much more detail on this episode, see the new paper by Farrell and Quiggin available from Crooked Timber.] Although moderate by UK or US standards, it suggests the ideology is not immutable. For the moment, unfortunately, the ideology in its present form continues to do the Eurozone serious damage.

Wednesday, 7 March 2012

What have microfoundations ever done for us?

For those who do not know the reference, which I think is apposite, see here.

In my previous post on microfoundations I said I disagreed with Paul Krugman’s statement that “So as I see it, the whole microfoundations crusade is based on one predictive success some 35 years ago; there have been no significant payoffs since”. I should say why I disagree. Robert Waldmann has also challenged me along similar lines.
I think the two most important microfoundation led innovations in macro have been intertemporal consumption and rational expectations. I have already talked about the former in an earlier post, which focused on a UK specific puzzle in the late 1980s that I find difficult to address without an intertemporal consumption perspective. However I also think too strong an attachment to a very basic intertemporal view has blinded macro to some critical events in the last decade or so. (See John Muellbauer here, for example.) So let me focus on rational expectations. Again I could look at the UK in 1980/81 and talk about Dornbusch overshooting, but let me try and be less parochial.
Between the rapid inflation of the 1970s and the Great Recession, what events might we look to for rational expectations to help explain? It is not an easy question, because the adoption of rational expectations was not the result of some previous empirical failure. Instead it represented, as Lucas said, a consistency axiom. However between the 1970s and the Great Recession what needs explaining is why nothing very dramatic happened – the Great Moderation. In particular, why did the large rise in oil and other commodity prices around 2005 not lead to the kind of stagflation we saw in the mid-70s and early 80s?
I think an important part of the answer was implicit or explicit inflation targeting by independent central banks. That, in turn, reflected an understanding of the importance of rational expectations. If a central bank had a clear inflation objective, and established a reputation for achieving it, that would anchor expectations and reduce the impact of shocks on the macroeconomy. Just as Friedman’s expectations-augmented, accelerationist Phillips curve helped us understand what went wrong in the 1970s, so the New Keynesian Phillips curve led to better policy around the turn of the century.
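One way to see the mechanism (a back-of-the-envelope sketch using my own notation): with a New Keynesian Phillips curve subject to a cost-push shock,

\[
\pi_t = \beta E_t \pi_{t+1} + \kappa y_t + u_t,
\]

a credible inflation target keeps \(E_t \pi_{t+1}\) close to target, so a commodity price shock \(u_t\) raises inflation once rather than being ratified by rising expectations, and the output cost of containing it is correspondingly smaller. With unanchored expectations, as in the 1970s, the same shock feeds into expected inflation and so persists.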
Now virtually any empirical claim in macro is contestable. (Indeed, for some this is part of the attraction of the microfoundations approach!) There are other explanations of the weak response to oil price increases, although Blanchard and Gali (2007) do argue that the Great Moderation played an important role. Others might suggest that the Great Recession itself proved that the Great Moderation was an illusion. In a crude sense this does not follow. The Great Moderation was all about the stabilising role that monetary policy can play, and that should always (given Japan) have been conditional on not hitting the zero lower bound. A more challenging argument is that the Great Moderation prepared the ground for the financial crisis, but even if this is correct it does not follow that inflation targeting was not an improvement on what went before – we may just need to do better still. Indeed, if as a result of the Great Recession inflation targets are replaced by price level or nominal GDP targets, I believe rational expectations will be central in making that case.
I think macroeconomics today is much better than it was 40 years ago as a result of the microfoundations approach. I also argued in my previous post that a microfoundations purist position – that this is the only valid way to do macro – is a mistake. The interesting questions are in between. Can the microfoundations approach embrace all kinds of heterogeneity, or will such models lose their attractiveness in their complexity? Does sticking with simple, representative agent macro impart some kind of bias? Does a microfoundations approach discourage investigation of the more ‘difficult’ but more important issues? Might both these questions suggest a link between too simple a micro based view and a failure to understand what was going on before the financial crash? Are alternatives to microfoundations modelling methodologically coherent? Is empirical evidence ever going to be strong and clear enough to trump internal consistency? These are difficult and often quite subtle questions that any simplistic for and against microfoundations debate will just obscure. 

Tuesday, 6 March 2012

The Other Eurozone Crisis

                What follows is not a new story: many people have argued that the problems of the Eurozone are as much about private sector expansion, current account imbalances and misalignment, as they are about excessive debt. What follows is an attempt to present this argument in as clear and convincing a way as possible, and say why this matters.
One of the central pieces of macro I teach undergraduates is an adaptation of the Swan diagram. For non-economists this simply plots a demand curve and a supply curve in national competitiveness and output space. As competitiveness improves, exports increase and imports fall, because the demand for domestic output rises. More technically, it describes an economy made up of producers of differentiated traded goods sold in imperfectly competitive markets, so the aggregate demand curve has an obvious interpretation. I draw the supply curve downward sloping following the textbook I use, but it could equally well be vertical.
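For concreteness, a minimal algebraic version of the two curves might look like this (my own stripped-down notation; the textbook version has more structure), with competitiveness \(q\) on the vertical axis and output \(y\) on the horizontal:

\[
y^d = y_0 + \alpha q - \beta r \quad \text{(demand: rising in competitiveness, falling in the real interest rate } r\text{)},
\]
\[
y^s = \bar{y} - \delta q, \qquad \delta \ge 0 \quad \text{(supply: downward sloping as drawn, vertical if } \delta = 0\text{)}.
\]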
Here is the diagram applied to the periphery and some quite central Eurozone economies.

The formation of the Eurozone led to substantial monetary easing in these economies, partly because financial markets wrongly thought that they were subject to risk levels not much different from Germany. The following table looks at two measures of real interest rates (long and short). I’ve missed off 1998-9 because entry was anticipated, so it is not clear where to put these years. I’ve taken current (CPI) inflation away from nominal rates, but hopefully with averages like these using actual rather than expected inflation is not too great a sin. Data is taken from OECD Economic Outlook.
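For clarity, the entries below are period averages of an ex post real rate, which is my reading of the calculation just described:

\[
r_t = i_t - \pi_t, \qquad \bar{r} = \frac{1}{T} \sum_{t=1}^{T} r_t,
\]

where \(i_t\) is the nominal short or long rate and \(\pi_t\) actual CPI inflation. Using actual rather than expected inflation makes this an ex post rather than ex ante measure, which matters in principle but, as noted above, hopefully not much for multi-year averages.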


Average Real Interest Rates in the Eurozone

                    Short (%)                      Long (%)
             1990-97  2000-07  2008-11     1990-97  2000-07  2008-11
Germany        3.6     -0.7     -0.5         4.4      2.6      1.5
France         5.1     -0.8     -0.2         5.4      2.4      1.8
Italy          5.7     -1.1      0.4         6.6      2.2      2.4
Greece        18.4     -2.0      3.8          -        -        -
Ireland        4.7     -2.4      4.0         4.4      0.9      6.1
Portugal       5.4     -1.8      2.4         7.4      1.5      4.4
Spain          5.7     -2.1      0.3         6.0      1.2      2.3
Euro area      4.3     -1.1      0.0          -        -        -

Real interest rates fell everywhere in the 2000-7 period (global savings glut?), but the fall was more modest in Germany than anywhere else. Lower real interest rates shift the AD curve to the right. With sticky prices we initially move horizontally from the 2000 point to the new demand curve (competitiveness changes slowly). However, as we are to the right of the supply curve, we get inflation and a loss of competitiveness until we reach 2007. From this date onwards country specific risk begins to return, and the aggregate demand curve shifts back. We need to go back to something like the 2000 position, which requires a recession and an internal devaluation to restore competitiveness. Different countries are at different stages in this process. The country which is furthest on the road back to a sustainable position is probably Ireland (although there are some statistical problems), but in others the process is only just beginning. Given low inflation in Germany, and the difficulty of cutting nominal wages, this road may be particularly painful and long.
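The adjustment just described can be summarised in a single law of motion (a sketch in my own notation, consistent with the curves above): with sticky prices, competitiveness \(q\) changes only gradually, falling when output is above supply,

\[
q_{t+1} - q_t = -\phi \,(y_t - y^s_t), \qquad \phi > 0.
\]

Output above supply (2000-07) therefore means inflation above that of trading partners and a steady loss of competitiveness; regaining it after 2007 requires output below supply, and the lower is German inflation, the longer that takes.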
Are the falls in real interest rates shown above enough to explain the loss in competitiveness seen in most Eurozone countries relative to Germany? It could be that in many periphery countries, particularly those that experienced housing booms, the key factor was a shift in risk perceptions. This shift could have been triggered by lower interest rates themselves (as some have argued for other countries like the US), or other financial supply-side factors (see here, but also here). Which is true may be important because it is related to how sustainable the original shift in the AD curve was. In theory a persistent reduction in real interest rates could lead to a prolonged shift in the demand curve, and reduced competitiveness, with no reversal of the sort shown in this chart. This is equivalent to asking whether the pattern of current account deficits and surpluses that had emerged in the Eurozone by 2007 was sustainable. As I argued here, the balance of evidence suggests that these imbalances were unsustainable. Here are current accounts over this period.

Eurozone Current Accounts (as % GDP)

The Eurozone position as a whole (not shown) has hardly changed. The swing to surplus in Germany after 2000 is dramatic. Although the current account positions in France and Italy have deteriorated, the deterioration is not nearly as large as in the smaller countries. This may also be an indication that in the smaller countries the demand stimulus generated by lower interest rates was much greater (the rightward shift in the AD curve larger), perhaps caused by excessive risk taking (housing bubbles etc.).
This story is all about monetary conditions. Monetary easing was greater in the periphery countries when the Euro was created, partly because policy had been tighter there before Eurozone entry, and partly because risk premiums disappeared. If you look at fiscal policy, on the other hand, you find no comparable pattern. Overall fiscal policy, as measured by underlying deficits calculated by the OECD, became a little tighter in the Eurozone as a whole over the 2000-7 period. As the Chart below shows, Spain actually tightened quicker than this average, and fiscal policy was broadly unchanged in Ireland and Portugal. Even in Greece, the story is more a gradual reversion to previous bad old ways, after a pre-entry tightening. So the shift in the AD curve was not, with the exception of Greece, a result of a fiscal expansion.

Eurozone Fiscal Positions (Underlying deficit as % GDP, OECD Economic Outlook)

This does not imply that fiscal policy was appropriate in those countries over this period. I argued in an earlier post that fiscal policy should have been much tighter, to offset the monetary stimulus. But it is important to distinguish between what actually happened (a monetary stimulus) and what might have been (countercyclical fiscal policy).
                This is a story about monetary conditions and aggregate demand. Of course it is possible in theory that reduced competitiveness relative to Germany could represent some sort of wage push. However, the strong growth experienced in these countries on Euro entry is consistent with the diagram above, and is less consistent with a cost-push shock. In addition, the labour share has not shown any marked increase in most Eurozone economies over this period (see here). The fact that wages rose ahead of productivity outside Germany reflects relative demand conditions (see here). None of this takes away from the need to deal with underlying structural problems in many of these countries – it just says demand shocks can occur, and in a monetary union their impact can be substantial and persistent. This is a very familiar story in terms of both the historical experience of fixed exchange rate regimes and the academic literature.
                This analysis tells us why many Eurozone countries would be in recession today, even if there had been no debt crisis. Why is this important? After all, the debt crisis is real enough, and the implications – recession in many Eurozone countries – are the same. It is important because it pinpoints the nature of the fundamental policy error that was made in the Eurozone. It was not that the Stability and Growth Pact (SGP) was ineffective – it was, but this did not lead to excessive fiscal expansion (Greece excepted), as the chart above shows. The problem with the SGP was that it ignored countercyclical fiscal policy. (I argue here that the SGP’s focus on deficits actually encouraged governments not to do the right thing.) If countries had responded to their deteriorating competitiveness position relative to Germany by tightening fiscal policy, the unsustainable shift in the AD curve shown above would have been at least reduced in size. In addition, the market’s fear about fiscal sustainability would have been greatly reduced. On both counts we might have avoided recession today.
                The really sad thing is that the Eurozone is continuing to make the same mistake. Such a collective failure by policy makers is really difficult to comprehend, although I will discuss in a later post how this may be related to economic orthodoxy in Germany.




Sunday, 4 March 2012

Microfoundations and the Speed of Model Development

                In a recent post I suggested one microfoundations based argument for what Blanchard and Fischer call useful models, and I call aggregate models. Both Mark Thoma and Paul Krugman picked up on it, and I want to respond to both. While this will mostly be of interest to economists, I have not tagged this ‘for economists’ because I know from comments that some non-economists who think about the philosophy of (social) science find this interesting. If you are not in either category, this post is probably not for you.
                Paul Krugman first. He makes a number of relevant points, but the bit I like best in his post is where he says “Wren-Lewis is on my side, sort of”. I really like the ‘sort of’. Let me say why.
                In one sense I am on his side. I do not believe that the one and only way to think about macroeconomics is to analyse microfounded macromodels. I think too many macroeconomists today think this is the only proper way to do analysis, and this leads to a certain microfoundations fetishism which can be unhelpful. Aggregate models without microfoundations attached can be useful. On the other hand I really do not want to take sides on this issue. Most of the work I have done in the last decade has involved building and analysing microfounded macromodels, and I’ve done this because I think it is a very useful thing to do. Taking sides could too easily degenerate into a ‘for and against’ microfoundations debate – in such a debate I would be on both sides. I certainly do not agree with this: “So as I see it, the whole microfoundations crusade is based on one predictive success some 35 years ago; there have been no significant payoffs since.” The justification for aggregate models that I gave in my previous post was deliberately four square within the microfoundations methodology because I wanted to convince, not antagonise. So ‘sort of’ suits me just fine.
                What I am against is what I have called elsewhere the ‘microfoundations purist’ position. This is the view that if some macroeconomic behaviour does not have clear microfoundations, then any respectable academic macroeconomist cannot include it as part of a macromodel. Why do I think this is wrong? This brings me to Mark Thoma, who linked my piece with one he had written earlier on New Old Keynesians. Part of that piece describes why economists might, at least temporarily, forsake microfounded models in favour of a ‘useful’ (to use Blanchard and Fischer’s terminology) model from the past. To quote

“The reason that many of us looked backward for a model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The models were largely constructed to analyze policy in the context of a Great Moderation....”

and

“So, if nothing in the present is adequate, you begin to look to the past. The Keynesian model was constructed to look at exactly the kinds of questions we needed to answer, and as long as you are aware of the limitations of this framework - the ones that modern theory has discovered - it does provide you with a means of thinking about how economies operate when they are running at less than full employment. This model had already worried about fiscal policy at the zero interest rate bound, it had already thought about Say's law, the paradox of thrift, monetary versus fiscal policy, changing interest and investment elasticities in a crisis, etc., etc., etc. We were in the middle of a crisis and didn't have time to wait for new theory to be developed, we needed answers, answers that the elegant models that had been constructed over the last few decades simply could not provide.”

I think this identifies a second reason why an aggregate model – a model without explicit microfoundations – might be preferred to microfounded alternatives, which Paul Krugman also covers in his point (3). This has to do with the speed at which microfoundations macro develops.
                Developing new microfounded macromodels is hard. It is hard because these models need to be internally consistent. If we think that, say, consumption in the real world shows more inertia than in the baseline intertemporal model, we cannot just add some lags into the aggregate consumption function. Instead we need to think about what microeconomic phenomena might generate that inertia. We need to rework all relevant optimisation problems adding in this new ingredient. Many other aggregate relationships besides the consumption function could change as a result. When we do this, we might find that although our new idea does the trick for consumption, it leads to implausible behaviour elsewhere, and so we need to go back to the drawing board. This internal consistency criterion is partly what gives these models their strength.
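To give one concrete example of what this involves (habit formation is only an illustration, not the only candidate): rather than adding a lag to the consumption function directly, we might change household preferences to

\[
U = E_0 \sum_{t=0}^{\infty} \beta^t \, u(C_t - h C_{t-1}), \qquad 0 < h < 1,
\]

so that utility depends on consumption relative to last period's level. Re-deriving the first order conditions then produces an Euler equation in which lagged consumption appears, generating the desired inertia, but it also alters labour supply, asset pricing and the response to income shocks elsewhere in the model, which is exactly the consistency check described above.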
                It is very important to do all this, but it takes time. It takes even longer to convince others that this innovation makes sense. As a result, the development of microfounded macromodels is a slow affair. The most obvious example to me is New Keynesian theory. It took many years for macroeconomists to develop theories of price rigidity in which all agents maximised and expectations were rational, and still longer for them to convince each other that some of these theories were strong enough to provide a plausible basis for Keynesian type business cycles.
                A more recent example, and one more directly relevant to Mark Thoma’s discussion, is the role of various financial market imperfections in generating the possibility of a financial crisis of the type we have recently experienced. There is a lot of important and fascinating work going on in this area: Stephen Williamson surveys some of it here. But it will take some time before we work out what matters and what does not. In the meantime, what do we do? How should policy respond today?
                To answer those questions, we will have to fall back on models that contain elements that appear ad hoc, by which I mean that they do not as yet have clear and widely accepted microfoundations. Those models may contain elements discussed by past economists, like Keynes or Minsky, who worked at a time before the microfoundations project took hold. Now microfoundations purists would not (I would hope) go so far as to say that kind of ad hoc modelling should not be done. What they might well say is please keep it well away from the better economic journals. Do this ad hoc stuff in central banks, by all means, but keep it out of state of the art academic discourse. (I suspect this is why such models are sometimes called ‘policy models’.)
                This microfoundations purist view is a mistake. It is a mistake because it confuses ‘currently has no clear microfoundations’ with ‘cannot ever be microfounded’.  If you could prove the latter, then I would concede that – from a microfoundations perspective – you would not be interested in analysing this model. However our experience shows that postulated aggregate behaviour that does not have a generally accepted microeconomic explanation today may well have one tomorrow, when theoretical development has taken place. New Keynesian analysis is a case in point. Do the purists really want to suggest that, prior to 1990 say, no academic paper should have considered the implications of price stickiness?
                So here I would suggest is a second argument for using aggregate (or useful, or ad hoc) models. Unlike my first, it allows these models not to have any clear microfoundations at present. Such analysis should be respected if there is empirical evidence supporting the ad hoc aggregate relationship, and if the implications of that relationship could be important. In these circumstances, it would be a mistake for academic analysis to have to wait for the microfoundations work to be done. (This idea is discussed in more detail in “Internal consistency, price rigidity and the microfoundations of macroeconomics”, Journal of Economic Methodology (2011), Vol. 18, 129-146 - earlier version here.)

Saturday, 3 March 2012

Fiscal policy heroes

                Have you ever wondered why anyone becomes a football (soccer) referee? You are almost certain to be hated by one team and its fans because of the decisions you make. Any mistakes you make will be subject to detailed media scrutiny. Your financial rewards are dwarfed by those earned by the people you attempt to control, and to thank you for that they regularly abuse your attempts to do your job.
                One final thought from my trip to Paris is that we can ask the same question of those who run national fiscal councils. There is no glory and little power to their positions. At some time or other, they will almost certainly incur the wrath of some senior politician, who will publicly assert that the head of the fiscal council is either incompetent or politically motivated. They will have committed the sin of questioning the financial logic behind the minister’s pet project, or the assertion that certain tax breaks or spending projects can be afforded on a sustainable basis. While sections of the media can be the fiscal council’s friend, because they recognise unbiased analysis when they see it often enough, other – more partisan – sections of the media will have no scruples in doing the politician’s dirty work by slandering the fiscal council and its head.
                Who are the people who run fiscal councils, and why do they take on this thankless role? They are people like Alice Rivlin, the first director of the Congressional Budget Office in the US, who would not go along with Reagan’s optimism about self-financing tax cuts. Or like Lars Calmfors, the first director of the Swedish Fiscal Council, about which the finance minister said “I have established the earned income tax credit and the Fiscal Policy Council. I am convinced that at least one of the two is very useful. I am very doubtful of the other” (Calmfors and Wren-Lewis, 2011). Like George Kopits, director of the Hungarian Fiscal Council, which was abolished after only two years’ existence. Like Kevin Page, head of Canada’s Parliamentary Budget Office, whom the Finance Minister recently called “unbelievable, unreliable, incredible” after he questioned the rationale for pension reform.
                Having talked to them all, I can suggest one common motivation: a belief that economic decisions should be based on sound analysis rather than political calculation or whim.  A view that not only should a proper analysis be done, but that the public has a right to know about it.
                As I have suggested, this role can be thankless at the time. The silver lining is that in the longer term it may be more appreciated. It has been said that Alice Rivlin is now one of the most respected figures in Washington. Like the public finances themselves, short term unpopularity may be appreciated in the longer term.