Winner of the New Statesman SPERI Prize in Political Economy 2016


Showing posts with label RBC. Show all posts

Friday, 16 August 2019

How should academic economics cope with ideological bias?


This question was prompted by this study by Mohsen Javdani and Ha-Joon Chang, which tries to show two things: that mainstream economists are biased against heterodox economists, and that they tend to favour statements by those close to their own political viewpoint, particularly on the right. I don’t want to talk here about the first bias, or about the merits or otherwise of this particular study. Instead I will take it as given that ideological bias exists among mainstream academic economists (hereafter when I just say ‘academic economics’ I am only talking about the mainstream), as it does in many social sciences. I take this as given simply because of my own experience as an economist.

I also, from my own experience, want to suggest that in their formal discourse (seminars, refereeing etc) academic economists normally pretend that this ideological bias does not exist. I cannot recall anyone in any seminar saying something like ‘you only assume that because of your ideology/politics’. This has one huge advantage. It means that academic analysis is judged (on the surface at least) on its merits, and not on the basis of the ideology of those involved.

The danger of doing the opposite should be obvious. Your view on the theoretical and empirical validity of an academic paper or study may become dependent on the ideology or politics of the author, or on the political implications of the results, rather than on its scientific merits. Having said that, there are many people who argue that economics is just a form of politics and that economists should stop pretending otherwise. I disagree. Economics can only be called a science because it embraces the scientific method. The moment evidence is routinely ignored by academics because it does not help some political project, economics stops being the science it undoubtedly is.

Take, for example, the idea - almost an article of faith in the Republican party - that we are on the part of the Laffer curve where tax cuts raise revenue. The overwhelming majority, perhaps all, of academic economic studies find this to be false. If economics were merely politics in disguise, this would not be the case. This is also what distinguishes academic economics from some of the economics undertaken by certain think tanks, where results always seem to match the political or ideological orientation of the think tank.
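For readers who have not met it, the Laffer curve logic can be sketched with a made-up revenue function. The elasticity and base here are purely illustrative numbers, not estimates from any study:

```python
# Illustrative Laffer curve: revenue = rate * tax base, where the base
# shrinks as the rate rises (a hypothetical behavioural response).
def revenue(rate, base0=100.0, elasticity=1.0):
    """Tax revenue at a given rate in [0, 1]; all parameters illustrative."""
    return rate * base0 * (1.0 - rate) ** elasticity

# Find the revenue-maximising rate on a fine grid.
rates = [i / 1000 for i in range(1001)]
peak = max(rates, key=revenue)

# With this made-up parameterisation the peak is at a 50% rate, so at
# typical rates below the peak a tax cut lowers revenue, which is what
# the empirical studies find.
print(round(peak, 2))                 # rate that maximises revenue
print(revenue(0.30) > revenue(0.25))  # True: cutting 30% -> 25% loses revenue
```

The empirical claim in dispute is simply where actual tax rates sit relative to that peak.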

There is a danger, however, in taking this pretence too far. This can be particularly true in subjects where empirical criticism of assumptions or parameterisation is weak. I think this was the basis of Paul Romer’s criticism of growth theory and microfoundations macro for what he calls mathiness, and of Paul Pfleiderer’s criticism of what he calls ‘chameleon models’ in finance and economics. If authors choose assumptions simply to derive a particular politically convenient result, or stick to simplifications simply because they produce results that conform to some ideological viewpoint, it seems absurd to ignore this.

Romer’s discussion suggests that it is at least possible for ideological bias to send a branch of economics off in the wrong direction for some time. I would argue, for example, that Real Business Cycle theory in business cycle macro, which was briefly dominant around 40 years ago, was in part influenced by a desire among those who championed it to look for models where policy had little role. In addition, it showed up economists’ tendency to ignore other social sciences, or even common sense, at its worst. [1] It didn’t last because explaining cycles is so much easier when you assume sticky prices, as most macroeconomists now do, but it is possible that other aspects of mainstream economics are ideologically driven and persist for much longer (Pareto optimality?), and mainstream economists should always be aware of that possibility. One of my first posts was about the influence of ideology on the reaction of some economists to Keynesian fiscal stimulus.

The basic problem arises in part because empirical results are never clear cut and conclusive. For example, the debate about whether increases in the minimum wage reduce employment continues, despite plenty of empirical work suggesting it does not, because some evidence points the other way. This opens the way for ideology to have an influence. But the political implications of academic economics will always mean that ideology plays a role, whatever the evidence. Even when the evidence is clear, as it is for the continuing importance of gravity (how close two countries are to each other) in trade, for example, it is possible for an academic economist to claim gravity no longer matters and gain a huge amount of publicity for work that assumes this. This is an implication of academic freedom, although in the case of economics, I still think there is a role for an organisation like (in the UK) the Royal Economic Society to point out what the academic consensus is.

Does this mean economics is not a true science? No, because ideological influence does not trump data when the data is very clear, as in the case of the Laffer curve or gravity equations, although ideology and academic freedom may allow the occasional maverick to go against the consensus. That in turn means that it is important for any user of economics to be aware of possible ideological bias, and always establish what the consensus is, if it exists, on an issue. Could ideology influence the direction particular areas of economics take for some time? The evidence cited above suggests yes. So while I have no quarrel with the pretence that ideology is absent from academic economics in formal discourse, academics should always be aware of its existence. In this respect, some of the points that the authors of this study mention in the discussion section of their paper are relevant.


[1] This reflected the introduction of a microfoundations methodology which soon began to dominate the discipline, and which I have talked about elsewhere (e.g. here and here).




Wednesday, 26 October 2016

Being honest about ideological influence in economics

Noah Smith has an article that talks about Paul Romer’s recent critique of macroeconomics. In my view he gets it broadly right, but with one important exception that I want to pursue here. He says the fundamental problem with macroeconomics is lack of data, which is why disputes seem to take so long to resolve. That is not in my view the whole story.

If we look at the rise of Real Business Cycle (RBC) research a few decades ago, that was only made possible because economists chose to ignore evidence about the nature of unemployment in recessions. There is overwhelming evidence that in a recession employment declines because workers are fired rather than choosing not to work, and that the resulting increase in unemployment is involuntary (those fired would have rather retained their job at their previous wage). Both facts are incompatible with the RBC model.

In the RBC model there is no problem with recessions, and no role for policy to attempt to prevent them or bring them to an end. The business cycle fluctuations in employment they generate are entirely voluntary. RBC researchers wanted to build models of business cycles that had nothing to do with sticky prices. Yet here again the evidence was quite clear: for example data on real and nominal exchange rates shows that aggregate prices are slow to adjust. It is true that it took the development of New Keynesian theory to establish robust reasons why prices might be sticky enough to generate business cycles, but normally you do not ignore evidence (that prices are sticky) until you have a good explanation for that evidence.

Why would researchers try to build models of business cycles where these cycles required no policy intervention, and ignore key evidence in doing so? The obvious explanation is ideological. I cannot prove it was ideological, but it is difficult to understand why - in an area which as Noah says suffers from a lack of data - you would choose to develop theories that ignore some of the evidence you have. The fact that, as I argue here, this bias may have expressed itself in the insistence on following a particular methodology at the expense of others does not negate the importance of that bias.

I do not think this is just a problem in macroeconomics. David Card is a very well respected labour economist, who was the first to present detailed empirical evidence that imposing a minimum wage might not reduce employment (as the standard supply and demand model would predict). He gave an interview some time ago (2006), in which he said this about the reaction to that work:

“I've subsequently stayed away from the minimum wage literature for a number of reasons. First, it cost me a lot of friends. People that I had known for many years, for instance, some of the ones I met at my first job at the University of Chicago, became very angry or disappointed. They thought that in publishing our work we were being traitors to the cause of economics as a whole.”

As Card points out in the interview his research involved no advocacy, but was simply about examining empirical evidence. So the friends that he lost objected not to the policy position he was taking, but to him uncovering and publishing evidence. Suppressing or distorting evidence because it does not give the answer you want is almost a definition of an illegitimate science.

These ex-friends of David Card are not typical of academic economists. After all, his research was published and became seminal in subsequent work. Theory has evolved (see again his interview) to make sense of his findings, but unlike the case of macro the findings were not ignored until this happened. Even in the case of macro, as Noah says, it was New Keynesian theory that became the consensus theory of business cycles rather than RBC models.

Yet I suspect there is a reluctance among the majority of economists to admit that some among them may not be following the scientific method but may instead be making choices on ideological grounds. This is the essence of Romer’s critique, first in his own area of growth economics and then for business cycle analysis. Denying or marginalising the problem simply invites critics to apply to the whole profession a criticism that only applies to a minority.



Saturday, 24 September 2016

What is so bad about the RBC model?

This post has its genesis in a short twitter exchange storified by Brad DeLong.

DSGE models, the models that mainstream macroeconomists use to model the business cycle, are built on the foundations of the Real Business Cycle (RBC) model. We (almost) all know that the RBC project failed. So how can anything built on these foundations be acceptable? As Donald Trump might say, what is going on here?

The basic RBC model contains a production function relating output to capital (owned by individuals) and labour plus a stochastic element representing technical progress, an identity relating investment and capital, a national income identity giving output as the sum of consumption and investment, marginal productivity conditions (from profit maximisation by perfectly competitive representative firms) giving the real wage and real interest rate, and the representative consumer’s optimisation problem for consumption, labour supply and capital. (See here, for example.)
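Written out, the bare-bones model just described looks something like this (my notation; the utility function and timing conventions vary across presentations):

```latex
\begin{align*}
Y_t &= A_t K_t^{\alpha} N_t^{1-\alpha} & &\text{production function} \\
\ln A_t &= \rho \ln A_{t-1} + \varepsilon_t & &\text{stochastic technology} \\
K_{t+1} &= (1-\delta) K_t + I_t & &\text{capital accumulation} \\
Y_t &= C_t + I_t & &\text{national income identity} \\
w_t &= (1-\alpha)\, Y_t / N_t, \qquad r_t = \alpha\, Y_t / K_t - \delta & &\text{marginal product conditions} \\
\max_{\{C_t,\, N_t\}} \;& \mathbb{E}_0 \textstyle\sum_{t=0}^{\infty} \beta^t\, u(C_t,\, 1 - N_t) & &\text{representative consumer's problem}
\end{align*}
```

Everything that follows - labour supply, the real wage, the real interest rate - is determined by the consumer’s first order conditions together with these constraints, with markets assumed to clear every period.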

What is the really big problem with this model? Not problems along the lines of ‘I would want to add this’, but more like ‘I would not even start from here’. Let’s ignore capital, because in the bare bones New Keynesian model capital does not appear. If you were to say the primacy given to shocks to technical progress, I would agree that is a big problem: all the behavioural equations should contain stochastic elements which can also shock this economy, but New Keynesian models do this to varying degrees. If you were to say the assumption of labour market clearing, I would also agree that is a big problem.

However none of the above is the biggest problem in my view. The biggest problem is the assumption of continuous goods market clearing, aka fully flexible prices. That is the assumption that tells you monetary policy has no impact on real variables. Now an RBC modeller might respond: how do you know that? Surely it makes sense to see whether a model that does assume price flexibility could generate something like business cycles?

The answer to that question is no, it does not. It does not because we know it cannot for a simple reason: unemployment in recessions is involuntary, and this model cannot generate involuntary unemployment, but only voluntary variations in labour supply as a result of short term movements in the real wage. Once you accept that higher unemployment in recessions is involuntary (and the evidence for that is very strong), the RBC project was never going to work.

So how did RBC models ever get off the ground? Because the New Classical revolution said everything we knew before that revolution should be discounted because it did not use the right methodology. And also because the right methodology - the microfoundations methodology - allowed the researcher to select what evidence (micro or macro) was admissible. That, in turn, is why the microfoundations methodology has to be central to any critique of modern macro. Why RBC modellers chose to dismiss the evidence on involuntary unemployment I will leave as an exercise for the reader.

The New Keynesian (NK) model, although it may have just added one equation to the RBC model, did something which corrected its central failure: the failure to acknowledge the pre-revolution wisdom about what causes business cycles and what you had to do to combat them. In that sense its break from its RBC heritage was profound. Is New Keynesian analysis still hampered by its RBC parentage? The answer is complex (see here), but can be summarised as no and yes. But once again, I would argue that what holds back modern macro much more is its reliance on its particular methodology.

One final point. Many people outside mainstream macro feel happy to describe DSGE modelling as a degenerative research strategy. I think that is a very difficult claim to substantiate, and is hardly going to convince mainstream macroeconomists. The claim I want to make is much weaker, and that is that there is no good reason why microfoundations modelling should be the only research strategy employed by academic economists. I challenge anyone to argue against my claim.




Wednesday, 25 March 2015

Why do central banks use New Keynesian models?

And more on whether price setting is microfounded in RBC models. For macroeconomists.

Why do central banks like using the New Keynesian (NK) model? Stephen Williamson says: “I work for one of these institutions, and I have a hard time answering that question, so it's not clear why Simon wants David [Levine] to answer it. Simon posed the question, so I think he should answer it.” The answer is very simple: the model helps these banks do their job of setting an appropriate interest rate. (I suspect because the answer is very simple this is really a setup for another post Stephen wants to write, but as I always find what Stephen writes interesting I have no problem with that.)

What is an NK model? It is an RBC model plus a microfounded model of price setting, and a nominal interest rate set by the central bank. Every NK model has its inner RBC model. You could reasonably say that these NK models were designed to help tell the central bank what interest rate to set. In the simplest case, this involves setting a nominal rate that achieves, or moves towards, the level of real interest rates that is assumed to occur in the inner RBC model: the natural real rate. These models do not tell us how and why the central bank can set the nominal short rate, and those are interesting questions which occasionally might be important. As Stephen points out, NK models tell us very little about money. Most of the time, however, I think interest rate setters can get by without worrying about these how and why questions.
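In its simplest three-equation, log-linearised form (notation varies; this is one standard sketch), the NK model wraps an IS curve and a Phillips curve around a policy rule, with the natural rate coming from the inner RBC model:

```latex
\begin{align*}
x_t &= \mathbb{E}_t x_{t+1} - \tfrac{1}{\sigma}\left(i_t - \mathbb{E}_t \pi_{t+1} - r^n_t\right) & &\text{IS curve} \\
\pi_t &= \beta\, \mathbb{E}_t \pi_{t+1} + \kappa\, x_t & &\text{Phillips curve (price setting)} \\
i_t &= r^n_t + \phi_\pi \pi_t, \qquad \phi_\pi > 1 & &\text{policy rule}
\end{align*}
```

Here $x_t$ is the gap between output and its flexible-price (inner RBC) level, and $r^n_t$ is the natural real rate. If the bank succeeds in making the real rate track $r^n_t$, the output gap and inflation both go to zero, which is the sense in which the model tells the bank what rate to set.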

Why not just use the restricted RBC version of the NK model? Because the central bank sets a nominal rate, so it needs an estimate of what expected inflation is. It could get that from surveys, but it also wants to know how expected inflation will change if it changes its nominal rate. I think a central banker might also add that they are supposed to be achieving an inflation target, so having a model that examines the response of inflation to the rest of the economy and nominal interest rate changes seems like an important thing to do.

The reason why I expect people like David Levine to at least acknowledge the question I have just answered is also simple. David Levine claimed that Keynesian economics is nonsense, and had been shown to be nonsense since the New Classical revolution. With views like that, I would at least expect some acknowledgement that central banks appear to think differently. For him, like Stephen, that must be a puzzle. He may not be able to answer that puzzle, but it is good practice to note the puzzles that your worldview throws up.

Stephen also seems to miss my point about the lack of any microfounded model of price setting in the RBC model. The key variable is the real interest rate, and as he points out the difference between perfect competition and monopolistic competition is not critical here. In a monetary economy the real interest rate is set by both price setters in the goods market and the central bank. The RBC model contains neither. To say that the RBC model assumes that agents set the appropriate market clearing prices describes an outcome, but not the mechanism by which it is achieved.

That may be fine - a perfectly acceptable simplification - if when we do think how price setters and the central bank interact, that is the outcome we generally converge towards. NK models suggest that most of the time that is true. This in turn means that the microfoundations of price setting in RBC models applied to a monetary economy rest on NK foundations. The RBC model assumes the real interest rate clears the goods market, and the NK model shows us why in a monetary economy that can happen (and occasionally why it does not). 


Thursday, 1 August 2013

ZLB Models?

There was a little interchange between Noah Smith and Paul Krugman a couple of weeks ago on what kind of models could explain Japan’s stagnation, and perhaps by implication the Great Recession. (Original Noah post here, Paul’s response here, and second round here and here.) I thought it was interesting, but it has taken me a bit of time to put my finger on why I thought it was interesting.

Noah began by saying there were two dominant macro models: RBC and New Keynesian (NK). The problem with applying NK models to Japan is that in NK models recessions last only as long as it takes for prices to fully adjust. So how can NK models explain a lost decade or more? (You see this now in economists asking ‘how can the US, UK or Eurozone still be in a demand-induced recession, from a shock that occurred 5 years ago?’ Often the implication is that this is implausible, so the explanation must be supply side.) The answer, as Paul pointed out, is the Zero Lower Bound (ZLB). Noah replied that “They [ZLB models] are not yet well-developed or well-explored”.

Now I think Noah makes a lot of valid points, but I was unhappy about how his discussion was framed. I should also say that this framing is common to a lot of macroeconomists, so if I think it is unhelpful it is important to understand why.

It is often said that NK models just add price stickiness to RBC models, and that if prices are sticky in the short run, aggregate demand matters in the short run. [1] I like to express it differently. What is the mechanism by which we can or cannot ignore aggregate demand? That mechanism is monetary policy, and how it is influenced by price adjustment. The way NK models work is that price adjustment induces a monetary policy response, and it is the monetary policy response that ensures demand shortfalls are not persistent. Break the monetary policy response, because you hit the ZLB, and you break the correction mechanism, particularly if the monetary policy regime also involves inflation targets.

The ZLB therefore allows NK models to generate much more persistent recessions, if the recessionary shock is itself large and persistent. But the implications of the ZLB for RBC models are just as profound. Implicit in their construction is the idea that demand shocks ‘do not matter’, because the correction mechanism that gets demand back to supply works sufficiently quickly that we can just focus on supply decisions. If the correction mechanism is broken because of the ZLB, then the foundation on which the model is built becomes problematic. It is no good saying ‘we assume price flexibility’ when, even if prices adjust rapidly, monetary policy cannot get demand back up. Or to put it another way, you cannot assume that the real interest rate will always be at the natural level if there is no way that real interest rate can be achieved.
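A toy calculation, with made-up numbers and a deliberately crude policy rule, may help show how the correction mechanism breaks:

```python
# Sketch of the monetary policy correction mechanism and how the ZLB
# breaks it. All numbers are hypothetical; the rule is a bare-bones
# Taylor-type rule truncated at zero.
def policy_rate(r_nat, inflation, target=2.0, phi=1.5):
    """Nominal rate the bank would like to set, truncated at the ZLB."""
    desired = r_nat + inflation + phi * (inflation - target)
    return max(0.0, desired)

def real_rate(r_nat, inflation):
    """Actual real rate delivered, given the truncated rule."""
    return policy_rate(r_nat, inflation) - inflation

# Normal times: natural rate 2%, inflation at its 2% target.
print(real_rate(2.0, 2.0))   # 2.0: real rate equals the natural rate

# Large persistent shock: natural rate falls to -4%. Even with inflation
# on target the nominal rate cannot go below zero, so the real rate is
# stuck at -2%, well above the -4% needed to restore demand.
print(real_rate(-4.0, 2.0))  # -2.0: the ZLB binds

# And rapid price adjustment makes things worse, not better: if inflation
# falls to zero, the real rate *rises* back to zero.
print(real_rate(-4.0, 0.0))  # 0.0
```

The last line is the point in the text: price flexibility does not rescue the model, because at the ZLB falling inflation raises the real rate rather than lowering it.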

That is one of the benefits (there are also costs) of the NK model encompassing the RBC framework. We can see the conditions under which the ‘special case’ of RBC works. And at the ZLB with inflation targets, it does not.

Of course you can ignore this point, and try to use RBC models to explain the current recession or Japan’s lost decade. But there are two huge problems with this. First, it ignores a big piece of evidence - these economies are at the ZLB! Well, that could just be a coincidence, or an inconsequential by-product. But second, the ZLB under inflation targets undercuts a key principle on which RBC models are built. In that sense, the model is not microfounded. [2] Thinking about mechanisms rather than models helps you see that second point. [3]

We can use NK models to analyse the implications of the ZLB, by hitting them with a large and persistent negative demand shock of some sort and adding the ZLB constraint. But what is clearly missing here is any understanding of the large and persistent negative shock. There is much current work looking at ‘financial frictions’, and the balance sheet implications that these may have. This may help explain the persistence of ZLB recessions. But they may also explain much more, and help improve the ability of NK models to track trends before the Great Recession. So to describe this endeavour as ZLB modelling seems inappropriate (or at least premature).

This approach to modelling ZLB recessions still has a unique steady state, and sees prolonged recessions as involving a natural real interest rate below its steady state value. An interesting possibility is that the ZLB constraint can create an alternative steady state, where a positive real interest rate is associated with deflation (see this paper by Mertens and Ravn (pdf), for example). The central bank (unlike Milton Friedman) is not happy with this steady state, because inflation is below target, but cannot shift to its preferred steady state by lowering interest rates. Whether you would call this alternative steady state a recession, and whether it could be applied to Japan, are interesting questions.
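The two steady states can be found numerically from nothing more than a Fisher relation and a truncated policy rule. The parameters below are illustrative, in the spirit of that discussion rather than taken from the paper:

```python
# Two steady states from a Fisher relation plus a ZLB-truncated policy
# rule (all parameters hypothetical).
R_STAR = 2.0   # steady-state real interest rate (%)
PI_STAR = 2.0  # inflation target (%)
PHI = 1.5      # policy response to inflation, > 1

def rule(pi):
    """Nominal rate set by the central bank, truncated at zero."""
    return max(0.0, R_STAR + PI_STAR + PHI * (pi - PI_STAR))

def fisher(pi):
    """Nominal rate implied by the Fisher relation in steady state."""
    return R_STAR + pi

# A steady state is an inflation rate at which the two coincide.
grid = [i / 100 for i in range(-1000, 1001)]  # -10% to +10%
steady = [pi for pi in grid if abs(rule(pi) - fisher(pi)) < 1e-9]
print(steady)  # [-2.0, 2.0]
```

At the first steady state the nominal rate is zero and inflation is -2%, so the real rate is +2%: a positive real rate associated with deflation, as described above. The second is the intended steady state with inflation at target.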

I do not think it is very informative to describe both this approach, and the more standard persistent demand shock approach, as ‘ZLB models’. The mechanism behind a persistent recession is very different in each case. But more fundamentally, both use similar NK models, and simply take the ZLB constraint seriously within that model. So it seems very odd to talk about NK models on the one hand, and ZLB models on the other, when the ZLB is an undeniable fact.

Now at this point you may be thinking that I am just being a bit pedantic about labels. I am not sure I should apologise if I am, but I do have another motivation. Talk of different models that can be applied to the same problem harks back to ‘schools of thought’ days in macro. I think macro should be better than that now. For better or worse, the microfoundations project and the new neoclassical synthesis gave us a common language, where we could talk about different mechanisms within a shared approach. That should make the process of matching evidence to theory more straightforward.


[1] Of course NK models often ignore the capital accumulation process, which is much more central to RBC analysis. But the key point is that we can always add sticky prices to any RBC model.

[2] There could be some other mechanism which justifies ignoring aggregate demand, but the whole point of microfoundations is that this mechanism needs to be spelt out. In its absence, all that is left is to just assume that large negative demand shocks never happen. Which is a bit like assuming nominal interest rates can be negative. 

[3] Chris Dillow’s comment that I link to here was really helpful in allowing me to appreciate why seeing macro in terms of competing models can be so confusing. In a way I just had to remember what it felt like learning macro for the first time, but that is easy to forget when you spend the rest of your life building and analysing these things.