


Friday, 16 August 2019

How should academic economics cope with ideological bias?


This question was prompted by this study by Mohsen Javdani and Ha-Joon Chang, which tries to show two things: that mainstream economists are biased against heterodox economists, and that they tend to favour statements from those close to their own political viewpoint, particularly on the right. I don’t want to talk here about the first bias, or about the merits or otherwise of this particular study. Instead I will take it as given that ideological bias exists among mainstream academic economists (hereafter, when I just say ‘academic economics’ I am only talking about the mainstream), as it does in many social sciences. I take this as given simply because of my own experience as an economist.

I also want to suggest, again from my own experience, that in their formal discourse (seminars, refereeing and so on) academic economists normally pretend that this ideological bias does not exist. I cannot recall anyone in any seminar saying something like ‘you only assume that because of your ideology/politics’. This has one huge advantage. It means that academic analysis is judged (on the surface at least) on its merits, and not on the basis of the ideology of those involved.

The danger of doing the opposite should be obvious. Your view on the theoretical and empirical validity of an academic paper or study may come to depend on the ideology or politics of the author, or on the political implications of the results, rather than on its scientific merits. Having said that, there are many people who argue that economics is just a form of politics and that economists should stop pretending otherwise. I disagree. Economics can only be called a science because it embraces the scientific method. The moment evidence is routinely ignored by academics because it does not help some political project, economics stops being the science it undoubtedly is.

Take, for example, the idea - almost an article of faith in the Republican party - that we are on the part of the Laffer curve where tax cuts raise revenue. The overwhelming majority, perhaps all, of academic economic studies find this to be false. If economics were merely politics in disguise, this would not be the case. This is also what distinguishes academic economics from some of the economics undertaken by certain think tanks, where results always seem to match the political or ideological orientation of the think tank.
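
To see how demanding the Republican claim is, it helps to write the Laffer logic down. A minimal sketch, in my own notation rather than anything from these studies: let t be the tax rate and B(t) the tax base, which shrinks as the rate rises. Then revenue is

R(t) = t·B(t),   so   dR/dt = B(t)·(1 − ε(t)),   where ε(t) = −(t/B)·(dB/dt)

A tax cut raises revenue only if ε > 1, that is, only if a small rise in the rate shrinks the base proportionally by more. The studies referred to above find actual economies to be well below that threshold.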

There is a danger, however, in this pretence going too far. This can be particularly true in subjects where empirical criticism of assumptions or parameterisation is weak. I think this was the basis of Paul Romer’s criticism of growth theory and microfoundations macro for what he calls mathiness, and of Paul Pfleiderer’s criticism of what he calls ‘chameleon models’ in finance and economics. If authors choose assumptions simply to derive a particular politically convenient result, or stick to simplifications simply because they produce results that conform to some ideological viewpoint, it seems absurd to ignore this.

Romer’s discussion suggests that it is at least possible for ideological bias to send a branch of economics off in the wrong direction for some time. I would argue, for example, that Real Business Cycle theory in business cycle macro, which was briefly dominant around 40 years ago, was in part influenced by a desire among those who championed it to look for models where policy had little role. In addition, it showed up economists’ tendency to ignore other social sciences, or even common sense, at its worst. [1] It did not last, because explaining cycles is so much easier when you assume sticky prices, as most macroeconomists now do, but it is possible that other aspects of mainstream economics are ideologically driven and will persist for much longer (Pareto optimality?), and mainstream economists should always be aware of that possibility. One of my first posts was about the influence of ideology on the reaction of some economists to Keynesian fiscal stimulus.

The basic problem arises in part because empirical results are never clear cut and conclusive. For example, the debate about whether increases in the minimum wage reduce employment continues, despite plenty of empirical work suggesting that they do not, because there is some evidence that points the other way. This opens the way for ideology to have an influence. But the political implications of academic economics will always mean that ideology plays a role, whatever the evidence. Even when evidence is clear, as it is, for example, for the continuing importance of gravity (how close two countries are to each other) in trade, it is possible for an academic economist to claim gravity no longer matters and gain a huge amount of publicity for work that assumes this. This is an implication of academic freedom, although in the case of economics I still think there is a role for an organisation like (in the UK) the Royal Economic Society to point out what the academic consensus is.
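
For readers unfamiliar with it, the gravity equation in question has, in its textbook form (this is the standard formulation, not any particular study’s):

T_ij = G·(Y_i·Y_j)/D_ij^c

where T_ij is trade between countries i and j, Y_i and Y_j are their GDPs, D_ij is the distance between them and G is a constant. Estimated distance elasticities c are typically close to one and have shown no tendency to fall over time, which is why claiming that distance no longer matters goes against very clear evidence.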

Does this mean economics is not a true science? No, because ideological influence does not trump data when the data is very clear, as in the case of the Laffer curve or gravity equations, although ideology and academic freedom may allow the occasional maverick to go against the consensus. That in turn means that it is important for any user of economics to be aware of possible ideological bias, and always to establish what the consensus is, if it exists, on an issue. Could ideology influence the direction particular areas of economics take for some time? The evidence cited above suggests yes. So while I have no quarrel with the pretence that ideology is absent from academic economics in formal discourse, academics should always be aware of its existence. In this respect, some of the points that the authors of this study mention in the discussion section of their paper are relevant.


[1] This reflected the introduction of a microfoundations methodology which soon began to dominate the discipline, and which I have talked about elsewhere (e.g. here and here).




Saturday, 4 November 2017

The journalist as amateur scientist

Paul Romer has talked about two types of discourse, one political and one scientific. He uses that distinction to critique aspects of current practice among economists. I want to do the same for journalism.

Political discourse involves taking sides, and promoting things that your side favours. It is like a school debate: you consider only evidence that favours the point of view you want to promote. Scientific discourse involves considering each piece of evidence on its merits. You do not aim to promote, but assess and come to a conclusion based on the evidence. That does not prevent the scientist arguing a case, but their argument is based on considering all the relevant evidence. There are no sides that are always right or invariably wrong.

Of course, any scientist makes choices about what evidence is relevant, and this will be influenced by existing theories. Ideally the theory you prefer can be changed by new evidence, but scientists, being only human, can sometimes be reluctant to accept evidence that contradicts long held theories. But there are always younger scientists looking for new ideas to make their name. The scientific method works over time, which is why we are where we are today.

My argument is that journalists should be like amateur scientists. Amateur because part of their work will involve seeking out expertise rather than starting from scratch, and they do not have the time or resources to investigate each story as a scientist might. A term frequently used is ‘investigative journalist’, but that normally means someone who has weeks to work on one story. Instead I’m talking about journalists who only have a day. The key point is that they should not search for evidence that fits the story they wanted to write before doing any research, but allow the evidence to shape the story.

For example, suppose the story is about EU immigrants and benefits. What a journalist should note is that unemployment among EU immigrants is lower than among natives. What a journalist who wants to write a story that makes immigrants look bad might do is say that the number of EU immigrants without a job would make up a city the size of Bristol. This combines selection of evidence (the equivalent figure for natives is not reported) with simple deception: most people conflate ‘without a job’ with ‘unemployed’, when the group also includes people happily looking after children, for example.
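
The deception turns on a simple accounting identity (these are standard labour market definitions, not anything specific to this story):

without a job = unemployed (seeking work) + economically inactive (carers, students, the retired, …)

Quoting the first total while letting readers hear ‘unemployed’ inflates the problem, and omitting the equivalent native figure removes the only meaningful benchmark.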

If this all strikes you as obvious, at least for journalists working on broadsheet newspapers, the example above is taken from the Telegraph, and the post in which I discuss it contains a tweet from a Times economics editor saying that all journalists (and yours truly) take a stance and select facts that support this stance.

There is actually a third type of journalism, which you could call acrobatic discourse, because it is always looking for balance. It is sometimes called ‘shape of the earth: sides differ’ journalism. Its merit is that it appears not to take sides, but as this extended name is meant to demonstrate, it is certainly not scientific. It is the kind of journalism that says the claim that £350 million a week goes to Brussels and could be spent on the NHS is ‘contested’, rather than simply untrue. In that sense it can be uninformative and misleading, whereas scientific reporting is informative and not misleading. Here is a Twitter thread from Eric Umansky on a particularly bad example from the New York Times. Of course acrobatic journalism is easier, and keeps the journalist out of trouble.

One of the side effects of acrobatic journalism is that it typically defines the two sides it wishes to balance. It therefore tends to be consensus journalism, where the consensus is defined by the politicians on either side. To see why this is problematic, you just need to look at how Brexit has been discussed and reported by the BBC since the referendum.

I began writing this post during the debate surrounding Nick Robinson’s Steve Hewlett Memorial Lecture. It is certainly strange for that debate to focus on outfits like The Canary, rather than on the elephants in the room that produce political journalism for millions every day, and which also tend to criticise the BBC whenever they get the opportunity. Yet the copy from these newspapers, and not The Canary, is regularly discussed by the broadcast media. The emergence of left social media journalism is a result of the consensus-defining by-product of acrobatic journalism, which for a year or more defined the other side as the PLP (the Parliamentary Labour Party) rather than the Labour leadership.

I suspect many journalists would say that my idea of them as amateur scientists is just impractical in this day and age, when they have so little time and resources. But what I have in mind (journalists as amateur scientists) is not very different from what journalists on the Financial Times do day in and day out. Chris Cook is an example of a journalist working in the broadcast media who does the same. But it is wrong to blame individual journalists for being more acrobatic than scientific, because the institutions they work for often demand it.

Nick Robinson’s lecture is much more nuanced and interesting than the subsequent media discussion would suggest. For example, he identifies the problem with the way Facebook selects news, which is discussed in more detail by Zeynep Tufekci in this TED talk. But there are two elephants in the room that he fails to discuss: the role of the increasingly politicised right wing press I have already mentioned, and the conflict between scientific and acrobatic journalism, both of which he praises without addressing the tension between them. [1]

[1] There is a clear example of this in the comments he recalls making on the Brexit debate just before the vote. He proudly says he called the £350 million claim untrue, but he then adds

“I did, incidentally, also say that the Remain claim that every household in Britain would be £4,300 a year better off was misleading and impossible to verify.”

This is acrobatic journalism at its worst. Yes, the BBC did think the £4,300 figure was ‘misleading’, but only because they did not talk to an economist, who would have told them it was not. It shows a failure to be a good amateur scientist. But worse than that, this clumsy attempt at balance puts the central claim of the Remain campaign in the same bracket as the £350 million a week lie, which it certainly is not.

Wednesday, 26 October 2016

Being honest about ideological influence in economics

Noah Smith has an article that talks about Paul Romer’s recent critique of macroeconomics. In my view he gets it broadly right, but with one important exception that I want to pursue here. He says the fundamental problem with macroeconomics is lack of data, which is why disputes seem to take so long to resolve. That is not in my view the whole story.

If we look at the rise of Real Business Cycle (RBC) research a few decades ago, that was only made possible because economists chose to ignore evidence about the nature of unemployment in recessions. There is overwhelming evidence that in a recession employment declines because workers are fired rather than choosing not to work, and that the resulting increase in unemployment is involuntary (those fired would have rather retained their job at their previous wage). Both facts are incompatible with the RBC model.

In the RBC model there is no problem with recessions, and no role for policy in attempting to prevent them or bring them to an end. The business cycle fluctuations in employment it generates are entirely voluntary. RBC researchers wanted to build models of business cycles that had nothing to do with sticky prices. Yet here again the evidence was quite clear: for example, data on real and nominal exchange rates show that aggregate prices are slow to adjust. It is true that it took the development of New Keynesian theory to establish robust reasons why prices might be sticky enough to generate business cycles, but normally you do not ignore evidence (that prices are sticky) simply because you do not yet have a good explanation for it.

Why would researchers try to build models of business cycles where these cycles required no policy intervention, and ignore key evidence in doing so? The obvious explanation is ideological. I cannot prove it was ideological, but it is difficult to understand why - in an area which as Noah says suffers from a lack of data - you would choose to develop theories that ignore some of the evidence you have. The fact that, as I argue here, this bias may have expressed itself in the insistence on following a particular methodology at the expense of others does not negate the importance of that bias.

I do not think this is just a problem in macroeconomics. David Card is a very well respected labour economist, who was the first to present detailed empirical evidence that imposing a minimum wage might not reduce employment (as the standard supply and demand model would predict). He gave an interview some time ago (2006), where he said this about the reaction to this work:

“I've subsequently stayed away from the minimum wage literature for a number of reasons. First, it cost me a lot of friends. People that I had known for many years, for instance, some of the ones I met at my first job at the University of Chicago, became very angry or disappointed. They thought that in publishing our work we were being traitors to the cause of economics as a whole.”

As Card points out in the interview his research involved no advocacy, but was simply about examining empirical evidence. So the friends that he lost objected not to the policy position he was taking, but to him uncovering and publishing evidence. Suppressing or distorting evidence because it does not give the answer you want is almost a definition of an illegitimate science.

These ex-friends of David Card are not typical of academic economists. After all, his research was published and became seminal in subsequent work. Theory has evolved (see again his interview) to make sense of his findings, but unlike the case of macro the findings were not ignored until this happened. Even in the case of macro, as Noah says, it was New Keynesian theory that became the consensus theory of business cycles rather than RBC models.

Yet I suspect there is a reluctance among the majority of economists to admit that some among them may not be following the scientific method but may instead be making choices on ideological grounds. This is the essence of Romer’s critique, first in his own area of growth economics and then for business cycle analysis. Denying or marginalising the problem simply invites critics to apply to the whole profession a criticism that only applies to a minority.



Tuesday, 20 September 2016

Paul Romer on macroeconomics

It is a great irony that the microfoundations project, which was meant to make macro just another application of microeconomics, has left macroeconomics with very few friends among other economists. The latest broadside comes from Paul Romer. Yes it is unfair, and yes it is wide of the mark in places, but it will not be ignored by those outside mainstream macro. This is partly because he discusses issues on which modern macro is extremely vulnerable.

The first is its treatment of data. Paul’s discussion of identification illustrates how macroeconomics needs to use all the hard information it can get to parameterise its models. Yet microfounded models, the only models deemed acceptable in top journals for both theoretical and empirical analysis, are normally rather selective about the data they focus on. Evidence, both micro and macro, is either ignored because it is inconvenient, or put on a to-do list for further research. This is an inevitable result of making internal consistency an admissibility criterion for publishable work.

The second vulnerability is a conservatism that also arises from this methodology. The microfoundations criterion, taken in its strict form, makes some processes intractable to model: for example, modelling sticky prices where actual menu costs are a deep parameter. Instead DSGE modelling uses tricks, like Calvo contracts. But who decides whether these tricks amount to acceptable microfoundations, or are instead ad hoc or implausible? The answer depends a lot on conventions among macroeconomists, and like all conventions these move slowly. Again, this is a problem generated by the microfoundations methodology.
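
For non-macroeconomists, the Calvo trick is this (a textbook sketch in my notation): rather than modelling the costs of changing prices directly, assume each firm is allowed to reset its price in any period only with fixed probability 1 − θ. Aggregating optimal price setting under that assumption gives the New Keynesian Phillips curve

π_t = β·E_t[π_{t+1}] + λ·mc_t,   λ = (1 − θ)(1 − βθ)/θ

where π_t is inflation, mc_t is real marginal cost and β is a discount factor. The arrival probability θ is not derived from any optimising decision about when to change prices, which is precisely why purists can object that it is ad hoc.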

Paul’s discussion of real effects from monetary policy, and the insistence on productivity shocks as business cycle drivers, is pretty dated. (And, as a result, it completely misleads Paul Mason here.) Yet it took a long time for RBC models to be replaced by New Keynesian models, and you will still see RBC models around. Elements of the New Classical counter revolution of the 1980s still persist in some places. It was only a few years ago that I listened to a seminar paper where the financial crisis was modelled as a large negative productivity shock.

Only in a discipline that has deemed microfoundations the only acceptable way of modelling can practitioners still feel embarrassed about including sticky prices because their microfoundations (the tricks mentioned above) are problematic. Only in that discipline can respected macroeconomists argue that, because of these problematic microfoundations, it is best to ignore something like sticky prices when doing policy work: an argument that would be laughed out of court in any other science. In no other discipline could you have a debate about whether it was better to model what you can microfound rather than model what you can see. Other economists understand this, but many macroeconomists still think this is all quite normal.

Thursday, 27 August 2015

The day macroeconomics changed

It is of course ludicrous, but who cares. The day of the Boston Fed conference in 1978 is fast taking on a symbolic significance. It is the day that Lucas and Sargent changed how macroeconomics was done. Or, if you are Paul Romer, it is the day that the old guard spurned the ideas of the newcomers, and ensured we had a New Classical revolution in macro rather than a New Classical evolution. Or if you are Ray Fair (HT Mark Thoma), who was at the conference, it is the day that macroeconomics started to go wrong.

Ray Fair is a bit of a hero of mine. When I left the National Institute to become a formal academic, I had the goal (with the essential help of two excellent and courageous colleagues) of constructing a new econometric model of the UK economy, which would incorporate the latest theory: in essence, it would be New Keynesian, but with additional features like allowing variable credit conditions to influence consumption. Unlike a DSGE it would as far as possible involve econometric estimation. I had previously worked with the Treasury’s model, and then set up what is now NIGEM at the National Institute by adapting a global model used by the Treasury, and finally I had been in charge of developing the Institute’s domestic model. But creating a new model from scratch within two years was something else, and although the academics on the ESRC board gave me the money to do it, I could sense that some of them thought it could not be done. In believing (correctly) that it could, Ray Fair was one of the people who inspired me.

I agree with Ray Fair that what he calls Cowles Commission (CC) type models, and I call Structural Econometric Models (SEMs), together with the single equation econometric estimation that lies behind them, still have a lot to offer, and that academic macro should not have turned its back on them. Having spent the last fifteen years working with DSGE models, I am more positive about their role than Fair is. Unlike Fair, I want “more bells and whistles on DSGE models”. I also disagree about rational expectations: the UK model I built had rational expectations in all the key relationships.

Three years ago, when Andy Haldane suggested that DSGE models were partly to blame for the financial crisis, I wrote a post that was critical of Haldane. What I thought then, and continue to believe, is that the Bank had the information and resources to know what was happening to bank leverage, and it should not be using DSGE models as an excuse for not being more public about their concerns at the time.

However, if we broaden this out from the Bank to the wider academic community, I think he has a legitimate point. I have talked before about the work that Carroll and Muellbauer have done which shows that you have to think about credit conditions if you want to explain the pre-crisis time series for UK or US consumption. DSGE models could avoid this problem, but more traditional structural econometric (aka CC) models would find it harder to do so. So perhaps if academic macro had given greater priority to explaining these time series, it would have been better prepared for understanding the impact of the financial crisis.

What about the claim that only internally consistent DSGE models can give reliable policy advice? For another project, I have been rereading an AEJ Macro paper written in 2008 by Chari et al, where they argue that New Keynesian models are not yet useful for policy analysis because they are not properly microfounded. They write “One tradition, which we prefer, is to keep the model very simple, keep the number of parameters small and well-motivated by micro facts, and put up with the reality that such a model neither can nor should fit most aspects of the data. Such a model can still be very useful in clarifying how to think about policy.” That is where you end up if you take a purist view about internal consistency, the Lucas critique and all that. It in essence amounts to the following approach: if I cannot understand something, it is best to assume it does not exist.


Wednesday, 19 August 2015

Reform and revolution in macroeconomics

Mainly for economists

Paul Romer has a few recent posts (start here, most recent here) where he tries to examine why the saltwater/freshwater divide in macroeconomics happened. A theme is that this cannot all be put down to New Classical economists wanting a revolution, and that a defensive/dismissive attitude from the traditional Keynesian status quo also had a lot to do with it.

I will leave others to discuss what Solow said or intended (see for example Robert Waldmann). However I have no doubt that many among the then Keynesian status quo did react in a defensive and dismissive way. They were, after all, on incredibly weak ground. That ground was not large econometric macromodels, but one single equation: the traditional Phillips curve. This had inflation at time t depending on expectations of inflation at time t, and on the deviation of unemployment/output from its natural rate. Add rational expectations to that and you can show that deviations from the natural rate are random, and Keynesian economics becomes irrelevant. As a result, too many Keynesian macroeconomists saw rational expectations (and therefore all things New Classical) as an existential threat, and reacted to that threat by attempting to rubbish rational expectations, rather than questioning the traditional Phillips curve. The status quo lost. [1]
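
The argument is worth spelling out, because it is so simple (a textbook sketch in my notation, not a quote from anyone involved). Write the traditional Phillips curve as

π_t = E_{t−1}[π_t] + α·(y_t − y*) + ε_t

where y* is the natural rate of output and ε_t a random shock. Under rational expectations, taking expectations of both sides at t−1 gives E_{t−1}[y_t − y*] = 0: deviations from the natural rate are unforecastable noise, leaving nothing systematic for Keynesian policy to stabilise. The later New Keynesian Phillips curve replaces E_{t−1}[π_t] with E_t[π_{t+1}], which breaks this result.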

We now know this defeat was temporary, because New Keynesians came along with their version of the Phillips curve and we got a new ‘synthesis’. But that took time, and you can describe what happened in the time in between in two ways. You could say that the New Classicals always had the goal of overthrowing (rather than improving) Keynesian economics, thought that they had succeeded, and simply ignored New Keynesian economics as a result. Or you could say that the initially unyielding reaction of traditional Keynesians created an adversarial way of doing things whose persistence Paul both deplores and is trying to explain. (I have no particular expertise on which story is nearer the truth. I went with the first in this post, but I’m happy to be persuaded by Paul and others that I was wrong.) In either case the idea is that if there had been more reform rather than revolution, things might have gone better for macroeconomics.

The point I want to discuss here is not about Keynesian economics, but about even more fundamental things: how evidence is treated in macroeconomics. You can think of the New Classical counter revolution as having two strands. The first involves Keynesian economics, and is the one everyone likes to talk about. But the second was perhaps even more important, at least to how academic macroeconomics is done. This was the microfoundations revolution, that brought us first RBC models and then DSGE models. As Paul writes:

“Lucas and Sargent were right in 1978 when they said that there was something wrong, fatally wrong, with large macro simulation models. Academic work on these models collapsed.”

The question I want to raise is whether for this strand as well, reform rather than revolution might have been better for macroeconomics.

First, two points on the quote above from Paul. Of course not many academics worked directly on large macro simulation models at the time, but what a large number did do was either time series econometric work on individual equations that could be fed into these models, or analysis of small aggregate models whose equations were not microfounded, but instead justified by an eclectic mix of theory and empirics. That work within academia did largely come to a halt, and was replaced by microfounded modelling.

Second, Lucas and Sargent’s critique was fatal in the sense of what academics subsequently did (and how they regarded these econometric simulation models), although they got a lot of help from Sims (1980). But it was not fatal in a more general sense. As Brad DeLong points out, these econometric simulation models survived in both the private and public sectors (in the US Fed, for example, or the UK OBR). In the UK they survived within the academic sector until the late 1990s, when academics helped kill them off.

I am not suggesting for one minute that these models are an adequate substitute for DSGE modelling. There is no doubt in my mind that DSGE modelling is a good way of doing macro theory, and I have learnt a lot from doing it myself. It is also obvious that there was a lot wrong with large econometric models in the 1970s. My question is whether it was right for academics to reject them completely, and much more importantly avoid the econometric work that academics once did that fed into them.

It is hard to get academic macroeconomists trained since the 1980s to address this question, because they have been taught that these models and techniques are fatally flawed because of the Lucas critique and identification problems. But DSGE models as a guide for policy are also fatally flawed because they are too simple. The unique property that DSGE models have is internal consistency. Take a DSGE model, and alter a few equations so that they fit the data much better, and you have what could be called a structural econometric model. It is internally inconsistent, but because it fits the data better it may be a better guide for policy.

What happened in the UK in the 1980s and 1990s is that structural econometric models evolved to minimise Lucas critique problems by incorporating rational expectations (and other New Classical ideas as well), and time series econometrics improved to deal with identification issues. If you like, you can say that structural econometric models became more like DSGE models, but where internal consistency was sacrificed when it proved clearly incompatible with the data.

These points are very difficult to get across to those brought up to believe that structural econometric models of the old fashioned kind are obsolete, and fatally flawed in a more fundamental sense. You will often be told that to forecast you can either use a DSGE model or some kind of (virtually) atheoretical VAR, or that policymakers have no alternative when doing policy analysis than to use a DSGE model. Both statements are simply wrong.

There is a deep irony here. At a time when academics doing other kinds of economics have done less theory and become more empirical, macroeconomics has gone in the opposite direction, adopting wholesale a methodology that prioritised the internal theoretical consistency of models above their ability to track the data. An alternative - where DSGE modelling informed and was informed by more traditional ways of doing macroeconomics - was possible, but the New Classical and microfoundations revolution cast that possibility aside.

Did this matter? Were there costs to this strand of the New Classical revolution?

Here is one answer. While it is nonsense to suggest that DSGE models cannot incorporate the financial sector or a financial crisis, academics tend to avoid addressing why some of the multitude of work now going on did not occur before the financial crisis. It is sometimes suggested that before the crisis there was no cause to do so. This is not true. Take consumption for example. Looking at the (non-filtered) time series for UK and US consumption, it is difficult to avoid attaching significant importance to the gradual evolution of credit conditions over the last two or three decades (see the references to work by Carroll and Muellbauer I give in this post). If this kind of work had received greater attention (which structural econometric modellers would almost certainly have done), that would have focused minds on why credit conditions changed, which in turn would have addressed issues involving the interaction between the real and financial sectors. If that had been done, macroeconomics might have been better prepared to examine the impact of the financial crisis.

It is not just Keynesian economics where reform rather than revolution might have been more productive as a consequence of Lucas and Sargent, 1979.


[1] The point is not whether expectations are generally rational or not. It is that any business cycle theory that depends on irrational inflation expectations appears improbable. Do we really believe business cycles would disappear if only inflation expectations were rational? PhDs of the 1970s and 1980s understood that, which is why most of them rejected the traditional Keynesian position. Also, as Paul Krugman points out, many Keynesian economists were happy to incorporate New Classical ideas. 

Wednesday, 5 August 2015

A way forward for the centre left on deficits

When it comes to fiscal policy, the politics of the right at the moment [1] could reasonably be described as deficit fetishism. The policy of the centre left in Europe could also, with some justification, be described as growing appeasement of deficit fetishism. Given its success for the right in Europe, it seems unlikely that this side of the political spectrum will change its policy any time soon. [2] Things appear a little more malleable on the centre left. In the UK, in particular, we will shortly have new leaders of both Labour and the Liberal Democrats. In addition, the Scottish Nationalists have adopted the rhetoric of anti-austerity, even though their fiscal numbers were not far from those of the other opposition parties during the election.

Attempts to get the centre left to avoid deficit fetishism need to fight on two separate fronts. First, politicians and/or their advisers need to be taught some macroeconomics. Academics too often assume that politicians either know more than they actually do, or have behind them a network of researchers some of whom do know some macroeconomics, or who have access to macro expertise. (I used to believe that.) The reality seems to be very different: through lack of resources or lack of interest, the knowledge of left of centre politicians and their advisers often does not extend beyond mediamacro.

The second front involves the politics of persuasion: how can politicians successfully persuade voters that deficit fetishism, far from representing responsible government, in fact represents a simplistic approach that can do (and has done) serious harm? I think for academics this is a far more difficult task for two reasons. First our skills are not those of an advertising agency, and we are trained to follow the scientific method rather than act as a lawyer arguing their case (although, if you believe Paul Romer, the scientific method is not universally adopted among macroeconomists). Second, the experience of the last five years on the centre left is that deficit fetishism helps win elections.

In my last post I tried to argue that the success of deficit fetishism was peculiar to a particular time: the period after the recession when households were also cutting back on their borrowing, and when the Eurozone crisis appeared to validate the case for austerity. In other times households try to borrow to invest in a house, and firms try to borrow to invest in good projects. As a result, once the debt to GDP ratio has begun to fall, and yet interest rates remain low, the power of alternative narratives like ‘it makes sense to borrow to invest in the future when borrowing is cheap’ will increase.

Yet responding to deficit fetishism by implying the deficit does not matter, or that we can print money instead, or even that we can grow our way out of the problem, is unlikely to convince many. [3] It just seems too easy, and contradicts people’s personal experience. The trick is to appear responsible on the deficit, while at the same time suggesting that responsibility is not equivalent to fetishism, and that other things matter too. I think this provides a powerful motivation at this time for a policy designed to achieve balance on the current budget (taxes less non-investment spending) rather than to eliminate the total deficit. This is far from ideal from a macroeconomic point of view, as I discuss here, but as a political strategy in the current context it has considerable appeal. In the UK it allows you to attack the ‘excessive and obsessive austerity’ of Osborne, who is ‘failing to invest in the future’, while following a policy that it is difficult to label irresponsible. [4]
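
To be concrete about what the target means (standard definitions, in my notation):

current balance = taxes − non-investment spending
total deficit = (non-investment spending + public investment) − taxes = public investment − current balance

A zero current balance therefore means the government borrows only to finance public investment: exactly the ‘borrow to invest in the future when borrowing is cheap’ narrative, in responsible clothing.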

Of course this policy was close to that adopted by Labour, the Liberal Democrats and the SNP at the last election, so many will just say it has already failed. I think this is nonsense, for three reasons. First, the policy I’m advocating is a combination of targeting a zero current balance and at the same time arguing aggressively against excessive austerity. Labour deliberately avoided being dubbed anti-austerity during the election. (The Liberal Democrats were handicapped by having argued for austerity for the previous five years as part of the coalition.) The only party to adopt an anti-austerity line was the SNP, and it did them no harm at all. Second, the reason Labour wanted to avoid pushing the policy at the election was that they felt they had tried this a few years before and failed, but as I argued in the previous post deficit fetishism only thrives in a particular context, and that context is passing. Third, what sank Labour on fiscal policy was that people swallowed the Conservative line that it was Labour’s profligacy that caused the need for austerity, essentially because this line went unchallenged for five years.

This last point is worth expanding on. Too many in the Labour party think that, because many people now believe this idea, the best thing to do is pretend it is true and apologise for past minor misdemeanours (knowing full well it will be interpreted by everyone else as validating the Conservative line). This is almost guaranteed to lose them the next election. It will just confirm that the last Labour government was fiscally profligate, and the Conservatives will quote Labour’s apology for all it is worth. To believe that this will not matter by 2020 is foolish - it is the same mistake that was made in the run up to 2015. It is no accident that political commentators on the right are arguing that this is what Labour has to do. So the first task for Labour after the leadership election is to start to contest this view. They should follow the advice that Alastair Campbell is said to have given after 2010, and set up an ‘expert commission’ to examine the validity of the Conservatives’ claim, and then follow through on the inevitable findings. [5]

I can understand why it may seem easier right now to avoid all this, adopt deficit fetishism and ‘move on’. But to do this accepts the framing of economic competency as being equivalent to deficit fetishism, and therefore forfeits a key political battleground to the right. In addition, once you accept severe deficit reduction targets, it becomes much more difficult to argue against the measures designed to achieve them, as on every occasion you have to specify where else the money would come from. (In the UK, that partly accounts for the disaster we saw on the welfare bill. In Europe it leads to the travesty of what was recently done to Greece, where Greece was only allowed to stay in the Eurozone at the cost of adopting harmful additional austerity.) As we have seen in the UK and elsewhere in Europe, there is a large amount of popular support for an anti-austerity line, and if the centre left vacates that ground the vacuum will be filled by others. Arguing against deficit fetishism (or in more populist terms ‘obsessive austerity’) while pursuing fiscal responsibility through a balanced current budget can become a winning strategy for the centre-left in Europe over the next few years.


[1] It is easy to forget that there is nothing that makes this the inevitable policy of the right. George W. Bush took the reduction in the US deficit under Clinton as a cue to cut taxes and raise the deficit.

[2] This sentence is just for those who like to ask why I tend to write more posts giving advice to the centre-left rather than to the right on this issue.

[3] I have argued for ‘QE for the people’, but always as a more effective tool for the Bank of England to stabilise the economy and not as a more general way for governments to finance investment. (Even if this becomes ‘democratic’ along the lines suggested here, the initiative must always come from the Bank.) As for growing your way out of debt, this is much closer to the policies that I and many others have argued for, but it may unfortunately be the case that at the low point of a recession this line is not strong enough to counter deficit fetishism.

[4] It was also the main fiscal mandate of the last coalition government, of course. This could be supplemented by targets for the ratio of government investment as a share of GDP. As long as these are not excessive, an additional debt or deficit target seems unnecessary.

[5] The question should not be ‘did Labour spend too much before the recession’, because that is not the line that did the damage. The question should be more like ‘did the Labour government’s pre-2008 fiscal policy or the global financial crisis cause the 2009 recession and the subsequent rise in the UK deficit?’  

Saturday, 23 May 2015

Consensus in macroeconomics

Paul Romer has continued the discussion he started, broadening it out from ‘mathiness’ to a more general examination of how the subject is done. He describes what he regards as appropriate norms of science. The first few are, I think, uncontentious, but Stephen Williamson has taken exception to these two:

e) In our discussions, claims that are recognized by a clear plurality of members of the community as being better supported by logic and evidence are the ones that are provisionally accepted as being true.

f) In judging what constitutes a “clear plurality,” we put more weight on the views of people who have more status in the community and are recognized as having more expertise on the topic.

Stephen writes:

This is absurd of course. We don't take polls to decide scientific merit. Indeed, revolutionary ideas - the ones that take the biggest steps toward Romerian truth - would be the ones that would fail, by this criterion.

I can understand that those who typically work outside the mainstream, and indeed may be known for proposing new and challenging ideas, might find this kind of talk threatening. Take it the wrong way, and it sounds like a recipe for conformity and stagnation.

I’m sure that is not what Paul intended, and I also think he is making an important point here. I suspect a natural scientist would see (e) and (f) as simple statements of how things are. In my experience natural scientists have a clear idea of what the “clear plurality” is on any particular issue, and are happy to admit it, even if they disagree with that plurality. There is nothing here that says academics cannot challenge the ideas of the “clear plurality”.

But why is it important to have an idea of what that plurality is and acknowledge it? I can think of three reasons. First, it presents an honest picture to those learning the discipline. Second, it is very important that policy makers are told which ideas are widely agreed and which are the views of a small minority. That does not stop policy makers going with the minority, but they should know what they are doing (as should voters). The public’s trust in economics might also increase as a result. Third, it helps the unity of the subject, mutual understanding and progress. It becomes clear why those who do not accept the views of the “plurality” disagree, and what they need to do to convince the plurality that they are wrong.

Convincing the majority that they are wrong is a strong motivational force for progress. In contrast, working within a small school of outsiders, all of whom just know that the plurality is misguided, and who as a result never bother to engage or keep up with it, is a recipe for stagnation. Before heterodox economists start hitting the keyboard, that also requires that the plurality is open to unconventional ideas, and does not just reject them because they are unusual or defy certain generally accepted norms.

It is for reasons like this that I have argued that it is wrong to say that macroeconomics is ‘flourishing’ simply because there are lots of different ideas/models out there. If there is no clear way of establishing which of these command general support and which are the insurgents, and what the insurgents need to do to overturn any consensus, then it is not clear how the discipline can progress.

This is all a bit abstract, so let me give an example from business cycle theory. Here there is, at present, a clear consensus theory, which is the New Keynesian model. I have been challenged on this in the past, but I would want to insist on it because I would attach a good deal of weight to those who are actually involved in business cycle stabilisation i.e. economists in central banks. (I’m not sure how important ‘status’ should be in Paul’s (f), but expertise is important, and having to put ideas into practice and responding to data all the time should count strongly.)

So when fiscal stimulus was used in 2009, those economists who opposed it should have said something like: I understand that temporary increases in government spending will raise output for given nominal rates in the dominant New Keynesian model, but I think that analysis is wrong because …. They should not have said, as some did, that fiscal stimulus was old fashioned nonsense. Whether they did this out of ignorance or contempt for the mainstream, it suggested that at least some prominent economists were not following the norms of science.
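
The mechanism being referred to is easy to state (a stripped-down textbook sketch in my notation, with the government spending shock folded directly into demand rather than derived from first principles). The New Keynesian IS curve is

x_t = E_t[x_{t+1}] − σ·(i_t − E_t[π_{t+1}]) + g_t

where x_t is the output gap, i_t the nominal interest rate and g_t a temporary government spending shock. With i_t held fixed, as at the zero lower bound in 2009, a rise in g_t raises x_t directly, and by more if it also raises expected inflation and so lowers the real rate. Disputing the policy means disputing one of these ingredients, which is exactly the kind of engagement the norms above require.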


Saturday, 16 May 2015

Paul Romer and microfoundations

For economists

In an AER P&P paper, Paul Romer talks about many things: a distinction between scientific consensus and political discourse, a divide in growth theory between those that use models based on perfect competition and those using imperfect competition, but mainly the distinction between appropriate mathematical theory and what he calls ‘mathiness’. To better see how these things connect up, and how they could have wider applicability, I suggest reading his blog post first. There he writes:
“the problems I identify in growth theory may be of broader interest. If economists can understand what the problem is in this sub-field, we may be in a better position to evaluate the scientific health of other parts of economics. The field to which scrutiny might first extend is economic fluctuations.”

So how might such a comparison go? The attachment to using perfect rather than imperfect competition could map into an aversion to either price stickiness or the importance/autonomy of aggregate demand, both of which could be labelled as ‘anti-Keynesian’. Keynesian theory is denigrated in some cases not because of empirical evidence but because of the policy implications that may follow from that theory. The microfoundations methodology, as practised by some, allows those who want to deny the importance of Keynesian effects to continue to study business cycles, because this methodology can place such a low weight on the importance of evidence when it comes to the elements of model building. (Ask not whether price stickiness has empirical support, but whether it has solid microfoundations.)

Paul Romer’s post also links to the idea in this paper by Paul Pfleiderer about theoretical models becoming “chameleons”. To quote: “A model becomes a chameleon when it is built on assumptions with dubious connections to the real world but nevertheless has conclusions that are uncritically (or not critically enough) applied to understanding our economy.” I think we could add that these conclusions are usually associated with defending a particular political view or sectional interest.

It is important to stress that this is not an attack on the microfoundations methodology, just as Paul Romer’s article is not an attack on mathematical modelling. Most DSGE modellers, who are not subject to any political aversion to using price rigidity, happily use this methodology to advance the discipline. But if that methodology is taken too seriously (by what I call here microfoundations purists), so that modellers only look at what they can microfound rather than what they actually see in the real world, it can allow approaches that should have been discarded to live on, perhaps because they support a particular policy position.

A discipline where a huge number of alternative models persist could be described as ‘flourishing’, but it risks disintegrating into alternative schools of thought, where some schools have an immunisation strategy that protects them from particular kinds of empirical evidence. As Paul perceptively points out, this makes economics more like political discourse than a scientific discipline. Some people welcome that, or regard it as inevitable - I hope most economists do not. This means we first need to collectively recognise the problem, rather than keeping our heads down to avoid upsetting others. I hope Paul Romer’s article can be part of that process.