Winner of the New Statesman SPERI Prize in Political Economy 2016

Tuesday 12 August 2014

Policy Based Evidence Making

I had not heard this corruption of ‘evidence based policy making’ until I read this post by John Springford discussing the Gerard Lyons (economic advisor to London Mayor Boris Johnson) report on the costs and benefits of the UK leaving the EU. The idea is very simple. Policy makers know a policy is right, not because of any evidence, but because they just know it is right. However they feel that they need to create the impression that their policy is evidence based, if only because those who oppose the policy keep quoting evidence. So they go about concocting some evidence that supports their policy.

So how do people (including journalists) who are not experts tell whether evidence is genuine or manufactured? There is no foolproof way of doing this, but here are some indicators that should make you at least suspicious that you are looking at policy based evidence making.

1) Who commissioned the research? The reasons for suspicion here are obvious, but this - like all the indicators discussed here - is not always decisive on its own. For example the UK government in 2003 commissioned extensive research on its 5 tests for joining the EU, but that evidence showed no sign of bias in favour of the eventual decision. In that particular case none of the following indicators were present.

2) Who did the research? I know I’ll get it in the neck for saying this, but if the analysis is done by academics you can be *relatively* confident that the analysis is of reasonable quality and not overtly biased. In contrast, work commissioned from, say, an economic consultancy is less trustworthy. This follows from the incentives either group faces.

What about work done in house by a ‘think-tank’? Not all think tanks are the same, of course. Some that are sometimes called this are really more like branches of academia: in economics UK examples are the Institute for Fiscal Studies (IFS) or the National Institute (NIESR), and Brookings is the obvious US example. They have longstanding reputations for producing unbiased and objective analysis. There are others that are more political, with clear sympathies to the left or right (or for a stance on a particular issue), but that alone does not preclude quality analysis that can be fairly objective. An indicator that I have found useful in practice is whether the think tank is open about its funding sources (i.e. a variant of (1).) If it is not, what are they trying to hide?

3) Where do key numbers come from? If numbers come from some model or analysis that is not included in the report or is unpublished you should be suspicious. See, for example, modelling the revenue raised by the bedroom tax that I discussed here. Be even more suspicious if numbers seem to have no connection to evidence of any kind, as in the case of some of the benefits assumed for Scottish independence that I discussed here.

4) Is the analysis comprehensive, or does it only consider the policy’s strong points? For example, does the analysis of a cut in taxes on petrol ignore the additional pollution, congestion and carbon costs caused by extra car usage (see this study)? If the analysis is partial, are there good reasons for this (apart from getting the answer you want), and how clearly do the conclusions of the study point out the consequential bias?

A variant of this is where analysis is made to appear comprehensive by either assuming something clearly unrealistic, or by simply making up numbers. For example, a study may assume that the revenue lost from cutting a particular tax is made up by raising a lump sum tax, even though lump sum taxes do not exist. Alternatively tax cuts may be financed by unspecified spending cuts - sometimes called a ‘magic asterisk budget’.

5) What is the counterfactual? By which I mean, what is the policy compared to? Is the counterfactual realistic? An example might be an analysis of the macroeconomic impact of austerity. It would be unrealistic to compare austerity with a policy where the path for debt was unsustainable. Equally it would be pointless to look at the costs and benefits of delaying austerity if constraints on monetary policy are ignored. (Delaying austerity until after the liquidity trap is over is useful because its impact on output can be offset by easier monetary policy.)

Any further suggestions on how to spot policy based evidence making? 


  1. What about the case of, say, the 'invisible bond vigilantes', in which a big fear is introduced and repeatedly cited by large numbers of politicians and journalists?

  2. I live in D.C. This place is chock-a-block with these handsomely sponsored little boutiques where thinking goes to tank. Thanks for writing this.

  3. There needs to be a pre-determined timetable and process for evaluation, complete with metrics and criteria, all agreed with the relevant balance of stakeholders. I can't think of a policy where this was complied with.

    1. I doubt you could ever find one; policy makers are drowned in opinion, research, ideology and random brain-farts throughout the process. As a result, the inevitable call from academia and those of us who actually like reasonably robust research is for more process and more evaluation; but the reality is that all of those things mean more resources, time and money from already shrinking budgets.

      The challenge is to build a body of evidence that is responsive to, and able to influence, governmental decision making; these new processes need to be both agile and understandable. I think it's achievable, but it will need work from both sides.

  4. What about committing to the policy and testing it (amateurishly) at the same time?

  5. Two questions that would be important for me are:
    1) What data (if any) are the conclusions based on? Plus, of course, who collected it, how, and so on.
    2) Has the research been peer-reviewed and published, or is it just a report?

    1. Peer revue is no guarantee of quality, as has been shown many times in several fields, where papers have been withdrawn due to fraud.
      There's also the question of "pal revue" as shown frequently in the field of climatology and doubtless elsewhere.

    2. There is NO guarantee of quality anywhere, but as systems for measuring the quality of research go, I would hazard that there is no better one out there.

      Also, I would see the withdrawal of articles as a sign that the system is working. If no research was withdrawn, that would be more suspicious, after all!

      Additionally it's "review" not "revue", which is something that happens in a theatre. If you're know enough about a subject to pass judgement on it you really ought to know how to spell it!

    3. Can I blame predictive text?
      Mind you, with some of the papers, a theatrical extravaganza is the best description of the contents!

    4. Hi anonymous 13 Aug 4:47 - re spelling - check your last para :-)
      The issue with peer review is the length of time it takes for a peer reviewed research/evaluation process to take place. By then the findings have missed the policy-making boat. The window of policy opportunity can be fleeting.

  6. The reliability of academics depends, I think, on the field. The more vocationally-oriented academics (medicine, law, business) are less trustworthy, although I think the engineers are okay. Their ties with their industry are often a wee bit too close. Legal academics are often ideologues, who don't need to be bought and paid for, because they come pre-bribed.

  7. Ah yes, that well known global conspiracy by climate scientists to fool us into believing we are suffering from man-made global warming. Much better to trust the 'research' produced by those funded by the carbon extraction industries.

  8. Are the workings on the web for all to see? (Ahem: Reinhart-Rogoff)

  9. Also See anything to do with Public health...

  10. Have I got this right? You as an academic are opining that academic reports are of good quality, whereas your competitors are less trustworthy? Is that due to the incentives you have?

    I am just looking forward to seeing your comprehensive analysis and the key facts behind this conclusion.

    I am just wondering if your analysis is as good as your sneer about research funded by carbon extraction industries. You must be aware of the vast amounts of research funded by the carbon extraction industries and others to support global warming research; there are after all extremely lucrative funding opportunities there. These pesky facts...

  11. OK, one last try. It's about incentives. The incentive for an economic consultancy is to be consulted, and producing research which goes against the views of the research commissioner is not great for business. The incentive for an academic is to publish in peer-reviewed journals. Which, while far from perfect, does bias the academic towards producing quality, objective research.

  12. I appreciate your reply. You could plausibly argue that economic consultants only have a viable business model if they produce quality, objective research. You are obviously not aware of that argument. Would you like to engage at all with the idea that economic consultants are competitors of academia plc, and that what you are doing is saying bad things about your competitors?

    In spite of what you say, Universities frequently take large amounts of overheads from consultancy work. These overhead funds can have great effects on University viability, and individual academic promotion and remuneration. Given the importance of all that money to the University and the academic, "producing research which goes against the views of the research commissioner is not great for business" is your argument, and I don't see why it doesn't apply to academics.

    There is another incentive, which is peer-reviewed research, which you acknowledge as less than perfect. I can see a direct, RAE-motivated incentive for an academic to have high-impact citations; but the objectivity or quality seem to be interesting arguments. Nature's recent editorial that they take authors' word on trust doesn't inspire confidence in the quality assurance angle. I am just wondering if you can point me to any journal which has "objectivity" as a criterion for publishing a paper.

  13. Why would anyone trust research about global warming funded by carbon extraction industries more than they trust medical research about lung cancer funded by the tobacco industry?

    I'll take the academics, with all their flaws, over bald-faced conflicts of interest any day.

  14. ...Much better to trust the 'research' produced by those funded by the carbon extraction industries...

    Given the reasonable tone of your piece, I'm surprised to see you repeating the regular propaganda lie that the fossil fuel industry funds anti-climate change research. If you were to investigate the funding process for yourself, you would find that the boot is massively on the other foot...

    Those of us who work in academia have one overriding interest. It is to ensure that funding is maintained. Publishing papers is simply one method of ensuring this. You must have noticed the quality of papers in ALL subjects dropping over the last 20 years - why do you think this is? Try looking at RetractionWatch to get a feel for what is happening on the front line or read that much-cited paper by John Ioannidis. There is a strong and growing incentive to cheat and fabricate data - and more and more academics are falling into this path.

    I assume that you make your assertions about climate change research based on belief rather than on reading the papers, which may well be out of your specialist area of expertise anyway. However, there is a simple opportunity for you to test your assertions. The Stern Review (readily available on the web) is the macro-economic paper discussing the options for addressing the issue, which politicians are following at present. I hold that the science behind the paper is fundamentally fraudulent, but that need not concern us - I am happy to suspend any disbelief for the purposes of this test, and just consider the paper on its macro-economic merits.

    You can readily obtain the Review. Here is a paper criticising it, written by a well known 'denier', and published by a well-known 'denialist' organisation - the Global Warming Policy Foundation.

    Read them both, and then tell me which one seems more correct to you. Note that the Stern Review was not a minor paper, but the main policy paper driving Western governmental policy response.

  15. I am very familiar with the Stern report. I read the section on discounting in the paper you linked to, because this is something I know about. Stern's analysis is clear and, on the issue of valuing the utility of future generations, correct in my view. The Lilley paper does not present a serious discussion of that issue. So I'm afraid that in this case, as in every other time I have looked at this, I find the quality of analysis from those arguing against action on climate change inferior to those arguing for such action.

    In particular, what I find time and again from those arguing against action is a tendency to work backwards from the result they want, using any means available. For example, Lilley notes that Stern would now make different assumptions that would raise the discount rate. However Lilley does not also say what Stern says, which is that this is offset by an increase in the estimates of the damage caused by climate change since he wrote his report.

    In fact the science is such that it is now clear to me that there is a small but significant chance that global warming could become so bad as to threaten the survival of the human race. Given that, it is criminal not to take action now to eliminate that possibility. I have talked to scientists who are skeptical of the science, but they agree that the precautionary principle implies we should act now.

  16. It is astonishing to see such childish antics from an Oxford professor.

  17. "threaten the survival of the human race"??????

    So, the solution is to slap the working and middle-class with a regressive energy tax that will ensure that their real wages, which have gone nowhere for over a decade, decrease more?

    Carbon-based energy, by virtue of its density, is the middle-class's best friend.

    To say otherwise is to threaten the economic viability of the middle-class.

  18. The new (Grantham) LSE institute will be headed by Lord Nicholas Stern, author of the UK government-commissioned report.

    Grantham is a $100 billion carbon trader who believes 'Global warming will be the most important investment issue for the foreseeable future.'

    Grantham gave Stern his very own ratings agency to play with.

    Lord Nicholas Stern, author of the UK’s Stern report on climate change, will launch a new carbon credit ratings agency on Wednesday, the first to score carbon credits on a similar basis to that used to rate debt.

  19. You even admit this yourself. In the real world no one takes you seriously... enough. Full stop, the end, over!

  20. Personally, I find that if a politician is involved, then that's usually a pretty good indication we're looking at policy-based evidence.

    I genuinely don't think I can recall a time when a politician saw some evidence and then changed their mind about what the policy should be.

  21. Simon - interesting post.

    "policy-based evidence making" is/was a common term among junior Treasury officials - e.g. used with a knowing/wry smile to indicate "we know which way the wind is blowing, we'll just scrabble around to find some evidence"...

  22. @ Simon Wren-Lewis

    The climate trolls notwithstanding, I wish you could elaborate on this issue a bit more. It's not as if there were just the Stern Review on the one side, and Lilley on the other. I submit that rather harsh criticism came from people like Dasgupta, Nordhaus, or Varian.

    Specifically, the issue was not with the near-zero PRTP, but with the equally low elasticity of the marginal utility of consumption (unity), which is difficult to justify for reasons of internal consistency; so much so that almost everybody jumped on exactly this issue immediately. Also note that Schelling's "Intergenerational Discounting" goes against this reasoning - though it has nothing to do with the Stern review - simply on account of reasonably thinking about reality.

    I'd also submit that Weitzman was not happy with Stern's reasoning. He did, however, come to a reasonably similar result looking at discounting under huge uncertainties - the topic from which he developed his seminal Dismal Theorem. However, that was not the reasoning of Stern (with regard to the discount rate).

    Indeed, save John Quiggin and perhaps DeLong, I know no one who didn't criticize Stern's specific argumentation in this regard, and I have no issue with the suggestion that Stern was somewhat of a game changer w/r/t the discount rate. However, this needs clear and detailed exposition and argumentation. It might well be true that there are very good arguments for Stern's choice, but to act as if this were Stern against the deniers is a bit rich.

  23. I was specifically asked to compare two texts, which I did. To criticise me for doing so is, if I may say so, a bit rich! You are of course correct that there are many more interesting things to talk about, which I would love to when time allows.

  24. Well, let me take back that "rich" part, with my apologies. What got me confused was the part where you say:

    "In particular, what I find time and again from those arguing against action is a tendency to work backwards from the result they want, using any means available."

    and then refer to Lilley's report just as an example.

    Indeed, one of the criticisms leveled against the Stern Review (see Nordhaus, for example, who squarely identifies it as a political document) was that he lowered the discount rate in order to obtain his high SCC estimate. Anyway, historical precedent suggests that it is Stern who got a high estimate with a low discount rate, whereas higher discount rates were used up to then (and mostly still are), so the idea you express in the above quotation that it is somehow the other way round seems a bit odd without further qualifications.

    Also, I do not really get the point about the precautionary principle. Weitzman's main point about the risk of catastrophic climate change was an impossibility theorem showing that CBA breaks down (under the specific assumption of a fat-tailed ECS). So, as a consequence, there really is no decision criterion, as optimisation falls flat. I don't really see the context to the Stern Report, which used conventional CBA analysis with the PAGE2002 IAM. He got the high SCC estimate with the low discount rate, not because of catastrophic risk (not in the model, anyway). You switch from the Stern Report to the precautionary principle as if it was somehow the same.

    Also, I'd like to hear, for once, a cogent and coherent argument about what it means that we "have to do something now" as a response to the precautionary principle. It's easy to invoke it when it comes to NOT doing something (e.g. emitting huge masses of sulfur oxides for geoengineering). I know that there are some papers considering this in the context of a minimax approach (in order to avoid the indeed catastrophic consequences of an overnight emissions stop, which one could understand to be implied by the precautionary principle); or Pindyck's recent NBER working paper, "Averting Catastrophes: The Strange Economics of Scylla and Charybdis", looking at this issue in a much broader context. But again, this is nothing Stern did, so why does he get invoked so much in this respect, as if "doing something now" were an actual policy?

  25. Interesting post indeed. A few thoughts.

    First, the ad hominem attack in (1) is unfair. A government genuinely interested in evidence based policy formulation would commission research. So would one only interested in supporting policies already decided on. Or are you saying government shouldn't commission research?

    Second, it's important to remember that journalists and academics can be every bit as biased as think tanks. Journalists frequently cherry pick, deliberately misunderstand and sometimes just make up evidence to support their (political) editorial policies. Likewise, academics have their political biases which can be just as damaging (Chicago school anyone?). And even those without political inclinations have strong incentives to find evidence that agrees with their own theoretical commitments.

    Finally, when you say 'policy makers', are you talking about politicians or officials or both? And while think tanks are well known to be biased according to their funding sources and political allegiances, I'd be interested in whether you're including government researchers in your 'in house' definition. They are supposed to be politically impartial, after all...

    1. On your first, that is precisely why I gave an example where genuine research was commissioned! On your second, that is why I put 'relatively' in italics. But I think it can be very dangerous when academics are assumed to be just as biased as everyone else - see climate change. Finally I agree that analysis done by officials need not be biased - if politicians allow it to be. Two of the examples I quote are work done by officials where assumptions do seem to have been chosen to show a particular policy in the best possible light.
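For readers trying to follow the discount-rate exchange in the comments: both Stern and his critics work from the standard Ramsey rule for the social discount rate, and the dispute is over parameter choices, not the formula. The sketch below uses the parameter values most commonly quoted in this debate (they are not drawn from the comments themselves).

```latex
% Ramsey rule for the social discount rate r:
%   \delta = pure rate of time preference (the "PRTP" in the comments)
%   \eta   = elasticity of the marginal utility of consumption
%   g      = growth rate of per-capita consumption
r = \delta + \eta g
% Commonly cited parameter choices:
%   Stern Review:    \delta = 0.1\%,\ \eta = 1,\ g \approx 1.3\%
%                    \Rightarrow r \approx 1.4\%
%   Nordhaus (DICE): \delta = 1.5\%,\ \eta = 2,\ g \approx 2\%
%                    \Rightarrow r \approx 5.5\%
```

The near-zero \delta and \eta = 1 are exactly the "near-zero PRTP" and "elasticity of unity" the commenters object to; a low r weights damages to future generations heavily and so raises the estimated social cost of carbon.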

