My guess is that most empirical work by economists contains at least one error somewhere. Errors become almost inevitable when large and diverse data sets are involved, like those constructed by Reinhart and Rogoff or by Thomas Piketty. So finding these errors is not headline news. Nor, for the same reason, is it particularly embarrassing for the economists concerned when these errors are found, particularly if they have made their data public or available to others.
If you think that shows empirical economists in a bad light, the best economic theorists can also make errors. One celebrated paper from my youth, Does Fiscal Policy Matter? by Alan Blinder and Robert Solow, contained an algebraic error (pdf). And lest you think this reflects badly on economists alone, probably the most famous mathematical event of the last few decades - the Andrew Wiles proof of Fermat’s Last Theorem - contained what a journalist would call an error in its original form.
It is also often necessary to adjust data for a number of reasons. In an ideal world each adjustment would be carefully documented, but in practice few are. Of course official data series often involve many similar adjustments before they are published. If you can, it is always a good idea to talk to the statisticians involved in constructing your data before you use it, although it can put you off doing any empirical work ever again!
Errors and adjustments only matter if they influence key results. The Blinder and Solow algebraic error was not critical to the main results in their paper. The gap in the original Andrew Wiles proof was critical, but after what must have been an agonising year, he found he could bypass the problem and the revised proof was sound. The Reinhart and Rogoff spreadsheet error had a relatively minor impact on its own - the really important issues lay elsewhere.
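The point about errors only mattering when they move the key results can be made concrete with a toy calculation. The figures below are invented purely for illustration - they are not Reinhart and Rogoff's actual data - and the sketch simply shows the generic shape of a spreadsheet slip: an averaging range that accidentally stops a few rows short, and a check on how much the headline number moves as a result.

```python
# Hypothetical growth rates for 20 countries (illustrative values only,
# not taken from any real study).
growth = [2.1, -0.5, 1.8, 3.0, 0.2, 1.1, 2.4, -1.2, 0.9, 1.5,
          2.0, 0.7, 1.3, 2.8, -0.3, 1.9, 0.4, 2.2, 1.0, 1.6]

# The intended calculation: average over all 20 countries.
full_mean = sum(growth) / len(growth)

# The slip: the spreadsheet range stops five rows short,
# silently dropping the last five countries.
truncated = growth[:15]
truncated_mean = sum(truncated) / len(truncated)

# The question that actually matters: how far does the error
# move the headline number?
shift = full_mean - truncated_mean
print(f"full: {full_mean:.2f}, truncated: {truncated_mean:.2f}, shift: {shift:.2f}")
```

With these made-up numbers the slip shifts the average by well under a tenth of a percentage point - an error, but not one that would overturn a conclusion. Whether a real error matters always comes down to a comparison like this, not to the mere existence of the mistake.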
With all this in mind, I have very mixed feelings about Chris Giles’s Financial Times splash. I applaud a journalist who is unwilling to take academic results or official figures on trust, and who is prepared (and, I guess, has the resources) to get their hands dirty with data. Chris has consistently done this. For example, when David Cameron claimed mysteriously that George Osborne’s first austerity budget would increase public sector employment compared to Labour’s plans, Chris got to the bottom of how this trick was achieved. Yet I groaned when reading his latest FT article, with its emphasis on “mistakes and unexplained entries”. As far as I can see (read Ryan Avent here, the longer Chris Giles post here, and Jonathan Hopkin here), the only issue of substance involves trends in the UK wealth-income ratio, but of course an article headlined ‘Data sources on UK wealth-income ratio differ’ would not have had the same punch.
Now you might say, as journalists always do, that people who become famous - including economists like Reinhart and Rogoff or Piketty - have to accept having their work treated in this way. They become ‘fair game’. I actually think that is wrong. Misleading reporting and commentary - by journalists or bloggers - is what it is: misleading. The fact that it is commonplace does not excuse it. I understand the temptation to hype up simple spreadsheet errors even when they have no significant consequence, but I’m glad to say I did not succumb to this temptation in the case of the Reinhart and Rogoff spreadsheet affair.
It is perhaps worth noting one other point. The Reinhart and Rogoff affair became notorious because governments had used this work to justify their austerity policies. The spreadsheet error was brought to light as a result of work by academics rather than by any journalist. In the case of Piketty, no policies have yet been implemented using the results in his book as justification. In that rather important sense, the two stories are different. Whether this asymmetry reveals anything of interest I will leave you to judge.