Winner of the New Statesman SPERI Prize in Political Economy 2016


Monday, 15 October 2012

When policy ignores evidence: badgers and austerity


In the wood next to our house live at least one fox and maybe some badgers. We have seen badgers around, but it was only this summer that we first saw a badger walking (or more accurately sniffing) its way through our garden. It caused great excitement. A short walk from our house, someone keeps a small number of alpacas on their smallholding. The last time we passed, two were missing. They had been put down, because they had tested positive for TB. The risk that the others, and nearby cattle, might get the disease was too great. Tragically, autopsies found no trace of TB: the initial tests are not 100% accurate.

Badgers get, and spread, TB. As a result, the UK government is about to begin a large scale cull of badgers in Gloucestershire and Somerset. No one likes the idea of killing badgers. But cattle (or occasionally alpacas) dying from TB is no fun either. So the badger cull is just one of those necessary bad things that have to be done to prevent something even worse happening. Environmentalists are up in arms, but that is just because badgers look cute and cattle do not.

Except that is not what the evidence suggests. Following various small scale randomised badger culling trials, the UK government set up an independent group of scientists (the ISG) to evaluate the evidence. In 2007 the government published the report (pdf). It concluded as follows:

“The ISG’s work – most of which has already been published in peer-reviewed scientific journals – has reached two key conclusions. First, while badgers are clearly a source of cattle TB, careful evaluation of our own and others’ data indicates that badger culling can make no meaningful contribution to cattle TB control in Britain. Indeed, some policies under consideration are likely to make matters worse rather than better. Second, weaknesses in cattle testing regimes mean that cattle themselves contribute significantly to the persistence and spread of disease in all areas where TB occurs, and in some parts of Britain are likely to be the main source of infection. Scientific findings indicate that the rising incidence of disease can be reversed, and geographical spread contained, by the rigid application of cattle-based control measures alone.”
On Sunday 30 eminent UK and US scientists published a letter in the Observer. They write: “As scientists with expertise in managing wildlife and wildlife diseases, we believe the complexities of TB transmission mean that licensed culling risks increasing cattle TB rather than reducing it.” One of the signatories described the government’s policy as crazy, and suggested vaccination and biosecurity were a better solution. The Guardian reports the chair of the ISG as saying “I just don't know anyone who is really informed who thinks this is a good idea." The current government chief scientist said: "I continue to engage with Defra [the relevant government ministry] on the evidence base concerning the development of bovine TB policy. I am content that the evidence base, including uncertainties and evidence gaps, has been communicated effectively to ministers." In other words, ministers know what scientists are saying and have decided to ignore them.

So what is going on? One of the strongest pressure groups in the UK is the National Farmers Union (NFU), and many of their members naturally care a great deal about the health of cattle. To say that the NFU has a strong influence on policy at Defra is a bit like saying that the position of the sun has a strong influence on whether it is night or day. The NFU are convinced that culling badgers will reduce the incidence of TB in cattle, and government policy is following that belief, rather than the scientific advice it commissioned. (For more details, see George Monbiot here.) The BBC reports Defra Minister David Heath as saying "No-one wants to kill badgers but the science is clear that we will not get on top of this disease without tackling it in both wildlife and cattle." Dare I say weasel words.

I would not be the first to draw a link between research and evidence in epidemiology and macroeconomics. (Here is Peter Dorman on bees.) Neither is a science where experiments can be easily devised to definitively prove ideas right or wrong.  In both fields evidence can be messy. With austerity we did not have randomised trials: we had one almost globalised trial, starting in 2010, and one eighty years earlier. The evidence this time round is becoming clear: the harmful effects are much greater than many had assumed.

As I know about macroeconomics rather than epidemiology, I’m tempted to think the policy on badgers is the greater political sin. I’m all too aware of the conflicting messages the academic community have been giving policymakers. Although TB in cattle is an emotionally charged issue, I doubt that it attracts the ideological baggage that seems to infect macro. However, perhaps the two cases are not so different. The problem with austerity is that too many people of influence just know that high government debt is always and everywhere a bad thing. Too many think it is just obvious that when a country has difficulties in selling debt that must imply cutting it back as quickly as possible, in the same way that it is obvious that killing badgers must reduce the spread of TB. And perhaps too many people see badger culls as part of a battle between farmers and environmentalists, just as austerity is a weapon in a battle over the size of the state. 

Maybe we are just naive in thinking that as the evidence against austerity accumulates, and as those that were once for it change their mind, the policy will change. As Wolfgang Munchau writes (FT): “As hordes of frustrated European economists know only too well, macroeconomic analysis in general does not play a role in eurozone policy making.” So the policy goes on, in both the UK and the Eurozone, and I do not like to think about what might happen in the US if Romney wins. While I would never advocate a totally uncritical acceptance of the views of scientists, we are an awfully long way from that position, as unfortunately many badgers are about to find out. 

Sunday, 14 October 2012

The Burden of Government Debt


The recent ‘exchange’ on this topic (see Nick Rowe here, here, here and here, and apparently on the other side Dean Baker, Brad DeLong, Paul Krugman and Noah Smith) may just have confused many, so here is my attempt to unconfuse. I’m doing this because (a) the issue is tricky (as I know from my own experience) (b) I’ve written about this before (c) I happen to be teaching this stuff right now (d) Nick Rowe needs some support (although no help). Bottom line: government debt can be a burden on future generations (the current generation can use it to take resources from future generations) even if there is no impact on future output, but it is also likely to reduce future output, so we really should worry about the size of government debt in the longer term. But none of these worries applies when the economy is demand constrained, as it is right now.

There are a number of ways that the current generation can exploit (take resources away from) future generations, and paying themselves using the device of government issuing debt is one. This can happen because generations overlap, and even if there is no impact on future output. Take the most basic example. At any point in time there are two generations: old and young. Only the young work and they produce 2000. They plan to save 1000 for their retirement, so consuming 1000 in each period of their two-period lives. (Ignore interest payments for simplicity.) Now the old at time T get a gift of 100 from the government, paid for by issuing debt to the young. Suppose the young cut their consumption to 900 to purchase this debt. The old at T are better off by 100, and consume 1100 at T. The gift effectively comes from the young at T, but as the young bought an asset, they may think they will be OK at T+1, when they sell that asset and have consumption that period of 1100. The government then pays back its borrowing at T+1 by taxing the now (at T+1) old by 100. The T+1 old do indeed get their money back by selling the asset, but they are also taxed, so their consumption is 1000 in T+1, not 1100 as they had hoped. The economy at T+1 produces and consumes as much as it would have without this temporary creation of government debt, but the sum of consumption in both periods of the old in T+1 is 1900, while the sum of consumption in both periods of the old at T is 2100. It is as if the young just paid the old 100 at T, which is what would happen in a one-off unfunded social security scheme.

Now have the debt paid back at T+2 rather than T+1. The young at T, who are the old at T+1, are indifferent: they consume 900 at T and 1100 at T+1. The young at T+1 buy the debt from the old at T+1, and consume 900. It is when they become old at T+2 that they are worse off, when taxes rise to pay back the debt. If we call ‘society’ at some date the combination of the two generations living at that date, then we can say society at T is better off and society at T+2 worse off. But what if the debt is never repaid? I have misgivings about the relevance of this thought experiment (see here), but for what it is worth the standard result is that future generations get exploited if the real interest rate is greater than the rate of growth of the economy. However, no exploitation need occur if the debt is used not to increase the consumption of the old, but to invest in assets that future generations can benefit from (see here).
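The bookkeeping in these two paragraphs is easy to lose track of, so here is a minimal sketch of it with the repayment date as a parameter. The code and all its names are my own illustration, not anything from the literature:

```python
GIFT = 100

def lifetime_consumption(repay, last_cohort=3):
    """Lifetime consumption (young, old) per cohort in the example above.

    Each cohort earns 2000 when young and plans to consume 1000 in each
    of its two periods. The old at T=0 (cohort -1) receive a debt-financed
    gift of GIFT, and the debt is repaid out of a tax on the then-old in
    period `repay`. Returns {birth period: (young cons, old cons)}."""
    out = {-1: (1000, 1000 + GIFT)}       # the old at T receive the gift
    for s in range(last_cohort + 1):
        young = 1000 - (GIFT if s < repay else 0)   # young buy the debt
        if s + 1 < repay:
            old = 1000 + GIFT             # sell the debt on to the next young
        elif s + 1 == repay:
            old = 1000                    # debt redeemed, but taxed by GIFT
        else:
            old = 1000
        out[s] = (young, old)
    return out
```

With repayment at T+1, the cohort old at T consumes 2100 over its lifetime and the cohort young at T only 1900, as in the text; postponing repayment to T+2 leaves the young at T whole on 2000 and shifts the 100 loss onto the cohort young at T+1.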

All this assumes that there is no impact on what society can produce. But that is not a realistic assumption. In the example above I just assumed that the young would be happy to postpone 100 of consumption to buy some government debt. But suppose instead they were not, and substituted that debt for 100 of their 1000 saving for their retirement. If that saving was in the form of productive capital, then the government debt ‘crowds out’ that capital. Now one for one crowding out like that is probably too extreme, but I think there are good reasons to believe that some crowding out occurs when investment in capital is governed by the availability of savings. This effect occurs not because government debt is in any sense necessarily bad – you can get exactly the same effect from investment in housing depending on who inherits houses. Indeed, if society has too much capital (which is theoretically possible), having government debt crowd out capital would be a good thing. However the balance of evidence is that society probably has too little rather than too much productive capital, which means we should be concerned about the long run impact of government debt.

Need this have any relevance to the debate on stimulus versus austerity? Absolutely not: indeed quite the reverse. No sensible person is arguing that current increases in debt should be permanent – instead they should be reversed, gradually, once the economy recovers. Until the economy recovers, investment is being held back by lack of demand, not a lack of savings, so the crowding out issue does not arise. I think you could make a good case that, by stifling the economy today through austerity, you are damaging productive capacity in the future. So in the current circumstances it is austerity, not increasing debt, which is harming future generations.

Friday, 12 October 2012

What do people mean by helicopter money?


Following a speech by one of the front runners to replace Mervyn King as Governor of the Bank of England, there has been renewed talk about helicopter money. Helicopter money involves the central bank printing money, but that in effect is what Quantitative Easing (QE) does, so what is different about helicopter money? There seem to be two rather different things that people might have in mind.

The first difference is where the money goes. QE, in the UK and to some extent in the US, involves the central bank printing money to buy government debt. Helicopter money is like the central bank sending a cheque to everyone in the economy. The second difference is whether the creation of new money is permanent or temporary. QE, if you ask central bankers, is temporary: when the economy picks up and there is the first sign of inflation, QE will be put into reverse (except, just maybe, in the US). Helicopter money is thought to be permanent: the central bank is sending out cheques, not loans.

Let’s take the permanent/temporary issue first. As I have argued before, calls for money creation to be permanent are in effect calls to increase inflation above target at some time for some period. The reason why most people believe QE is temporary is because (with the possible recent exception of the US Fed) central banks are sticking firmly to their inflation targets. There may be very good reasons why central banks should instead allow inflation to exceed those targets as the economy recovers. But if this is the issue, why not just call for higher inflation? Surely it makes sense to be explicit about what one is trying to achieve, particularly as the benefit of higher inflation for the real economy comes from increasing inflation expectations. We gave up money targets long ago, and quite rightly so. For the central bank to destroy some proportion of the government debt they now own and just hope this gave them the amount of extra inflation they desire would be like going back to money targets. (1)

The first issue, of where the money being printed is going, is more interesting. It reflects an understandable view that it would be better to print money and give it to consumers who would spend it (helicopter money), rather than using it to buy government debt (QE) which may reduce long term interest rates which may help stimulate the economy. The second route has been tried and has not been that successful, so why not try the first route?

It is useful to think about the circumstances in which the two routes are exactly equivalent. To focus on this, let us assume helicopter money is temporary: the central bank sends out cheques, but the government says it will get the money back in a few years’ time by raising a poll tax. (This is like the proposal from Miles Kimball.) If consumers are Ricardian, they will save the entire cheque, because only by doing so can they pay the future poll tax without cutting their consumption. How will they save the money? Let us suppose they buy government debt. Then this is exactly the same as QE, except that consumers hold government debt temporarily instead of the central bank. This seems to be what David Miles had in mind in the speech that Stephanie Flanders refers to here, when he says: “If helicopter drops of money are reversed when their impact shows up very largely in prices and not in activity, the economic difference with conventional QE largely evaporates.”

Yet we can now see why in reality the two may not be equivalent, because consumers may not be Ricardian. In particular, some may have been asking their local bank for a loan to buy a car, and the bank had refused because it has become very risk averse since the crisis. For these credit constrained folk, the central bank’s cheque is just like the loan they couldn’t get. So they use the cheque to buy the car, and reduce their future consumption to pay the poll tax later. Instead of buying government debt, they have bought something real, which will increase aggregate demand for sure.
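The Ricardian/credit-constrained distinction amounts to a couple of lines of arithmetic. A minimal sketch, with hypothetical incomes of 1000 per period, a zero interest rate, and the cheque equal to the later poll tax:

```python
def consumption_path(cheque, tax, constrained):
    """Two-period consumption for a household earning 1000 each period,
    given a helicopter cheque now and a poll tax later.

    Illustrative numbers only: the Ricardian household saves the cheque
    and uses that saving to pay the tax; the credit-constrained one
    spends the cheque now (the car it couldn't borrow for) and pays the
    tax out of later consumption."""
    if constrained:
        return (1000 + cheque, 1000 - tax)
    return (1000, 1000)
```

With `constrained=False` the path is (1000, 1000), unchanged from the no-cheque baseline, which is the QE-equivalence Miles describes; with `constrained=True` it is (1100, 900), so demand today rises by the full cheque.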

If this is the reason people call for helicopter money, then I have a lot of sympathy, but also one question: how does this differ from an expansionary fiscal policy combined with further QE? Instead of the central bank sending people cheques, the government can send the cheques using money borrowed by selling debt, and the central bank can buy the debt by printing money (i.e. QE). In this sense, helicopter money is just another name for a fiscal stimulus combined with QE. We have the QE, so why not call for fiscal stimulus rather than helicopter money?

 (1) There may be a case for announcing (in some form) higher future inflation, and then destroying the debt, because that commits the central bank to higher inflation. What does not make sense is to destroy the debt and pretend you are not going to increase inflation at some point.

Tuesday, 9 October 2012

Multipliers: using theory and evidence in macroeconomics.


Paul Krugman and Jonathan Portes have picked up on the IMF’s recent analysis (see Box 1.1) of multipliers. The IMF (or more specifically Olivier Blanchard and Daniel Leigh) say

“... earlier analysis by the IMF staff suggests that, on average, fiscal multipliers were near 0.5 in advanced economies during the three decades leading up to 2009. If the multipliers underlying the growth forecasts were about 0.5, as this informal evidence suggests, our results indicate that multipliers have actually been in the 0.9 to 1.7 range since the Great Recession.”
Jonathan paraphrases this as the IMF saying: "DeLong et al. were right; we were wrong". They, and many others besides, underestimated the impact of austerity because multipliers were underestimated.

The point I want to pick up on is why 'DeLong et al.' got it right. In most cases it was not because we had undertaken a superior analysis of the empirical evidence. Instead we were thinking about basic macroeconomic theory. In particular, we recognised that a world where nominal interest rates are fixed, either because they cannot go below the Zero Lower Bound (ZLB), or because the rate is determined for the Eurozone as a whole, is very different to a world where monetary policy is unconstrained.

One of the things I did when I recently spent a week at the European department of the IMF was talk about multipliers. What I said was a version of this post. With fixed real interest rates the starting point for the government spending multiplier in New Keynesian theory is one, and most elaborations make it larger than one. These elaborations include that austerity will reduce inflation, which with fixed nominal interest rates could mean higher real interest rates.[1]  You can then add something for hysteresis effects, or any supply side effects if the government spending is investment rather than consumption. You can also add an effect from credit constrained households which, as Paul Krugman points out, are a key feature of deleveraging. So theory suggests something significantly larger than one, which is exactly what the IMF now finds.
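To see in stylized form how the credit-constrained elaboration pushes the multiplier above one, consider a simple hand-to-mouth calculation. This is my own illustration, not the IMF's method or a model from the post: a fraction of households spend a fixed proportion of each extra unit of income, so government spending generates further rounds of spending.

```python
def spending_multiplier(constrained_share, mpc):
    """Stylized government spending multiplier at fixed interest rates.

    Solves dY = dG + constrained_share * mpc * dY for dY/dG, where
    `constrained_share` of households spend `mpc` of any extra income.
    Illustrative parameter names and values, not an estimate."""
    return 1 / (1 - constrained_share * mpc)
```

With no constrained households the multiplier is the New Keynesian starting point of one; with, say, 40% of households spending 80p of each extra pound it rises to about 1.47, which happens to sit inside the IMF's 0.9 to 1.7 range (the parameter values are of course mine, chosen for illustration).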

It would be overstating things to say that the IMF's analysis proves this theory is correct. It is just not detailed enough to be a very good test. New Keynesian theory suggests austerity achieved through temporary income tax increases will have a smaller multiplier, as I discussed here.  Tax changes that impact directly on inflation will have different impacts, as I suggest here. The IMF's analysis looks at the budget deficit as a whole, so it cannot discriminate in this way.  However New Keynesian theory does suggest that multipliers can be large, and the IMF analysis suggests they have been large.

Those of us who got it right may therefore have simply had the insight to use standard theory. Those who used multipliers of around a half for fiscal consolidation packages that leaned heavily on spending cuts seemed to discount this theory, and instead may have been following evidence that was not applicable at fixed nominal interest rates.[2] While there is plenty wrong with New Keynesian theory, it is important to note when it gets things right. It is also worth noting an occasion where thinking about macroeconomic theory can be rather more useful than naively following the evidence of the past.



[1] As my post made clear, government spending on domestically produced goods will have the same multiplier in an open economy. Although the multiplier will be lower because government spending will have some import content, any real interest rate effect will be larger in a flexible exchange rate open economy, because real interest rates also influence the real exchange rate.
[2] Antonio Fatás argues that there were plenty of earlier empirical studies of multipliers that suggested numbers well above 0.5. In my view it makes little sense estimating multipliers that do not control for the reaction of monetary policy. 

Monday, 8 October 2012

DSGE critics and future directions for macro


Microfounded macromodels, aka DSGE models, hold a dominant position in academic macro, and their influence in central banks is increasing. (The Bank of England’s core model is DSGE, but the approach has not yet quite achieved a similar dominance in the Fed or elsewhere.) At the risk of gross oversimplification, you can class the critics of this situation into two groups: the reformers and revolutionaries. The reformers (like myself) see DSGE analysis as always forming a central part of macro, but want greater diversity, with in particular more analysis using time series econometrics. The revolutionaries want to confine DSGE analysis to a much more minor role, if not the bin.

In a sense debate between these critics is a bit pointless. We are standing on the same train platform, agreed on the direction of travel, but at the moment the train shows no sign of moving. There is a danger that we spend too much time arguing about when the train should stop, and not thinking enough about how to get it going in the first place. Nevertheless I think it is worth having the debate, if only because of tactics. Those DSGE modellers who are sympathetic to reform can easily become defenders of the status quo in the face of more extreme attacks.

So let me give one argument for reform rather than revolution that I have only made implicitly before. When I studied macro and then began working as a macroeconomist, mainstream macro was divided into schools of thought. In an environment where both inflation and unemployment were high, you had monetarists saying that you just needed to control the money supply, some Keynesians arguing that we should focus on unemployment because it had nothing to do with inflation, and New Classicals saying unemployment was not even a problem. Each school had its models, and each claimed empirical backing. Econometric analysis was not strong enough to discriminate between schools. Different schools tended to talk across each other, and anyone trying to look for common ground or ultimate sources of disagreement had a hard time, and ended up writing lists. For a policymaker or student it must have seemed like a nightmare, and no wonder many chose which school to follow based on its ideological associations.

In my view microfoundations brought some order to this chaos (see this from here). Now for heterodox economists who think the microfoundation approach is fundamentally flawed, this is a problem: we are looking at alternatives through the wrong lens. But for those who think that, for at least some problems, basic micro reasoning is a good place to start, microfoundations provided a common language with which to discuss and appreciate different points of view. Note that this is not an argument for complete synthesis, but just a shared language.

As Diane Coyle noted about the conference we both recently attended, the UK’s social science funding agency (the ESRC) is considering what kind of research in macro is needed post crisis, and therefore what funding initiatives might be appropriate. Here I want to present a cautionary tale. Macro is dominated by US economists of course, but one area where the UK was strong was in the building and empirical evaluation of econometric macromodels. This reflected strength in time series econometrics (David Hendry, Hashem Pesaran, Andrew Harvey to name just three), but was embodied in the ESRC Macroeconomic Modelling Bureau, directed by Ken Wallis from 1983 to 1999. However with the intellectual tide moving ever more strongly in favour of calibrated DSGE models, macro papers by those involved with this area were not hitting the top journals. Partly as a result, the ESRC (which really means the academic and other macroeconomists advising the ESRC) decided to discontinue funding for the centre.[1]

I thought that was a huge mistake at the time, and that conviction has been reinforced by recent events.[2] What the Bureau did was bring modellers from policy institutions and academics together around the concrete endeavour of comparing the models used by those institutions. At the very least, modellers became aware of alternative perspectives, and models used by policymakers were subject to critique. This has now been lost. The moral I draw from this mistake is that it is dangerous to sacrifice strengths to fashion. The UK retains strengths in time series macro: one of the strongest papers at the conference was presented by John Muellbauer, whose work on financial liberalisation and consumption I have discussed before. However the UK also has a number of economists producing strong work in the DSGE tradition, and this should also be encouraged. What the UK really lacks (and the key message from the report Diane cites) is academic macroeconomists, and the reason for that is for another post.





[1] The Centre was co-funded by the Treasury and the Bank of England, and the absence of strong support from these institutions may also have been important in this decision. Both institutions were of course subject to the same intellectual tide, and may have had mixed feelings about being open to external critique.
[2] Unfortunately this was not the first time lack of support from the ESRC killed off a very innovative and productive macro research team. Many of the issues involved in optimal policy analysis in rational expectations models were first investigated by David Currie and Paul Levine in the 1980s, but funding support for this team was not renewed by the academics advising the ESRC. 

Sunday, 7 October 2012

Paternalism and Irrationality


I have talked before about how most economists have an instinctive dislike of paternalism: a dislike of the idea that someone (usually someone in authority) knows better what is good for people than people themselves. I think this is a very good instinct to have, but sometimes it has to be set to one side. Economists should know this, because they often use economic theory in a paternalistic fashion.

How can I write this last sentence? After all, are economists not very careful to focus on agents’ revealed preferences, rather than any objective model of what is good for people? They prefer what people actually choose as measures of well being (like consumption), rather than some scheme of what is involved in the good life dreamed up by some philosopher.

However, as Daniel Hausman discusses in a recent book, the idea that a preference based measure of individual welfare can ever be a completely adequate measure of individual well-being is deeply problematic. There are many reasons for this, but the one most economists recognise is where individuals clearly act in ways that are not in their own self interest (or in the interest of others). The example I used before is seat belts.  But behavioural economics and experimental data are giving us many more. For example, an individual’s choice may be influenced by the presence of irrelevant alternatives: the choice between A and B may be influenced by whether C is an option, even though A and B are both preferred over C.



These features of behaviour pose an obvious problem for any preference based measure of well being. What economists typically do, faced with this dilemma, is one of two things. The first is to deny the premise that agents’ choices in these cases are inconsistent with their own self interest. Failing that, the second response is to try to correct the preference measure of welfare to get round the problem. For example, if someone’s preference for not wearing seat belts is due to ignorance about the statistics, then we might be justified in imagining what their views would be if they did have this information. We can talk about welfare as involving maximising preferences that are not based on false beliefs. The idea is that maximising these purified preferences then maximises individual well-being.

There are a lot of problems with this approach, which Hausman discusses in detail. However, it strikes me that once this attempt is made, economists themselves tend to be paternalistic. Because how do we judge whether preferences need correcting? Often we simply ask whether choices are consistent with rational choice theory i.e. the basic axioms of much of microeconomics. If they are not, then any preferences that violate these axioms need correcting. What we are doing here is elevating rational choice theory, or more generally the micro theory we typically use, to an objective theory of well being.

Let me give one final, and rather more complex, example. There is a lot of evidence (and has been for some time e.g. Ainslie (1992), Picoeconomics, CUP.) that individuals discount over time in a hyperbolic way, rather than in the simple exponential manner which captures impatience as a constant discount factor. Hyperbolic preferences mean that if the choice is between A now and B in x periods’ time, we may choose A, but if it is between A in y periods’ time and B in y+x periods, we choose B. Now hyperbolic preferences cause problems because they are clearly time inconsistent. From a welfare point of view, we have to ask who is the individual whose welfare we are maximising: the individual today with one set of preferences, or the individual tomorrow with different preferences. Faced with this dilemma, economists and governments typically ignore the preferences agents actually have, and carry on using a constant discount rate. Yet if this is not what people actually do, does it make sense to discount at all?
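The preference reversal is easy to check directly. A minimal sketch, with hypothetical payoffs and discount parameters of my own choosing (nothing here comes from Ainslie's data):

```python
def exponential(value, delay, delta=0.9):
    """Exponential discounting: a constant per-period discount factor."""
    return value * delta ** delay

def hyperbolic(value, delay, k=1.0):
    """Hyperbolic discounting: value / (1 + k * delay)."""
    return value / (1 + k * delay)

# A = 100 now versus B = 110 one period later (so x = 1).
# Chosen immediately, the hyperbolic discounter takes A (100 beats 55),
# but with both options pushed 10 periods into the future (y = 10),
# B wins (roughly 9.17 beats 9.09): the choice reverses with delay.
# The exponential discounter never reverses, because the ratio of the
# two discounted values does not depend on the common delay y.
```

Running the comparisons confirms the reversal for the hyperbolic discounter and its absence for the exponential one.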

In this and many other ways, economists appear quite happy to ‘correct’ the preferences people have, and instead make judgements on the basis of the preferences they should have. That may be the sensible thing to do, but it seems pretty paternalistic to me.

Thursday, 4 October 2012

Was the financial crisis the fault of DSGE modelling?


I am, like many, in awe of Bank of England director and economist Andy Haldane. However I did wince a bit at his recent Vox piece. He looks at the extent to which economists are to blame for the financial crisis, and he makes two interrelated claims. Having noted that central banks have traditionally been concerned with the “interplay of bank money and credit and the wider economy”, he suggests that this changed in the decade or so before the crisis. He then says

“Two developments – one academic, one policy-related – appear to have been responsible for this surprising memory loss. The first was the emergence of micro-founded dynamic stochastic general equilibrium (DSGE) models in economics. Because these models were built on real-business-cycle foundations, financial factors (asset prices, money and credit) played distinctly second fiddle, if they played a role at all. 
The second was an accompanying neglect for aggregate money and credit conditions in the construction of public policy frameworks. Inflation targeting assumed primacy as a monetary policy framework, with little role for commercial banks' balance sheets as either an end or an intermediate objective. And regulation of financial firms was in many cases taken out of the hands of central banks and delegated to separate supervisory agencies with an institution-specific, non-monetary focus.”

There is obviously some truth in this, but are these really major factors behind the financial crisis? Imagine looking at the following chart in 2005 or 2006. The increase in leverage that began in 2000 is both dramatic and unprecedented. (Much the same is true for the US.) Was this ignored because central bankers said this variable is not in their DSGE models? In my experience those involved in monetary policy look at a vast amount of information, particularly on the financial side, even though none of it appears in standard DSGE models, and even though their ultimate target might be inflation. For some reason monetary policy makers discounted the risks this explosion in leverage posed, or felt for some reason unable to warn others about it, but I very much doubt these reasons had anything to do with DSGE models.

UK Bank Leverage. Source: Bank of England Financial Stability Report June 2012
I say this because to place too much weight on the culpability of DSGE models and inflation targeting can lead to overreaction, and may sideline more fundamental issues. (I don’t, by the way, think Haldane himself falls into either trap: see this interview for example.) Let me take overreaction first. It is one thing to claim, as I have, that the microfoundations approach embodied in DSGE models encouraged macroeconomists to avoid modelling difficult (from that perspective) issues like the role of financial institutions in credit provision. It is quite another to suggest, as some do, that DSGE models are incapable of doing this. This second claim was false before the crisis (e.g. Bernanke, Gertler & Gilchrist, 1999), and has clearly been shown to be false by the post crisis explosion of DSGE work on financial frictions. Forming a rough consensus around a reasonably simple and tractable model of the crisis that can also assess the subsequent policy response will not happen overnight (it never does), and I suspect it will involve tricks which microfoundation purists will complain about, but I’m pretty certain it will happen.

Andy Haldane talks about the need to model the interconnections (networks) of actors and institutions in order to understand how sudden crises can emerge. This must be right, and recent work[1] that begins to do this looks very interesting. However what seems to me critical in avoiding future crises is to understand why leverage increased (and was allowed to increase) in the first place, rather than the specifics of how it unravelled. As I suggested here, we may find more revealing answers by thinking about the political economy of how banks influenced regulations and regulators, rather than by thinking about the dynamics of networks. We should also look at the incentives within banks, and why short term behaviour in the financial sector may be increasing, as Haldane himself has suggested. Investigating networks is clearly interesting, important and should be pursued, but other avenues involving perhaps more conventional economics and political economy may turn out to be at least as informative in understanding how the crisis was allowed to develop.

Postscript

I wrote this before reading this from Diane Coyle, which is well worth reading if you think all microeconomists must be in favour of DSGE. We both went to the conference she mentions, and my own rather different reactions to it partly inspired my post. I'd like to say more about this quite soon.






[1] See, for example P Gai, A Haldane and S Kapadia, Journal of Monetary Economics, Vol. 58, Issue 5, pages 453-470, 2011, and other recent work by Kapadia.