When heterodox economists want to argue against the mainstream, they
typically
start by talking about the failure to predict the financial crisis.
This can be done in two different ways. One is to invoke Minsky, and
talk about risk perception, leverage and the failure of the banking
sector. I think this argument is accepted by mainstream
macroeconomists, which is why there has been an explosion of DSGE and
other microfounded analysis putting a financial sector into
macromodels.
I just want to make two points on this. First, it is a sin of
omission rather than signifying some fundamental flaw in existing
macroeconomic theory. That is why financial sectors can be added to
existing DSGE models: nothing has to be torn up and thrown away.
Second, the fact that leverage was allowed to increase substantially
before the crisis was not something that most macroeconomists were
even aware of let alone approved of. As I have said before, if I had
seen a chart showing bank leverage (like the one shown here)
before the crisis I would have been extremely worried. It is true
that Rajan’s warning
in 2005 was famously knocked down by Summers, but it is a mistake to
assume that most academic macroeconomists either agreed or disagreed
with Summers: it just wasn’t their field. As Andy Haldane says,
before the crisis
“prudential regulation was a niche academic issue. In my view, this absence of academic debate and challenge contributed, in no small measure, to international prudential standards being set at levels which were, with the benefit of 20/20 hindsight, not just too low but ridiculously too low.” (my italics)
There is a second strand to the heterodox attack on mainstream
economists concerning the financial crisis, and that invokes
not Minsky but Wynne Godley. Here I think heterodox economists also
have a point, but they tend not to put it very well and, perhaps as a
result, it has not yet impacted on the mainstream. Godley had, since
the start of the millennium, raised alarm bells about rising US debt.
The US savings ratio had been falling since the 1980s, but the
personal sector had only gone into deficit at the end of the 1990s.
(Michalis Nikiforos has a discussion.)
Godley said this was not sustainable, and eventually he was proved
right.
It is at this point that most heterodox accounts go wrong. They focus
on the model he used to do his analysis, a model that tracks sector
balances and their implications for sector wealth, but which
otherwise has minimal behavioural content. This has become the
“Stock-Flow Consistent methodology”, and it is inferred that
mainstream models fail to impose stock flow consistency. But this is
accounting, not economics, and was not unique to Godley. When I was a
young economist at the Treasury in the 1970s, their UK model was
‘stock-flow consistent’, and forecasts routinely looked at sector
balances. The model I built in the 1990s, which unlike the Treasury’s
model included many of the theoretical features we now associate with
New Keynesian models, also tracked sector balances. In terms of
theory these were mainstream macromodels, but not microfounded
macromodels.
It is trivial to add this accounting to any macromodel. The reason it is typically ignored when it comes to the personal sector is that in most mainstream models these balances are of no consequence. Steve Keen points this out, but does not take the next step of asking why this is. The answer lies in the simplicity of the dominant mainstream model of intertemporal consumption, where there is no desired level of wealth or debt which consumers try to attain.
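To make concrete what this accounting involves, here is a minimal sketch (a toy example of my own, not Godley's model): a two-sector economy in which each sector's flow balance cumulates into a stock, and the balances necessarily sum to zero. The behaviour is deliberately crude, a fixed propensity to consume, precisely to show that the accounting itself is trivial and that the stocks only matter once behaviour is allowed to depend on them.

```python
# A toy illustration of stock-flow consistent accounting (not Godley's model).
# Two sectors: households and government. Each period the sector balances
# (net lending) sum to zero by construction, and each balance cumulates into
# a stock, so household financial wealth always equals government debt.

def simulate(periods=20, g=100.0, tax_rate=0.2, mpc=0.8):
    household_wealth = 0.0   # stock: cumulated household net lending
    government_debt = 0.0    # stock: cumulated government deficits
    for _ in range(periods):
        # Flows: income solves income = consumption + government spending,
        # with consumption a fixed share of disposable income.
        income = g / (1 - mpc * (1 - tax_rate))
        taxes = tax_rate * income
        consumption = mpc * (income - taxes)

        household_balance = (income - taxes) - consumption   # household saving
        government_balance = taxes - g                       # surplus (negative here)
        assert abs(household_balance + government_balance) < 1e-9

        # Stocks: flows cumulate into wealth and debt.
        household_wealth += household_balance
        government_debt -= government_balance
        assert abs(household_wealth - government_debt) < 1e-9

        # Note: consumption above never depends on household_wealth, so the
        # stocks are "of no consequence" for behaviour - which is exactly the
        # situation in many mainstream models.
    return household_wealth, government_debt

print(simulate())   # roughly (888.9, 888.9) with the default parameters
```

The point of the sketch is only that imposing this consistency is easy; whether the stocks feed back into behaviour is a separate, and much harder, modelling question.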
To understand the behaviour of both consumption and financial
balances over the last few decades you need to understand the
changing nature of credit availability from the financial sector. The
best analysis I have seen of that comes from mainstream
macroeconomists, and in particular Chris Carroll and John Muellbauer.
(I first wrote about this here,
but subsequently here.)
Simply having a fixed proportion of credit constrained consumers does
not get you there, because it cannot model what happens when credit
constraints change.
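As a rough illustration of that last point (again a toy of my own, not Carroll's or Muellbauer's analysis), consider households that would like to spend more than their income but can only borrow up to a credit limit. With a fixed limit you simply get a fixed share of constrained consumers; let the limit itself move and aggregate consumption and debt move with it.

```python
# A toy example (not Carroll's or Muellbauer's model) of why a fixed share of
# credit-constrained consumers cannot capture a change in credit conditions.
# Households want to consume `desired`, but spending above income requires new
# borrowing, which is capped by a credit limit on total debt.

def consumption_path(income=100.0, desired=120.0,
                     limits=(10, 10, 60, 60, 10, 10)):
    debt = 0.0
    path = []
    for limit in limits:                        # permitted debt each period
        headroom = max(limit - debt, 0.0)       # new borrowing still available
        consumption = min(desired, income + headroom)
        debt += max(consumption - income, 0.0)  # deficits financed by new debt
        path.append((round(consumption, 1), round(debt, 1)))
    return path

# Credit loosens in period 3 and tightens again in period 5: consumption jumps
# above income while the limit is relaxed, then falls back to income once the
# tighter limit binds again - leaving the debt stock permanently higher.
print(consumption_path())
# [(110.0, 10.0), (100.0, 10.0), (120.0, 30.0), (120.0, 50.0), (100.0, 50.0), (100.0, 50.0)]
```

A model with a fixed fraction of constrained households pins this behaviour down regardless of the limit; here the loosening and subsequent tightening of credit does all the work.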
Why did most mainstream macro ignore this work, and why, as far as I can see, does it continue to do so? It is not because their analysis
sweeps aside the basic intertemporal consumption analysis of
mainstream theory. Carroll’s analysis builds on that model and adds
some basic real world elements like finite lives and income
uncertainty. I think this work was/is ignored because incorporating
this analysis into microfounded models raises serious problems
associated with heterogeneity across age and income. As Carroll shows
it is possible to do, but not easy to do. Playing around with habits
or some ‘rule of thumb consumers’ is much easier. Blanchard has
recently
made a similar point in his critique of DSGE models.
In fact I would go further than this. I argue here
that it was this microfoundations methodology that allowed mainstream
macro to ignore the fall in personal savings in the US that preceded
the crisis, because it is a methodology that allows you to be very
selective about which empirical features you do or do not explain. Because the best explanation for this decline in the savings ratio is easier credit, any general equilibrium analysis would have required modelling the financial sector. For this reason I speculate that had the microfoundations revolution been more tolerant of other methodologies (as it was in the UK until the end of the 1990s), macroeconomists may well have done more to integrate the financial sector into their models before the crisis. That is a rather different critique from the one typically offered by heterodox economists, but it is no less fundamental.
Mainstream macroeconomics' addiction to the microfoundations methodology has given
heterodox economists an opportunity. If mainstream macro continues to
shun what it calls policy models (models that use aggregate
relationships justified by an eclectic mix of theory and data), then
this space can be occupied by others. But to do that heterodox
economists have to stop being heterodox, by which I mean defining
themselves by being against almost all mainstream theory. As Jo
Michell writes,
“The problem with heterodox economics is that it is self-definition
in terms of the other”. As the scope and diversity of mainstream
theory gets larger and wider, the space that can be occupied by those
who reject the mainstream shrinks.
SWL,
It's quite inaccurate to portray stock-flow consistent models as having "minimal behavioural content". Noah Smith on Bloomberg said the same thing, and then a few sentences later said that the approach has too many parameters.
It's simply not the case that there is too little behaviour. If you go deep into Godley and Lavoie's models, you'll see a lot of behavioural hypotheses and analysis around them.
I think you are absolutely right that mainstream macro simply wasn't paying attention. I am going to repeat a point I have made many times and can never understand why it isn't one of the first things said about mainstream macro: mainstream macro never gave much thought to where shocks come from, and occupied itself instead with the question of how to respond to them. When the crisis hit, everybody asked, why didn't you guys see this coming? To do that, one would have needed to model the sources of crises, which mainstream macro simply was not trying to do.
"This has become the “Stock-Flow Consistent methodology”, and it is inferred that mainstream models fail to impose stock flow consistency. But this is accounting, not economics, and was not unique to Godley."
I don't really understand what you're arguing here. Godley & Lavoie are pretty explicit that their approach follows the kind of macroeconometric/accounting approach used by economists in the 1970s, but they think it was a mistake for macro to retreat from this entirely - same as you.
Also, SFC models have plenty of behavioural equations; they're just not the same as mainstream ones. One (growing) strand of SFC merges with ABM, which is a much better way of microfounding models than simply insisting the macro behaves as a micro agent, which is what the mainstream do.
When you say DSGE models can include a financial sector, can you give an example? I've seen credit 'frictions' such as cost of collection, and models which impose some exogenous shock on borrowing, but nothing like the kind of financial sector you get in SFC models, which is embedded into the models from the start rather than imposed on them.
In any case, there's a big element of post hoc tinkering in all of this. Godley, Minsky, Keen etc. made ex ante predictions about the crisis based on their models and were right. After the crisis, mainstream economists have insisted that their models can account for the things that caused it. But is there anything that could happen which couldn't be rationalised within a DSGE framework after the fact? Are these new DSGE models making any novel, out of sample predictions that can help us understand the post-crisis world? Or are they just restating things we already know within a particular framework?
Jakab-Kumhof (2015) Bank of England Working Paper is a good example
http://www.bankofengland.co.uk/research/Documents/workingpapers/2015/wp529.pdf
Our paper with Michael Kumhof tries to include a banking sector in a DSGE model in a non-mainstream way:
http://www.bankofengland.co.uk/research/Documents/workingpapers/2015/wp529.pdf
If you think that mainstream macro needs to learn from the crisis and change, doesn't that mean you want to see "post hoc tinkering"? And isn't being able to rationalise things within a DSGE framework a good thing? How would it be better if DSGE was unable to account for financial crises?
Also, what do you have in mind by novel out-of-sample predictions? A model might say: here's why I think crises happen, and thus here's what I think could be done to prevent them, for example. What's the out-of-sample prediction, and how are you going to test it?
@Luis
'...if you think that mainstream macro needs to learn from the crisis and change, doesn't that mean you want to see "post hoc tinkering" '
No.
What is being asked for is a different theory. Initially, of course different hypotheses.
'... isn't being able to rationalise things within a DSGE framework a good thing? '
No. It is probably a fatal flaw.
I suspect you are confusing an accounting argument with a modelling/theoretical argument. See points (3) and (4) below.
Also, from Narayana Kocherlakota:
On the Puzzling Prevalence of Puzzles
------------------
(1) It is easy to obtain confirmations, or verifications, for nearly every theory-if we look for confirmations.
(2) Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory--an event which would have refuted the theory.
(3) Every 'good' scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
(4) A theory which is not refutable by any conceivable event is nonscientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
[Karl Popper, Conjectures and Refutations]
One doesn't have to take everything Popper says about falsification, and especially induction, too literally. Falsification is a much more nuanced issue than the above quote would suggest. However, to a first approximation the points above are taken seriously across scientific and engineering disciplines.
NB: I cannot speak for heterodox economics because I have very little familiarity with its various varieties. I am coming from outside economics.
"What is being asked for is a different theory"
that's begging the question. If you don't start by assuming DSGE needs discarding, you don't object to attempts to see whether DSGE can be adapted to make it fit for purpose.
edit:
Link to Kocherlakota:
On the Puzzling Prevalence of Puzzles
https://sites.google.com/site/kocherlakota009/home/research/puzzling-puzzles
further: you can rationalise everything with an argument constructed of words; you can rationalise everything with a mathematical model that does not try to build from microfoundations and impose whatever else characterizes DSGE; whatever else 'heterodox' economists have to offer is going to be capable of rationalising everything. Nobody would say a "fatal flaw" of using words is that they can rationalize everything. The point is that you have to *say something* that has some content, and if that thing is falsified then that thing you said is wrong. How is working within the framework known as DSGE different?
And how exactly do you think the DSGE framework would be better in any sense, if it was incapable of capturing endogenous crises in the financial sector?
Herman expressed most of my viewpoint quite clearly, so I won't dwell on every point.
"A model might say: here's why I think crises happen, and thus here's what I think could be done to prevent them, for example. What's the out-of-sample prediction, and how are you going to test it?"
As I said, some testable predictions for what happens after the crisis would do the trick. Basically anything that makes a 'prediction' beyond being model #243 to get something that sort of looks like the crisis in response to an exogenous shock, which is no explanation at all.
That BoE paper is the best attempt I've seen, but ultimately DSGE is hamstrung because there always has to be an equilibrium solution to the optimisation problem. This makes it incredibly difficult to add complications without disrupting the solution (hence why it has taken decades and a crisis to incorporate a financial sector). And ultimately every new innovation has to make further unrealistic assumptions to satisfy the end goal of finding a solution. For example in that BoE paper banks do not make any profit and there is no government debt. And the paper still uses exogenous shocks to model the crisis itself.
What SFC and ABM models share is that it is quite easy for even the most junior researcher to add in comprehensive financial sectors, different types of behavioural functions, and different sectors of the economy. This isn't the only reason I prefer them, but it is definitely a plus compared to the slow-moving mainstream macro literature, which seems to lag behind developments in both non-mainstream economics and the real world itself.
I agree with your second para, and it's a point I've made many times. But it argues for a dual approach, with both non-microfounded aggregate models and DSGE.
Luis, you're just ignoring my point that heterodox models/thinkers quite clearly did anticipate the crisis, making ex ante predictions about key trends, relationships and timing. This clearly 'has some content' and was falsifiable - but it was not falsified, it was borne out.
It may or may not be possible to rationalise anything in a heterodox model ex post, but this is not what heterodox economists did. This is why I favour these approaches over a backward-looking one which assumes crises are exogenous shocks and doesn't seem to have produced any novel insights into the crisis or the years before and after.
And you can't rationalise anything with words. You can't make the statement 'if trees are green, this implies my mum doesn't like cats' make sense.
Saying critics are 'begging the question' by assuming DSGE needs discarding is absurd. Heterodox economists clearly have a number of substantive objections to DSGE models and I won't bother repeating them. The only example of begging the question here is the assumption that DSGE is the necessary framework for macro, come what may. The real question is this: what would have to happen for you to abandon DSGE?
@Luis
" you can rationalise everything with an argument constructed of words; you can rationalise everything with a mathematical model that does not try to build from microfoundations and impose whatever else characterizes DSGE; whatever else 'heterodox' economists have to offer is going to be capable of rationalising everything. Nobody would say a "fatal flaw" of using words is that they can rationalize everything. The point is that you have to *say something* that has some content, and if that thing is falsified then that thing you said is wrong. How is working within the framework known as DSGE different? "
It is important to remain clear about the distinction between a (DSGE) model and (DSGE as) a framework.
A model is solvable and has all the relevant details specified.
A framework provides a structure but has lots of blanks left to be filled in.
However any framework will still impose certain definitions, transformations and restrictions including parametric restrictions and perhaps cross equation restrictions. The question is, where do these come from?
Broadly accepted frameworks are not created out of thin air. Simplifying a bit, the usual temporal sequence for the development of a widely accepted framework is this:
Preliminary hypotheses -> models -> repeated empirical validation -> theory -> framework based on the theory.
Of course even a preliminary hypothesis will have an associated framework; however, rationalising or interpreting data in terms of models built within a framework that is not based on an empirically validated theory is not justified, and is open to the Popperian criticism. Neither is the convergence of practitioners on a single such framework justified.
A major problem with the DSGE framework is precisely this: that it is not associated with an empirically validated theory, and many of the restrictions it imposes are quite arbitrary. This last point is what is emphasised in the Kocherlakota post I linked to above.
And one final point: the Popperian distinction is *not* about valid or invalid models or theories, but about scientific vs non-scientific theories. A theory can be scientific and still wrong if it is falsified. A non-falsifiable theory in the Popperian sense is not wrong; it is not even wrong, i.e. it is non-scientific. The same holds for frameworks built from empirically non-validated theories.
"The point is that you have to *say something* that has some content, and if that thing is falsified then that thing you said is wrong."
Yes, agreed, but with the proviso that the content has to be meaningful in terms of real world observables, and not just internally in terms of the definitions used in the model.
@Luis
"that's begging the question."
Only if you define 'mainstream macro' by its techniques (the DSGE framework). I am using a social definition: 'mainstream macro' = the individuals and social context that make up the mainstream.
Herman
if you meant individuals and social context, not techniques/DSGE, I find it hard to make sense of what you meant by "What is being asked for is a different theory"
Unlearning.
you have something in mind that you think good theory ought to be able to do. How do you know DSGE won't do it? I don't have strong views, but I do think you are being inconsistent. If you have not begged the question by assuming DSGE needs to be abandoned, why object to DSGE theorists continuing to work in that framework to see if it can account for whatever you think it needs to account for, in ways that you find satisfactory? What on earth does it mean to regard the fact that DSGE can rationalise financial crises as a "flaw" in DSGE?
I think you misunderstand me. I am a fan of ABM (I wanted my PhD to be ABM but was talked out of it), I think SFC can also be useful (I was introduced to Godley's stuff years before the crisis; he was an old friend of my in-laws), and I also like other work which doesn't really fall into the DSGE camp (e.g. work by Michaillat and Saez). The last thing I am is a DSGE fundamentalist.
I just don't think the alternatives are hands-down winners over DSGE, and I haven't already concluded DSGE is a dead end. I think you're kidding yourself if you believe these other approaches are intrinsically any better at making falsifiable predictions etc., and I also think you are mistaken if you think making good out-of-sample predictions is even such a clean notion - every model inevitably misses so much out, and when things don't pan out as the model says, you can never say whether that's because the mechanism in the model is wrong or because something the model excluded was responsible for what was observed. True for ABM, true for SFC. I am not ignoring the fact/claim that Keen predicted the crisis, but his models in particular are no stronger than the mainstream's on methodological grounds, or on being falsified/confirmed by data in the way critics demand of DSGE (Keen's models say debt build-up => crisis, but all models can get one thing right like that whilst also falling down on other scores). Some models get some things right, but what does making accurate out-of-sample predictions even mean for a theorist working on financial macro now? We come back to the most basic problem of economics: we cannot run experiments to test things, and we can never conclusively confirm or reject models that knowingly include only a fraction of what goes on in the world from observational data.
So of course you have to think about what different models get right and what they get wrong and why, where they might be useful and where not, etc., and I am not suggesting anything daft like saying you can never abandon particular models as useless - for example, I'd say rep agent models are rubbish for most of the questions macro is concerned with, but I have used them myself in settings where those aspects don't matter so much and I just needed the machinery to allow me to focus elsewhere.
I am forever writing that mainstream macro was never in the business of explaining where crises come from because it just assumed exog shocks, but I am not going to assume that it's impossible to say something useful about where shocks come from within the DSGE frameworks (or some other useful policy-relevant insight). That could mean endogenous deterministic shocks or it could mean showing how the financial sector is capable of getting itself into such a state that a small shock outside the sector itself is capable of knocking over the house of cards.
Also, UE, you seem to have confused "everything can be rationalised with words" with "every verbal rationalisation is true".
sorry UE, the accusation of confusion above was unfounded. Or maybe it was a different confusion. By 'rationalise anything' I did not mean it is possible to make sense of any old nonsense - DSGE can't do that either.
DeleteLuis,
Perhaps I was too curt, hence your confusion. I'll elaborate. I was responding to your question to UE:
'...if you think that mainstream macro needs to learn from the crisis and change, doesn't that mean you want to see "post hoc tinkering" '
What I meant was that instead of post hoc tinkering within the existing (DSGE) framework, the people who constitute the mainstream ought to encourage other approaches within the mainstream. After all, a new theoretical framework will not suddenly drop fully formed from the Heavens. Usually there has to be a fairly lengthy exploratory period.
So, for example, you wanted to work on ABM for your PhD but were talked out of it, presumably because it would have hurt your career prospects. What I am asking for is that this situation within the mainstream should change; that working on approaches like ABM should be encouraged and should not harm the career prospects of students or young academics.
I'll just give a final follow up with some bullet points:
(1) My reason for disliking DSGE models is that they are inherently backward looking, since crises come from exogenous shocks. Exogenous shocks may be useful sometimes but it's not like you need DSGE to model the kind of network effects you are talking about - ABM in particular is well suited to this.
(2) Related but distinct, I see it as a plus for a framework if it predicts that X will happen because of reasons Y, versus a framework which does not predict that X happens but after X, says 'oh yes, Y is important' and tries to incorporate it.
(3) I'm actually not that bothered about fitting models to data at the moment, especially with questionable techniques like calibration and HP filters. It is better to be roughly right than precisely wrong, and this is where I think the non-DSGE approaches are getting more right and are a promising avenue of research. Don't forget the disparity in the amount of intellectual resources that have been poured into each approach - we should not expect the challenger to be completely developed, only to highlight a few key shortcomings/blind spots of the incumbent, which I think Keen, ABM and SFC all do very well.
(4) If you in principle are happy with including ABM and SFC, as well as potentially other approaches in curricula/research, then we don't disagree on that much.
Noah Smith actually exemplifies the very reasons mainstream economics has fallen down its own egotistical posterior! He proclaims too harshly that the Earth is flat and that others haven't proved the opposite - mathematically, anyway!
He fails to grasp the gravity of this narrative! If you place a value on anything and everything it becomes an equation, and if you then base your maths on any narrative within that sphere, all the actions and reactions forced upon that whole equation are going to distort the narrative.
The modelling of the Earth or the Sun as the centre is a fundamental point: gravity solved most of these problems, which led to the Sun narrative winning out (or a fundamental truth being established). Unfortunately economics has been stuck in narratives for far too long. The IEA and ASI openly admit that they must stick to the narrative - one part of the equation that, if focused on, destroys the whole equation (in economics). They make the narrative fit the facts, like Noah Smith dismissing evidence or other ideas that threaten the narrative of their ideology!
This is madness and it threatens humanity. If your formulae don't add up, in reality it is because other forces are at work, and ignoring them is not going to make them go away! Gravity doesn't actually account for all the actions in the universe; I can't tell you why, but I can tell you some other force is present (and physicists therefore should be, and I believe are, looking). So why don't economists realise that, and look?
They may never find the solution, but please move us forward so that the next generation can. I am afraid that keeping the narrative, which stifles real progress, is to the benefit of their paymasters and not humanity. I have said before that there is an iron curtain in economics, and Noah Smith openly shows this to be true; worse than that, he doesn't even question his own narrative in the wake of a tsunami of alternatives, because neoliberal free markets can't compete with free-market alternatives!
How much of the data on debt and its leverage/gearing was made public by each bank before the 2008 crisis?
Wasn't that information private?
Yes, and that makes pricing impossible; it has always been guesswork, loaded in one side's favour. But for good economics such information is paramount to every other equation, since that is the ceiling that is set: bankers obviously work off it to set their pay, others can't and don't, and the economy has suffered and does suffer because of it!
Simple question: what methodology do you use to define which economics is "orthodox" (as opposed to heterodox)? Is "orthodox" simply what most economists do, or is it defined by what's taught in most undergrad programs, or is it defined by the type of paper accepted at AER and QJE?
As an outsider, I tend to assume people like Krugman, Summers, Bernanke, Blanchard, Stiglitz, Rodrik, Akerlof and Romer are the orthodoxy, since they seem to be the important economists; but actually they're all pretty "heterodox" in attitudes, no? If those 8 guys are all heterodox, then doesn't "heterodox" just mean "smarter, more successful, and more famous than 99.9% of economics professors"?
Krugman in 2002 pointed out that a bubble was developing, and that it could be a sticky situation. http://www.nytimes.com/2002/08/16/opinion/mind-the-gap.html
Of course, he did not predict the severity of the downturn, but that had something to do with the actions of the Treasury and the Fed. Allowing Lehman to fail was a mistake that was not predictable by any model.
I remember the panic in my partner's voice when he heard the incredible news that Lehman was going to be allowed to fail. He and I spoke about the most likely scenario, and another great depression was not out of the realm of possibility.
Many thanks for this piece. You mention Godley and the existence of a sectoral balances model at HMT in the 1970s; did this model not exist because Godley himself had developed it during his stint at HMT in the 1960s under Kaldor (who was himself much influenced by the original progenitor of the concept, Kalecki)?
I have read many of Godley's articles and much of his newspaper correspondence from the 1970s and 1980s, together with several of his books (the 'Monetary Economics' he co-wrote with Marc Lavoie was published at exactly the right time, but unfortunately did not appear to have the impact it warranted). I am constantly struck by his prescience, as well as by the lucidity and fluency of his writing. Look at his Levy Institute papers from the late 1990s and 2000s: he was on the money time and again, and well before much of the profession. His policy prescriptions, however, were still more disturbing. For instance, he was somewhat sceptical about the EU and advocated import substitution in order to re-balance the economy and prevent it from being so vulnerable to external shocks:
"First, imports should be non-selectively controlled by a high, uniform tariff or by auctioning import licences, thereby ensuring that the pattern of imports would continue to be determined by market forces. Second, I insist that control of overall import penetration, in sharp contrast with selective protectionism, must be an integral part of an expansionary fiscal and monetary programme. Once having removed the balance-of-payments constraint on growth, the Government is free to expand domestic demand within the only constraint that ought to be operative: our own capacity to produce. All and more of the yield of a tariff (or the proceeds of auctions of import licences) should be given back to consumers in the form of tax reductions so as to raise domestic spending. The level of imports would be as high as under present policies." (LRB, 24 Jan. 1980)
Needless to say, such prescriptions would sit very awkwardly with the UK's treaty commitments, but may perhaps provide some useful pointers as to how we might move forward from our current predicament.
He certainly warrants a serious intellectual biography, and the work he did (with Francis Cripps, Bob Rowthorn and others) at the Cambridge Economic Policy Group in the 1970s - until it was defunded - also merits a good dusting down.
"There is a second strand to the heterodox attack on mainstream economists concerning the financial crisis, and that invokes not Minsky but Wynne Godley. Here I think heterodox economists also have a point, but they tend not to put it very well and, perhaps as a result, it has not yet impacted on the mainstream."
Don't agree. MMT puts it very well once you've managed to slough off the oligarchy's Neo-Liberal mainstream brain-washing.
Good article Simon.
You don't quite seem to get how liquidity preference affects the ratio between spending and saving propensities and how this ratio impacts macro, as well as how Minsky's analysis feeds into this.
The sectoral balances, together with functional finance, show how to maintain those balances at actual full employment, given an MMT JG as a buffer stock of employed people and a price anchor.
You should read Bill's recent series, "Modern Monetary Theory – what is new about it? " at billy blog and respond to it. I am just a random commentator.
Not true, Simon, as Bill Mitchell clearly points out in his latest three-part blog series, which he will present at the conference in Kansas.
The mainstream never had many of the things in Bill's presentation. If you take the time to read it, you will see that Bill also highlights many things MMT and the mainstream agree on.
Read Bill's presentation, Simon.
Consider the thesis proposed by Quine in "Two Dogmas of Empiricism." His argument, contra his contemporaries (in the 1950s), was that "empirical tests" do not concern single statements, but the whole of human knowledge: "But what I am now urging is that even in taking the statement as unit we have drawn our grid too finely. The unit of empirical significance is the whole of science." (Quine, 1953)
If we may assume classical logic for a minute, this view has important implications for how to deal with a conflict between theory and data. In this light, conflicting data leads to the negation not of a single statement, but of a conjunction of statements. This is equivalent to saying that at least one such statement is false -- but we never know which it is. A priori, all statements may be taken as either false or true and, thus, a great many new sets of statements can reconcile our knowledge with this new data. To paraphrase Quine, what matters is the attainment of some sort of reflective equilibrium.
Note that one provoking aspect of his thesis back then was that everything is up for grabs, even logic and mathematics. If need be, all formal languages may be revised and adapted to ensure that said reflective equilibrium is attained. Moreover, it is always possible to spare some statements from revision, provided we concede sufficiently many modifications elsewhere when faced with conflicting data. With this in mind, we can approach the problem of how one should approach economic theory.
Do you think any Quinean would accept that microfounded models be the only approach academic macroeconomists use when working out their problems? In my opinion, we can choose scientific approaches just like we can choose the syntax of a formal language. It is true that a lot of economic theory is rooted in this optimization framework. As one of my professors says, agents (are modelled) as doing as well as they possibly can. However, this is problematic for macroeconomists who have to use some sort of general equilibrium model at the same time, since it quickly gets messy. Simply out of convenience, why not impose a behavior instead of deriving a behavior out of imposed conditions? They could obviously deal with the problem by changing their whole approach (which is the "suggestion" made by heterodox economists), but that would be neither more justified nor more logical than the small adjustments professor Wren-Lewis proposed here. We can decide to avoid microfoundations altogether, we can decide to only work through microfoundations, or we can use a greater plurality of approaches.
I find it very odd that people continue to be baffled by the apparently blindingly obvious; it may be that you and your colleagues are/were constrained to think in terms of a theoretical basis which excluded large numbers of factors which you really should have looked at, but that is not the same as contending that it was inherently difficult to spot.
It was very easy to spot; it was a rerun of the 80s Savings and Loans debacle, built on the same mind-bogglingly stupid assumptions which led to the S&L debacle in the first place, using the same financial instruments with a few more bells and whistles strapped on top. The gameplan was identical to the 80s: create mortgages not based on any evaluation of whether the debtor could repay them, but so that those mortgages could be stripped to create new instruments which the marketers thought would make them lots, and lots, of money.
Those traders were creating algorithms based on the premise that property prices never go down. On a smaller scale, I once had to explain to a guy who'd arrived on the red eye to protest my demand for higher capital that his top-dollar lawyers had managed to create a contract which gave his company unlimited downside risk. They hadn't realised that property prices could go down either.
I will concede that a professional lifetime mostly devoted to considering the taxation of financial institutions, and the taxation of complex financial instruments, made it easier for me to note the blindingly obvious which other people didn't, but it's not hard.
This isn't rocket science; anyone bothering to engage their brains could see it was ludicrous to believe that something which had already crashed and burned, at massive cost to the public, would, in some miraculous way, work this time around.
What you say makes a lot of sense to me. IMO the role of fraud needs to be given due consideration in this context. IIRC the Congressional investigation concluded that there was fraud on an industrial scale in the US markets. Sadly we shall never know for sure now, as the authorities chose not to pursue the leads they were given by various brave whistleblowers - too much in thrall to the PTB on Wall Street, perhaps? Shame that the auditors did not spot it. Or did they, and just keep quiet?
I am no expert on national accounting, but I assume that if banks' balance sheets are showing grossly overvalued assets (e.g. worthless junk assessed as AAA) then this must flow through to the national accounts and the data economists use? Then when you discover the size of the 'bezzle' (as Galbraith terms it) this must presumably kick a hole in the national accounting figures?
Dear Prof. Wren-Lewis,
Allow me to start with this: I'm not qualified to either reject or accept your argument. Therefore -- although I'm far from being persuaded -- I shall not attempt to refute it.
Instead, what compels me to write is that I see in your post no clear definition of "heterodox". In other words, it's not clear to me who are those "heterodox" economists whose criticism you are replying to.
By the way, I'm not singling you out for this: similar observation applies to Noah Smith's recent Bloomberg article.
Jo Michell's article -- which you linked to -- does point to one such definition (by implication, I'll suppose you accept it). Unfortunately, the definition Michell stumbled upon is deeply flawed and -- frankly -- amateurish, advanced by who knows who, not by an academic heterodox economist. Missing from the scheme Michell presented, for instance, are feminist economics, environmental economics, Islamic economics, econophysics, agent-based models, computational economics, and I might have left others out of this list.
Judging by Michell's scheme and by the three names you explicitly mentioned (i.e. Steve Keen, Hyman Minsky, and Wynne Godley), the criticisms you reply to (whatever their merits or lack thereof) come from the post-Keynesian subset of heterodox economics.
Perhaps it would help if one started by defining heterodox economics as heterodox economists -- as opposed to opinionated post-Keynesian bloggers -- do:
Heterodox Economics Directory.
http://www.heterodoxnews.com/hed/
Sincerely yours
Anonymous
http://www.concertedaction.com/2016/08/29/simon-wren-lewis-on-wynne-godleys-models/
Hi, I don't understand why mainstream economists are so dismissive of accounting: the world economy is a system of balance sheets, income statements and cashflow statements. Can someone explain, as it seems to make sense that all economic theory be tested against the accounting for verification of its truth?
ReplyDeleteDear Prof. Wren-Lewis,
"That is why financial sectors can be added to existing DSGE models"
For my benefit, could you please give an example of a paper where you think this is done in a good way.
Thank you.
A long-standing critique of infinite horizon models with Walrasian economies in each period (a class that includes many DSGE models) is that, due to the market clearing assumption in each period, there is no meaningful time dimension. That is, because time is assumed away in each period, infinitely repeating a model with no time dimension fails to capture the implications of time. An alternative way of understanding this critique is that market clearing means that DSGE models inherently fail to capture the liquidity issues that Minsky was concerned with, and your claim "financial sectors can be added to existing DSGE models: nothing has to be torn up and thrown away" is a little hard to square with this basic critique of the approach.
ReplyDeleteI would pose the following questions The Chart on UK Bank leverage:
ReplyDelete1 Is bank leverage the same as private credit creation - or, in what ways do the variables differ?
2 Despite the process of financial and capital market liberalisation that began in the late seventies and accelerated under the Thatcher/Reagan administrations in the eighties, the ratio was lower in the eighties and nineties than it was pre-1965, with the seventies marked as a decade of low leverage; why was this?
3 The explosion in leverage took place in the 2002-8 period; was this propelled by growth in real estate lending or by derivatives, or a mixture of both, with a failure of regulation in the form of inadequate capital reserves another factor?
4 Has anyone any idea what the optimum level of bank leverage is, or is what is really important the purpose for which bank assets are lent as liabilities?
" which is why there has been an explosion of DSGE and other microfounded analysis putting a financial sector into macromodels"
Before that it was apparently unremarkable to ignore 14% of GDP - or the 8% or 4% of GDP that the financial sector was 100 years ago!!!!
Enough said!
I am going to repeat myself but I feel the point is important... can 'mainstream economists' please advocate for stronger auto stabilisers? That seems to be what is 'the difference' or 'missing.'
You seem, Simon, to have a real cognitive-dissonance blind spot about them - because of course they inject government money automatically into the system on demand, but they do it via the fiscal authority, not the monetary one (which just plays the passive role of clearing the cheques), and only to the poor and dispossessed rather than as free money to the wealthy.
Eg higher interest rates = paying free money to rich people that will 'trickle down.'
It is the logical route rather than trying to increase the power of the central bank.
Here in MMT land we advocate for the JG which is a *very* strong auto stabiliser. If the living wage was set at £10 per hour and you worked 37.5 hours a week you would get a gross wage of £375 per week. That’s five times the current rate of Job Seekers Allowance of £73.10 per week. When you start a Job Guarantee the first thing you do is pay people the wage while you ramp up the job side. That money given to people will in the short term bring the effective demand of the economy up to speed causing the economy to hire any remaining skills off the pile – reducing the number of people on the Job Guarantee.
Only when that effect subsides do you then look for and create jobs *that match the people* on the Job Guarantee. That’s the key difference of JG that solves the matching problem. Find people something to do, not come up with something to do and find the people.
There are several important effects (ii and iii in particular):
(i) People can choose to go onto social security via the JG. This disciplines the standard economy. All of a sudden ‘no deal’ is an option in the normal business jobs market and that makes the job market behave, well, like a market.
(ii) Because they are working, the number of people on a JG becomes less of a social issue – no more ‘bring down unemployment’, no more ‘shirkers’. Therefore normal businesses can be allowed to go bust, not pay redundancy, etc because the JG will catch people who lose their jobs during a retrenchment. That disciplines the spending and wage channels since there need be no bailouts or the ‘special industries’ that pump-priming requires. Overpaid workers get an imposed wage cut when they are forced to move to the JG as do greedy bosses. ‘Corporate confidence’ is no longer of overriding concern.
(iii) People on the JG are working and producing output – so they are more socially productive than on unemployment benefit or income guarantees. In addition they have something to do with their day, so they are unlikely to be isolated or be exploited by extremists. And because they are seen to be working they become *cheaper to hire and more productive* from a normal business’s point of view (there is always less hiring risk if you know people are working.) That eliminates a current risk cost completely from the economy (the ‘long term unemployed’ issue.)
(iv) Forcing businesses to compete for staff should accelerate the capital development of the economy, and replacing jobs with better machines is what we want the private sector to do. People need to be expensive to use and valued, and jobs in the normal business jobs market must not be sacrosanct. Business models that fail, must be allowed to fail without any sentimentality. We need to ensure that businesses in a capitalist economy are treated like cattle, not pets.
This may simply follow from things already said, but I do not see putting financial frictions into DSGE models as at all satisfying the complaints of those citing Minsky (who, btw, was not around for the crisis of 2008 and did not forecast it as some others of us did). The problem, which also relates to Simon's critique of the Godley-Lavoie models, involves the nature of the microfoundations, which he is very keen must still be there and done very properly. The real problem is that these oh-so-proper microfoundations for DSGE models assume rational expectations (and often also, although not always, representative agents). We know ratex is simply false, so having behaviorally accurate foundations would be superior.
This is probably the biggest problem with this post, in my not-so-humble view.
Barkley Rosser
From where I sit, the problem macro-economists have is fundamental laziness.
Model: Empirics + simplifying assumptions = Expected Result
When reality does not equal the Expected Result, the simplifying assumptions are wrong --- the economist's job is then to find out which assumption(s) are wrong, and then to identify the basic information required to replace that assumption with empirical data and the relationships that govern them.
The excuse is always that it's too complex or the required data isn't available. The latter is now the excuse for not having known about the complex derivatives market and leverage... since it was "proprietary" to banks or "private". But if a couple of guys could discover well before the fact that the market was screwed up and short the mortgage market, then macro-economists could also have figured it out and included it in their models before the fact.
It was all empirically knowable and quantifiable information. It was a distribution of existing capital plus leverage terms. So why didn't macro-economists make it known that this information was a critical part of macro modelling and, if the data wasn't available, seek to get it made available, because it was critical data to include in the models?
But rather than do that, they made simplifying assumptions because, from where I sit, it was "easier" - and that means the macro profession was just lazy. I mean, it's not like this kind of stuff hasn't happened before in economics... so what excuse is there for not making the effort to include it?
Neoclassicals adding the financial system is like Ptolemaics adding another epicycle: it doesn't flow from the logic of the framework, it has to be imposed ad hoc after the crisis of the century hits you in the face. To me this is the indictment. You can stitch-fix a model or two, but it doesn't address the fact that the framework is dysfunctional. Neoclassicals still don't know *why* the financial system matters. Vide Krugman babbling on about how banks intermediate between savers and borrowers, and the Eggertsson paper he touts which models that nonsense - accounting 101 would show him that banks create money ex nihilo, but apparently he never took it.
The problem with macro-economic models is that we're very, very bad at predicting the behaviour of large groups of people. We can make some assumptions that hold true when nothing interesting is happening, but we are singularly unable to explain or capture any of the interesting dynamics.