This is a follow on to this post, and an earlier post by Paul Krugman. I’m currently reading an excellent account by Jonathan Heathcote et al of “Quantitative Macroeconomics with Heterogeneous Households”. This is the growing branch of mainstream macro that uses today’s computer power to examine the behaviour of systems with considerable diversity, as opposed to a single (or small number of) representative agent(s). (Heterodox economists may also be interested!) I want to talk about the methodological implications of this kind of analysis at some future date, but for now I want to take from it another example of letting theory define reality.
If you have an environment where a distribution of agents differ in the income (productivity) shocks they receive, a key question is how complete markets are. If markets are complete, agents can effectively insure themselves against these risks, and so aggregate behaviour can become independent of distribution. This is a standard microfoundations device in models where you want to examine diversity in one area, like price setting, but want to avoid it spilling over into other areas, like consumption. (As the paper notes, the representative agent that emerges may not look like any of the individual agents, which is one of the points I want to explore later.)
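To make the complete markets logic concrete, here is a minimal sketch (my own illustration, not taken from the paper): with identical preferences, equal Pareto weights and purely idiosyncratic shocks, the complete markets allocation collapses to everyone consuming average income each period, so individual shocks wash out of the aggregate, whereas under autarky each agent simply consumes their own risky income.

import numpy as np

# Idiosyncratic income shocks for a panel of agents (illustrative parameters).
rng = np.random.default_rng(0)
n_agents, n_periods = 1_000, 200
income = np.exp(rng.normal(loc=0.0, scale=0.5, size=(n_periods, n_agents)))

# Autarky / no insurance: each agent consumes their own income.
c_autarky = income

# Complete markets, identical preferences, equal Pareto weights, no aggregate
# shock: idiosyncratic risk is fully pooled, so everyone consumes average income.
c_complete = np.repeat(income.mean(axis=1, keepdims=True), n_agents, axis=1)

# Aggregate consumption is identical under the two arrangements...
print("max aggregate gap:", np.abs(c_autarky.mean(axis=1) - c_complete.mean(axis=1)).max())

# ...but individual consumption volatility is very different.
print("mean individual std, autarky  :", c_autarky.std(axis=0).mean().round(3))
print("mean individual std, complete :", c_complete.std(axis=0).mean().round(3))

Aggregate consumption is the same in both cases; only its distribution across agents differs, which is the sense in which aggregate behaviour can become independent of distribution under complete markets.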
Real world markets are not complete in this sense. We know some of the reasons for this, but not all. So the paper gives two different modelling strategies, which it describes in a rather nice way. The first strategy – which the paper mainly focuses on – is to ‘model what you can see’:
“to simply model the markets, institutions, and arrangements that are observed in actual economies.”
The paper describes the main drawback of this approach as not being able to explain why this incompleteness occurs. The second approach is to ‘model what you can microfound’:
“that the scope for risk sharing should be derived endogenously, subject to the deep frictions that prevent full insurance.”
The advantage of this second approach is that it reduces the chances of Lucas critique type mistakes, where policy actions change the extent of private insurance. The disadvantage is that these models “often imply substantial state-contingent transfers between agents for which there is no obvious empirical counterpart”. In simpler English, they predict much more insurance than actually exists.
The first approach is what I have described in a paper as the ‘microfoundations pragmatist’ position: be prepared to make some ‘ad hoc’ assumptions to match reality within the context of an otherwise microfounded model. I also talk about this here. The second approach is what I have called the ‘microfoundations purist’ position. Any departure from complete microfoundations risks internal inconsistency, which leads to errors like (but not limited to) the kind Lucas described.
As an intellectual exercise, the ‘model what you can microfound’ approach can be informative. Hopefully it is also a stepping stone on the way to being able to explain what you see. However, to argue that it is the only ‘proper’ way to do academic macroeconomics seems absurd. One of the key arguments of my paper was that this ‘purist’ position only appeared tenable because of modelling tricks (like Calvo contracts) that appeared to preserve internal consistency, but where in fact this consistency could not be established formally.
If you think that only ‘modelling what you can microfound’ is so obviously wrong that it cannot possibly be defended, you obviously have never had a referee’s report which rejected your paper because one of your modelling choices had ‘no clear microfoundations’. One of the most depressing conversations I have is with bright young macroeconomists who say they would love to explore some interesting real world phenomenon, but will not do so because its microfoundations are unclear. We need to convince more macroeconomists that modelling choices can be based on what you can see, and not just on what you can microfound.
The first two lines of Heathcote's paper are, to be polite, bullshit. Here's the second:
"Until the 1970s, the field of macroeconomics concentrated on estimating systems of ad hoc aggregate relationships ('Cowles macroeconometrics') and largely abstracted from individual behavior and differences across economic agents."
Having read (1) the General Theory - where the restless equilibrium in the stock market is explained by a momentary balance of bulls and bears - and (2) much work in the Kahn-Robinson-Kaldor theory of income distribution, I don't know why I should take Heathcote's paper seriously.
From the above post:
"If you think that only ‘modelling what you can microfound’ is so obviously wrong that it cannot possibly be defended, you obviously have never had a referee’s report which rejected your paper because one of your modelling choices had ‘no clear microfoundations’. "
This is a straw-person several times over. Who asserts that referees do not defend what some think is rationally indefensible? On the other hand, the fact that referees sometimes reject papers for that reason has no bearing, for the non-authoritarian, on the belief that "only ‘modelling what you can microfound’ is so obviously wrong that it cannot possibly be [plausibly, reasonably] defended". On the third hand, many critics object that the mainstream is internally inconsistent when they pretend that, for example, the assumption of rational expectations is microfounded.
(If you cannot explain why an equilibrium with rational expectations would arise, in general, among agents in disequilibrium, you do not have microfoundations. It is no answer to say that you know your models are parables and actual human experience is irrelevant.)
I object to mainstream economics partly because of the causal knavery that is so pervasive within it.
Perhaps you are not considering that any model is a supersimplification of reality, which means that you are meant to start your work from firm logical ground. I mean, if you don't have some kind of theoretical ground, how can you say that assumptions that fit well on one set of data are a valid learning instrument at all?
I agree with Prof. Wren-Lewis when he says that you must start with microfoundations (internal consistency) and only then proceed to real world problems. In my view, this is not ideal, but to define a body of knowledge without it is almost impossible (imagine the field of Physics without mathematical rigour and models ....).
The problem is the arrogance of thinking that this tool called microfoundations is the actual knowledge instead of a simple and useful learning tool in constant development.
I believe it is as childish to expect models to be true as to expect them to be a lot of bullshit. Of course it is just my personal opinion and there are a lot of very intelligent people who seem to disagree with me ....
I do not understand your Physics point. As a physicist I am fully aware that phenomenological models (Macro) operate at a different level to quantum mechanics (micro). The area in between, mesoscopic physics (which is where I did my work), is the intriguing area, because it combines macro-properties and micro-properties (meaning macro-measurable quantum-mechanical phenomena; think superconductivity etc.).
It is entirely possible to create models in that area, based on the often very different macro- and micro-worlds, that have something to say about observable phenomena.
By Eduardo's view, physics wouldn't have studied anything until it achieved 20th-century levels of understanding of subatomic physics (how that would have happened is presumably explicable by microfoundation purists).
Anonymous, I am not a physicist. What I meant is that you have to have some kind of hard core science (such as Newtonian Physics in the nineteenth century) in order to be able to develop your work. Without that your science would be constantly starting over (think of Douglas Adams's image of god in The Hitchhiker's Guide to the Galaxy). You can even live with two bodies of knowledge at the same time, like in Physics. It is not a problem.
What I meant in my comparison with physics is that a physicist (as a friend explained) is permanently looking at the data and working on the asymmetries she finds in the results. This is the basis, according to her, for changing the hard core of physics (e.g. the emergence of subatomic physics).
Still according to her, in the field of physics people will have to live with those asymmetries, even incorporating them into their models, until the process is understood well enough to be defined as part of what she called, if I remember correctly, the "standard model", and inserted officially into every textbook of the field.
In my view the field of macro lacks this posture. People tend to be for or against "microfoundations", while I see them as a reflection of things that under certain conditions can demonstrably work.
But modelling is a dynamic process and we don't have perfect knowledge of reality. Which means that we, just like a physicist, have to live with the idiosyncrasies of our field.
I think microfoundations must be seen as a starting point, something that can be falsified by data at any time. Seeing those microfoundations as truth instead of a useful tool is just childish; I am sorry not to have a better word for it.
On the other hand, it sounds absurd to my ears to say that we should throw that away. Why not say instead that we have a hardcore body of theory (which is microfounded), and a softcore body of theory that consists of the hardcore body plus the not fully understood results that are extremely useful despite not being adequately microfounded?
I think that when a referee report rejects your paper because "one of your modelling choices has no clear microfoundations" he can be both right and wrong at the same time. Perhaps he is saying that economics is what it is and that if you cannot live with that you should go to work in another field. It is your problem, not his. On the other hand, perhaps he is not such an idiot and is saying something a little bit more subtle.
What if he is saying that his publication is focused on developments in the hard core body of the field and that its softcore should be developed somewhere else?
Again, I don't think the problem is the microfoundation of macro models. The problem is the arrogance of thinking of this as truth! What do you think?
If one wanted to ground something on data in such a setting, a Bayesian framework [1] is useful. You would start with a prior on the distribution of agent parameters, which can then be conditioned on the evidence. However, a classical model selection problem arises since, in the end, the 'correct' agent models may lie outside the class of models considered.
[1] Not in the sense that the agents are Bayesian (they do not have to be), but that the inference about the model parameters is.
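A minimal sketch of what that conditioning step could look like (purely illustrative; the shock process, the grid and the parameter are my own assumptions, not the commenter's): a flat prior over the dispersion of idiosyncratic income shocks is updated on a grid against observed data. The model selection worry is precisely that the 'correct' process may not belong to this parametric family at all.

import numpy as np
from scipy import stats

# "Observed" income-change data for a panel of agents (true sigma = 0.5).
rng = np.random.default_rng(1)
data = rng.normal(0.0, 0.5, size=500)

# Flat prior over the shock dispersion sigma, represented on a grid.
sigma_grid = np.linspace(0.1, 1.5, 141)
log_prior = np.zeros_like(sigma_grid)

# Log-likelihood of the data under each candidate sigma.
log_lik = np.array([stats.norm(0.0, s).logpdf(data).sum() for s in sigma_grid])

# Posterior proportional to prior times likelihood, normalised on the grid.
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= post.sum()

print("posterior mean of sigma:", round(float((sigma_grid * post).sum()), 3))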
If you think that only ‘modelling what you can microfound’ is so obviously wrong that it cannot possibly be defended, you obviously have never had a referee’s report which rejected your paper because one of your modelling choices had ‘no clear microfoundations’. One of the most depressing conversations I have is with bright young macroeconomists who say they would love to explore some interesting real world phenomenon, but will not do so because its microfoundations are unclear.
Thanks for saying this. I myself have no expectation of ever publishing in a top journal; if I can finish my PhD and get a job at a four-year school -- even a decent community college -- that'll be good enough for me. But I see lots of more professionally optimistic people forced to trim their intellectual ambitions in just this way. I was talking to a friend just the other day with a job at a top-50 department, who was complaining about how he used to spend his time thinking about how the economy worked and now just thinks about identification strategies.
Unlike the first anonymous, I also agree with the characterization of postwar Keynesian economics as focusing on aggregate relationships. It's true, as he/she says, that Keynes had more sophisticated (tho mostly unformalized) behavioral stories, but postwar "Keynesianism" was not Keynes. That said, I'd certainly prefer the 1970-era orthodoxy to today's.
I dropped out of graduate school in economics in the early nineties because of this. I could do the maths and stats easily enough (I have a math major). But it seemed so pointless because I could not relate it to anything interesting in the real world. The topics people were working on were so bland that I decided to quit. I hope voices such as yours are heeded. I think the world economy is paying for the problems with the profession.
Really, all this post says to me is that economists need to read John Holmwood's (& A. Stewart's) Explanation and Social Theory. That or talk to the man himself. It's funny/sad how unreflective a discipline economics is.
I see no mention of the pioneer of the "microfoundations" of macro, Edmund Phelps. His most seminal work involved imperfect information, incomplete knowledge and wage/price expectations in support of a macroeconomic theory of employment and wage/price dynamics. This led to the development of the natural rate of unemployment and the beginning of rational expectations.
To me, all of this stuff is just "warmed-over" Classical (Pre-Keynesian) theory. Ultimately, they all come to the same conclusion: any economy, when left alone, will always tend towards the natural rate (and, of course, you cannot fool all of the people all of the time). In the face of what has happened in 2008 (and 1929), it is a wonder how any of this survived. How would "microfoundations" explain the purely macro-concept of the fallacy of composition? I would love to know the answer to that.
In addition to the top tier economic journals not accepting articles without "microfoundations," they also frown upon empirical studies that do not use panel data.