Friday, 23 August 2013

New Keynesian models and the labour market

Do all those using New Keynesian models have to believe everything in those models? To answer this question, you have to know the history of macroeconomic thought. I think the answer is also relevant to another frequently asked question: what is the difference between a ‘New Keynesian’ and an ‘Old Keynesian’?

You cannot understand macro today without going back to the New Classical revolution of the 1970s/80s. I often say that the war between traditional macro (Keynesian or Monetarist) and New Classical macro was won and lost on the battlefield of rational expectations. This was not just because rational expectations was such an innovative and refreshing idea, but also because the main weapon in the traditionalists’ armoury was so vulnerable to it. Take Friedman’s version of the Phillips curve, and replace adaptive expectations by rational expectations, and the traditional mainstream Keynesian story just fell apart. It really was no contest. (See Roger Farmer here, for example.)

I believe that revolution, and the microfoundations programme that lay behind it, brought huge improvements to macro. But it also led to a near death experience for Keynesian economics. I think it is fairly clear that this was one of the objectives of the revolution, and the winners of wars get to rewrite the rules. So getting Keynesian ideas back into macro was a slow process of attrition. The New Classical view was not overthrown but amended. New Keynesian models were RBC models plus sticky prices (and occasionally sticky wages), where stickiness was now microfounded (sort of). Yet from the New Classical point of view, New Keynesian analysis was not a fundamental threat to the revolution. It built upon their analysis, and could be easily dismissed with an assertion about price flexibility. Specifically NK models retained the labour leisure choice, which was at the heart of RBC analysis. Monetary policymakers were doing the Keynesian thing anyway, so little was being conceded in terms of policy. [1]

So labour supply choice and labour market clearing became part of the core New Keynesian model. Is this because all those who use New Keynesian models believe it is a good approximation to what happens in business cycles? I doubt it very much. However, for many purposes assuming perfect labour markets does not matter too much. Sticky prices give you a distortion that monetary policy can attempt to negate by stabilising the business cycle. The position you are trying to stabilise towards is the outcome of an RBC model (natural levels), but in many cases that involves the same sort of stabilisation that would be familiar to more traditional Keynesians.
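To see that stabilisation logic concretely, here is a minimal sketch of the textbook three-equation New Keynesian model, solved for a one-period demand (natural rate) shock under the simplifying assumption that expected future gaps and inflation are zero. The parameter values are my own illustrative choices, not taken from any particular paper.

```python
# Textbook three-equation NK model, one-period shock, zero expected future
# gaps and inflation (a deliberate simplification):
#   IS curve:       x  = -sigma * (i - r_nat)
#   Phillips curve: pi = kappa * x
#   Taylor rule:    i  = phi_pi * pi + phi_x * x
# Substituting the rule and Phillips curve into the IS curve gives the
# output gap in closed form.

def output_gap(r_nat, sigma=1.0, kappa=0.1, phi_pi=1.5, phi_x=0.125):
    """Output gap after a one-period shock r_nat to the natural real rate."""
    return sigma * r_nat / (1 + sigma * phi_x + sigma * kappa * phi_pi)

shock = -0.02            # a 2% fall in the natural real rate
passive = output_gap(shock, phi_pi=1.0, phi_x=0.0)
active = output_gap(shock, phi_pi=3.0, phi_x=0.5)
print(passive, active)   # a more aggressive rule shrinks the gap
```

The point of the exercise is only that a policy rule responding strongly enough to inflation and the gap moves the economy back towards the flexible-price (natural) outcome, which is the stabilisation described above.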

This is not to suggest that New Keynesians are closet traditionalists. Speaking for myself, I am much happier using rational expectations than anything adaptive, and I find it very difficult to think about consumption decisions without starting with an intertemporally optimising consumer. I also think Old Keynesians could be very confused about the relationship between aggregate supply and demand, whereas I find the New Keynesian approach both coherent and intuitive. However, the idea that labour markets clear in a recession is another matter. It is so obviously wrong (again, see Roger Farmer). So why did New Keynesian analysis not quickly abandon the labour market clearing assumption?

Part of the answer is the standard one: it is a useful simplifying assumption which does not give us misleading answers for some questions. However the reason for my initial excursion into macro history is because I think there was, and still is, another answer. If you want to stay within the mainstream, the less you raise the hackles of those who won the great macro war, the more chance you have of getting your paper published.

There are of course a number of standard ways of complicating the labour market in the baseline New Keynesian model. We can make the labour market imperfectly competitive, which allows involuntary unemployment to exist. We can assume wages are sticky, of course. We can add matching. But I would argue that none of these on its own gets close to realistically modelling unemployment in business cycles. In a recession, I doubt very much if unemployment would disappear if the unemployed spent an infinite amount of time searching. (I have always seen programmes designed to give job search assistance to the unemployed as trying to reduce the scarring effects of long term unemployment, rather than as a way of reducing aggregate unemployment in a recession.) To capture unemployment in the business cycle, we need rationing, as Pascal Michaillat argues here (AER article here). This is not an alternative to these other imperfections: to ‘support’ rationing we need some real wage rigidity, and Michaillat’s model incorporates matching. [2]

I think a rationing model of this type is what many ‘Old Keynesians’ had in mind when thinking about unemployment during a business cycle. If this is true, then in this particular sense I am much more of an Old Keynesian than a New Keynesian. The interesting question then becomes when this matters. When does a rationed labour market make a significant difference? I have two suggestions, one tentative and one less so. I am sure there are others.

The tentative suggestion concerns asymmetries. In the baseline NK model, booms are just the opposite of downturns - there is no fundamental asymmetry. Yet traditional measurement of business cycles, with talk of ‘productive potential’ and ‘capacity’, is implicitly based on a rather different conception of the cycle. A recent paper (Vox) by Antonio Fatás and Ilian Mihov takes a similar approach. (See also Paul Krugman here.) Now there is in fact an asymmetry implicit in the NK model: although imperfect competition means that firms may find it profitable to raise production and keep prices unchanged following ‘small’ increases in demand, at some point additional production is likely to become unprofitable. There is no equivalent point with falling demand. However that potential asymmetry is normally ignored. I suspect that a model of unemployment based on rationing will produce asymmetries which cannot be ignored.

The other area where modelling unemployment matters concerns welfare. As I have noted before, Woodford type derivations of social welfare give a low weight to the output gap relative to inflation. This is because the costs of working a bit less than the efficient level are small: what we lose in output we almost gain back in additional leisure. If we have unemployment because of rationing, those costs will rise just because of convexity. [3]

However I think there is a more subtle reason why models that treat cyclical unemployment as rationing should be more prevalent. It will allow New Keynesian economists to say that this is what they would ideally model, even when for reasons of tractability they can get away with simpler models where the labour market clears. Once you recognise that periods of rationing in the labour market are fairly common because economic downturns are common, and that to be on the wrong end of that rationing is very costly, you can see more clearly why the labour contract between a worker and a firm itself involves important asymmetries - asymmetries that firms would be tempted to exploit during a recession. 

Yet you have to ask, if I am right that this way of modelling unemployment is both more realistic and implicit in quite traditional ways of thinking, why is it so rare in the literature? Are we still in a situation where departures from the RBC paradigm have to be limited and non-threatening to the victors of the New Classical revolution?

[1] When, in a liquidity trap, macroeconomists started using these very same models to show that fiscal policy might be effective as a replacement for monetary policy, the response was very different. Countercyclical fiscal policy was something that New Classical economists had thought they had killed off for good.

[2] Some technical remarks.

(a) Indivisibility of labour, reflecting the observation (e.g. Shimer, 2010) that hours per worker are quite acyclical, has been used in RBC models: early examples include Hansen (1985) and Hansen and Wright (1992). Michaillat also assumes constant labour force participation, so the labour supply curve is vertical, and critically some real wage rigidity and diminishing returns.

(b) Consider a deterioration in technology. With flexible wages, we would get no rationing, because real wages would fall until all labour was employed. What if real wages were fixed? If we have constant returns to labour, then if anyone is employed, everyone would be employed, because hiring more workers is always profitable (mpl>w always). What Michaillat does is to allow diminishing returns (and a degree of wage flexibility): some workers will be employed, but after a point hiring becomes unprofitable, so rationing can occur.  
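The mechanism in (b) can be sketched numerically (my own illustrative numbers, not Michaillat's calibration):

```python
# Production is y = a * n**alpha, so the marginal product of labour
# a * alpha * n**(alpha - 1) is diminishing. With a rigid real wage w,
# firms hire until mpl = w; hiring beyond that point is unprofitable,
# so if desired hiring falls short of the (fixed) labour force, the
# difference is rationing unemployment. With constant returns (alpha = 1)
# mpl = a regardless of n, so hiring is either always or never
# profitable and this mechanism disappears.

def labour_demand(a, w, alpha=0.7):
    """Hours hired where the marginal product of labour equals w."""
    return (a * alpha / w) ** (1 / (1 - alpha))

labour_force = 1.0
w = 0.7                                  # rigid real wage

n_normal = labour_demand(a=1.0, w=w)     # mpl = w at n = 1: full employment
n_slump = labour_demand(a=0.9, w=w)      # technology deteriorates

unemployment = max(0.0, labour_force - n_slump)
print(n_normal, unemployment)            # demand falls below the labour force
```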

(c) Michaillat adds matching frictions to the model, so as productivity improves, rationing unemployment declines but frictional unemployment increases (as matches become more difficult). Michaillat’s model is not New Keynesian, as there is no price rigidity, but there is no reason why price rigidity could not be added. Blanchard and Gali (2010) is a NK model with matching frictions, but constant returns rules out rationing.

[3] I do not think they will rise enough, because in the standard formulation the unemployed are still ‘enjoying’ their additional leisure. One day macroeconomists will feel able to note that in reality most view the cost of being unemployed as far greater than its pecuniary cost less any benefit they get from their additional leisure time. This may be a result of a rational anticipation of future personal costs (e.g. here or here), or a more ‘behavioural’ status issue, but the evidence that it is there is undeniable. And please do not tell me that microfounding this unhappiness is hard - why should macro be the only place where behavioural economics is not allowed to enter!? (There is a literature on using wellbeing data to make valuations.) Once we have got this bit of reality (back?) into macro, it should be much more difficult for policymakers to give up on the unemployed.

References (some with links in the main text)

Blanchard, Olivier and Jordi Galí (2010), “Labor Markets and Monetary Policy: A New Keynesian Model with Unemployment”, American Economic Journal: Macroeconomics, 2(2), 1-30.

Hansen, Gary D (1985), “Indivisible Labor and the Business Cycle”, Journal of Monetary Economics, 16, 309-327.

Hansen, Gary D and Randall Wright (1992), “The Labor Market in Real Business Cycle Theory”, Federal Reserve Bank of Minneapolis Quarterly Review, 16, 2-12.

Michaillat, Pascal (2012), “Do Matching Frictions Explain Unemployment? Not in Bad Times”, American Economic Review, 102(4), 1721-1750.

Shimer, Robert (2010), Labor Markets and Business Cycles, Princeton, NJ: Princeton University Press.




16 comments:

  1. Very interesting post. It has made me consider that if one uses a model where labor markets clear, there can't be much room for hysteresis. I have been trying to wrap my mind around an economy and can't escape the idea that hysteresis is very clearly present in the sensitivity of inflation to levels of unemployment.

    1. If you haven't already read it, I would recommend the book "The Death of Economics" by Paul Ormerod. It contains some interesting analysis on attractor points and hysteresis in the Phillips Curve and other economic theories.

  2. Prof Simon,

    Re the attractions that “rational expectations” and the “intertemporally optimising consumer” holds for you, how much attention do you pay to the real world, i.e. the empirical evidence?

    The two studies (links below) showed that people spend about half what they got from the 2001 Bush tax rebates within six months. That doesn’t sound to me like “intertemporally optimising”.

    http://www.kellogg.northwestern.edu/faculty/parker/htm/research/johnsonparkersouleles2005.pdf

    http://finance.wharton.upenn.edu/~rlwctr/papers/0801.pdf

    1. Ralph

      I was careful to write 'starting' and put it in italics. I have written a number of posts that discuss how the intertemporal model might be extended to take into account just the kind of evidence you quote e.g. http://mainlymacro.blogspot.co.uk/2012/07/consumption-and-complexity-limits-to.html

      As to why I would start from the intertemporal model. Well that came from the real world as well - UK consumption in the 1980s. I discuss that here: http://mainlymacro.blogspot.co.uk/2012/02/what-have-keynesians-learnt-since.html

    2. I see that you wrote 12 February 2012 'What have Keynesians learnt since Keynes?' that:

      "House prices. The consumption boom coincided with a housing boom. Were consumers spending more because they felt wealthier, or was some third factor causing both booms? There was much macro econometric work at the time trying to sort this out, but with little success. Yet thinking about an intertemporal consumer leads one to question why consumers in aggregate would spend more when house prices rise."

      Akerlof and Shiller (2009) disagree, as they say that expectations of higher asset price rises, known as price-to-price feedback, can interact with the real economy, so that when stock and house prices rise, people have less reason to save, so they spend more as they feel wealthier, sometimes seeing stock and home appreciation as part of their current savings. Banks lend more in the upswing to home buyers as a fraction of the value of their home rises, which feeds back into asset prices, and encourages more bank leverage. This rising bank lending on the upswing, as asset prices rise, causes bank lending relative to their regulatory requirements to rise, so people then may buy more assets. And banks are in competition, so respond and free up more capital.

      Akerlof and Shiller then looked at Real US GDP 1929-2007, which grew at an average of 3.4% annually, stocks over the same period grew 7% annually, (2% being capital gains, 5% dividends), US agricultural land grew 0.9% a year 1900-2000, but real home prices 1900-2000 grew 0.2% a year.

      I don't see the rationality here in UK and US homebuyers, in aggregate, but I do see some irrational assumptions that led to the pyramid scheme, and an awful lot of economists who thought the irrational were rational consumers.

      In 'WHAT HAVE THEY BEEN THINKING? HOME BUYER BEHAVIOR IN HOT AND COLD MARKETS' Karl E. Case, Robert J. Shiller and Anne Thompson, September 2012, pages 20-24 there is a précis of US reporting on the housing bubble. So many publications said there was a home price bubble but didn't reach the solid conclusion that it needed stopping - a similar thing is found on the BBC website if you type housing bubble into their search engine: at the same time the BBC was making, and still is, estate agent fronted programmes on house selling.

      I can only say that the rational right hand doesn't know what the irrational left hand is doing.

    3. If consumers expect a recession, then they get a recession. If they expect a boom then they get a boom. This is not rational expectations, it is causality. Ergo - rational expectations is utter nonsense. It merely codifies all that is wrong with macro, namely that the future can be predicted but cannot in the long-run be changed. In reality the reverse is true.

      Rational expectations is used to justify ideas like the Policy Ineffectiveness Proposition which postulates that any policy intervention will be counteracted by the "rational expectations" of agents. e.g. current borrowing or tax cuts will be paid for with future tax rises, so tax cuts won't boost spending as the recipients will merely save the windfall to pay for the higher taxes. This proposition is therefore used to justify laissez-faire neo-liberal free-market non-interventionism.

      The reality is that consumers are not rational. The model that probably best fits their behaviour is Milton Friedman's "Permanent Income Hypothesis", except that their view of their permanent income is determined by a relatively short historical time-line (3-5 years). Consequently they expect their current economic situation to be the new "normal". So they extrapolate future income and behaviour on that basis. That is why you get consumer booms and house-price booms. They really do believe that these things can last forever, and so too over the period of the Great Moderation did most economists and politicians.

  3. The reason that most macro models time and time again fail to predict economic behaviour is simply that they employ inadequate modelling techniques. In particular, as any systems engineer will tell you, they fail because they do not model correctly the time behaviour of the key individual economic parameters. Once this inadequacy starts to be addressed, as has been done for example in the macro model demonstrated on website www.economyuk.co.uk , everything starts to fall into place and the debate as to which is the best theoretical approach to adopt, e.g. New Keynesian v Traditional Keynesian, can be resolved, indeed is resolved, on a more sound and rational basis.

  4. Simon,
    the market clearing labour market model is used by many New Keynesian economists because, if you don't interpret it literally (a common fault of many critics) as being about the hours choice of a single always-employed representative household, it provides a first-order approximation to the behaviour of more sophisticated models where hours adjustment is mainly along the extensive margin. Many of the insights from sticky wage and price models can be captured by assuming a higher labour supply elasticity in a market clearing labour market model. Does this mean we're adopting a relaxed attitude to microfoundations when doing this, as you yourself have advocated? Absolutely! Ideally you should always have a labour market model that simultaneously captures job indivisibility (i.e. limited ability to adjust hours or effort on a job), matching frictions and perhaps efficiency wage considerations. But do you really want to deal with all the extra complexity if your focus is on the more general effects of nominal price stickiness? Or perhaps you prefer to work with a simpler labour supply/demand framework in which labour demand is too low because of sticky prices and other inefficiencies (e.g. higher costs of financing payroll in a financial crisis, a more illiquid durable goods market when financially constrained consumers don't search as intensively for new cars or houses etc.).
    Whether you choose to call the suboptimal level of employment in these models unemployment, non-employment or underemployment is often a matter of semantics if you adopt the usual macro focus on aggregate variables (at the micro level, if I'm trained as an engineer but labour market clearing at a lower employment level means I'm forced to take a job as a cashier on the weekend shift, maybe I'm employed but am I really much happier?). As usual, welfare conclusions from any macro model with representative agents are to be taken with a very big extra grain of salt. I personally think they should really only be used to analyse various scenarios/conditional forecasts for aggregate variables, not for welfare analysis.

    1. I agree, and this is certainly how I interpret my own use of NK models with clearing labour markets. However the reason I wrote this
      "It will allow New Keynesian economists to say that this is what they would ideally model, even when for reasons of tractability they can get away with simpler models where the labour market clears."
      is because while you and I may have this in our heads, it may be less clear to others. I have had conversations recently with macroeconomists who do take NK welfare results seriously. I also find it odd that researchers whose focus is on labour markets, and who appear to be trying to match reality, have not allowed for rationing, i.e. why Michaillat's paper is a novelty.

    2. But is not then this just a case of "modelling what is easy to model rather than what needs to be modelled"? Which is something that you have robustly and correctly criticised elsewhere.

  5. I looked at Michaillat's paper and I'm not sure I get his point, or equivalently I think he's making too big a deal of the way he's decomposing unemployment into matching frictions versus rationing. First, it's misleading to say that there would be no unemployment in the basic Diamond Mortensen Pissarides (DMP) search model if only the unemployed would search much more. In the baseline DMP model there's no search intensity margin for the unemployed. This could reflect a rapidly increasing marginal disutility of search, or a rapidly diminishing return to extra search: same thing. The point is precisely that there are limits to what more search by the unemployed can achieve. Second, failure to match in a search model can always be interpreted as either due to mismatch or due to rationing. Mathematically, the two are impossible to distinguish and I doubt empirically you can really separate the two: the employer can always say they didn't hire you for a job because you weren't actually a good fit for it on one of the many idiosyncratic job characteristics, and the advertised wage just meant that's the wage they would pay to whoever was a good match for the company's needs (and in a recession, the returns to hiring anyone are lower, so it's more likely any given applicant isn't a good fit for the job). Or it could be that you selected into a subset of the job market which specifically offered higher wages conditional on employment, but with a higher probability of not finding a job. Both of these interpretations are compatible with the same data, and may or may not be associated with inefficient wage setting. But it could be compatible with efficient wage determination, for example see this paper
    http://e-archivo.uc3m.es/bitstream/10016/9844/1/we1039.pdf

  6. Simon,
    "We can make the labour market imperfectly competitive, which allows involuntary unemployment to exist."

    Wasn't the main argument of Keynes simply that, even if labour markets were fully competitive, there would be involuntary unemployment? That, after all, is the basis of the principle of effective demand to which, as best one can tell, you remain faithful.

    What, therefore, in your eyes is Keynes missing? A fully specified GE model a la Arrow-Hahn?! The conclusions that Keynes reached on the importance of effective demand are not dependent on being a "special case" -- because of imperfect competition, sticky wages or the liquidity trap, as the bastard Keynesians argued. His is indeed the General case.

    It was encouraging, therefore, to read the report of Mervyn King's recent Lunch with the FT where he referred to how individuals holding money might well be reducing effective demand in the economy: “I suppose the thing I hadn’t really appreciated was a key part of the work of Keynes. It was the idea that the future is unknowable. Cash is fundamental to the day-to-day running of the economy. At the same time, it’s the asset that you move into when you have no idea what to do. That is why our monetary economy is so unstable.” This idea, of course, was stressed by Paul Davidson way back in 1972 -- flexible wages would be irrelevant to regaining full employment.

    1. In a baseline NK model, effective demand will determine output, but if demand is too low the reduction in labour demand will lead to a reduction in labour supply (hours worked) and lower real wages. I think many NK economists 'read this' as mirroring a rise in involuntary unemployment in the real world (see, for example, daniel's comment above), but my post was about when it might make a difference to actually model the rationing that this would involve.

      So what do you think is missing from the baseline NK model that means you would get involuntary unemployment even with a market clearing labour market?

    2. "So what do you think is missing from the baseline NK model that means you would get involuntary unemployment even with a market clearing labour market?"

      Simon,

      Would you like to explain this (apparently, well it seems so to me at least) oxymoronic statement? How can a labour market that clears have involuntary unemployment? Surely the problem with all general equilibrium theories is that most markets don't clear.

    3. Sorry for the delay. My approach, at least in this area, remains unrepentantly Marshallian, using "period analysis". Why? Because, despite all the rationing / Malinvaud stuff and (much else besides) starting in the late '70's, it's tough, impossible to find any model that incorporates a role for money. This is the other side of the coin from your question about a market clearing labour market: lower real wages also imply, usually, lower effective demand. Until this is adequately dealt with, macro models, of the type that are used, remain inadequate. It's also worth remembering that wage bargains are for nominal not real wages except if one uses the sleight of hand(!) as per much macro modelling.
      Microfoundations? Fiddling while Rome burns.....
      Following on from Cantab83, the standard question remains: if all prices are flexible, a la GE models, do we end up, even theoretically, in a world of full employment? You yourself have dealt recently with the Pigou effect. The real balance effect (a la Patinkin to start with) founders on the rock of bankruptcies as the real value of debts rises.
      So no, I don't have a direct, let alone adequate, answer to your question, Simon, since the specification of a competitive labour market for macro purposes continues to be lacking. This is why, for the past x years, the correct yardstick to judge macro analysis continues to be whether there is a proper understanding of the principle of effective demand. And no, demand ends up being insufficient not because of some “imperfection” here or there, or because we're in the “short run”. Wages, simply put, simplistically even, aren't just costs, especially in the aggregate.
      Am always glad to know what I’m missing, overlooking.

    4. As Sam54 notes: "lower real wages also imply, usually, lower effective demand."

      I would go further however. Implicit within GE theory is, as I see it, an assumption that consumer demand will be infinite if the price is low enough. Thus markets will always clear if the price is low enough. Therefore sticky prices lead to unemployment. Yet this fails to appreciate the effect that inequality of wealth and income plays in distorting GE. GE implies or assumes that the rich can always be persuaded to purchase ever more goods or services in order to clear the labour market, if the unemployed simply offer the right goods to the right people (matching) or lower their labour prices (eliminate sticky prices in wages).

      Unfortunately the wealthy do not have an infinite desire to consume. Their consumption function declines with increasing wealth while their propensity to invest increases. That is why, as Hyman Minsky notes in his book on Keynes, in Keynes's General Theory only 43 pages are devoted to the consumption function (Book 3), whereas 114 pages are devoted to the topic of investment (Book 4). GE and RBC theory completely fail to account for the role of money and investment in determining output.

      In fact both current and past experience suggests that the rich would much prefer to lend money to governments to spend on welfare than they would spending larger sums themselves to employ the unemployed to provide goods they don't want and services they don't need.

