

Thursday, 30 August 2012

The pernicious politics of immigration


There can be a rational debate about the costs and benefits of immigration, and what that implies about immigration controls. And there can be political debate, which is nearly always something different. I know this is an issue in the US, but I suspect we in the UK probably have more experience of how to be really nasty to foreigners who want to come here.

Probably most UK academics have some experience of how the UK authorities deal with student visas. A recent case I was involved with concerned a student who had their visa refused because of a mistake that the immigration officials acknowledged was their own. However they would only overturn the decision if the student went through an expensive appeals process, or reapplied through a solicitor, which was still expensive but less so. They did the latter, successfully and with university support, but the whole process took time and caused considerable distress. Not only did the bureaucracy make a mistake, it also made the innocent party pay for the bureaucracy’s mistake.

The latest example is the UK Border Agency’s decision to revoke the right of the London Metropolitan University to sponsor students from outside the EU. The Agency has problems with the university’s monitoring of these students. Whether or not the Agency has a case against the university, the decision means that over 2,500 students, many of whom are midway through their course, have 60 days to find an alternative institution to sponsor them or face deportation. The Agency has no reason to believe, and has not claimed, that the majority of them are not perfectly genuine students who have paid good money to study in the UK. The Agency does not have to punish innocent students to punish the university. I guess it might call them collateral damage, but in this case the damage seems easily avoidable.

The government has apparently set up a ‘task force’ to help these students. Its work will not be easy, but it is certainly not going to make the emotional distress these students are currently suffering go away. What it does illustrate is that this decision is no unhappy accident due to an overzealous arm of government. It looks like a deliberate government attempt to show that it is being ‘tough on immigration’.

Aside from the human cost, there is the economic damage this does to an important UK export industry. There are around 300,000 overseas students in the UK. Universities UK estimates that these students contribute £5 billion a year (0.3% of GDP) in fees and off-campus expenditure. Unlike the rest of the UK economy, this is an export industry that has been growing rapidly, but in a highly competitive market. Changes to visa regulations already announced have led to study visas issued in the year to June 2012 falling sharply compared with the previous 12 months. It is pretty obvious what impact the most recent decision involving London Met will have on prospective students trying to decide whether to come to the UK or go elsewhere.
    
Student visas are not the only area involving immigration where rational argument and sensible cost-benefit analysis (of the economic or more general kind) go out of the window when political decisions are made. Jonathan Portes notes renewed pressure from parts of government to further deregulate the UK labour market. While this seems a little strange for a labour market which is much less regulated than most in Europe, it ignores the huge increase in regulation the government has created as a result of tightening immigration rules. He says "The extra employment regulation that the Government has imposed on employers wishing to employ migrant workers—the cap on skilled migration—will, using the Government's own methodology, reduce UK output by between £2 and 4 billion by the end of the Parliament."

Numbers like this are important, and it makes you wonder how serious the government is about doing everything it can to get the economy moving again. But what really makes me angry is the human misery this kind of decision causes. Having seen one case at first hand, I can imagine what 2,500 others are currently going through. But of course they do not have a vote, and it would seem that in the eyes of the Minister responsible, Damian Green, the votes he thinks he has gained by this decision are worth this collateral damage.

Arguments for ending the microfoundations hegemony


Should all macroeconomic models in good journals include their microfoundations? In terms of current practice the answer is almost certainly yes, but is that a good thing? In earlier posts I’ve tried to suggest why there might be a case for sometimes starting with an aggregate macro model, and discussing the microfoundations of particular relationships (or lack thereof) by reference to other papers. This is a pretty controversial suggestion, which will appear to many to be a move backwards, not just in time but in terms of progress. As a result I started with what I thought would be one fairly uncontroversial (but not exactly essential) reason for doing this. However, let me list here what I think are the more compelling reasons for this proposal.

1) Empirical evidence. There may be strong empirical evidence in favour of an aggregate relationship which has as yet no clear microfoundation. A microfoundation may emerge in time, but policy makers do not have time to wait for this to happen. (It may take decades, as in the microfoundations for price rigidity.) Academics may have useful things to say to policy makers about the implications of this, as yet not microfounded, aggregate relationship. A particularly clear case is where you model what you can see rather than what you can microfound. For further discussion see this post.

2) Complexity. In a recent post I discussed how complexity driven by uncertainty may make it impossible to analytically derive microfounded relationships, and the possible responses to this. Two of the responses I discussed stayed within microfoundations methodology, but both had unattractive features. A much more tractable alternative may be to work directly with aggregate relationships that appear to capture some of this complexity. (The inspiration for this post was Carroll’s paper suggesting that Friedman’s permanent income hypothesis (PIH) did just that.)

3) Heterogeneity. At first sight, heterogeneity that matters should spur the analysis of heterogeneous agent models of the kind analysed here, which remain squarely within the microfoundations framework. Indeed it should. However in some cases this work could provide a rationalisation for aggregate models that appear robust to this heterogeneity, and which are more tractable. Alan Blinder famously found that there was no single front runner among causes of price rigidity. If this is because an individual firm is subject to all these influences at once, then this is an example of complexity. However if different types of firm have different dominant motives, then this is an example of heterogeneity. Yet a large number of microfoundations for price rigidity appear to result in an aggregate equation that looks like a Phillips curve. (For a recent example, see Gertler and Leahy here.) This might be one case where working with aggregate relationships that appear to come from a number of different microfoundations gives you greater generality, as I argued here. (A stylised example of such an equation is sketched just after this list.)

4) Aggregate behaviour might not be reducible to the summation of individuals optimising. This argument has a long tradition, associated with Alan Kirman and others. I personally have not been that persuaded by these arguments because I’ve not seen clear examples where it matters for bread and butter macro, but that may be my short-sightedness.

5) Going beyond simple microeconomics. The microeconomics used to microfound macro models is normally pretty simple. But what if the real world involves a much more substantial departure from these simple models? Attitudes to saving, for example, may be governed by social norms that are not always mimicked by our simple models, but which may be fairly invariant over some macro timescales, as Akerlof has suggested. This behaviour may be better captured by aggregate approximations (that can at least be matched to the data) than a simple microfoundation. We could include under this umbrella radical departures from simple microfoundations associated with heterodox economists. I do not think the current divide between mainstream and heterodox macro is healthy for either side.
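To make point (3) concrete, here is the kind of aggregate equation involved: a stylised sketch of the New Keynesian Phillips curve, where the form of the relationship is common to many microfoundations and only the slope coefficient (and possibly some extra terms) depends on the particular story behind price rigidity.

\[
\pi_t = \beta\, E_t \pi_{t+1} + \kappa\, x_t
\]

Here \(\pi_t\) is inflation, \(x_t\) the output gap, \(\beta\) the discount factor, and \(\kappa\) a slope parameter. Calvo pricing delivers this equation exactly, while menu cost models such as Gertler and Leahy’s deliver something of essentially the same form with a different \(\kappa\), which is why working with the aggregate relationship can buy generality.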

If this all seems very reasonable to you, then you are probably not writing research papers in the macroeconomics mainstream. Someone who is could argue that once you lose the discipline of microfoundations, then anything goes. My response is that empirical evidence should, at least in principle, be able to provide an alternative discipline. In my earlier post I suggested that the current hegemony of microfoundations owed as much to a loss of faith in structural time series econometrics as it did to the theoretical shortcomings of non-microfounded analysis. However difficulties involved in doing time series econometrics should not mean that we give up on looking at how individual equations fit. In addition, there is no reason why we cannot compare the overall fit of aggregate models to microfounded alternatives.

While this post lists all the reasons why sometimes starting with aggregate models would be a good idea, I find it much more difficult to see how what I suggest might come about. Views among economists outside macro, and policy makers, about the DSGE approach can be pretty disparaging, yet it is unclear how this will have any influence on publications in top journals. The major concern amongst all but the most senior (in terms of status) academic macroeconomists is to get top publications, which means departing from the DSGE paradigm is much too risky. Leaders in the field have other outlets when they want to publish papers without microfoundations (e.g. Michael Woodford here).

Now if sticking with microfoundations meant that macroeconomics as a whole gradually lost relevance, then you could see why the current situation would become unsustainable. Some believe the recent crisis was just such an event. While I agree that insistence on microfoundations discouraged research that might have been helpful during and after the crisis, there is now plenty of DSGE analysis of various financial frictions (e.g. Gertler and Kiyotaki here) that will take the discipline forward. I think microfoundations macro deserves to be one of the major ways macro is done, if not the major way. I just do not think it is the only route to macroeconomic wisdom, but the discipline at the moment acts as if it is.

Saturday, 25 August 2012

Costing Incomplete Fiscal Plans: Ryan and the CBO


Some of the regular blogs I read are currently preoccupied (understandably) with the US Presidential election. This is not my territory, but the role of fiscal councils – in this case the CBO – in costing budget proposals is, and the two connect with the analysis of the Ryan budget plan. The Ryan ‘plan’ involves cutting the US budget deficit, but contains hardly any specifics about how that will be done.

There is nothing unique to the US here. In the 2010 UK elections, both main parties acknowledged the need for substantial reductions in the budget deficit over time, but neither party fully specified how these would be achieved. Now as the appropriate speed of deficit reduction was a key election issue, this might seem surprising. In particular, why did one party not fully specify its deficit reduction programme, and then gain votes by suggesting the other was not serious about the issue?

The answer has to be that any gains in making the plans credible would be outweighed by the political costs of upsetting all those who would lose out on specific measures. People can sign up to lower deficits, as long as achieving them does not involve increasing their taxes or reducing their benefits. However, I think it’s more than this. If voters thought through what deficit reduction plans might entail for them, you would guess that a lack of specifics would be even more damaging than full information. As people tend to be risk averse, the (more widespread) fear that their benefits might be cut could be more costly in electoral terms than a smaller number knowing the truth.

The fact that this logic does not operate suggests to me that (at least among swing voters) there is a substantial disconnect in people’s minds between aggregate deficit plans and the specific measures they imply. Saying you will be tough on the deficit does not panic swing voters, but adds to your credibility in being serious about the deficit ‘problem’. Indeed, from my memory of the UK election, claims by one side about secret plans of the other were effectively neutralised as scaremongering.

This can be seen as the reverse side of a familiar cause of deficit bias. A political party can gain votes by promising things to specific sections of the electorate, but does not lose as many votes because of worries about how this will be paid for. The media can correct this bias by insisting on asking where the money will come from (or in the reverse case, where the cuts will come from), but they may have limited ability to check or interrogate the answer. This is where a fiscal council, which has authority as a result of being set up by government while remaining independent of government, can be useful.

For some time the Netherlands Bureau for Economic Policy Analysis (often called the CPB) has offered to cost political parties’ fiscal proposals before elections. The interesting result is that all the major parties take up this offer. Not having your fiscal plans independently assessed appears to be a net political cost.

What the fiscal council is doing in this case is conferring an element of legitimacy on aggregate fiscal plans, a legitimacy that is more valuable than uncosted fiscal sweeteners. Which brings me to the question of what a fiscal council should do if these plans are clearly incomplete. In particular, suppose plans include some specific proposals that are deficit increasing or neutral, but unspecified plans to raise taxes or cut spending which lead to the deficit being reduced. By ‘should do’ here I do not mean what it is legally obliged to do, but what would be the right thing to do.

It seems to me clear that the right thing to do is not to cost the overall budget. What, after all, is being achieved by doing so? Many people or organisations can put a set of numbers for aggregate spending and taxes into a spreadsheet and calculate implied deficits, and the adding up can easily be checked. Getting the fiscal council to do this fairly trivial task serves no other purpose than to give the plan a legitimacy that it does not have.

In this situation, a fiscal council that does calculate deficit numbers for a plan that leaves out all the specifics is actually doing some harm. Instead of asking the difficult questions, it is giving others cover to avoid answering them. It is no excuse to say that what was done is clear in the text of the report. The fiscal council is there partly so people do not have to read the report. So I wonder if the CBO had any discretion in this respect. If Ryan was playing the system, perhaps the system needs changing to give the CBO a little more independence. 

Friday, 24 August 2012

Multiplier theory: one is the magic number


I have written a bit about multipliers, particularly of the balanced budget kind, but judging by comments some recap and elaboration may be useful. So here is why, for all government spending multipliers, one is the number to start from. To make it a bit of a challenge (for me), I’ll not use any algebra.

Any discussion has to be context specific. Imagine a two period world. The first period is demand deficient because interest rates are stuck at the zero lower bound[1], but in the (longer) second period monetary policy ensures output is fixed at some level independent of aggregate demand (i.e. it is supply determined). Government spending increases in period 1 only. That is the context when these multipliers are likely to be important as a policy tool.

1) Balanced budget multiplier

To recap, for a balanced budget multiplier (BBM), here is a simple proof in terms of sector balances for a closed economy. A BBM by definition does not change the public sector’s financial balance (FB). It seems very reasonable to assume that consumers consume a proportion less than one of any change to their first period post-tax income. So if higher taxes reduced post-tax income, consumption would fall by less, and the private sector’s FB would move into deficit. But in a closed economy the public and private sector FBs must sum to zero, so this cannot happen. Post-tax income therefore cannot fall. Hence pre-tax income must rise by just enough to offset the impact of higher taxes. The BBM is one.
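For those who do want to see a line of algebra behind this, here is a minimal textbook sketch, assuming a linear first-period consumption function with a marginal propensity to consume \(c<1\) out of current post-tax income (the sector balance argument above does not need this assumption):

\begin{align*}
Y &= C + G, \qquad C = a + c\,(Y - T), \qquad \Delta G = \Delta T \\
\Delta Y &= c\,(\Delta Y - \Delta T) + \Delta G
\;\Rightarrow\; (1-c)\,\Delta Y = (1-c)\,\Delta G
\;\Rightarrow\; \Delta Y = \Delta G .
\end{align*}

Post-tax income \(Y-T\), and hence consumption, is unchanged, which is just the sector balance argument in symbols.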

The nice thing about this result is that it holds whatever fraction of current income is consumed (as long as it’s less than one), so it is independent of the degree of consumption smoothing. What about lower consumption in the second period? No need to worry, as monetary policy ensures demand is adequate in the second period.

Although a good place to start, allowing for an impact on expected inflation and therefore real interest rates will raise this number above one. In addition, as DeLong and Summers discuss, hysteresis effects will also raise period 2 output and income from the supply side, some of which consumers will consume in period 1. We would get similar effects if the higher government spending was in the form of useful infrastructure investment. So in this case one is the place to start, but it looks like a lower bound.

2) BBM in an open economy

I’m still seeing people claim that the BBM in an open economy is small. It could be, if the government acts foolishly. Suppose the government increases its spending entirely on defence, which in turn consists of buying a new fighter jet from an overseas country. The impact on the demand for domestic output is zero. But consumers are paying for this through higher taxes, so their spending decreases – we get a negative multiplier.

Now consider the opposite: the additional government spending involves no imported goods whatsoever. The multiplier is one. You can do the maths, but it is easy to show that this is a solution by thinking about the BBM in a closed economy. There consumption does not change, because a BBM=1 raises pre-tax income to offset higher taxes. But if consumption does not change, neither will imports, so this is also the solution in the open economy case.

What the textbooks do is apply a marginal propensity to import to total output, which implicitly assumes that the same proportion of government spending is imported as consumption spending. For most economies that is not the case, as the ‘home bias’ for government spending is much larger. Furthermore, if the government is increasing its spending with the aim of raising output, it can choose to spend it on domestically produced output rather than imports. So, a multiplier of one is again a good place to start. Allowing some import leakage will reduce the multiplier, but this could easily be offset by the real interest rate effects discussed above, particularly as these would, in an open economy, depreciate the real exchange rate.
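A minimal sketch of how much the import assumption matters, using the same simple consumption function as before: if imports are proportional to total expenditure (\(M=mY\), the textbook assumption) the balanced budget multiplier falls below one, but if imports come only out of consumption (\(M=mC\), full home bias in government spending) it is one again.

\begin{align*}
M=mY:&\quad \Delta Y = c\,(\Delta Y - \Delta G) + \Delta G - m\,\Delta Y
\;\Rightarrow\; \frac{\Delta Y}{\Delta G} = \frac{1-c}{1-c+m} < 1 ,\\
M=mC:&\quad \Delta Y = (1-m)\,c\,(\Delta Y - \Delta G) + \Delta G
\;\Rightarrow\; \frac{\Delta Y}{\Delta G} = 1 .
\end{align*}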

3) Debt financed government spending with future tax increases

Although this is the standard case, from a pedagogical point of view I think it’s better to start with the BBM, and note that under Ricardian Equivalence debt finance with future tax increases gives exactly the same answer. We can then have a discussion about which are the quantitatively important reasons why Ricardian Equivalence does not hold. All of these raise the multiplier above one. You have to add, however, some discussion of the impact that distortionary tax increases will have in the second period: they reduce second period output and, through consumption smoothing, the size of the first period multiplier.

4) Debt financed government spending without tax increases

In an earlier post I queried why arguments for the expansionary impact of government spending increases always involved raising taxes at some point. For debt finance, why not assume lower government spending in the future rather than higher taxes? The advantage is that you do not need to worry about supply side tax effects. Monetary policy ensures there is no impact on output of lower government spending in the second period. Now, unlike the BBM case, we do need to make some assumptions about the degree of consumption smoothing. If you think the first period is short enough, and consumers smooth enough, that the impact of higher income on consumption in the first period is negligible, then we have a multiplier of one again.
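A one-line sketch of this case, on the stated assumption that consumption smoothing makes the first period consumption response to the temporary rise in income negligible:

\[
\Delta Y_1 = \Delta C_1 + \Delta G_1 \approx 0 + \Delta G_1
\quad\Rightarrow\quad \frac{\Delta Y_1}{\Delta G_1} \approx 1 .
\]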


[1] I assume Quantitative Easing cannot negate the ZLB problem, and that inflation targets are in place and fixed. This is not about fiscal stimulus versus NGDP targeting, but just about macro theory.

Thursday, 23 August 2012

Hayek versus Keynes and the Eurozone


The editors of the EUROPP blog, run by the Public Policy Group at the London School of Economics, wanted to contrast Hayekian and Keynesian views of the Eurozone crisis, by running posts from either side. Here is the Hayekian view, from Steven Horwitz, and for better or worse I provide the Keynesian view here. To be honest it is my view of the Eurozone crisis, which I think owes a lot to Keynesian ideas – it is absolutely not an attempt to guess what Keynes would have said if he could speak from the grave.

While regular readers of my blog will not find anything very new here, I personally found it useful to put my various posts into a brief but coherent whole. What struck me when I did so was the gulf between my own perspective (which is not particularly original, and borrows a great deal from the work of others like Paul De Grauwe), and that of most Eurozone policymakers. It is a gulf that goes right back to when the Euro was formed.

Much of the academic work before 2000 looking at the prospects for the Euro focused on asymmetric or country specific shocks, or asymmetric adjustment to common shocks due to structural differences between countries. My own small contribution, and those of many others, looked at the positive role that fiscal policy could play in mitigating this problem. Yet most European policymakers did not want to hear about this. Instead they were focused on the potential that a common currency had for encouraging fiscal profligacy, because market discipline would be reduced.

Now this was a legitimate concern – as some Greek politicians subsequently showed. However what I could not understand back then, and still cannot today, is how this concern can justify ignoring the problem of asymmetric shocks. I can still remember my surprise and incomprehension when first reading the terms of the Stability and Growth Pact (SGP) – what were Eurozone policymakers thinking? My incredulity has certainly been validated by events, as the Eurozone was hit by a huge asymmetric shock when capital flowed into periphery countries and excess demand there went unchecked. Now countercyclical fiscal policy in those countries would not have eliminated the impact of that shock, at least not according to my own work, but it would have significantly reduced its impact.

When I make this point, many respond that fiscal policy in Ireland or Spain was probably contractionary during this time – am I really suggesting it should have been tighter still? Absolutely I am, and the fact that this question is so often asked partly reflects the complete absence of discussion of countercyclical fiscal policy by Eurozone policymakers. Brussels was too busy fretting about breaches of the SGP deficit limits, and largely ignoring the growing competitiveness divide between Germany and most of the rest. (Maybe this is a little unfair on the Commission. I have been told that when the Commission did raise concerns of this kind, they were dismissed by their political masters.)

If periphery countries had pursued aggressive countercyclical fiscal policies before 2007, would the Eurozone crisis have started and ended with Greece? Who knows, but it certainly would have been less of a crisis than the one we have now.

This is just one aspect of the policy failure that is the Eurozone crisis. Another is the fiction of expansionary austerity, and yet another is the ECB’s obsession with moral hazard (or, even worse, its balance sheet). As I say at the end of my EUROPP post, there is a pattern to all these mistakes. It reflects a world view that governments are always the problem, and private sector behaviour within competitive markets never requires any intervention. Whether you attribute that view to Hayek, or Ordoliberalism, or something else is an interesting academic question. But what the Eurozone crisis shows all too clearly is the damage that this world view can do when it becomes the cornerstone of macroeconomic policy.

Monday, 20 August 2012

Facts and Spin about Fiscal Policy under Gordon Brown


Below is a chart of UK net debt to GDP from the mid 1970s until the onset of the Great Recession. This post is about the right hand third of this chart, from 1998 to 2007, which was the period during which Gordon Brown was Chancellor.

UK Net Debt as a Percentage of GDP (financial years) – Source OBR

In general looking at figures for debt can give you a rather misleading impression of what fiscal policy is doing, particularly over short intervals. However, having finished trawling through budget reports and other data for a paper I am writing, I can safely say that this chart tells a pretty accurate story. (For those who cannot wait for the detail that will be in my paper, there is an excellent account by Alan Budd here.) In the first two years of his Chancellorship, Brown continued his predecessor’s policy of tightening fiscal policy. The budget moved into small surplus, so that the debt to GDP ratio fell to near 30% of GDP. Policy then shifted in the opposite direction, with a peak deficit of over 3% of GDP, a period which included substantial additional funding to the NHS. The remaining five budgets were either broadly neutral or mildly contractionary in the way they moved policy, but as this was starting from a significant deficit, the net result was a continuing (if moderating) rise in debt.

Why was fiscal policy insufficiently tight over most of this period? Despite what Gordon Brown said at the end of his term, I do not think this had anything to do with the business cycle. In one sense there is nothing unusual to explain: we are used to politicians being reluctant to raise taxes by enough to cover their spending, which leads to just this kind of deficit bias. However this should not have happened this time because policy was being constrained by two fiscal rules designed to prevent this. So what went wrong with the rules?

The first answer is in one sense rather mundane. The rules, as all sensible fiscal rules should, tried to correct for the economic cycle. However, rather than use cyclically adjusted deficit figures, Gordon Brown’s rules looked at average deficits over the course of an economic cycle. That allowed Brown to trade off excessively tight policy in the early years against too loose policy towards the end, and still (just) meet his rule. As we can roughly see from the chart, debt ends up about where it started under his stewardship, which also roughly coincided with a full cycle.

Was this intended? To some extent it was not, which brings us to the second reason policy was too loose: forecast error. One of the striking things about reading through the budget reports is how persistent these errors were. Outturns always seemed more favourable than expected over the first part of this period, until they became persistently unfavourable in the second. The former encouraged forecasters to believe higher than expected tax receipts represented a structural shift, and they were reluctant to give up that view in the second period. Unlucky, or an aspect of the wishful thinking that is often part of deficit bias?

To their credit, the current Conservative-led government learnt from both these mistakes. Most notably, they set up the independent Office for Budget Responsibility with the task of producing forecasts without any wishful thinking. In addition, their fiscal mandate is defined in terms of a cyclically adjusted deficit figure, which does not have the backward looking bias inherent in averaging over the past cycle. Their mistake was in trying to meet that mandate when the recovery had only just begun.

What this chart does not show are the actions of a spendthrift Chancellor who left the economy in a dire state just before the Great Recession. He stopped being Chancellor with debt roughly where it was when he started, and a deficit only moderately above the level required to keep it there. The spin that our current woes are the result of the awful mess Gordon Brown left the UK economy in is a distortion based on a half-truth. The half-truth is that it would have been better if fiscal policy had been tighter, leaving debt at 30% rather than 37% when the recession hit. The distortion is to ignore that the high deficit and debt when Labour left office in 2010 were a consequence of the recession, and of commendable attempts to limit its impact on output and employment.

Saturday, 18 August 2012

The Lucas Critique and Internal Consistency


For those interested in microfoundations macro. Unlike earlier posts, I make no judgement about the validity or otherwise of the microfoundations approach, but instead just try to clarify two different motivations behind microfoundations.

When I discuss the microfoundations project, I say that internal consistency is the admissibility criterion for microfounded models. I am not alone in stressing the role of internal consistency: for example in the preface to their highly acclaimed macroeconomics textbook, Obstfeld and Rogoff (1996) argue that a key problem with the pre-microfoundations literature is that it “lacks the microfoundations needed for internal consistency”. However when others talk about microfoundations, they often say they are designed to avoid the Lucas critique. This post argues that the latter is just a particular case of the former.

What do we mean when we say a model is internally consistent? Most obviously, we mean that individual agents within the model behave consistently in making their own decisions. A trivial example is if the model contains a labour supply equation and a consumption function that are supposed to represent the behaviour of the same agent. In that case we would want the agent to behave consistently. An agent that became more impatient, and so wanted to consume more by borrowing, but also wanted to work more hours (and so exhibit less impatience in their consumption of leisure), would appear to behave inconsistently unless their preferences or prices also changed.

Suppose instead of a labour supply equation, we had wage setting by unions. In this case we have a consistency issue between two sets of agents: consumers and unions. If we wanted to model unions as representing consumers as workers, we would want to align their preferences, so we are back to the previous case. However, there may be reasons why we do not want to do this. If we did not, we would want to make sure these agents interrelated in a sensible way.

What is meant by a sensible way? Consumers’ decisions will almost certainly depend on expectations about the wages unions set. Lucas called rational expectations a ‘consistency axiom’. If, for example, the union started being more concerned about employment than wages, we might expect consumers to recognise this in thinking about how their future income might evolve.

The Lucas critique is just an example of consistency between agents. The question is whether the private sector agents in the model react in a sensible way to policy changes. The classical example of the Lucas critique is inflation expectations. If monetary policy changes to become much harder on inflation, then rational agents will incorporate that into the way they form inflation expectations. A model that did not have that feedback would be ‘subject to the Lucas critique’.

Discussion of the Lucas critique often involves the need to model in terms of ‘deep’ parameters. A deep parameter (like impatience) is one that is independent of (exogenous to) the rest of the model. Here the parameters of the rule agents use to forecast inflation are not deep parameters, because (under rational expectations) they depend on how policy is made. But we can have a similar discussion about workers and unions: if the latter aimed at representing the former, then union attitudes to the wage/employment trade off should not be independent of worker preferences. Internal consistency is again more general than the Lucas critique.
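A stylised illustration of the point about forecasting rules (my own example rather than anything from Lucas): suppose policy makes inflation follow a simple autoregressive process around a target \(\pi^*\). The rational forecasting rule then inherits the policy parameters:

\[
\pi_t = (1-\rho)\,\pi^* + \rho\,\pi_{t-1} + \varepsilon_t
\quad\Rightarrow\quad
E_t\,\pi_{t+1} = (1-\rho)\,\pi^* + \rho\,\pi_t .
\]

If policy becomes tougher (\(\pi^*\) falls, or \(\rho\) changes), the coefficients of the forecasting rule change with it. They are not deep parameters, whereas something like the consumer’s rate of impatience is.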

Now obviously the Lucas critique is a particularly important kind of inconsistency if you are interested in analysing policy. But it is not the only kind of inconsistency that matters. A very good example of this is Woodford’s derivation of a social welfare function from the utility function of agents. Before this work, macroeconomists had typically assumed that a benevolent policy maker would minimise some quadratic combination of excess inflation and output, but this was disconnected from consumers’ utility. This had no bearing on the Lucas critique, which applies to any policy, benevolent or not. However it was a glaring example of inconsistency – why wasn’t the policy maker maximising the representative agent’s utility? After Woodford’s analysis, nearly every macroeconomics paper followed his example: not because it did anything about the Lucas critique, but because it solved an internal consistency issue.
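To be concrete, the result in question can be stated in the now familiar form of a quadratic loss obtained as a second-order approximation to the representative agent’s utility (a stylised statement: the weight \(\lambda\), and the relevant measure of the output gap, are derived from the model’s microfoundations rather than being imposed):

\[
W \approx -\tfrac{1}{2}\, E_0 \sum_{t=0}^{\infty} \beta^t \left( \pi_t^2 + \lambda\, x_t^2 \right).
\]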

Why does putting the Lucas critique in its proper place matter? I can think of two reasons. First, if you believe that avoiding the Lucas critique means you necessarily have a microfounded model, you are wrong. (In contrast, an internally consistent model will avoid the Lucas critique.) Second, it has a bearing on the idea often put forward that microfounded models are just for policy analysis, but not for forecasting. If we think that microfoundations is all about the Lucas critique, then this mistake is understandable (although still a mistake). But if microfoundations is about internal consistency, then it is easier to see how a microfounded model could be much better at forecasting as well as policy analysis.