Several economists, including Brad DeLong and Paul Krugman, have commented on how macroeconomics developed in the late 1970s. There are many points on which we agree, and a few that merit some additional attention.
Lucas and Sargent were right in 1978 when they said that there was something wrong, fatally wrong, with large macro simulation models. Academic work on these models collapsed.
Lucas and Sargent were wrong when they claimed that the new type of model they proposed as an alternative was already capable of giving advice to policy makers. By leaping too soon to policy advice, before the new models had even begun to be vetted by the scientific process, they undermined their scientific agenda.
Robert Solow had a choice about how to respond. He chose sarcastic denial over serious engagement. His optimistic assessment of the prospects for the simulation models, a grade of B or B- but nothing “in that record that suggests suicide,” is hard to reconcile with the decision by virtually all macroeconomists to abandon work on them.
Solow probably responded harshly and defensively because he was worried about the possibility that people who could influence policy, such as the economists at the conference who worked in the Federal Reserve System, would accept the policy advice that Lucas and Sargent offered. He wrote, for example, that the average citizen should understand that in the battle over models, “the bullets are real and they may soon be fired at you by the Federal Reserve.”
Policy was the one dimension along which the large simulation models, infused with expert judgment, were ahead of the simple, applied general equilibrium (or SAGE) models. Because he thought the contest was on for the hearts and minds of policy makers, Solow apparently decided that he could not trust the process of science. After all, you never know what these young economists, excited about their mathematical tools, will do with them. So he fell back on the rhetoric of debate and politics.
In so doing, he used the same techniques that economists from Cambridge England used to attack his model of output as a function of a stock of capital. Joan Robinson probably had the same concern. What will young Samuelson and Solow do with all their maths? Because an aggregate production function might lend support for a marginal productivity theory of the distribution of income, perhaps we should strangle it in the crib.
After the premature leap to policy by Lucas and Sargent, Solow’s response did two types of damage.
It set back the dynamic of science. After Lucas and Sargent’s sweeping policy claim and Solow’s dismissive response, there was no way forward. What theory or evidence does one cite in response to a position stated in the words “yes, dear, yes, dear”?
It also undermined the quality of the policy advice that Solow was so concerned about. As a group, economists have influence over policy only if they have reached a consensus about what the effects of a policy will be. Solow’s intuition about fluctuations and policy may have been right, but scoring debating points and offering implausible assurances about the large simulation models was not an effective way to build a scientific consensus that supported that intuition.
To be sure, Solow did not cause the dysfunction that has characterized macro for the last 20 years. Responsibility for this has to lie with Lucas, Sargent, and their followers, who retreated from scientific engagement with any macroeconomist who disagreed with them and gave up on such basic scientific principles as using evidence to evaluate models.
Still, Solow did miss the opportunity to embrace SAGE models as a tool for codifying insights about fluctuations. He could have used a SAGE model to provide the intellectual foundation for the type of policy that he supported so passionately, an active government policy that reduced the needless, unfair suffering that an economic downturn causes.
What macroeconomists today can learn from this history is that Solow made the same mistake as Lucas and his supporters. They all leapt too soon into the domain of policy. They should have given the dynamic of science time to do its work.
In the summer of 1978, Lucas and Sargent were making three claims:
(a) Existing multi-equation macro simulation models were not identified. That is, these models summarized correlations in the data but did not yield reliable statements of the form “if the government does X, this will cause Y to happen.”
(b) It was time to use SAGE models to address such fundamental questions about economic fluctuations as why changes in the supply of money influence economic activity; and
(c) SAGE models will imply that an active monetary policy cannot stabilize economic fluctuations.
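Claim (a) can be made concrete with the textbook simultaneity example (my illustration, not one from their paper). Suppose quantity demanded and quantity supplied are both linear in price:

```latex
\begin{align*}
  q_t &= \alpha p_t + u_t & \text{(demand, with shock } u_t\text{)} \\
  q_t &= \beta  p_t + v_t & \text{(supply, with shock } v_t\text{)} \\
  p_t &= \frac{v_t - u_t}{\alpha - \beta} & \text{(equilibrium price)}
\end{align*}
```

Because the equilibrium price depends on both shocks, a regression of \(q_t\) on \(p_t\) converges to a variance-weighted mixture of \(\alpha\) and \(\beta\), not to either structural slope. Without a restriction imposed from outside the data, say an instrument that shifts only one of the curves, the observed correlations cannot answer the question “if the government does X, what happens to Y?” That is the sense in which the large models, whatever their fit, were not identified.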
Solow thought that Lucas and Sargent were wrong about the policy ineffectiveness claim (c). DeLong, Krugman, and I all agree. In the 2013 introduction to his collected papers, Lucas uses some asides about the Great Depression and the Great Recession to admit that now even he agrees. Claim (c) is what DeLong and Krugman have in mind when they say that Solow was right and Lucas was wrong.
Yet all macroeconomists now agree that Lucas and Sargent were correct about the fatal problems with the large simulation models. Much of Solow’s response amounted to an implausible denial that there was anything wrong with them. So on this point, the roles are reversed. Lucas and Sargent were right and Solow was wrong.
My initial position
In my previous posts here and here, I suggested that someone in Solow’s position could have responded differently to the critique by Lucas and Sargent. At a minimum, he could have acknowledged that there were problems with the existing models and that it might be prudent to begin exploring alternatives. If he had, macroeconomics might now be in better shape because Lucas and his followers might not have concluded that it was impossible to reason with macroeconomists who disagreed with them.
DeLong and Krugman
DeLong and Krugman argue that Lucas and Sargent were already so wedded to assertion (c) about policy ineffectiveness that it would not have mattered if Solow had used a SAGE model to show that the policy ineffectiveness result was wrong.
Two pieces of evidence support their view:
1) There were economists at MIT (Rudi Dornbusch and Stan Fischer) who responded exactly as I suggest Solow could have, and this did not prevent Lucas and his followers from closing ranks and cutting off communication with other macroeconomists.
2) Prior to 1978, Lucas had already used the kind of unhelpful debating language that I criticize Solow for using. Lucas wrote that a seminar audience would respond to Keynesian theorizing with “whispers and giggles.” His remarks were published in 1980, but in the introduction to his collected papers, Lucas indicates that they are from a talk he gave shortly after coming back to Chicago from Carnegie Mellon, which implies some time during the 1975-6 academic year. (Below, I’ll provide some context for this remark.)
In 1978, Fischer and Dornbusch had just attained the rank of full professor. They did not attend the Boston Fed conference where Lucas and Sargent tangled with Solow. At that time, the external perception was that Solow was the more influential voice on macroeconomic theory and policy even though inside the department at MIT, Dornbusch and Fischer were the ones training the next generation of macroeconomists. So Solow’s response mattered.
My reading of what Lucas and Sargent wrote in 1978 suggests that they would have been willing to engage in a scientific exchange even if it ultimately undermined their assertion about policy ineffectiveness. I think that they cared more about following the Samuelson program (that is, using SAGE models) than the policy ineffectiveness result. But I have to admit that reasonable people can read the same evidence in different ways.
In 1978, Lucas and Sargent complained that it was their critics who were so wedded to a policy conclusion that they would not engage in a scientific discussion. When I compare what Solow says with what Lucas and Sargent say, Solow does seem to be the one who is more firmly committed to a policy conclusion and less open to a scientific analysis of the strengths and weaknesses of the two types of models.
Even if I am correct that in 1978 Lucas and Sargent would still have been open to considering almost any type of SAGE model, even ones where policy mattered, this open attitude did not last long. Within a few years, Lucas and company had closed ranks behind Edward Prescott, committed their coalition to his real business cycle models, given up on the Samuelson program, abandoned the careful macro econometrics that Sargent had pioneered, and started using theoretical arguments that were opaque. In short, they stopped doing science.
This was inexcusable. Other economists should not have accommodated it. At that time, I was not working on economic fluctuations. But in retrospect, I include myself among the broad group of macroeconomists who should be criticized for failing to be more vocal in opposition to the direction the new Chicago school of macroeconomics was pursuing.
Problems with the Large Simulation Models
Part of the problem with the large simulation models is that their treatment of identification was not even remotely credible. Conditional forecasts were frequently wrong. The most obvious examples were forecasts made on the basis of the correlations of a traditional Phillips curve, but there were other signs of trouble. Here are Lucas and Sargent, from their 1978 paper:
In fact, however, the track record of the major econometric models is, on any dimension other than very short-term unconditional forecasting, very poor. Formal statistical tests for parameter instability, conducted by subdividing past series into periods and checking for parameter stability across time, invariably reveal major shifts (for one example, see ). Moreover, this difficulty is implicitly acknowledged by model-builders themselves, who routinely employ an elaborate system of add-factors in forecasting, in an attempt to offset the continuing “drift” of the model away from the actual series.
You have to be an old guy to remember these details, but back then, macroeconomists took it for granted that the builders of large macro models would introduce “add-factors” that they could use to make seat-of-the-pants adjustments so that things would come out right. A modern macroeconomist can think of adjusting the add-factors as an early form of calibration.
Experts with good intuition and judgment might have been able to use some combination of the simulation model and their own judgment to make reasonable forecasts or to provide reasonable policy advice. But expert judgment is not science. Science requires that intuition and judgment be codified, communicated, and verified by others. Large computer simulation models were useless for this purpose. No one could understand simulation models that were so complicated.
My hunch is that what killed the large models was this mismatch between the model and the mind. To get a sense for the magnitudes, consider this example. At the same conference where Lucas and Sargent presented their paper, Ray Fair presented a paper that described his version of such a model. It covered 23 years of data. It had 97 equations and 188 estimated coefficients. (I don’t know whether he had add-factors and if so whether this count of coefficients included them.)
The problems with identification and parameter instability were real but they were old news. Economists working with these models must soon have realized that they were useless as tools for codifying and communicating insights.
I suspect that what triggered the collapse was the demonstration by Lucas that SAGE models were a viable alternative. Codifying and communicating basic insights is what simple models, or in Krugman’s terminology, silly models, are good for. Codifying and communicating is how science works toward consensus.
Samuelson had shown the power of SAGE models for understanding international trade. Solow showed their value for understanding growth. After Lucas (1972) demonstrated that they could address real questions about economic fluctuations, there was no holding macroeconomists back. How could you get a bright Ph.D. candidate who had read Lucas (1972) to write a thesis on a refinement of equation #62 out of 97?
The Lucas quote about “whispers and giggles”
The Lucas quote that Krugman cites is an example of what I am calling debating tactics, or as Krugman calls it, “trash-talk.” This was just as harmful as the remarks from Solow that I’ll quote below, but oddly, in this case all the damage seems to have been done to Lucas’s home team. The new Keynesian economists who were subjected to ridicule paid little attention and certainly did not go all tribal.
On the other hand, Lucas’s followers apparently took this language as the official go-ahead for derision that could be directed at anyone doing macro who was not affirmatively on their side. Behaving this way dulls the mind. The followers who invoked this remark to criticize new Keynesian SAGE models apparently did not have the presence of mind to ask if the theorizing that Lucas was referring to concerned traditional multi-equation computer simulation models rather than the SAGE models that new Keynesians were developing.
It is clear from the context that Lucas was in fact referring to the large simulation models. This is the only way to make sense of his claim that young Ph.D. students were no longer doing this kind of theorizing. Giggle is an exaggeration, but it is probably accurate to say that members of the audience would have started whispering if someone had presented a model with 97 equations and 188 estimated coefficients in a macro seminar when I was in graduate school at MIT from 1977-79, Queen’s University in 1979-80, or Chicago from 1980-82.
Advocacy for SAGE Models
In their 1978 paper, Lucas and Sargent expressed a willingness to engage seriously in a scientific discussion about how to build macro models that try to explain why money matters. In particular, they expressed interest in contract models, search models, expectations and learning, and types of imperfect information beyond that in the Lucas 1972 paper. Later, Lucas (with Golosov, JPE 2007) investigated a model with menu costs. Sargent wrote an entire book about the role of learning in the dynamics of inflation.
When Samuelson described the neoclassical synthesis, he carved out an exception for macroeconomic fluctuations, saying in effect that SAGE models were incapable of explaining these fluctuations. What Lucas did with his 1972 paper was to signal that with an extension that allowed for both uncertainty and time, and with such departures from the assumptions required for the welfare theorems as imperfect information, this carve-out was no longer necessary. He showed that simple models could make sense of both expectations and a form of monetary nonneutrality. After that, the race was on to come up with extended versions of these models that better captured the evidence about fluctuations.
My reading is that in 1978, Lucas and Sargent were committed above all else to the question of how to extend the SAGE models and use them to complete Samuelson’s program by getting rid of the carve-out for economic fluctuations. I see their claims about policy as attention-getting behavior that turned out to be a strategic mistake, not some nonnegotiable commitment.
Based on my personal experience, I do not think that politics in the sense of Democrats versus Republicans was a factor at this time. When I arrived at Chicago in 1980, someone who seemed to know told me that Lucas was a Democrat. I have no idea if this was true, but in my two years there, I never saw or heard him say anything that lent support to or detracted support from this claim.
Because history casts a long shadow, I think it is worth getting the intellectual history of macroeconomics right. Solow’s choice about how to respond to the critique of the large macro simulation models is every bit as important to this history as the subsequent decisions that Lucas and his followers made to withdraw from science.
I can’t reproduce verbatim the kind of remarks that I heard from Frank Hahn or Solow circa 1978, but you can get the flavor from the closing remarks that Solow gave at the conference after Lucas and Sargent presented their paper. I’ll quote Solow starting from the beginning and will not edit, but will interject some comments.
The group at this conference is fairly uniform. The speakers are all academic economists, especially if you count Geof Moore and Steve McNees as honorary academic economists. A nonprofessional would find this whole meeting very mysterious. The discussion is very abstract; it is full of insiders’ language; people break into hysterical laughter for incomprehensible reasons. There are also some people here who are more directly concerned with practical matters. There are even more such people out in the streets of Edgartown, and those are people who could not care less about rational expectations or even about irrational expectations or identifying restrictions, whatever those words mean.
Rhetorically, “identifying restrictions … whatever those words mean” is the functional equivalent of George W. Bush’s famous phrase “fuzzy math.” It is a signal that Solow is not going to address the substance of the Lucas and Sargent critique of identification in the large simulation models or their claims about the advantages of SAGE models.
Practical people have been led to believe, first, that economists knew all the answers, and now they seem to believe that economists know absolutely nothing or perhaps even know negative amounts about the determinants of inflation. I guess many practical people would like to know what the truth of the matter is, and whether economics offers any guidance out of what they perceive to be a mess. I would like to assure the practical people in this room and also the ones out in the streets of Edgartown that although the battles that are fought in conferences like this appear to be fought with antique pop guns, the bullets are real and they may soon be fired at you by the Federal Reserve.
Translation: “Academic models are dangerous because they can encourage the adoption of bad policies.”
I am supposed to give my impression of where this conference leaves us, and Bill Poole will, of course, say exactly the opposite in a few minutes. Naturally I begin with my opinions, and I have to confess that I haven’t had any blinding revelations in the last two mornings; but I have learned some useful things.
What really brings us here is Steve McNees’ picture of the 1960s and the 1970s. In opening the conference, Frank Morris mentioned his disappointment or disillusionment - which many others share - that the analytical success of the 1960s didn’t survive that decade. I think we all knew, even back in the 1960s, that as Geof put it, “inflation doesn’t wait for full employment.” These days inflation doesn’t even seem to care if full employment is going along on the trip. McNees documented the radical break between the 1960s and 1970s. The question is: what are the possible responses that economists and economics can make to those events?
Translation that channels Frank Hahn (but you have to say this to yourself in a voice dripping with condescension): “My dear boy, … we knew all along that there was an issue with the Phillips curve.”
One possible response is that of Professors Lucas and Sargent. They describe what happened in the 1970s in a very strong way with a polemical vocabulary reminiscent of Spiro Agnew. Let me quote some phrases that I culled from their paper: “wildly incorrect,” “fundamentally flawed,” “wreckage,” “failure,” “fatal,” “of no value,” “dire implications,” “failure on a grand scale,” “spectacular recent failure,” “no hope.” Now if they were doing that just to attract attention, for effect, so that people don’t say “yes, dear, yes, dear,” then I would really be on their side.
Even in 1978, the phrase “yes, dear, yes, dear” must have sounded at least a little bit inappropriate.
Every orthodoxy, including my own, needs to have a kick in the pants frequently, to prevent it from getting self-indulgent, and applying very lax standards to itself. But I think that Professors Lucas and Sargent really seem to be serious in what they say, and in turn they have a proposal for constructive research that I find hard to talk about sympathetically. They call it equilibrium business cycle theory, and they say very firmly that it is based on two terribly important postulates - optimizing behavior and perpetual market clearing. When you read closely, they seem to regard the postulate of optimizing behavior as self-evident and the postulate of market-clearing behavior as essentially meaningless. I think they are too optimistic, since the one that they think is self-evident I regard as meaningless and the one that they think is meaningless, I regard as false. The assumption that everyone optimizes implies only weak and uninteresting consistency conditions on their behavior. Anything useful has to come from knowing what they optimize, and what constraints they perceive. Lucas and Sargent’s casual assumptions have no special claim to attention. Even apart from all that, I share Franco Modigliani’s view that the alarmism, the very strong language that I read to you, simply doesn’t square with what in fact actually happened. If you give grades to all the standard models, some will get a B and some a B minus on occasion, especially for wage equations, but I don’t see anything in that record that suggests suicide.
Anyone who was committed to the ambition of extending Samuelson’s program and using SAGE models to understand a concept like expectations would surely have been at a loss about how to respond to such remarks as these.
Given how quickly and decisively economists abandoned the large computer simulation models, Solow’s grade of “B or B-” sounds way off the mark. Solow surely understood that what Lucas and Sargent were saying about the lack of identification was right. And from his success in growth theory, he knew the value of SAGE models. So why did he respond this way?
As I have already suggested, the likely explanation is that Solow was trying to defend active monetary policy. He probably thought that in the hands of an expert who had looked at a broad range of evidence and who could use the judgment distilled from that evidence to adjust the add-factors and nudge a simulation model in the right direction, the result would be a better answer to a policy question than one would derive by taking a simple/silly GE model literally. But because this proposal–leave well enough alone and just trust the experts–was inconsistent with the process of science, he defended it using debating tactics.
The tip-off about the role of judgment and other types of evidence comes when he comments on the importance of downward rigidity in nominal wages, a remark that looks prescient in light of the recent experience with a deep recession and low inflation. Quoting again from Solow’s concluding remarks:
I even have trouble with the vertical long-run Phillips curve. I see its attractions very clearly, and I saw them at the very beginning. In fact, there is a peculiar inner conflict here. Deep down I really wish I could believe that Lucas and Sargent are right, because the one thing I know how to do well is equilibrium economics. The trouble is I feel so embarrassed at saying things that I know are not true. The long-run vertical Phillips curve seems so inevitable. On the other hand, nobody believes the deflationary half of the proposition. I don’t know anybody who would even lie out in the sun, let alone be burned at the stake, for the belief that if the unemployment rate is U* [e.g. the NAIRU] plus epsilon and we wait long enough, there would be accelerating deflation. That part no one believes.
Here Solow makes a very important point, one that deserves to be repeated, and one that supporters of Samuelson’s SAGE program may not have emphasized enough. Models can be helpful tools, but in the end, facts always trump a model.
Chemists don’t like having a periodic table with an unwieldy number of elements. Physicists do not like being stuck with a theory of quantum mechanics that no one can understand. I loved the mathematics of convex duality and perfect competition and resisted the introduction of nonconvexities that could capture the economics of ideas. But scientists do not get to pick the models that they like best. Facts are facts. The right model is one that best fits the facts.
I think that Solow believed that it was a fact that workers get angry if an employer cuts the nominal wage. Seems like a fact to me.
The joke that Solow uses about “even lie out in the sun” is telling. This is not trash talk or a debating tactic. It seems more like a sign of insecurity. Solow saw no way to reconcile this fact about nominal wages with prevailing economic theory. Ordinarily, when the model does not fit the facts, you change the model. “Actually, Dr. Einstein, god does play dice with the universe.” But Solow hesitates. He does not make the obvious remark, “there is something wrong with how we have been doing equilibrium theory.”
I can’t help wishing that Solow had bet on science and had done what Lucas did–make himself vulnerable by writing down a simple/silly SAGE model that puts his fact front and center. To be sure, doing this would have left him open to ridicule. This is part of the package one has to buy into to take advantage of simple/silly models.
Suppose it is true that workers get angry when their employer changes the terms of a contract specified in nominal terms. Suppose it is also true that SAGE models are the right tools for codifying and communicating scientific insights. Then it must be possible to formulate a SAGE model that accurately captures the facts about nominal wage rigidity.
The evidence suggests that workers react differently when an employer cuts the nominal wage than when inflation reduces the real wage and the employer leaves the nominal wage unchanged. From a consequentialist perspective, this is logically inconsistent, but there are lots of other indications that human moral systems do not follow a consequentialist logic. (See for example the discussion of the moral dilemma known as the trolley problem.) Our moralistic preferences are what they are. Facts are facts.
Solow had already served on the front lines of theory, fighting to make the world safe for aggregate production functions, so one can understand his hesitancy about volunteering to write down the first macro model based on moralistic preferences. But would the reception have been so bad?
In a series of papers, Julio Rotemberg has used models with moralistic preferences to re-examine wage and price setting. (See NBER 13755 or here; you can find some of his related work here.) I used a simple model of moralistic behavior to discuss the history of the Social Security System and the politics of entitlement. I would not say that these papers took the profession by storm, but both of us still have jobs. (See, now I’m the one trying to hide behind a joke. Why are we so insecure about saying that we need a model that reflects how people behave? Who wrote the part of the SAGE model style guide that says we can make “wildly unrealistic” assumptions about almost anything else but cannot make well-founded assumptions about human motivation?)
Akerlof, Dickens and Perry, Altonji and Devereux, and Bewley all provide evidence of downward nominal rigidity of wages. Bewley deserves particular credit for following up on another of Solow’s points, that we can collect important evidence by talking to people. Akerlof, Dickens and Perry go on to show how wage rigidity generates aggregate effects.
So back in 1978, instead of simply going all Joan Robinson, Solow could have developed a SAGE model with nominal rigidities that are derived from moralistic preferences. Then his response to Lucas and Sargent could have been:
“Fair enough. I got to use a SAGE model for growth. You get to use a SAGE model for fluctuations. But so do I, and my SAGE model of fluctuations fits the facts better than yours. And mine shows that active stabilization policy Pareto dominates passive policy.”
If he had responded this way, and if I’m right that what mattered to Lucas and Sargent then was the Samuelson program and SAGE models, they might have listened and taken this alternative SAGE model seriously. Even if they did not, other macroeconomists surely would have. The scientific consensus about the importance of nominal rigidities that could then have emerged might have mattered when government officials had to decide how to respond to a serious recession at a time of low inflation. This consensus might have supported a more active, and more efficient, response to the Great Recession.
Sticking to Science
So why bother with all this oldster inside baseball? Because we can learn from history. What it teaches us is that economists should have faith that the social process we call science will get to the right answers and should be patient enough to let that process work.
The deep insight of the Samuelson program was that simple/silly applied general equilibrium models are a powerful way for economists to codify and exchange ideas. Solow deserves credit for showing their power in the theory of growth. Lucas deserves credit for showing their power in the theory of economic fluctuations.
Most first-draft SAGE models that economists explore will turn out to be wrong. Knowledge accumulates through a process of being silly enough to be precise, and then culling out the silly/precise models that do not fit the facts well enough. The process is messy and takes its time, but it works.
Sticking to the social process of science involves a few common-sense principles.
1) For the questions economists work on, there is one truth. People will treat you like some country bumpkin, unschooled in the verbal status contests of philosophy, if you say this out loud: “My dear boy …” But say it to yourself.
2) Science is the only social process that has ever achieved a voluntary consensus among large numbers of people. It builds consensus by making credible progress toward that one truth.
3) No person has privileged access to the truth, no matter how impressive his or her accomplishments might be. Critical scrutiny of everyone’s work is an essential part of the process of making sure that claims are stated precisely, and culling out the precisely stated claims that do not fit the facts.
4) To sustain a division of labor, trust but verify. Verifying tends to be more expensive than constructing a phony argument or fabricating some evidence; and obfuscation can dramatically raise the cost of verifying. So a scientific claim has to be backed by a person and it will be more credible if the person has a reputation for clarity, precision, and integrity.
5) Scientists have to codify and communicate; mathematical models can help them do so with clarity and precision.
6) Facts always trump models.
Finally, it is good to remember that macroeconomists work on important problems and that important problems are difficult. Keeping this in mind can foster humility, tolerance, and a thick skin.