I had hoped to find time to offer a more thoughtful response to Simon Wren-Lewis’s most recent comments on the way forward in macroeconomics, but life is intruding, and now I owe a response to Ray Fair (after I look at the work that he points to).
For now, I’ll go ahead with what I hope is a suggestion that could encourage some kind of consensus:
Perhaps the discussion about macro would benefit from a distinction like the one in biomedicine between bench science and clinical work.
In my interpretation, what Lucas and Sargent were trying to do in the 1970s was to develop the bench science side of macroeconomics. The neoclassical synthesis had declared the basic scientific questions about macroeconomic fluctuations out of bounds. These included: How could an economy be stuck in a persistent, inefficient equilibrium? What departures from the assumptions of the first welfare theorem could explain an event like the Great Depression? And why does the nominal quantity of money matter for any real outcome?
To attack these questions, they proposed a simple applied general equilibrium (SAGE) model that was very much in the spirit of the Samuelson program. (DSGE misses the importance of a model that is so simple it could even be called silly, as Krugman suggests. Moreover, the acronym has been so tainted by association with an RBC program that abandoned every one of the big basic-science questions in macro that the decent thing to do might just be to stop using it.)
Lucas (1972) got things off to a very promising start. It offered both a technical advance (a tractable way to introduce uncertainty and expectations) and an initial conjecture about the fundamental imperfection (incomplete information). It also boasted prematurely about new insights into policy. If, at that time, we had already established the distinction between bench science and clinical practice, this might have been recognized as harmless obiter dicta.
Assuming the profession can get back to generally sensible bench-science inquiry into the basic scientific questions of macroeconomics (and of course, there were economists who kept doing good bench science on macro questions far away from the RBC reality distortion field), we could copy the quid pro quo that prevails in biomedicine: bench scientists get the freedom to explore any question they want. In return, when they get a result that they think might have implications for clinical practice, the bench scientists can’t just pull rank and order the clinicians to switch to some new clinical protocol.
The bench scientists have to persuade other bench scientists first. Then the bench scientists together have to persuade the clinicians, and this will not, in general, be an easy task. For every important bench-science insight (e.g. that clinicians should wash their hands, or that you can treat ulcers with antibiotics) there are countless episodes in which the bench scientists persuaded each other that they were onto something really big that turned out to be a whimper or simply wrong. The recent results from replications of experiments in psychology should not come as a surprise. They reflect a general pattern that is well known in medicine: the benefits claimed in pilot studies are almost always revised down as experience with the treatment accumulates.
So the clinicians are going to be appropriately skeptical. This will irritate the bench scientists, but so what.
One way to interpret what I think Simon Wren-Lewis, Robert Waldmann, Brad DeLong, and Ray Fair have in mind when they suggest that something was lost during the new classical revolution is this: the sudden uptake of new methods such as SAGE (simple applied general equilibrium) models, in the enthusiasm for new work on bench science, need not have forced a radical change in how people on the clinical side did their research or formulated their advice.
As I indicated before, it is quite possible that experts with experience using what Ray Fair suggests that we call the CC (for Cowles Commission) macro models would have been able to offer better clinical advice than any of their counterparts from bench science.
Of course, this leaves open the question of how clinicians establish among themselves who truly is an expert and who isn’t, but this is a problem that has been solved elsewhere so there is no reason why it can’t be solved in macro too.
I have to say that I find it hard to understand how to evaluate the research that someone else does (either clinical or bench science) using computer models that are so complicated that no person understands the internal workings of the model. So I’ll read the papers that Ray Fair references with some skepticism. But in fairness to his position, I also have to admit that in computer science, humans routinely work with highly nonlinear models that have far more parameters than any CC model ever did. Moreover, progress in this area seems to be coming at an extremely rapid rate, so it is probably a good idea to keep an open mind.
In biomedicine, some people do only bench science; others only treat patients; a few manage to do both; a more common package includes clinical practice and clinical research.
One of the things that the distinction between clinical work and bench science might encourage is a broader range of research in macro. The advocates for bench science in macro did not simply claim that their research was good. Over time, they seemed to be claiming that any research in a style other than theirs was worthless. This probably did discourage the more eclectic, inductive, grounded-in-the-details type of clinical macroeconomic research that people who built the big macro models used to do.
Of course, this type of work need not make direct use of CC models. For example, I’ve always thought that a nontrivial fraction of the confidence that working economists have in the power of monetary policy derives from Friedman and Schwartz’s Monetary History. And of course, good work along these lines continued — see, for example, Romer and Romer (2010). (Yes, we have no relation.)
So part of the deal has to be that the bench scientists refrain from getting so full of themselves that they deny the value in other types of work.
The late Donald Stokes made what I think was a compelling case for the importance of what I’m calling clinical research. He called it work in Pasteur’s Quadrant and made the case for it in a short, very readable book with this title.
Here is my translation of the key point that Stokes makes: Curiosity-driven Bohr-style basic research, guided only by some group of academics, is at too great a risk of herding effects that drive the research agenda off into irrelevancy.
The poster child / whipping boy for this risk is string theory in physics, but when we look back, the type of real business cycle theory that thought it could treat the work of Friedman and Schwartz as if it were a history of life on another planet (and just make fun of more recent replications like that of David Romer and Christie Romer) might take over the top spot.
As a final comment, I suspect that part of what made MIT so influential in macro and international is that it got to the Pasteur’s Quadrant sweet spot. Because of the work done by Samuelson and Solow, and subsequent work at Chicago by people like Buzz Brock (perhaps even Lucas) that Rudi Dornbusch and Stan Fischer brought back to MIT, economists there were ready to take advantage of the power of SAGE models. But students working there also had as role models faculty members like Solow, who had served on the President’s Council of Economic Advisers. At least in some cases, they were directly exposed to real clinical/practical problems facing actual governments. Paul Krugman has written about the effect it had on him as a graduate student to be part of an effort organized by Dick Eckaus and led by Rudi Dornbusch to advise the Central Bank of Portugal in 1976. It would be interesting to know more about who else went on that trip or was involved in related work that Rudi was doing, and about the effect it had on their professional careers. And about others, like Ken Rogoff, whose first job was with the Board of Governors of the Fed.