Why Information Grows
Anybody interested in the future of mathematical theory in economics should read Cesar Hidalgo’s book Why Information Grows. There are many things to like about this lucid account of the evolution of our scientific understanding of information. One of the most important may be the simplest. It illustrates what it means to think like a physicist.
Thinking like a physicist is very different from using such tools from physics as partial differential equations. In fact, many of the economists who seem most interested in the tools of physics seem least inclined to think like a physicist.
Hidalgo’s account illustrates this mindset by describing the work of a few specific physicists (Boltzmann, Schrödinger, and Prigogine) but also by showing how the author, who has a Ph.D. in physics, thinks about the big question he addresses. (As an aside, Hidalgo’s summary of Prigogine’s work is wonderful, the first one that has ever made any sense to me.)
The key element in thinking like a physicist is being willing to push simultaneously to extreme levels of abstraction and specificity. This sounds paradoxical until you see it in action. Then it seems obvious. Abstraction means that you strip away inessential detail. Specificity means that you take very seriously the things that remain.
When Schrödinger asked in the 1940s how a germ cell stores genetic information, he reasoned that both a liquid and an amorphous solid had physical structures that were too irregular to be useful for storing information. On the other hand, a crystal was too regular and too hard to change. In particular, it could not be altered by x-ray radiation, which was known to cause changes in the traits that an offspring inherits. So he suggested that the information in a gene might be stored in a linear aperiodic (or irregular) crystal. In making this case, he was building on ideas that other physicists (most notably Max Delbrück) had advanced, but his presentation in a short book was widely read and influential. Crick and Watson were among his readers. They eventually showed that DNA has precisely the structure that he and Delbrück had proposed.
What Schrödinger did not do was develop an “as if …” theory. He could have said it is “as if there is a little book inside the germ cell that has all the genetic information written on its pages …” He could have calibrated a “little book model” and claimed success if he replicated some facts about heredity. But when Delbrück and Schrödinger theorized about information storage in a germ cell, they immersed themselves in the specificity of the cell and they took it seriously, even as they relied on the abstraction of information theory. Together, they contributed an important part of the foundation that allowed remarkable progress in the scientific understanding of genetics and molecular biology.
With this in mind, look again at models of growth in the stock of knowledge in the spirit of Lucas [2009]. These models assume that different people have different levels of human capital, which show up as different levels of productivity. People bump into others with a fixed probability per unit time. If X bumps into Y, who has higher productivity, X acquires Y’s human capital without any consent or action by Y.
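To make the sparseness of this setup concrete, here is a minimal simulation sketch of such a matching model. The function name, parameter values, and discrete-time setup are illustrative assumptions, not the specification in Lucas [2009].

```python
import random

def simulate_meetings(h, meeting_prob=0.1, periods=50, seed=0):
    """Toy version of the matching model described above.

    h            : list of initial human capital levels, one per person
    meeting_prob : chance per period that a given person meets someone
    periods      : number of discrete time steps to simulate
    """
    rng = random.Random(seed)
    h = list(h)
    n = len(h)
    for _ in range(periods):
        updated = list(h)
        for i in range(n):
            # Each period, person i may bump into one randomly chosen other.
            if rng.random() < meeting_prob:
                j = rng.randrange(n)
                if j != i and h[j] > h[i]:
                    # A costless, involuntary "transfer": i simply acquires
                    # j's higher human capital, with no encoding, no
                    # internalizing, and no consent by j.
                    updated[i] = h[j]
        h = updated
    return h

# Heterogeneous starting levels converge toward the maximum level.
print(simulate_meetings([1.0, 2.0, 5.0, 0.5, 3.0]))
```

Nothing in this setup says where the human capital lives or what has to happen for it to move from Y to X; that omission is the point of the reasoning that follows.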
As theory, this is about as far away as one can possibly get from the specificity that Schrödinger (or, I gather, Hidalgo himself) would apply to an analysis of person-to-person transfers of information. They would reason as follows:
- Information has to be stored somewhere.
- The knowledge that economists call human capital has to be stored in the neural connections in the brain.
- We know from the physics of the brain and of human sensory receptors that it is impossible for a person to have direct access to the knowledge stored as neural connections in someone else’s brain.
- As long as people have enough control over their own bodies to avoid torture and other forms of coercion, it is literally impossible for there to be an involuntary transfer of knowledge stored in neurons.
- To send information that is stored in neurons, the sender must first convert it into a form that can be picked up by someone else’s sensory receptors; that is, into something that the recipient can hear, see, taste, smell, or feel.
- To understand each other, the sender and the receiver must have already agreed on how mental concepts are coded.
- It takes time for a sender to encode knowledge.
- The rate at which someone can encode knowledge improves with practice.
- It takes time for a recipient to internalize the information by converting it back into connections between neurons.
- Internalizing information is also a skill that improves with practice.
- The process of converting knowledge from neurons into a codified form and back to neurons takes a nontrivial amount of time for the average type of knowledge, and there is substantial variation in this cost across different types of knowledge. For some types of knowledge the cost is effectively infinite.
- The coding and internalizing steps in the transmission of information are ambiguous and error prone, so feedback from receiver to sender and iteration may be inherent parts of an error correction process required for faithful transmission.
- Once people have invested in a shared spoken language and codified it as written text, the cost of storing and copying information that has already been codified falls relative to the cost of codifying and internalizing.
- Technological advances (such as printing ink on paper, recording speech and action, and displaying text on a computer screen) have made the cost of copying codified information vanishingly small compared to the cost of coding, internalizing, and doing error correction (see the sketch after this list).
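A toy accounting of these costs helps keep the magnitudes straight; the function names and numbers below are illustrative assumptions, not a calibrated model.

```python
def transmission_cost(encode_hours, internalize_hours, error_rate):
    """Expected cost of moving one piece of knowledge from one person's
    neurons to another's, when a fraction error_rate of transmission
    rounds fail and have to be repeated."""
    expected_rounds = 1.0 / (1.0 - error_rate)
    return expected_rounds * (encode_hours + internalize_hours)

def copying_cost(n_copies, cost_per_copy=0.001):
    """Cost of duplicating an already codified artifact; technology keeps
    pushing cost_per_copy toward zero."""
    return n_copies * cost_per_copy

# Person-to-person transfer is dominated by coding and internalizing time:
print(transmission_cost(encode_hours=10, internalize_hours=40, error_rate=0.2))  # 62.5
# Copying codified text to a thousand readers is almost free by comparison,
# though each reader still has to pay the internalizing cost:
print(copying_cost(1000))  # 1.0
```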
As far-fetched as it might seem, focusing as a physicist would on the specifics of communication between two people yields immediate payoffs in the form of guidance for economists about how to model both human capital and codified information:
- There is a crucial distinction between human capital (stored in neurons) and codified information (stored in some external form, such as printed text or bits on a hard drive).
- Anything stored in neurons is a rival good.
- A person’s human capital is fully excludable as long as people have legal control over their own bodies. So there are no human capital “spillovers” and no human capital “externalities.”
- As the cost of copying codified information goes to zero, it becomes a pure nonrival good.
- Someone can exclude others from using codified information by restricting access to the artifacts it is stored in.
- Someone may also be able to exclude others from using codified information if the institutional arrangements grant enforceable property rights.
- If there is limited excludability for codified information, some people may receive unearned benefits that resemble spillovers.
- The default assumption in any growth model should therefore be that human capital is a rival good and that codified information is a distinct, partially excludable, nonrival good (see the sketch after this list).
- Because shared information is more valuable, people will invest in the specific types of human capital that facilitate the coding and internalizing steps that convert knowledge from neural connections in one brain into neural connections in another brain.
- Technologies that lower the cost of storing and reproducing codified information are complements with the human capital for codifying and internalizing information. Together, these technologies and complementary investments in human capital can radically accelerate the rate of diffusion of the types of information that can be codified at low to moderate cost.
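One way to hold on to this distinction in a model is to treat the two goods as separate types. The class and attribute names below are illustrative assumptions, a minimal sketch rather than a worked-out model.

```python
from dataclasses import dataclass, field

@dataclass
class HumanCapital:
    """Knowledge stored in one person's neurons: rival and fully excludable."""
    owner: str
    skill: float

    def productivity(self, person: str) -> float:
        # Only the owner can draw on knowledge stored in their own brain.
        return self.skill if person == self.owner else 0.0

@dataclass
class CodifiedInformation:
    """Knowledge stored in an external artifact: nonrival (copying is
    nearly free) and only partially excludable."""
    content: str
    access: set = field(default_factory=set)

    def read(self, person: str):
        # Any number of people can use the same content at the same time,
        # but only if they can get at the artifact it is stored in or have
        # the legal right to use it.
        return self.content if person in self.access else None
```

Lumping these two goods together as undifferentiated “knowledge” is exactly what the default assumption above warns against.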
Thinking like a physicist is easy. Anybody can learn to do it. It is much easier than solving partial differential equations. Why, nevertheless, do so many economists refuse to take advantage of the power offered by its commitment to extreme specificity? Why is there no professional return to investing in the clarity that it would bring to their models?