How Science Changes

An interview with Samuel Arbesman, author of The Half-Life of Facts

Dim Dimich/Shutterstock/Rebecca J. Rosen

In 1947 a mathematician named Derek J. de Solla Price came to Raffles College in Singapore to teach. He did not, Samuel Arbesman writes in his new book, "intend to spearhead an entirely new way of looking at science." But as you probably guessed, that's exactly what he did, or we would not be retelling his story here.


Price's discovery came when construction on the college library forced him to take a complete set of Philosophical Transactions, published by the Royal Society of London since 1665, back to his dormitory. At home, he stacked the journals up chronologically against a wall, with each pile sorted by year. "One day," Arbesman writes, "while idly looking at this large collection of books that the library had foisted upon him, he realized that the heights of the piles of these bound volumes weren't all the same. But their heights weren't random either. Instead, he realized, the heights of the volumes fit a specific mathematical shape: an exponential curve."
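Price's later work put a number on that curve: he estimated that the cumulative scientific literature doubles roughly every 15 years (his figure, not one quoted in this article). A minimal sketch of what that implies for the heights of those piles:

```python
import math

# Price's estimate: the scientific literature doubles about every 15 years.
DOUBLING_YEARS = 15
GROWTH_RATE = math.log(2) / DOUBLING_YEARS

def relative_height(years_elapsed):
    """Height of the journal pile relative to year zero,
    assuming smooth exponential growth."""
    return math.exp(GROWTH_RATE * years_elapsed)

for t in (0, 15, 30, 45):
    print(t, round(relative_height(t), 1))  # 1.0, 2.0, 4.0, 8.0
```

The function names and the smooth-growth assumption are illustrative; the real volumes, as Arbesman notes, only approximately fit the curve.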

It is this insight that opened up the field of scientometrics, which is the topic of Arbesman's new book, The Half-Life of Facts, published this fall by Current. I asked Arbesman a few questions about his book -- what it means to quantify scientific learning, and what we can really learn from this sort of study -- and he sent me his answers (via email) to share with you below.

What is scientometrics and what is meant by the title of your book The Half-Life of Facts?

Arbesman: Scientometrics is the quantitative measurement and study of science. Or, more generally, it's the science of science, which ranges from understanding how the number of journals grows, to how scientific collaboration works, to even how scientists receive grants. It is part of the larger body of research devoted to understanding more broadly how knowledge grows, changes, and gets overturned over time.

While many of us know that knowledge changes (such as what is currently considered nutritious, for example) and that any individual fact might seem to shift with a certain degree of randomness, the change in what we know overall obeys mathematical regularities. A powerful analogy is to radioactivity. While we can't predict when a specific atom is going to decay, when we have lots of atoms together -- such as in a chunk of uranium -- suddenly we go from the unpredictable to the systematic and the regular. And we can even encapsulate this decay in a single number: the half-life of the radioactive material.

The same kind of thing occurs with knowledge: Even if we can't predict which new discovery is going to occur or which fact is going to be overturned, facts are far from random in the aggregate. Instead, there is an order and a regularity to how knowledge changes over time. And this awareness and understanding is what the "The Half-Life of Facts" is meant to convey.
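The decay analogy can be made concrete. In the book, Arbesman cites a study that measured a half-life of roughly 45 years for the medical literature on cirrhosis and hepatitis; a minimal sketch of the underlying arithmetic (the function and variable names here are illustrative, not from the book):

```python
def fraction_remaining(t, half_life):
    """Fraction of an initial population -- atoms, or facts still
    considered true -- expected to survive after time t."""
    return 0.5 ** (t / half_life)

# With a measured half-life of ~45 years for one medical literature:
print(fraction_remaining(45, 45))  # 0.5  -- half overturned or obsolete
print(fraction_remaining(90, 45))  # 0.25 -- after two half-lives
```

As with uranium, the formula says nothing about which individual fact will decay, only how the aggregate thins out.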

It seems that in thinking about knowledge and its evolution, we can create a sort of taxonomy of different kinds of factual changes. Sometimes information we have is flatly disproved by science, sometimes it is filled out so that we better understand how it works, and sometimes it is refined in such a way that it gains nuance, more accurately describing the world around us. Are the *kinds* of factual changes that we see changing as science advances? Does this vary across scientific fields?

Arbesman: The path of science is one of getting ever closer to the truth. So in general, as knowledge changes, and even becomes overturned, we are approaching a better understanding of the world around us. And I think this cuts across scientific disciplines. Isaac Asimov made this point eloquently:

When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.

In fact, the Earth's true shape is a type of geometric object known as an oblate spheroid. That being said, a perfect sphere is a decent approximation, and a flat Earth is not as good (though fine for very small distances). Even though these theories might give us a totally different mental picture, it turns out we can quantify the amount of error for each successive view of the world, and place them on a continuum. And I think this is a way to unify knowledge change, rather than distinguish the types. Overall we are improving our ability to measure and understand our surroundings.
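As a rough illustration (mine, not Arbesman's), the error of the perfect-sphere model can be put in numbers using the standard WGS84 reference radii for the Earth:

```python
# WGS84 reference ellipsoid radii (standard geodetic values)
equatorial_km = 6378.137
polar_km = 6356.752

# Flattening: how much the polar radius falls short of the equatorial one.
# This is the relative error committed by the "perfect sphere" view.
flattening = (equatorial_km - polar_km) / equatorial_km
print(f"{flattening:.4%}")  # about 0.3353%
```

A sphere is wrong by about a third of a percent; a flat Earth is wrong by an amount that grows without bound with distance. Both are "wrong," but measurably, and very unequally so.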

So I think for science overall, across scientific fields, even though there might be flux in our knowledge (which foods are healthy or unhealthy, or what we think dinosaurs look like), this flux is part of the process of improving our understanding of the universe, and often goes hand-in-hand with improvements in our tools and technologies, and our ability to accurately measure our environment.

You write in the book, "What we're really dealing with is the long tail of discovery." What do you mean by this? What are the implications for researchers, institutions, and funding?

Arbesman: When a scientific field is new, there is a lot of "low-hanging fruit" to be discovered, and discoveries can come quickly and easily. As time passes and a field becomes more mature, successive discoveries are often less seminal or earth-shattering, yet are still important in fleshing out our understanding of the discipline. A small number of discoveries can explain a great deal of the phenomena in a single field, but we still need to examine the ideas and knowledge far out in what I term "the long tail of discovery" in order to have a true picture and understanding of the field.

I'm sure some will take the perspective that funders shouldn't expend so much effort on this long tail, but I don't think that's the right way to view this. The discoveries out in the long tail ensure that we have an unbiased depiction of the world, and see our surroundings in all their rich variety and complexity. Therefore, researchers, institutions, and funders should continue working hard to elaborate this long tail of discovery.

What areas of science would you point to where we can expect to see the most rapid developments in the next generation or two? If the development of knowledge proceeds in sometimes predictable ways, can that help us guide funding? Could we use these models to help us find areas that are most due for a breakthrough or other significant achievement? What kinds of factors/constraints can we not account for?

Arbesman: I think the areas that are poised for the most rapid developments are at the boundaries of traditional domains. As science grows, traditional fields become increasingly disconnected from each other. But when scientists bridge these gaps, connecting one field to another by importing ideas or recognizing similarities in problems, we can get massive developments. Combine computer science with biology and you get the many fields that are loosely grouped under computational biology, one of the fastest-moving areas of science. Even the field of digital humanities (combining computational and digital tools with the humanities) is poised for rapid growth.

More generally, there are numerous scientists who are developing tools and techniques to predict the growth and birth of scientific fields. For example, work by Carl Bergstrom's lab has examined how new fields develop, such as neuroscience, and Katy Börner's research group has also explored the evolution of research areas. Funding agencies are becoming increasingly interested in this kind of research, as it can help guide funding. For example, the National Science Foundation has a program called Science of Science and Innovation Policy devoted to this.

That being said, it is not easy to predict or determine a breakthrough or significant achievement. But we can sometimes make headway. Some of these problems can be quantified, with smaller, steadier discoveries viewed as steps along a path toward larger ones. The discovery of the first potentially Earth-like planet can be viewed this way, and predictions are possible in this area. Other problems, such as determining when a long-unsolved mathematical conjecture will be settled, are more difficult, but probabilistic estimates can still be made.

Rebecca J. Rosen is a senior editor at The Atlantic, where she oversees coverage of American constitutional law and government in the Battle for the Constitution series.