What should the prudent scientist focus on next? The boring but safe experiment that's sure to generate a paper, or the moonshot experiment that will probably lead nowhere but could reap big rewards if it succeeds?
It's a question that dogs researchers who only have so much time and grant money to spend. In a bid to find an answer, a study published today in the Proceedings of the National Academy of Sciences takes a look at some hard numbers. And although the researchers don’t offer a sure-fire recipe for success, they do offer some food for thought.
The study examines risk and reward in biomedical science. As a proxy for the process of discovery, the researchers focus on the ever-growing knowledge of how the huge number of molecules involved in biological pathways function and interact. They sifted through millions of U.S. patents as well as abstracts of papers indexed in MEDLINE to find all references to the chemicals of life, including well-known molecules such as DNA and RNA, and obscure metabolites, neuropeptides, and hormones.
Then they converted those data into a network map of biochemical knowledge, tracking research on 50,000 molecules since 1980. If two molecules are mentioned together in at least one paper, they are linked in the map; if they are never mentioned in the same paper, they are not.
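The mapping step described above can be sketched in a few lines. This is a toy illustration, not the authors' actual pipeline: the molecule names and the paper "corpus" below are invented, and each paper is reduced to the set of molecules it mentions.

```python
from itertools import combinations

def build_comention_network(papers):
    """Link two molecules iff they appear together in at least one paper.

    papers: iterable of sets, each set holding the molecule names one
    paper mentions. Returns a set of alphabetically ordered edge tuples.
    """
    edges = set()
    for mentions in papers:
        # Every pair co-mentioned in this paper becomes (or remains) an edge.
        for a, b in combinations(sorted(mentions), 2):
            edges.add((a, b))
    return edges

# Invented mini-corpus for illustration only.
papers = [
    {"p53", "MDM2"},
    {"p53", "BRCA1"},
    {"insulin", "glucagon"},
]
print(sorted(build_comention_network(papers)))
# → [('BRCA1', 'p53'), ('MDM2', 'p53'), ('glucagon', 'insulin')]
```

Sorting each paper's mentions before pairing keeps every edge in one canonical orientation, so duplicates across papers collapse automatically.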
Over time, scientists have discovered a growing number of connections between molecules. But just how efficient have scientists been at uncovering that complicated network? Finding out required a supercomputer, a mathematical model of the process of discovery, and simulations of many different possible choices about which experiments to conduct, and their outcomes.
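A drastically simplified version of that kind of simulation can be written in a few dozen lines. This is a hedged sketch under invented assumptions, not the authors' model: a "conservative" strategy keeps testing pairs among already-known molecules, while a "risky" strategy samples pairs from the whole molecule pool, and both are scored by how many hidden links they uncover.

```python
import random

def simulate(strategy, hidden_edges, molecules, n_experiments, seed=0):
    """Count how many hidden links a search strategy uncovers.

    hidden_edges: the 'true' network, as a set of frozenset pairs.
    strategy: 'conservative' retests pairs among already-known molecules;
              'risky' samples pairs uniformly from all molecules.
    Purely illustrative -- not the PNAS study's actual model.
    """
    rng = random.Random(seed)
    known_mols = set(rng.sample(sorted(molecules), 2))  # seed knowledge
    found = set()
    for _ in range(n_experiments):
        pool = sorted(known_mols) if strategy == "conservative" else sorted(molecules)
        if len(pool) < 2:          # can't draw a pair; fall back to everything
            pool = sorted(molecules)
        a, b = rng.sample(pool, 2)
        pair = frozenset((a, b))
        if pair in hidden_edges and pair not in found:
            found.add(pair)        # a discovery: the link is now published
            known_mols.update(pair)
    return len(found)

# Invented toy network: a chain A-B-C-D-E.
hidden = {frozenset(p) for p in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]}
mols = {"A", "B", "C", "D", "E"}
for strat in ("conservative", "risky"):
    print(strat, simulate(strat, hidden, mols, n_experiments=200))
```

On a network this small both strategies do well; the study's point is that on a large, mostly explored network the conservative pool keeps revisiting the same dense neighborhood while undiscovered links sit elsewhere.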
In general, the study concludes that over time the field has become much less efficient; new studies were less likely to reveal new parts of the biochemical network. A better strategy, the findings suggest, would have involved researchers taking greater risks as the field matured.
ScienceInsider spoke about the findings with the study’s two lead authors, Andrey Rzhetsky and James Evans, a computational biologist and sociologist, respectively, at the University of Chicago in Illinois. This interview has been edited for brevity and clarity.
Q: So just how inefficient are we?
Andrey Rzhetsky: Scientists in this field were about 10-fold less efficient than they could have been.
James Evans: However, that's compared to a scenario where every scientist followed the optimum risk strategy, and science had full disclosure, where the results of every experiment were published, including all negative results.
Q: What would have been the optimum risk strategy?
A.R.: It depends on the maturity of the scientific field. Early on, when a field was new, as molecular biology was in the 1950s, scientists were very efficient. Almost every experiment produced a new discovery. And the field became more efficient as experiments built off each other. We found that the efficiency peaked around the 1980s, when about 13% of [today's known biochemical] network was discovered. But as the field matured, scientists became more and more inefficient.
Q: What does that inefficiency look like?
J.E.: You get certain questions asked over and over again. The main source [of inefficiency] is the enormous duplication of effort, pursuing connections that are not going to pay off in new discoveries. As molecular biology matured [from the 1990s onward], scientists should have been taking more and more risks to find the more obscure connections between molecules. Instead, they kept on exploring connections between well-known central molecules.
A.R.: For example, research on [the protein] p53 in cancer. There's a whole field focused on [p53], with something like 60,000 papers! It's like a black hole for researchers. They become trapped.
Q: So how do we escape?
J.E.: Science isn't going to fix itself. The incentives have to change. We can't ask researchers to just take on more risk. Their incentive is to get papers published, get cited, and win tenure so they can feed their families. The change has to come from the big funders. [The National Science Foundation and National Institutes of Health] are less likely to give grants for work that is not already supported by lots of previous work and is therefore likely to succeed. They are starting to change that by giving awards to individuals rather than projects. Researchers who get that kind of funding tend to take more risks.
A.R.: And it's a completely different story if journals accept negative results. Just changing that [would produce] a huge increase in the efficiency of discovery.
Q: Is there nothing that individual researchers can do?
A.R.: Scientists should take on more risk in general. The goal is to do work that gets cited, so you have to make discoveries. [One strategy would be] to break off to a new field or subfield where there is more to discover. For example, nanotechnology or synthetic biology. Fields where new journals are being created. People are better off in young fields rather than trying to improve old ones.
J.E.: Also, scientists who lead large groups or research institutions are in a position to act on this now. In the heyday of Bell Labs, for example, researchers were encouraged to do some crazy stuff. It was high-risk but it paid off enormously for their group, and for science. What we're finding is that if you want discoveries in the mature stages of a field, you need a research portfolio that includes more risk. You have to think like a venture capitalist.