Ecologists are testing more and more hypotheses, but their studies are explaining less of the world. That’s the striking conclusion of a new study that analyzes 8 decades of research papers. What exactly is driving these trends isn’t clear, but researchers fear it could undermine confidence in ecological research.
Since it gained momentum as a formal field of study in the 1800s, ecology has focused on understanding interactions among organisms and their environments. Ecologists have made major contributions to shaping modern views of how the natural world works, from documenting competition and cooperation in nature to clarifying the valuable services that ecosystems can provide to humans, such as purifying water or buffering storms and floods. As in many sciences, however, the field has become less descriptive and more quantitative as it has matured.
The idea for the new study came to graduate students at McGill University in Montreal, Canada, during a lab retreat. Many felt frustrated: whenever they submitted research papers to journals, reviewers asked them to provide more P values, a measure of statistical confidence that a result is not due to chance. “Our supervisors said, ‘It wasn’t always like this,’ ” recalls ecologist Etienne Low-Décarie, who is now at the University of Essex in the United Kingdom.
To see that trend for themselves, Low-Décarie and two fellow graduate students first downloaded 18,076 articles, dating back to 1913, from three journals that cover a range of ecological research: the Journal of Ecology, the Journal of Animal Ecology, and Ecology. Then they set up a computer program to search the papers for two key statistics, P and R2. The latter is a measure of how much variability in a data set can be explained or predicted by a given factor. For example, the amount of phosphorus in a lake is a good predictor of how much algae will grow there.
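The phosphorus example can be made concrete with a few lines of code. The sketch below fits a straight line to hypothetical lake measurements (the numbers are illustrative, not from the study) and computes R2 as the fraction of variance in algal growth that phosphorus alone accounts for:

```python
import numpy as np

# Hypothetical phosphorus concentrations (ug/L) and algal biomass readings
# for ten lakes -- illustrative numbers, not data from the study.
phosphorus = np.array([5, 10, 15, 20, 25, 30, 35, 40, 45, 50], dtype=float)
algae = np.array([2.1, 3.9, 6.2, 7.8, 10.5, 11.9, 14.2, 15.8, 18.1, 20.3])

# Fit a least-squares line, then compute R^2: the share of variance in
# algal biomass that the phosphorus predictor accounts for.
slope, intercept = np.polyfit(phosphorus, algae, 1)
predicted = slope * phosphorus + intercept
ss_res = np.sum((algae - predicted) ** 2)   # unexplained variation
ss_tot = np.sum((algae - algae.mean()) ** 2)  # total variation
r_squared = 1 - ss_res / ss_tot

print(round(r_squared, 3))  # near 1: phosphorus alone predicts algae well here
```

An R2 near 1 means the single predictor explains almost everything; the field-wide averages the McGill team tracked are far lower.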
The average number of P values per paper has been steadily rising, they found. A typical paper now reports 10 P values, double the number from the 1980s. This suggests that researchers are conducting more experiments than before or exploring more variables. In other words, ecological research is getting more complex. A scientist trying to predict algal blooms probably has an equation that considers not just phosphorus levels, but also temperature, water clarity, and many other factors.
But the proliferation of P values (which is happening in many fields) concerns statisticians, because the value by itself doesn’t say anything about the size of the effect or its biological significance. “You can get quite trivial findings” that have robust P values, Low-Décarie says. Nearly half of all papers in the database that reported a P value, for example, did not appear to include other statistics that would clarify for readers whether the result had a major ecological impact. In addition, the more P values that are calculated, the higher the odds that any given result will appear to be significant even if it’s just the result of chance.
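That last point, the multiple-comparisons problem, is easy to demonstrate by simulation. The sketch below (a minimal illustration, not the study's method) runs thousands of two-sample tests on pure noise; even though no real effect exists, roughly 5% of tests clear the conventional p < 0.05 bar:

```python
import random

# Simulate the multiple-comparisons problem: every "test" compares two
# samples of pure noise, so any apparent effect is a fluke -- yet at a
# 0.05 threshold, about 5% of tests still come out "significant".
random.seed(42)

def noise_test(n=30):
    """Compare two pure-noise samples with a z statistic; True = 'significant'."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5  # standard error of the difference of the two means
    return abs(mean_diff / se) > 1.96  # corresponds to p < 0.05

trials = 10_000
false_positives = sum(noise_test() for _ in range(trials))
print(false_positives / trials)  # hovers near 0.05 despite zero real effects
```

So a paper reporting 10 P values, the current average, has a meaningfully higher chance of containing at least one spurious "significant" result than a paper reporting one.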
The researchers were more surprised and dismayed to discover that R2—a more informative statistical indicator—has been on the decline. In 1980, the average R2 reported in papers was about 0.7. By 2010, it had fallen to just under 0.5, they report online this month in Frontiers in Ecology and the Environment. “That was really surprising to me,” says Brian McGill, an ecologist at the University of Maine, Orono, who was not involved in the research. The average R2 should be increasing, he says, because more variables are being included in ecological models, which ought to make them more accurate.
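McGill's expectation has a mechanical basis: in an ordinary least-squares fit, adding a predictor can never lower in-sample R2, even if the predictor is pure noise. The sketch below, on synthetic data of my own construction, illustrates that property:

```python
import numpy as np

# In an ordinary least-squares fit, adding predictors can only raise
# in-sample R^2 -- even an irrelevant, pure-noise predictor never lowers it.
# Synthetic data for illustration only.
rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
y = 2 * x1 + rng.normal(size=n)  # y truly depends on x1 alone

def r_squared(X, y):
    """In-sample R^2 of an OLS fit of y on the columns of X (plus intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r_one = r_squared(x1[:, None], y)                      # x1 only
noise = rng.normal(size=n)                             # irrelevant variable
r_two = r_squared(np.column_stack([x1, noise]), y)     # x1 plus noise
print(r_one <= r_two)  # True: the extra predictor cannot hurt in-sample fit
```

Since models with more variables should fit at least as well, a falling average R2 suggests the questions themselves have gotten harder, or the publication bar has dropped.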
Co-author Monica Granados of McGill University says that when she gave a talk about the findings to a standing-room-only crowd this month at the annual meeting of the Ecological Society of America in Sacramento, California, the audience was concerned about the drop. “I was nervous, because it’s a critique of your peers and your field,” she says.
What’s going on? One possibility is that ecologists have picked the low-hanging fruit; now that they have published on the most straightforward phenomena, researchers are tackling harder questions. Alternatively, standards could be lower; facing more pressure to publish, ecologists may have become more willing to include lower values for R2. A previous study estimated that the R2 of most ecological relationships ranges between 0.02 and 0.05, but few researchers will publish results with such small explanatory power. “The reason we don’t is that we’re afraid it’s going to make us look bad,” says McGill, who has blogged about the need to improve statistics in ecology.
A loss of confidence in ecological research could ripple beyond the scientific community. Policymakers around the world have become increasingly open to shaping policies based on ecological findings, and ecologists have been pressing to make their work even more relevant and useful to decision-makers.
Still, McGill is fairly sanguine about the prospects. As a field, ecology is where weather prediction was a few decades ago, he says. Meteorologists continued to make forecasts, however lousy, but they measured how bad they were. “It’s good discipline and it’s how science advances,” he says. Eventually, new techniques and tools enabled the field to improve.