Researchers who conduct animal studies often don't use simple safeguards against biases that have become standard in human clinical trials—or at least they don't report doing so in their scientific papers, making it impossible for readers to ascertain the quality of the work, an analysis of more than 2500 journal articles shows. Such biases, conscious or unconscious, can make candidate medical treatments look better than they actually are, the authors of the analysis warn, and lead to eye-catching results that can't be replicated in larger or more rigorous animal studies—or in human trials.
Neurologist Malcolm MacLeod of the Centre for Clinical Brain Sciences at the University of Edinburgh and his colleagues combed through papers reporting the efficacy of drugs in eight animal disease models and checked whether the authors reported four measures that are widely acknowledged to reduce the risk of bias. First, if there was an experimental group and a control group, were animals randomly assigned to either one? (This makes it impossible for scientists to, say, assign the healthiest mice or rats to a treatment group, which could make a drug look better than it is.) Second, were the researchers who assessed the outcomes of a trial—for instance, the effect of a treatment on an animal's health—blinded to which animal underwent what procedure? Third, did the researchers calculate the required sample size in advance, showing that they didn't simply accumulate data until something significant turned up? And finally, did they make a statement about their conflicts of interest?
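The a priori sample-size calculation mentioned above is routine in clinical statistics. As a minimal sketch of what such a calculation looks like, here is the standard normal-approximation formula for a two-group comparison; the function name and default parameters are illustrative choices, not taken from the study itself.

```python
# A priori sample-size calculation for a two-group comparison,
# using the normal approximation to the power formula.
# Illustrative only -- the study does not specify any particular method.
import math
from statistics import NormalDist

def sample_size_per_group(effect_size: float, alpha: float = 0.05,
                          power: float = 0.8) -> int:
    """Animals needed per group to detect a standardized effect
    (Cohen's d) in a two-sided, two-sample test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.8
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

# A large effect (d = 0.8) at 5% significance and 80% power:
print(sample_size_per_group(0.8))  # -> 25 animals per group
```

Fixing the group size before the experiment begins is what rules out the practice the study's authors worry about: adding animals until the p-value happens to dip below the threshold.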
The team's sample of 2672 papers, published between 1992 and 2011, showed that all four of these measures have yet to be widely adopted in animal research, though they are now standard practice in clinical trials. Randomization was reported in only 24.8% of the papers, blinded outcome assessment in 29.5%, a sample size calculation in 0.7%, and a conflict of interest statement in 11.5%.
Not reporting bias safeguards in a journal article does not mean they were not put in place. The researchers may not have bothered to mention them, and the journal may not have asked. Conversely, reporting a measure does not guarantee that it was taken; scientists can lie. But past research has shown that papers that mention fewer measures to reduce bias tend to report higher efficacy of candidate drugs, suggesting that written assertions that the safeguards were taken can serve as an indicator for a study's rigor.
Whether the paper was published in a journal with a high impact factor—an often-used but controversial indicator of quality—didn't seem to make a difference as to whether bias safeguards were noted, MacLeod and his colleagues write. Nor did top-ranked research institutes do better than the rest—at least not in the United Kingdom. In a separate analysis, the researchers looked at more than 1000 animal research papers produced by the five institutes that performed best in biomedical sciences in the United Kingdom's 2008 Research Assessment Exercise. (They again looked at four bias-reducing measures, although the set was slightly different.) Two-thirds of the papers didn't report on any of the four measures; only a single one of the 1173 papers checked all four boxes.
The good news is that things seem to be getting better: Recent papers mention more antibias measures than older ones. Reporting of randomization, for instance, tripled between 1992 and 2011, from 14% to 42%; reporting of blinded assessment and of conflicts of interest also rose sharply.
"This paper does generally show that things have been improving over time, in relation to some (but not all) measures to reduce the risk of bias. But there's still clearly a big gap to bridge," Kevin McConway, a statistician at the Open University in Milton Keynes, U.K., said in a statement distributed by the United Kingdom's Science Media Centre (SMC) today. Vicky Robinson, chief executive of the National Centre for the Replacement, Refinement and Reduction of Animals in Research in London, called the study "another wake-up call for the scientific community" in another SMC statement. "There is no excuse or justification for using animals in studies that are poorly designed and can never be reproduced."
But others worry that the study, if wrongly interpreted, could lead to further restrictions on animal research. "This is a very important study, but it is equally important that news outlets don't twist the outcomes to undermine animal research," says Chris Chambers, a cognitive neuroscientist at Cardiff University in the United Kingdom. "U.K. animal research is vital across the full spectrum of biomedicine, from understanding the basics of the body and brain through to treating diseases."