Journals endorse new checklist to clean up sloppy animal research

New guidelines call for researchers to report experiment details, such as animals’ housing and food, which can have big effects on reproducibility.

Animal research is facing a crisis: Up to 89% of all preclinical research—which includes animal research—can’t be replicated, according to a 2015 analysis, often because researchers fail to describe basic details of the experimental setup. This calls into question the validity of the findings, says Nathalie Percie du Sert, who works on improving animal research at the U.K. National Centre for the Replacement, Refinement, and Reduction of Animals in Research. “If you can’t do anything with the results,” she says, “what’s the point of the study in the first place?”

To address the problem of poor reporting, Percie du Sert and a team of researchers have developed a checklist of 10 critical details each animal study needs to report, such as the number of animals used, their sex, whether they were randomly allocated to a test group and control group, and whether the researchers knew which animal was in which group. “ARRIVE 2.0,” published today in seven scientific journals, is a streamlined version of an earlier set of guidelines that were published in 2010. Despite being endorsed by more than 1000 journals, those guidelines have largely been ignored by researchers.

“It’s really great that they’ve updated it,” says David Moher, a publication scientist at the Ottawa Hospital Research Institute. But, he says, “I wonder how it will go down.” Endorsement of the guidelines is not enough, Moher says; journals and reviewers will need to enforce their use. And even enforcement is tricky: A trial of the first set of ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines found that scientists who were required to fill out a checklist showed no real improvement in their experimental reporting compared with a control group that was merely asked to use it.

Based on research showing success with shorter checklists, Percie du Sert and her colleagues set out to winnow down the 38 items in the original ARRIVE checklist to something less onerous. The result is the “Essential 10” list in ARRIVE 2.0. A “Recommended Set” includes 11 items of secondary importance, such as details about the animals’ housing and a declaration of conflicts of interest. Survey results suggested researchers may not have understood exactly what was required in the old guidelines, so the team also published a detailed explanation of what should be included for each item. (For instance, if there was no control group, researchers should say that explicitly rather than leaving the item blank.)

All 21 items are actually essential to report, says Malcolm Macleod, a metaresearcher at the University of Edinburgh and one of the developers of ARRIVE 2.0. But the Essential 10 probably have the biggest impact on the credibility of the literature, he says. If researchers fail to report them, “I can’t even come close to working out whether the results are credible.”

Researchers may have little incentive to abide by the guidelines, however. “Are we rewarding them?” Moher asks. “The obvious answer is no, we’re not.” Compliance with reporting guidelines should be rewarded in decisions about career advancement, he says. Macleod and Percie du Sert hope journal editors will also provide researchers with more of an incentive, simply by refusing to publish papers that don’t comply.

But enforcement isn’t as simple as it sounds, says Hayley Henderson, chief editor at BMC Veterinary Research, one of the journals publishing the ARRIVE 2.0 guidelines today. She often receives checklists that haven’t been properly completed, or that don’t match what is actually reported in a paper, making it difficult and time consuming to evaluate the research. Language barriers and researchers’ fear of disqualifying their manuscripts from publication both play a role. The updated guidelines are “much easier to digest,” she says, which should improve uptake, but more work is needed before all journals can mandate them.

There’s good evidence that well-constructed checklists can improve the quality of reporting, Moher says, so journal editors should think of them as a simple medical intervention: “We know it makes you better. It won’t make it worse. There are no harms associated with this.”

Note: Cathleen O’Grady recently completed a degree at the University of Edinburgh, Macleod’s institution.