The new study may help explain why so many encouraging results in animal studies don’t hold up in human trials.

Study questions animal data underlying many clinical trials

If you sign up to take part in a clinical trial, you can trust that scientists have solid evidence from animal studies that the test drug might work, right? After all, a study based on flimsy evidence would needlessly put your health at risk and potentially waste money.

But your trust might be misguided, according to a new study. So-called investigator brochures—the documents that researchers produce to convince regulatory agencies and ethical review boards that a proposed trial is worth the risk—are often lacking crucial information about the efficacy of the proposed therapy in animal models. As a result, it’s often impossible to tell how good the evidence is.

“This is incredibly alarming,” says Shai Silberberg, director of research quality at the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland, who was not involved in the study. The work “shows that decision-makers for ethics related to clinical trials don’t get the information they really need to evaluate those preclinical trials.”

Over the past few years, researchers have repeatedly shown that many animal studies lack scientific rigor; they are often prone to biases, for instance, and are sloppily reported in scientific journals. The team behind the new study looked specifically at the information researchers prepare to justify clinical trials in humans, which can cost millions of dollars.

Investigator brochures are the main source of information for institutional review boards (IRBs), the ethical panels that greenlight trials. The brochures are also included, along with many pages of other information, in drug trial applications to regulatory agencies such as the U.S. Food and Drug Administration (FDA) in Silver Spring, Maryland. The brochures contain information about toxicology, pharmacology, and animal safety studies used to determine the potential risk a particular therapy may pose to humans, but they also include efficacy studies done in animal models to demonstrate a therapy’s potential benefit.

Investigator brochures aren't public documents, but a team led by Daniel Strech of the Centre for Ethics and Law in the Life Sciences at Hannover Medical School in Germany asked the chairs of six IRBs at German medical faculties for access to the brochures, provided the team signed confidentiality agreements and promised to report their findings in a way that would not identify individual trial sponsors, researchers, or candidate drugs.

Three IRB chairpersons agreed to collaborate, giving the researchers access to 109 investigator brochures from trials approved between 2010 and 2016. Together, they included 708 efficacy studies in animals. The researchers read all of those studies, looking for things such as appropriate control groups and whether a sufficient number of animals had been used. They also checked whether the studies had been published in peer-reviewed journals.

The team found that 89% of the animal studies were not published at all, making it impossible for the IRBs to know whether the study had been reviewed by other experts. Additionally, fewer than 5% included important information on whether bias-reducing methods such as randomization of the experimental groups were used, they report today in PLOS Biology. (This could mean that the studies weren't set up to avoid bias, or that the measures were not reported in the investigator brochures, for instance to keep the packets concise.)

Lastly, 82% of the brochures only reported studies that had positive effects. That suggests that trial sponsors leave out the less flattering studies, Strech says. Even with promising drug candidates, there are often studies that show negative effects, for instance if the dose was too low or the timing of the drug’s administration wasn’t ideal. Silberberg says that this finding is less surprising, however, because “If the studies aren’t positive, then you wouldn’t go to an IRB to do a clinical trial.”

Strech says he's “surprised” that IRBs and regulatory agencies accept the current situation. “Why is nobody complaining about this?”

Spencer Hey, a bioethics researcher at Brigham and Women’s Hospital and the Center for Bioethics at Harvard Medical School in Boston, agrees. “If the [brochures] do not adequately describe the studies or do not provide adequate context that would allow the IRBs to make a sound judgment about their significance for human testing, then this is a serious problem,” he says. (Hey was not involved in the study, but he did complete postdoctoral training with two of the researchers on the paper.)

Although all the brochures came from German IRBs, the team believes the situation is likely to be similar throughout Europe and the United States.

FDA declined to comment. The agency "generally doesn't comment on specific studies, but evaluates them as part of the body of evidence to further our understanding about a particular issue and assist in our mission to protect public health,” a spokesperson wrote in an email.

It's not clear why investigator brochures often lack data, Strech says. Some companies may prefer to keep information confidential because they worry about the competition, or they may believe that animal efficacy studies are too complex for most IRB members to review, because IRBs usually include nonscientists. IRBs, for their part, may assume that no sponsor would invest in a clinical trial without convincing evidence, and give the companies the benefit of the doubt rather than analyzing the animal studies thoroughly. But there could be reasons for companies to push trials without proper evidence, Strech says, such as wanting to keep shareholders happy.

Strech says he plans to reach out to regulatory agencies, pharmaceutical companies, and clinical investigators to begin discussions about how to improve investigator brochures. If some studies cannot be peer reviewed because they must be confidential, then at least IRB members should get better training to evaluate them, he says.

The new study helps explain why so many results in animal studies don't hold up in human trials, says Malcolm Macleod, a neurologist at the University of Edinburgh. Only 10% to 15% of clinical trials are successful. “Improving the design, conduct, and reporting of preclinical research is of the utmost priority if we are to move ahead, with all due haste, in the development of novel treatments across a range of diseases.”