Two new projects reanalyzing clinical trial data aim to encourage transparency in clinical research.

Davide Bonazzi/Salzman Art

Reanalyzing drug trials in depression, chronic pain aims to unearth new data

Concerned that reports of clinical trials can exaggerate a treatment’s benefits and downplay its risks, two research groups will sift through data from tests of drugs involving thousands of people with chronic pain or depression. The question for those reanalyzing the data is whether dozens of papers on the trials’ outcomes painted a complete picture, and what details may have gone unmentioned about the drugs’ effects.

“Bias and spin are incredibly common in the publication of clinical trials,” says Peter Doshi, a health services researcher at the University of Maryland School of Pharmacy in Baltimore and an editor at The BMJ. He and others have long been concerned about how trial results wend their way to doctors and patients, and whether both groups are fully informed when considering a specific medication. So in 2013 Doshi formed RIAT, which stands for Restoring Invisible and Abandoned Trials. It sought to encourage researchers to obtain unpublished clinical trial information, which would include deidentified patient-level data. With those data in hand, the researchers could reanalyze a trial’s results and publish what they found, which may or may not jibe with the original reports.

In 2017, RIAT received $1.4 million from the Laura and John Arnold Foundation (now called Arnold Ventures), and last year awarded its first grant for clinical trial reanalysis. It provided $150,000 to look at a U.S. government–funded trial of antidepressants in more than 300 teenagers. That trial’s reporting had been criticized in part because some arms of the study were unblinded and because it described outcomes that it didn’t originally set out to analyze—called post hoc analysis. Trial investigators reported that taking Prozac alone or combining it with cognitive-behavioral therapy eased depression, and the combination strategy reduced suicidality. “This trial is much more influential than any other antidepressant trial in kids,” says child psychiatrist Jon Jureidini at Women’s and Children’s Hospital in Adelaide, Australia, who is leading the reanalysis. Jureidini acknowledges that even if his reanalysis turns up something new, changing practice patterns may be tough. “But that has to be our target,” he says.

This week, RIAT is announcing its second and third grants, each for another $150,000; next year, it will award three more.

“We know there’s tons of information in these unpublished data sources that doesn’t appear anywhere in the public record,” says Evan Mayo-Wilson, an epidemiologist at Indiana University School of Public Health in Bloomington. With RIAT funding, Mayo-Wilson will reanalyze six trials, conducted from 1997 to 2005 and including more than 1700 people, that tested the drug gabapentin against neuropathic pain, a type of chronic pain caused by nerve damage. Gabapentin was first approved for epilepsy, but after maker Pfizer tested it in neuropathic pain, the company became embroiled in litigation in the early 2000s. Patients and third-party payers, such as insurance companies, charged that it had overstated gabapentin’s benefits. As litigation continued and Pfizer was required to release documents about its trials, Mayo-Wilson says he and others “realized that journal articles overstated [gabapentin’s] benefits and understated its harms.” Gabapentin was later approved for some forms of neuropathic pain and is widely prescribed for such pain generally.

Although Mayo-Wilson has already studied how gabapentin’s harms were reported in sources such as journal articles, his new project will go much deeper. “We haven’t yet tried to determine which harms were really caused by gabapentin in these six trials,” he says. He plans to analyze potential side effects at a granular level—information he has access to because Pfizer released it during the litigation. This means sorting through every potential adverse effect reported by trial participants, some of which may not have made it into publications. It also means grouping adverse events by the body system they affect. “In these trials there might be 200 or 300 different adverse events that people report,” Mayo-Wilson says. Instead of offering a list of so many possibilities, he wants to ask, “What are the most important adverse effects to the patient?”

The second project tackles STAR*D, or Sequenced Treatment Alternatives to Relieve Depression. It was funded by the National Institute of Mental Health (NIMH) and tested 11 antidepressant drugs in more than 4000 people with depression from 2001 to 2006. STAR*D aimed to determine how to help people for whom one or more antidepressants didn’t work.

Ed Pigott, a psychologist in Juno Beach, Florida, who’s leading the STAR*D reanalysis, learned about STAR*D in 2006, when he read an article in The Washington Post on its initial results. As more STAR*D papers were published—the number now exceeds 120—Pigott determined that people who dropped out of the trial weren’t counted in the analysis, and that others’ treatment was declared successful prematurely. Secondary outcomes, including participants’ social functioning and health care utilization, were never published, he says. Although it’s common, Pigott says, for “negative findings [to] get buried,” he was drawn to STAR*D in part because at $35 million, it was the largest and most expensive antidepressant trial ever conducted. Pigott published some of his concerns about STAR*D’s reporting in medical journals. “I’m kind of like a rat in a maze,” he says. “If I find a little piece of cheese I just keep digging and digging.”

Without unpublished patient-level data, he was limited to poring over published information. However, because STAR*D was funded by NIMH, Pigott was in luck: At the time, the institute made patient-level data available to qualified researchers 5 years after a trial ended. Pigott teamed up with several academic researchers, including his co–lead investigator Jay Amsterdam, a psychiatrist at the University of Pennsylvania, to analyze the study’s vast data set.

The original investigators never publicly disputed Pigott’s critiques, he says. Three senior STAR*D investigators contacted for this story either said they were unavailable to comment or did not respond.

Pigott, Mayo-Wilson, and Jureidini are focusing on older trials but argue that all have had a major impact on medicine. “It’s an enormous amount of work,” Doshi says. “We’re talking about doing something that hasn’t traditionally been done.” A paper published in 2014 in JAMA, led by epidemiologist John Ioannidis at Stanford University in Palo Alto, California, underscored how unusual reanalyses are: The authors could find only 37 examples of patient-level data being reanalyzed across the medical literature. Only five were, like RIAT’s projects, performed by independent authors. Just over one-third of the 37 reanalyses led to new interpretations of the data.

“Reanalysis can do a lot of good,” says Arnold Monto, an infectious disease epidemiologist at the University of Michigan in Ann Arbor. But, he adds, those conducting reanalyses must be mindful of their own biases. “The danger there is you’re looking for something, you’re looking for something that is publishable,” he says.

Still, Mayo-Wilson, Pigott, and Jureidini hope their work encourages transparency across clinical research. “I would hope it puts a shot across the bow of other researchers,” Pigott says, “that you know what, there are going to be crazy people like me out there in the world” who will take on a job like this.