
A solution to psychology’s reproducibility problem just failed its first test

Behavior change is difficult—just ask any psychologist. A new study shows behavior change among psychologists is no different. Efforts to improve the robustness of research by asking psychologists to state their methods and goals ahead of time, a process called preregistration, have stumbled at the first hurdle.

“Preregistration is not as easy as it may seem,” says Aline Claesen, a psychologist at the Catholic University of Leuven (KU Leuven) in Belgium. She and her colleagues examined 27 preregistration plans filed by psychologists from February 2015, when the journal Psychological Science started to offer badges for preregistered studies, to November 2017. In every case, her team reports this month in a preprint on the PsyArXiv server, the researchers deviated from their plan—and in every paper but one, they did not fully disclose these deviations.

“I was totally surprised by how many of these [changes] were undisclosed,” says Wolf Vanpaemel, a psychologist on the KU Leuven team. “There’s no good excuse for not transparently indicating all changes you made.”

As part of an effort to lessen the field’s reproducibility problems, psychology picked up the idea of preregistration from clinical research, where it has been the norm for more than a decade. By setting out, for example, the number of volunteers that will be recruited and the criteria that will be used to analyze the data, preregistration is intended to make research more transparent and reduce both the temptation to fish for significant results and the opportunity for bias. More than 27,000 such plans from various fields are lodged with the Open Science Framework, up from 12,000 in 2017. And ClinicalTrials.gov holds more than 250,000.

The researchers say that in some cases, plan changes make sense because unforeseen problems with the method can become clear during a study. The team members argue, however, that not disclosing deviations can raise suspicions, although they are not suggesting the papers they examined are unreliable.

For example, one of the most common deviations the KU Leuven team noted was in sample size. Preregistration is supposed to crack down on “optional stopping,” in which researchers recruit subjects until they have data that support their hypothesis. The authors of one Psychological Science study wrote in their preregistration that they “expect to sample 600 participants” but then reported 616 participants in the published paper. This small increase “leaves open the possibility that the authors stopped data collection at 600 participants and then used optional stopping to arrive at a favorable outcome with 616 participants,” the preprint warns.

Unmet plans

Of 27 studies in a psychology journal, just one followed its preregistered plan to a T. Researchers identified deviations from plans in eight categories.

Category                           All deviations disclosed   Undisclosed deviations   No deviations
Hypothesis/research question                  3                          5                   19
Variables                                     0                          4                   23
Direction of effect                           0                          6                   21
Operationalization of variables               4                          3                   20
Sample size                                   5                         10                   12
Exclusion criteria                            3                         15                    9
Procedure                                     2                          1                   24
Statistical model                             6                         13                    8

The lack of transparency is troubling, but understandable, Vanpaemel says: Some researchers might fear their paper won’t be published if they admit to not having entirely followed their preregistration. “As soon as we see more papers being published [with] transparent changes, these concerns will be hopefully lessened.”

Steve Lindsay of the University of Victoria in Canada, who is also editor-in-chief of Psychological Science, admits that he has given authors plenty of leeway to write vague preregistrations and not account for all the deviations in a paper. He says policing the system would take time and effort the journal hasn’t budgeted for. But, he adds, there has been “modest improvement” in the preregistration process at the journal since the study was conducted.

And Dan Simons, a psychologist at the University of Illinois in Champaign, describes the identified shortcomings as growing pains. “My guess is that most [authors] were well-intentioned and just didn’t know how to do it very well.” Clinical sciences still wrestle with noncompliance and lack of transparency even after many years of experience, he points out.

More psychologists may be persuaded to adopt preregistration thanks to the badges some journals offer as well as pledges to publish a paper regardless of results when a study is preregistered, says Anna van’t Veer of the Leiden Institute for Brain and Cognition in the Netherlands. “We all like badges,” she says. “And at some journals, it looks a little bleak if you don’t have them.” Now, she says, the community needs to shift from simply doing preregistration to doing it well.

Brian Nosek, a psychologist at the University of Virginia in Charlottesville who directs the Center for Open Science, which runs the Open Science Framework, says the KU Leuven team’s findings should help. “The key message here,” he says, “is that preregistration is a skill and not a bureaucratic process.”