Big research collaborations have become common—think Human Genome Project, Mars rovers, the new BRAIN Initiative—but they are almost unknown in psychology. Most psychological experiments are carried out by a single lab group, often just a few researchers. But several collaborations that span dozens of psychology laboratories around the world have recently formed. Their goal is nothing short of testing the reproducibility of psychological science. The first significant result from one of those alliances was released this week, and psychologists are breathing a sigh of relief that their field came through with relatively minor blemishes—10 of 13 experimental results were replicated.
Reproducibility is a mantra in science. For most types of research, if an experimental result can't be reproduced by another lab, its credibility is undermined; if it fails to reproduce in multiple labs, the original result is dismissed. Testing the reproducibility of experiments is crucial for cleaning out scientific errors, flukes, and fraud. But science doesn't run as efficient a cleaning service as it could. Researchers have almost no professional incentive to repeat the work of others, let alone to report failures to repeat their own experiments.
Now, motivated by several recent high-profile frauds and a broader concern that many of their field's results aren't trustworthy, some experimental psychologists are doing an audit. The effort announced this week started with a trio: Brian Nosek at the University of Virginia in Charlottesville, Kate Ratliff at the University of Florida in Gainesville, and Ratliff's Ph.D. student Rick Klein. Nosek has been at the forefront of efforts to clean up his field—he and more than 175 collaborators are repeating a random sample of the hundreds of studies published in 2008 in three major psychology journals—and he and Ratliff are both part of Project Implicit, a long-running collaboration that also provides free software for running behavioral experiments with standardized methods. Nosek wanted to use that software to see just how reproducible classic psychological experiments are. "I asked [Klein and Ratliff] if they would be interested in trying to scale up this idea and recruit other laboratories to get involved," he says. They agreed.