BARCELONA--Peer review is a pillar of science. Letting scientists anonymously judge each other's work is widely considered the "least bad way" to weed out weak manuscripts or research proposals and to improve promising ones. But this common wisdom was challenged at a meeting here from 14 to 16 September, where a study reported little evidence that peer review actually improves the quality of research papers.
Mention "peer review" and almost every scientist will spout stories about referees submitting nasty comments, sitting on a manuscript forever, or rejecting a paper only to repeat the study and steal the glory. Despite its flaws, most respected journals rely on peer review to choose which studies to publish. At the Fourth International Congress on Peer Review in Biomedical Publication, hundreds of editors of medical journals and academics met to examine the process.
In a survey that surprised many--and that some doubt--Tom Jefferson of the Cochrane Centre in Oxford, United Kingdom, and colleagues scoured the literature for studies that had rigorously evaluated peer review itself. They found 19 papers that fit their criteria, but none clinched the case for peer review. For instance, nine studies looked at the effects of blinding reviewers to authors' identities, or vice versa, but none found much difference in quality. Two other studies found scant evidence that a standardized checklist led to better reviews, while two more suggested that training reviewers was practically useless. Only two papers compared submitted manuscripts with the versions that later appeared in print, and their results were difficult to generalize.
The study--which, like all contributions at the meeting, had itself been peer-reviewed--was "pretty depressing," concedes British Medical Journal editor Richard Smith. Still, Smith and other editors remain convinced that the review process helps, even if studies can't objectively demonstrate it.