CHEVY CHASE, MARYLAND—Scientific journals should start routinely publishing the text of peer reviews for each paper they accept, said attendees at a meeting last week of scientists, academic publishers, and funding organizations. But there was little consensus on whether reviewers should have to publicly sign their critiques, which traditionally are accessible only to editors and authors.
The meeting—hosted by the Howard Hughes Medical Institute (HHMI) here, and sponsored by HHMI; ASAPbio, a group that promotes the use of life sciences preprints; and the London-based Wellcome Trust—drew more than 100 participants interested in catalyzing efforts to improve the vetting of manuscripts and exploring ways to open up what many called an excessively opaque and slow system of peer review.
The crowd heard presentations and held small group discussions on an array of issues. One hot topic: whether journals should publish the analyses of submitted papers written by peer reviewers.
Publishing the reviews would advance training and understanding about how the peer-review system works, many speakers argued. Some noted that the evaluations sometimes contain insights that can prompt scientists to think about their field in new ways. And the reviews can serve as models for early career researchers, demonstrating how to write thorough evaluations.
“We saw huge benefits to [publishing reviews] that outweigh the risks,” said Sue Biggins, a genetics researcher at the Fred Hutchinson Cancer Research Center in Seattle, Washington, summarizing one discussion.
But attendees also highlighted potential problems. For example, someone could cherry-pick critical comments on clinical research studies that are involved in litigation or public controversy, potentially skewing perceptions of the studies. A possible solution? Scientists should work to “make the public understand that [peer review] is a fault-finding process and that criticism is part of and expected in that process,” said Veronique Kiermer, executive editor of the PLOS suite of journals, based in San Francisco, California.
Support for publishing reviews
An informal poll of meeting attendees found overwhelming support for publishing reviews and making them easily searchable online by assigning them the digital object identifier (DOI) tracking numbers that other scientific publications receive. And that result aligned with views reported in a much larger survey of some 3000 authors, reviewers, and editors published in PLOS ONE in December 2017. That survey, conducted as part of the OpenAIRE2020 project funded by the European Commission to implement open-access policies, found 60% of respondents supported publishing reviews.
To date, however, very few journals publish the evaluations, speakers noted. In a RAND Europe study of 3700 journals, only about 2% allowed the practice.
One journal that does publish peer reviews is Nature Communications, an open-access journal published by the Springer Nature publishing group, based in Berlin. Both authors and reviewers must agree to publish the reviews, which are not signed. So far, 62% of authors have consented, said Philip Campbell, editor-in-chief of Nature, who attended the meeting. Only a small portion of reviewers has declined.
Campbell said Nature, the publisher’s flagship journal, may eventually follow suit in publishing reviews, although he said he couldn’t speak for the publisher’s 2000 other journals. “I think the mood of the meeting here is one that will be supported in the long run,” he said.
Such comments suggest publishing reviews “is not some completely wild experiment,” notes Ron Vale, a director of ASAPbio and a cell biologist at the University of California, San Francisco. “The question is: Is this something that our profession really wants to advance as standard practice in how we do business in science communication?”
To name or not?
There were divergent takes, however, on whether reviewers should be publicly identified.
Some attendees argued the risks would outweigh benefits. If reviewers know their names will be public, they might be tempted to pull their punches, especially if they are junior faculty members concerned about retaliation from more senior colleagues. Some reviewers might decline to participate at all. For instance, the U.S. National Institutes of Health announced this month it would close its PubMed Commons commenting platform, which did not allow anonymous comments, because it attracted few users.
But others praised greater openness in peer review as an antidote to shortcomings of the current system. Signed reviews could encourage reviewers to produce more careful evaluations, for example, and make fewer gratuitously negative comments. And publicly identifying and crediting reviewers for their work could help them win tenure and promotions.
Identifying reviewers could also help eliminate favoritism, some attendees argued. They noted that some researchers believe that some journal editors give preferential treatment to manuscripts from authors who also serve as reviewers for those journals. Signed reviews could help outsiders evaluate whether such bias is present.
So far, very few publishers support publishing signed reviews. In the RAND Europe survey, for example, just 3.5% of journals—or about 130 titles—backed the practice.
But a few publishers are asking reviewers to identify themselves. F1000Research, an online service based in London that reviews and publishes scholarly articles, requires signed reviews. Rebecca Lawrence, the service’s managing director, said the policy has not made it difficult for the group to recruit reviewers.
Expanding the pool
Another question discussed at the meeting was how to expand the pool of reviewers, which journal editors identified as an important need given the growing number of manuscript submissions they are seeing. One solution speakers supported is expanding “journal clubs”—groups of graduate students and other scientists who regularly meet to evaluate published articles as a training exercise in scientific scholarship. In addition, journals could invite scientific societies to recruit their members to serve as reviewers.
There was less certainty about another idea: recruiting reviewers from online fora for discussing so-called preprints, or papers that have yet to receive peer review. One hurdle is logistical: Many of the comments are posted anonymously, making it hard for editors to find the reviewer.
Some researchers hope last week’s meeting will help jump-start changes in peer review—much in the way a 2016 HHMI meeting on preprints helped spur the rapid development of that tool for scientific communication. Preprints were “something that most of us 2 years ago … had never heard of,” said meeting participant Iain Cheeseman, a cell biologist at the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts. Now, preprints have “just transformed the daily way that we think about doing science. … I’m quite optimistic about our ability as a community to identify some of these solutions [for peer review] and use them to change the way things work.”
Meeting organizers plan follow-up steps, including publishing a white paper summarizing the conference. Materials from the meeting will be posted on its website. And HHMI, the Wellcome Trust, and the Chan Zuckerberg Initiative, an organization founded by Facebook founder Mark Zuckerberg and his wife, Priscilla Chan, have agreed to set up a repository of experiments in peer review to help highlight successful practices.