Within a few months of the first known cases of COVID-19, scientists had published thousands of preprints on the mysterious outbreak that was becoming a raging pandemic, sparking a flood of news stories from media outlets accustomed to the more stately pace of peer-reviewed publications. Now, an analysis of some of the most widely covered COVID-19 preprints has found that these outlets vary widely in the way they refer to preprints, with about half of their stories failing to mention that the research was unreviewed or otherwise unverified.
The research is “timely and important,” says Abel Gustafson, a science communication researcher at the University of Cincinnati who wasn’t involved in the study. “This isn’t necessarily a failure of science or bad journalism—I think it’s more just a symptom of unprecedented times.”
Alice Fleerackers, a Ph.D. student in science communication at Simon Fraser University who led the research, thought her topic was somewhat niche. Outside of academia, “who’s ever heard of a preprint?” she asks. But after the pandemic hit, her research suddenly took on unexpected significance.
From a database of thousands of COVID-19 studies posted in the first 4 months of 2020 on the preprint servers medRxiv and bioRxiv, Fleerackers and her colleagues identified the 100 studies that had received the most media coverage. They focused on 15 media outlets that had published 457 articles about these studies—a grab bag of outlets that included prestigious publications such as Wired and The New York Times, specialist medical publications such as News Medical, aggregators that republish content from other outlets, and the Conversation, an outlet where academic researchers write about their own research area for a general audience.
Then they documented how the outlets described the preprints, looking at whether they used “uncertainty framing devices,” such as mentions that the work was not yet peer reviewed, or indications that the research was preliminary or needed verification. The researchers also looked at whether stories included a hyperlink to the preprint.
They found that 58% of the stories used at least one of the framing devices, and 91% linked to the relevant preprint. (Preprint servers typically note one or more of the framing devices themselves.) But whereas some outlets, such as Wired, used these devices in almost all their stories, others—including the Conversation, The New York Times, and aggregator Foreign Affairs New Zealand—used them less than half the time, the researchers report on 3 January in Health Communication.
Researchers found 15 media outlets that varied in how often they used at least one of four caveats in stories based on a preprint: mentions the study is a preprint, mentions work is unreviewed, mentions work is preliminary, or mentions verification is needed.
| Media outlet | % of stories that use at least one of four caveats |
| --- | --- |
| Foreign Affairs New Zealand | 44 |
| The New York Times | 45 |
The Conversation was a key player in the data, Fleerackers says, because many aggregators republish its content. It used uncertainty framing devices in only 42% of its preprint coverage, and the aggregators amplified that content. Fleerackers says it’s possible that researchers writing for the Conversation didn’t highlight a preprint’s uncertainty because they consider their audience scientifically literate. But it could also be that they’re reluctant to draw attention to the unpublished status of the studies, she says.
Alexandra Freeman, an evidence communication researcher at the University of Cambridge, says the pandemic has put journalists in a bit of a bind. If no one reported on preprints, she says, “we would have had nothing in the press about COVID-19” for months into the pandemic. She notes that peer review isn’t a guarantee of reliability, and peer-reviewed papers should get the same level of scrutiny as preprints. Journalists covering preprints should follow the standard practice of talking to experts in the field for their opinions—and with these experts often being the same people journals would approach, this practice “pretty much ends up as peer review,” she says. Uncertainty can also be communicated in ways beyond the specific uncertainty framing devices in this analysis, she adds—so stories that did not use them might not have been overconfident.
Peer review may be imperfect, but it still offers an extra guard against the blind spots that researchers may have about their own work, Gustafson says. He thinks journalists should convey a higher degree of uncertainty for preprints than for peer-reviewed work. Science is built to move slowly, and it would generally be preferable to wait for peer review before coverage, but the urgency of the pandemic changes the calculation, he says: “Perhaps it’s worth it to get scientific results out there as quickly as possible.”
*Clarification, 22 January, 12:10 p.m.: This story has been updated to clarify Fleerackers’s field of study and the type of stories published by the Conversation.