If you’re tired of swiping left and right to approve or reject the faces of other people, try something else: rating scientific papers. A web application inspired by the dating app Tinder lets you make snap judgments about preprints—papers published online before peer review—simply by swiping left, right, up, or down.
Papr brands itself as “Tinder for preprints” and is almost as superficial as the matchmaker: For now, you only get to see abstracts, not the full papers, and you have to rate them in one of four categories: “exciting and probable,” “exciting and questionable,” “boring and probable,” or “boring and questionable.” (On desktop computers, you don’t swipe but drag the abstract.) The endless stream of abstracts comes from the preprint server bioRxiv.
Papr co-creator Jeff Leek, a biostatistician at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland, released an earlier version of Papr late last year but only started publicizing the app on social media earlier this month, after his colleagues added a few more features: a recommendation engine that suggests studies based on your preferences, an option to download your ratings along with links to the full preprints on bioRxiv, and suggestions for Twitter users with tastes similar to yours.
The goal is to help researchers navigate the overwhelming number of new papers and uncover interdisciplinary overlap, Leek says. Scientists already use social media to discover new papers, he says; Papr aims to simplify that process and capture people’s evaluations along the way. Other preprint servers could be added later, he says.
Four rating categories is enough, Leek says; other services, including PubPeer, offer space for longer comments and discussions. To prevent readers from giving their rivals' papers bad ratings, or from rating a paper as interesting just because it was written by a famous scientist, Papr doesn't show author names and doesn't let you search for a specific preprint or author.
“For me, the importance of Papr is illustrating that preprint services like bioRxiv enable novel methods of evaluation to emerge,” says Brian Nosek, executive director of the Center for Open Science in Charlottesville, Virginia.
“We don’t believe that the data we are collecting is any kind of realistic peer review, but it does tell us something about the types of papers people find interesting and what leads them to be suspicious,” Leek says. “Ultimately we hope to correlate this data with information about where the papers are published, retractions, and other more in-depth measurements of paper quality and interest.”
But don’t take Papr too seriously, because its developers don’t. “This app is provided solely for entertainment of the scientific community and may be taken down at any time with no notice because Jeff gets tired of it,” the Papr website says.