Assigning authorship for research papers can be tricky. These approaches can help

Maybe you’ve heard about the cat who co-authored a scientific paper—but what about the dog? That would be Grandmother Liboiron, owned by Max Liboiron, an environmental scientist at the Memorial University of Newfoundland in Canada. The authorship wasn’t merely a quirky solution to a minor grammar problem, as was the case for the cat. Grandmother earned a spot on the paper because she “attended all meetings, provided support and care work, [and] kept authors from taking themselves too seriously,” Liboiron says.  

Liboiron has implemented an unconventional process for determining authorship that prioritizes consensus-building and equity. (In fact, the paper on which Grandmother is a co-author describes the lab’s approach.) All the lab’s members have a say in the author list, even if they weren’t involved in the project, with one major exception: Liboiron recuses herself from the process. The group meets, first sorting authors into categories depending on what type of labor they contributed—for example, discussing, writing, and editing, with the specific categories varying depending on the needs of the paper. Then the order within each category is decided, which is the longest part of the process. People step up or step down from consideration depending on how much they feel they contributed. They also put others forward based on their work, including tasks such as cleaning up, arranging meetings, and making sure colleagues are doing alright. If there’s a dispute or a tie, the group considers factors such as who would benefit the most from being higher on the list, who has previously had work taken by senior scientists, and who got the edge in author lists of previous papers.

“Let’s say I give you $5 and two other people $5, but you’re in debt, one person already has $100, and one person has no money. Giving them all $5 doesn’t actually resolve the issues even though you treated them all the same,” Liboiron says. “Equity recognizes that people start from very different positions.”  

Liboiron’s approach works well for her lab, but others have focused on more quantitative approaches. A recent attempt to develop a computational tool, however, highlights the challenges of appropriately and consistently determining authorship.

When Timothy Kassis, a bioengineer at the Massachusetts Institute of Technology in Cambridge, wanted to build an algorithm to help researchers determine the best author order based on their contributions, the first steps were establishing a standard set of tasks that contribute to authorship and assigning a weight to each. Because there is significant variation among fields, he started by focusing on the life sciences, surveying more than 100 faculty members in biology, bioengineering, and biomedical engineering. The respondents generally agreed on how much value to give some categories, such as the time spent conducting experiments, but for others, such as the role of funding procurement, there was no consensus. Kassis realized that whatever method he used to come up with the weights for these different factors, the result would always be subjective. He has since shelved the project.
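For illustration, a weighted-contribution score of the kind Kassis was exploring might look like the sketch below. The task list and weights are hypothetical placeholders, not survey results; choosing them is precisely the subjective step that led him to shelve the project.

```python
# A minimal sketch of a weighted-contribution score, in the spirit of the tool
# Kassis explored. The task names and weights below are hypothetical placeholders.

HYPOTHETICAL_WEIGHTS = {
    "conceptualization": 0.20,
    "experiments": 0.35,
    "data_analysis": 0.25,
    "writing": 0.15,
    "funding": 0.05,  # the kind of category survey respondents disagreed on
}

def contribution_score(effort_by_task: dict) -> float:
    """Combine per-task effort (0-1) into a single weighted score."""
    return sum(HYPOTHETICAL_WEIGHTS[task] * effort
               for task, effort in effort_by_task.items())

def rank_authors(efforts: dict) -> list:
    """Order authors from highest to lowest weighted contribution."""
    return sorted(efforts, key=lambda a: contribution_score(efforts[a]), reverse=True)

if __name__ == "__main__":
    efforts = {
        "Author A": {"experiments": 0.9, "data_analysis": 0.6, "writing": 0.5},
        "Author B": {"conceptualization": 0.8, "funding": 1.0, "writing": 0.3},
    }
    for author in rank_authors(efforts):
        print(author, round(contribution_score(efforts[author]), 3))
```

The ranking step is trivial; everything contentious lives in the weights table, which is why no single version of it satisfied the faculty Kassis surveyed.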

But other researchers have successfully implemented quantitative approaches on a smaller scale. Following an authorship dispute between a postdoc and a grad student 15 years ago, Stephen Kosslyn, now a professor emeritus in neuroscience and psychology at Harvard University, devised a system for his own lab. “I realized I needed some principled way to resolve these things,” Kosslyn says. The system has 1000 total available points: 500 allocated for designing and conducting experiments and analyzing data, and 250 each for coming up with the idea and writing the paper. Once the points are divided up among the contributors, ordering the authors is simple: most points to fewest. When numbers were close, Kosslyn says, people would discuss it and, if necessary, he would step in and allocate the points himself. Kosslyn recalls no authorship disputes in his lab after he started using this system.
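The arithmetic is simple enough to sketch. In the minimal example below, the category totals (500/250/250) follow Kosslyn’s scheme as described above, but the per-person splits and names are invented for illustration.

```python
# A minimal sketch of Kosslyn's 1000-point allocation, with hypothetical
# per-person splits. Category totals follow the article: 500 points for
# designing/conducting experiments and analyzing data, 250 for the idea,
# 250 for writing the paper.

CATEGORY_TOTALS = {"experiments_and_analysis": 500, "idea": 250, "writing": 250}

# Fraction of each category credited to each person (each row sums to 1.0).
splits = {
    "experiments_and_analysis": {"Grad student": 0.7, "Postdoc": 0.3},
    "idea":                     {"PI": 0.6, "Postdoc": 0.4},
    "writing":                  {"Grad student": 0.5, "Postdoc": 0.3, "PI": 0.2},
}

def author_points(splits: dict) -> dict:
    """Total points per person across all categories."""
    totals: dict = {}
    for category, shares in splits.items():
        for person, share in shares.items():
            totals[person] = totals.get(person, 0) + CATEGORY_TOTALS[category] * share
    return totals

points = author_points(splits)
# Author order: most points first (close calls get discussed, per Kosslyn).
for person in sorted(points, key=points.get, reverse=True):
    print(person, points[person])
```

With these made-up splits the grad student lands first (475 points), the postdoc second (325), and the PI last (200); any reshuffling would have to be justified against the numbers.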

Kosslyn’s point system also helps limit “default authorship” by senior researchers or those who were involved in a project initially but no longer contribute, says Rogier Kievit, who was formerly a research assistant in Kosslyn’s lab at Harvard and now runs a research group at the University of Cambridge in the United Kingdom. “It also solves the problem that is rare but not rare enough, where more junior authors who essentially do most of the work and should be first author get moved to second authorship if a paper suddenly looks to be particularly influential,” Kievit adds. “Almost any point-based system would, in such cases, put the onus on the person making the changes to defend them numerically.”

For his own lab, Kievit hasn’t found it necessary to implement the system. The group is small, the junior members are always the lead authors on papers resulting from their projects—“we establish that early on in the project so that there can be no ambiguity,” Kievit says—and “there hasn’t been any opportunity for issues.” But, he says, “Kosslyn’s system is certainly what I use as a mental guideline.”

Claudia von Bastian, a psychologist at the University of Sheffield in the United Kingdom, has twice used a similar point system—originally proposed in 1985—in cases when multiple co-authors substantially contributed. She generally prefers to discuss authorship at the start of a project, but she found that a quantitative tool was useful in these more difficult, unusual cases. “Having such an instrument was really helpful to bring the discussion back to a more factual and less emotional level, resulting in a solution everyone was happy with and felt fairly treated,” she says.

Journals can also get in on the action. Recently, Rethinking Ecology implemented an author contribution index, which requires that authors report how much each contributed to the paper. The percentage-based system helps address the problem of gift authorship, explains Editor-in-Chief Stéphane Boyer, based at the University of Tours in France. “When more authors are added as a gift, they all need to be attributed a percentage of the work,” meaning that either genuine authors have to give away their own credit or it becomes clear that the added authors didn’t contribute very much. Publishing these percentages with the paper also provides a quick way for recruiters to see how much work an author put in, Boyer notes.
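A short sketch can show the dilution effect Boyer describes. The code below is not the journal’s actual index, just an illustration of the constraint that reported shares sum to 100%, so adding gift authors either eats into genuine authors’ credit or exposes how little the extras contributed. The names and numbers are invented.

```python
# A minimal sketch of why percentage-based contribution reporting discourages
# gift authorship. Not Rethinking Ecology's exact index -- just the sum-to-100% logic.

def percentages(raw_contributions: dict) -> dict:
    """Normalize raw contribution estimates to percentages summing to 100."""
    total = sum(raw_contributions.values())
    return {name: round(100 * value / total, 1) for name, value in raw_contributions.items()}

genuine = {"First author": 6, "Second author": 3, "PI": 1}
print(percentages(genuine))    # {'First author': 60.0, 'Second author': 30.0, 'PI': 10.0}

with_gift = dict(genuine, **{"Gift author": 0.1})
print(percentages(with_gift))  # the gift author's ~1% share is published for all to see
```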

Amid concerns about fairness in authorship, researchers also need to consider systemic inequality, Liboiron argues. “There are certain people in science who are consistently devalued,” including women, people of color, junior faculty, transgender people, and others, she says. “Almost every research institution or lab that I’ve worked in for my entire career, starting at undergrad, I was shuffled down in author order or left out,” she says.

When it comes to gender disparities in authorship, there’s data to illustrate the issue: Women are more likely to say that principal investigators determined author lists without consulting the team, to encounter authorship disputes, and to observe hostile behavior due to authorship disagreements, according to an unpublished survey of more than 6000 scholars worldwide conducted by Cassidy Sugimoto, an information scientist at Indiana University in Bloomington. On the flip side, women are more likely to discuss authorship-related issues at the start of projects, the survey finds.

Sugimoto, for one, isn’t convinced that picking author lists can ever be automated or standardized to eliminate all its underlying social biases. “Authorship is not a value-neutral proposition,” she says. “Many power hierarchies are going into the distribution of authors on a byline and in their roles in science.”
