James Anderson studies how epithelial cells that act as barriers within tissues and organs are bound together. In his lab at the National Heart, Lung, and Blood Institute on the Bethesda, Maryland, campus of the National Institutes of Health (NIH), the cell biologist uses antibodies to tag the proteins he and his colleagues are looking for, but only after making sure they are labeling with the proper antibody. Without such baseline information, the entire experiment could be irreproducible.
“It’s not just the protein you buy in a tube from a vendor that can vary,” he says. “It’s also the protocol you use in your own lab, the solutions, the pHs, the ionic strengths, and the different applications. The validation of each antibody can turn into a small research project in itself.”
Outside the lab, Anderson’s primary job is to lead the Division of Program Coordination, Planning, and Strategic Initiatives within the NIH Office of the Director. And in that capacity he must deal with an emerging crisis in global biomedical research—an inability to replicate experimental results. A key component of this so-called reproducibility crisis is the failure of antibodies to perform as promised. One reason they fail, according to a new survey, is that fewer of his research colleagues are following in his footsteps and validating the antibodies they are using.
Last summer, 504 researchers filled out an online survey conducted by the Global Biological Standards Institute (GBSI), a nonprofit based in Washington, D.C. More than half said they had never received training on the importance of validating research antibodies, nor instructions on how to do it. (The respondents ranged from undergraduates to senior researchers and included graduate students, postdocs, laboratory technicians, and junior faculty members.)
More than 70% of the respondents said that validating antibodies takes too much time. But “the consequences of not validating are so much bigger,” asserts Alison Banham, a cancer researcher at the University of Oxford in the United Kingdom. “Actually, if you haven’t validated your antibody and your hypothesis is wrong, the time you might spend exploring that and having to solve those issues are probably considerably more time investment than you actually validating the antibody. And that is without considering the reputational risk that you have of publishing something that’s wrong.”
Antibodies are an essential element in biomedical research labs, with some researchers using up to 100 different types annually. Nearly 60% said they rely on commercial suppliers for the majority of their antibodies, at a cost that may exceed $10,000 a year. Such expansive antibody use left Anderson and others baffled by what the survey suggests is a laissez-faire attitude toward validation.
Roughly 85% of respondents said they have validated at least some of the antibodies they use. But the percentages drop sharply depending on how long a scientist has worked with antibodies and where the antibodies come from. Only 22% of early-career scientists bother to validate antibodies made in-house, and 57% of them give commercially made antibodies a pass. Even among senior scientists, nearly a quarter don’t validate commercial antibodies, and 39% don’t make sure their own pass muster. Young scientists are also more likely than their senior colleagues (27% versus 16.5%) to feel that validation is not even necessary.
That faith may be misplaced, as there is a lot that can go wrong when using an antibody. Beyond simply tagging the wrong substance, it could bind to more than one target or to none at all. Many antibodies are raised under controlled conditions; if a scientist’s experiment alters the structure of the target protein, the antibody may no longer be able to bind it. Commercially produced antibodies (the source of most antibodies used by researchers) pose another set of potential problems. For starters, they come with no standard amount of information. The same antibody may be sold by two companies under different names, Banham says, and it is often unclear if or how the companies validated it.
Getting researchers to validate will require changing a culture that values speedy purchases that can be accepted at face value, says Sharon Milgram, the director of NIH’s Office of Intramural Training & Education (OITE). Milgram works with 5000 scientists who range from high school students to postdoctoral fellows, and she says that getting the “right” answer quickly is a high priority for them. “Maybe we really need to encourage students to question every reagent they get, whether it be from somebody at a company, from somebody who sent it to them, or from their colleague who sits at the next bay and handed it to them,” she says. “I think it would be asking a lot of a vulnerable population working in a hierarchy. But these data make me think that maybe we need to have that discussion.”
The OITE provides one-stop shopping for career advice, networking training, and ethical guidance. But because the program is open to all research fields, Milgram says, the training on research ethics and reproducibility is presented in broad strokes. The trainees are told to talk to their principal investigators (PIs) about situations unique to their field, so if a trainee is in a lab that uses antibodies, the PI should be explaining why and how to validate. But the survey results could indicate those conversations are not happening, she says. “As a community, we need to not only educate the students. We also need to educate the PIs on how profound their influence can be.”
The problem may be deeply rooted, she adds. Only 44% of researchers with more than 10 years of experience validated antibodies “obtained from another source,” such as a fellow researcher who shares a reagent. Milgram says she could readily imagine that a young researcher taught never to question a superior might never question a colleague after establishing their own lab.
Establishing clear standards seems like an obvious first step. Past attempts to create standards have been “too complicated and ambitious,” says Leonard Freedman, GBSI president and lead author of the paper analyzing the survey. But Freedman hopes that a group of researchers, journal editors, and biotech company executives will be able to do exactly that at a GBSI-sponsored meeting in September. He jokes that he plans to lock the doors until consensus is reached. Even if that dramatic strategy succeeds, however, participants will need to come up with a way to get labs to follow the standards they have adopted.
Freedman would like researchers to adopt voluntary guidelines, although others say that it will take grant funders and journals to enforce the rules. NIH’s new grant guidelines require researchers to explain how they plan to validate, and Anderson says the pressure to follow guidelines will come from peer reviewers and journals that require sufficient validation. Either way, the survey results indicate that scientists in every stage of their careers must become involved.
“I would say this is not just an issue of younger investigators, because they need mentors,” Anderson says. “The application of rigorous research methods is really the responsibility of the whole community working together.”