GREs don't predict grad school success. What does?

Summer is just beginning, but before we know it, the graduate school application and admission season will be upon us again. The ostensible goal of that arduous and anxiety-fraught procedure—and of the even more involved process of hiring and promoting faculty members at research universities—is to identify the next generation of productive scientists. But how can the relevant committees accomplish this when no one can really specify the qualities of mind, heart, character, spirit, and background that combine to produce great research?

This long-standing question has been bothering me since I read a pair of studies of grad school admissions practices, one from the University of North Carolina (UNC) in Chapel Hill and the other from Vanderbilt University in Nashville. (My colleague Maggie Kuo wrote about them when they were published in January.) These papers add to a growing body of evidence suggesting that widely used “objective” admissions measures, such as GRE scores and GPA, are exactly the wrong way to go about picking future contributors to scientific progress. Yet these measures continue to strongly influence admissions committees—probably to the detriment both of individual aspiring scientists who, despite their brilliance, may not look good on paper, and of the entire scientific enterprise.

Objective failure

Top graduate programs, which receive many more applications than they can accept, often use “objective” numerical criteria as screening devices to shorten their lists of “serious” candidates, explained education researcher Julie Posselt of the University of Southern California in Los Angeles in Inside Graduate Admissions: Merit, Diversity, and Faculty Gatekeeping. (These criteria also happen to be especially effective at knocking women and members of underrepresented minority groups out of applicant pools.) As the authors of the new UNC study write, admissions committees often assume that “[t]ypical selection criteria [such as] standardized test scores, undergraduate GPA, letters of recommendation, a resume and/or personal statement highlighting relevant research or professional experience, and feedback from interviews with training faculty … correlate with research success in graduate school.”

Yet both the UNC and Vanderbilt studies found that none of the supposedly objective credentials predicted anything recognizable as scientific productivity—not first-author publications, conference presentations, fellowships or grants won, completion of the Ph.D., passage of the qualifying exam, or swift progress to the dissertation defense or the degree. In the Vanderbilt sample, GRE scores turned out to be only “moderate predictors of first semester grades” and “weak to moderate predictors of graduate GPA,” the authors report. There is no convincing evidence of a “relationship between general GRE scores and graduate student success in biomedical research,” they write. At UNC, grades, the amount of previous research experience (among students who all had at least some), and faculty interview ratings all failed to foretell grad school productivity.

Another supposedly objective criterion that Posselt found to be influential during the screening process, especially in elite graduate departments, is the standing of an applicant’s undergraduate school. But a 2014 study by a professor at the University of California (UC), San Francisco, found that this metric also washed out as a predictor of grad school performance. Even a bachelor’s degree from one of the U.S. News & World Report “top 10 life sciences universities” made no discernible difference.

How to spot talent

If these widely used measures don’t work, what does? A group of researchers who devise and study metrics of research productivity and success wrote in 2012 that “the best way of predicting a scientist’s future success is for peers to evaluate scientific contributions and research depth.” They see the statistical method they developed as “useful” to “funding agencies, peer reviewers and hiring committees.” But even so, they make clear that, to ferret out that je ne sais quoi that foreshadows outstanding scientific performance, nothing compares to subjective judgments of quality by experienced researchers.

This emphasis on expert opinion also happens to align with the conclusions of the admissions studies. The predictor that emerged as most powerful in both the UNC study and the UC San Francisco analysis was letters of recommendation from applicants’ undergraduate teachers—in other words, subjective assessments from people who presumably knew both the applicants and their fields well. Students who received top recommendations, the UNC co-authors suggest, show a “constellation of characteristics that typically correlate with research success [such as the ability to] persevere and maintain focus and optimism in the face of regular challenges.”

And if objective measures such as scores and grades don’t work in predicting students’ scientific promise, can objective measures such as numbers of publications do any better at spotting true talent among faculty candidates? Not according to physicist Peter Higgs, whose work on subatomic particles in the 1960s inspired the long but ultimately successful hunt for the eponymous Higgs boson. As he told The Guardian in 2013, while traveling to Stockholm to receive the Nobel Prize in Physics, for years he had been “an embarrassment to [his] department when they did research assessment exercises.” With fewer than 10 papers published since his 1964 breakthrough, he often responded to departmental requests for lists of recent publications with a simple reply: “None.” Given today’s requirement to publish frequently, he added, “It's difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964. … Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough.”

Then there’s mathematician Yitang “Tom” Zhang, who was completely unknown—as in zero peer-reviewed publications and an adjunct teaching job—when, in 2013, at the age of 57 and more than 2 decades out from receiving his Ph.D., he submitted a paper that astounded the mathematical world by solving a long-standing problem in number theory. Now hailed as a “genius” and a “celebrity,” he has since that triumph received numerous major prizes and appointments to two professorships, first at the University of New Hampshire and then at UC Santa Barbara.

None of this is meant to suggest that every scanty publication list or so-so GRE score conceals hidden brilliance. But it does suggest a more reliable way of spotting exceptional talent in people whom supposedly objective measures of scientific promise would overlook. It seems pretty likely that at least some of the people who knew and worked with Higgs and Zhang in their pre-fame days were aware of their abilities. It thus stands to reason that committees evaluating scientific potential, whether in grad school applicants or would-be faculty members, might benefit from paying more attention to what the scientists who know the candidates think of their minds and characters. Reading and weighing such testimony would undoubtedly take more time and effort, and could feel less “scientific” than looking at numbers, whether test scores, GPAs, or tallies of publications. But it appears more likely to pay off.
