Students examine soil viruses as part of an introductory biology course at the University of Pittsburgh in Pennsylvania.

Graham Hatfull

Undergraduate research would benefit from better comparative data, says Academies panel

More data about what works are needed to sustain the growing popularity of having undergraduates take part in research. That’s the conclusion of a report released yesterday by the National Academies of Sciences, Engineering, and Medicine, which notes that without a better understanding of what makes these experiences effective, it is difficult to know how to improve the programs.

“There just isn’t enough comparable data” to concretely evaluate and compare the different types of programs, says James Gentile, dean emeritus of Hope College in Holland, Michigan, and chair of the committee that wrote the report. For example, he says, it’s not clear how the student experience of taking a course compares to being mentored one-on-one, or whether a research experience helps students learn how to interpret scientific data.

Research experiences have traditionally been seen as a way to prepare students for graduate school and a scientific career. But studies have shown that they can also help students acquire valuable soft skills such as communication. The experiences also foster a sense of belonging to the discipline and have been found to improve retention among minority students and women, groups historically underrepresented in science and engineering.

The dearth of information on what makes a program effective has not limited the growth of undergraduate research programs, particularly course-based ones. “We’re beyond a tipping point” in the number of faculty who believe a research-based curriculum is preferable to one based only on lectures and canned labs, says Elizabeth Ambos, executive officer of the Council on Undergraduate Research in Washington, D.C. But knowing the essential elements would help universities better allocate limited resources, including hiring the right people, Gentile says. Such information would also tell department chairs what to expect from a specific course, says Dawn Rickey, program director at the National Science Foundation (NSF) in Arlington, Virginia, which funded the Academies study.

The report calls on NSF and other entities to support more data collection efforts. It also recommends that science faculty involved in undergraduate research programs delve into the education-sciences literature on evaluating learning experiences and team up with education researchers and social scientists for their expertise.

Graham Hatfull, a biological sciences professor at the University of Pittsburgh in Pennsylvania who developed an introductory research course that has been widely adopted, has such a partnership with David Hanauer, an English professor at Indiana University of Pennsylvania. “It’s been fantastic,” Hatfull says, noting that the partnership has improved how the course is evaluated. “I would totally recommend that as a route to moving forward.”