Tens of thousands braved the rain last year in Washington, D.C., for a March for Science that will be repeated this weekend.

B. Douthitt/Science

Got data? Survey of 2017 March for Science doesn’t make the grade

A group of researchers has released the first results of a large survey of those who participated in and supported last year’s March for Science. Some social scientists say the analysis is fundamentally flawed and reflects poorly on an organization that champions scientific rigor. March organizers acknowledge the survey’s limitations but say it has provided them with important insights into what motivates their supporters.

The volunteer organizers of the 22 April 2017 march, an ambitious experiment in global science advocacy, were eager to learn all they could about the more than 1 million people who had participated. So, 6 weeks after the event, they notified their more than 200,000 supporters that a survey developed by researchers at George Mason University (GMU) in Fairfax, Virginia, was available online. The 72-question survey asked for demographic information, as well as why respondents had marched and what they thought about government policies and public attitudes toward science.

Last week, days before the second annual march on 14 April, the GMU researchers posted the results. A solid majority of the 20,000 respondents said they thought the country was headed in the wrong direction, a situation almost all blamed on the policies of President Donald Trump and the Republican-led Congress. Their biggest fears were that those government officials would disregard scientific evidence and cut research funding, although only about half thought the march would forestall either action.

The results probably won’t surprise march supporters (including AAAS, which publishes ScienceInsider) and those who have followed the effort. But social scientists who do surveys for a living say the data don’t pass the smell test.

“They are flawed at the most basic level,” says Michael Heaney, a political scientist at the University of Michigan in Ann Arbor. “If a student in an introductory statistics class had asked me if they could do this, my answer would have been ‘no.’”

Coming up short

The results suffer from two fundamental problems, says Heaney, who deployed a team to conduct a randomized survey of participants at last year’s flagship march in Washington, D.C., and plans to do it again this weekend. The first is that the respondents were self-selected and, thus, not likely to be representative of the organization’s entire email list. As a result, says Heaney, any characterization of the demographics, attitudes, and activities of supporters would be unfounded.

Nor are the respondents necessarily representative of those who marched. For example, Heaney’s data, drawn from a truly randomized sample attending the Washington, D.C., march, found that only about 15% of the crowd had been mobilized by the March for Science movement. The vast majority said they attended because of an affiliation with other organizations, or through any number of other routes. That suggests the people on March for Science’s mailing list are not a good proxy for those who took to the streets.

Given those fundamental errors, Heaney says, the GMU survey’s 10% response rate—reasonable by survey industry standards—is a secondary issue. Even a 100% response rate, he notes, would have meant only that all the movement’s supporters had weighed in, not that everyone who marched had been counted.

“What they have produced is unbridled advocacy,” says Heaney, who studies the nature of political protest movements. “They aren’t alone—it’s something that thousands of interest groups do every day. It’s cheap and easy. But it’s not science.”

Fueling the movement

John Cook, a research assistant professor at GMU and co-author of the survey, disagrees that the results would not stand up to peer review. He says the team routinely posts “descriptive statistics of our research” on its website, written in a reader-friendly style, before drilling down on specific questions and submitting the analysis to a journal. He also thinks “asking people what motivates them to march and what they hope to achieve” is worthwhile to the March for Science movement.

Heaney recognizes the value of that second goal. “Any polling that an organization does of its followers is useful,” he says. “Most advocacy groups don’t bother to do that, so good for them. And don’t get me wrong, I wholeheartedly support the goals of the March for Science. I just don’t want to see their results labeled as research.”

Caroline Weinberg, interim executive director of March for Science, agrees that the GMU survey provides “valuable information about our supporters” and says that it was never intended to pass muster with peer reviewers. “The fact that the data have limitations doesn’t mean that you disregard the data,” says Weinberg, who is based in New York City. She agrees that it would be “bad science to say the survey represents all of our supporters and everyone who marched. But it’s also bad science to simply ignore the data.”

In preparing for last year’s march, organizers had posted a 42-question survey that supporters could fill out at the same time they RSVPed for a march. But the steady rain during the flagship march in Washington, D.C., prevented organizers from collecting any information from those who actually showed up.

Weinberg says there were some differences between the GMU survey and her organization’s RSVP survey. “About 80% of those who planned to march told us it would be their first march,” she notes, whereas the GMU survey found that 77% of respondents were veterans of previous marches. In addition, she says the RSVP survey respondents were much less likely to identify themselves as scientists than were those who answered the GMU survey.

March for Science organizers will be circulating questionnaires this weekend at both the Washington, D.C., event and many of the satellite marches. But once again the goal is not a scientifically valid survey, says Kristen Gunther, a March for Science staff member based in Lander, Wyoming. “We want to package the information and take it to elected officials with the message, ‘Here are the things our supporters are most concerned about,’” she explains. Participants will also be asked to rank how well they think those officials are dealing with the issues on the list.

“We understand that we will be getting [responses from] the most motivated people,” Gunther says. “But we want to hear from these people. If our goal is to influence policymakers, they are the ones who are willing to write letters, show up at town halls, and engage in other advocacy efforts.”

Gunther’s assumption is borne out in a comparison of the GMU findings with those drawn from scientifically rigorous surveys at the 2017 march by Heaney and Dana Fisher, a sociologist at the University of Maryland in College Park. For example, GMU found that 94% of respondents had donated to a political organization, compared with 78% of Heaney’s sample. GMU also found 70% of the respondents had interacted with the media—e.g., writing a letter to the editor, being interviewed, or calling into a radio talk show. In contrast, Fisher found that just 15% of marchers reported similar media exposure. Fisher, who is studying the nature of large-scale marches since Trump’s inauguration, has posted her preliminary findings on a blog, American Resistance.

Treading carefully

One issue that dogged the first March for Science was the extent to which it would be seen as a polarizing protest against the policies of the Trump administration and the Republican Congress rather than as a rally in support of science. Organizers were so concerned about being labeled partisan that they urged the GMU researchers not to ask about political ideology or affiliation. The scientists complied, leaving a hole in what is otherwise a deep dive into the level of political activism among supporters.

In contrast, Heaney was under no constraints. In a paper published last week in the social science journal Contexts, Heaney compares the politics of participants in 10 marches held last year in Washington, D.C. The March for Science was clearly on the liberal end of the political spectrum, he finds, nearly matching the partisan flavor of the Women’s March and the Tax March and exceeding the liberal leanings of those in the People’s Climate March.

That’s hardly surprising, Heaney says. “There is little doubt that protesters are highly involved in other forms of partisan politics,” he writes in the paper, archly titled “Making protest great again.” Heaney notes that “Trump has taken a period that would have seen protests regardless of the winner of the 2016 presidential election and helped to turn it into a time of nearly continuous grassroots resistance.”

*Clarification, 10 April, 4:35 p.m.: This story has been revised to reflect the views of John Cook on publishing the results of the GMU survey.

Survey researchers respond to this story:

In his 9 April ScienceInsider news article, reporter Jeffrey Mervis quotes social scientist Michael Heaney criticizing a recent survey report that we authored. Here we respond.

  • Criticism: March for Science (MfS) listserv subscribers are not a “good proxy for those who took to the streets,” and many marchers were not directly mobilized by MfS. Therefore, our results contribute little to understanding the MfS.

The premise of this criticism seems reasonable, but the conclusion is flawed. The MfS “movement” has grown beyond a single event, and it is important to understand the perspectives and experiences of the larger pool of people who subscribed to the MfS listserv, but may not have participated in an event. And it makes perfect sense that MfS partner organizations—such as AAAS (publisher of ScienceInsider)—mobilized many marchers, because partner organizations have long-standing relationships with their large memberships. This does not, however, diminish the value of understanding the perspectives of this group of people.

  • Criticism: Membership surveys of interest groups aren’t “science.”

Membership surveys have contributed much to scientists’ understanding of the social world. For example, the Pew Research Center, in collaboration with academic researchers, has conducted AAAS member surveys that were published in both Science and the peer-reviewed literature (here and here). Similarly, we have published surveys of TV weathercasters, physicians, and public health department directors, illuminating important facts about key stakeholders’ perspectives on climate change. Our MfS survey follows in this vein. Our methods are clearly stated, and our conclusions are limited to the population we surveyed.

  • Criticism: Our survey respondents were self-selected and not representative of the listserv.

The concern here, called nonresponse bias, is that those who agree to participate in a survey may be substantively different from those who decline in ways that we do not understand. This is a valid concern for all surveys that have no mechanism to estimate the nonresponse bias, including ours. It does not render our results uninterpretable, although it is important to consider how nonresponse may have affected the findings.

Conclusion

Surveys of those who engaged with the MfS organization and joined its listserv offer a unique window into the psyche of those who are motivated to engage in activism for science at this moment in history. Our approach was not without limitations; however, it also had the strength of surveying far more individuals (by several orders of magnitude) and a more varied set of them (across diverse geographies, including those who wanted to march but couldn’t) than is possible through site interviews. We believe these methodologies are complementary and look forward to seeing results side by side as more research emerges.

Teresa Myers, Assistant Research Professor

John Kotcher, Assistant Research Professor

John Cook, Assistant Research Professor

Lindsey Beall, Doctoral Student

Ed Maibach, University Professor

All of the authors are at George Mason University, Center for Climate Change Communication in Fairfax, Virginia.