Even if you don’t believe that lizard people and the Illuminati secretly run our planet, the world really is filled with unseen influences. The languid music in the grocery store makes us walk slower and spend more money, and product placements in TV and movies leave us inexplicably craving things like Coca-Cola and Spicy Sweet Chili Doritos. Most people treat these subliminal messages as an unavoidable part of daily life, maybe even as a tool that we might ourselves exploit at some point. But what if these invisible forces were doing more than leaving our wallets a few dollars lighter? What if they were shaping some voters’ choices?
With the presidential election around the corner, Science asked experts in computer science, business, and law to weigh in on how companies like Google and Facebook, which function as the primary gateway to online information for millions of voters, could influence the outcome.
How do search engines influence people?
Last summer, Science reported on something called the search engine manipulation effect. Because companies like Google have gotten so good at providing the best links first, the higher an item appears on a list of search results, the more users trust it. That’s OK if you’re looking for the best place to buy a set of kitchen utensils or back-to-school supplies, but the study’s lead author, research psychologist Robert Epstein of the American Institute for Behavioral Research and Technology in Vista, California, showed that by simply putting links for one candidate above another in a rigged search, he and his co-author could influence how undecided voters choose a candidate. In one phase of the experiment, they tested a group of actual voters before the 2014 general election in India and found that biased search results could increase the number of undecided voters who chose one candidate by 12% or more.
The effect was largely invisible to the study participants; most had no idea they were seeing biased results. But even those who did notice thought the search engine was merely doing its job, ranking a better candidate higher than his or her opponent.
Other companies, especially Facebook and Twitter, wield similar influence with their own algorithms. Jonathan Zittrain, a professor of law and computer science at Harvard University, has written about Facebook’s unique ability to mobilize voters by placing reminders in their newsfeeds. If it wanted to, Facebook could mobilize users likely to vote in line with the company’s interests (as it tried to do in India) based on their demographic group and geographic location—a sort of digital gerrymandering capable of garnering hundreds of thousands of additional votes.
How much could Google influence the 2016 election?
By Epstein’s calculations, biased Google results could shift the vote in November by up to 2%, or about 2.6 million votes. This may not seem huge, but many presidential elections in the United States have been decided by margins narrower than that. (Think Bush versus Gore in 2000.) As of the writing of this article, FiveThirtyEight shows Secretary Hillary Clinton leading Donald Trump in the polls by 6.4%, but that margin has expanded and contracted considerably over the last few months. At the end of July, for instance, Trump trailed by less than 1% of the popular vote—well within the realm of the search engine manipulation effect (although the Electoral College does complicate the discussion somewhat). State and local elections may be even more vulnerable.
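Epstein’s 2% figure is easy to check against recent turnout. As a back-of-the-envelope sketch (the turnout number is an assumption, based on the roughly 129 million votes cast in the 2012 presidential election):

```python
# Back-of-the-envelope check of Epstein's estimate. The turnout figure
# is an assumption (~129 million votes cast in 2012); the 2% shift is
# Epstein's upper-bound estimate.
turnout = 129_000_000
shift_fraction = 0.02

shifted_votes = turnout * shift_fraction
print(f"{shifted_votes:,.0f} votes")  # roughly 2.6 million
```

Two percent of that turnout lands almost exactly on the 2.6 million votes cited above.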
Is there any evidence to suggest that internet gatekeepers are taking advantage of this power?
No. But this question gets at the real crux of the problem because, for now anyway, there’s really no way to find out. Generally, regulators can’t audit Google or Facebook or any other tech giant to find out how their proprietary algorithms determine the content we see on a screen. Intellectual property laws allow these companies to keep private the specifics of search and newsfeed algorithms, making it extremely difficult to parse out any bias in content that users see.
But Epstein believes that Google is capitalizing on the invisible influence of its search engine, to the benefit of the Democratic nominee. Although he doesn’t have evidence that search results are biased, Epstein has discovered some strange discrepancies that seem to favor Clinton in a different facet of Google’s search engine: its search suggestions.
The power of suggestion
Epstein and his colleague Ronald Robertson spent most of their summer tracking the search suggestions that Google automatically generates when you begin to type a query, so-called “autocomplete.” Using suggestive phrases like “Hillary Clinton is” or “anti Donald Trump,” the researchers repeatedly checked to see whether the search engine was offering more denigrating results for one candidate. Their results have not been published in a scientific journal or peer reviewed, but Epstein says they show that it’s difficult to get Google to suggest negative search queries for Clinton, compared with Senator Bernie Sanders (D–VT), or with Trump and other Republican primary candidates. Epstein’s experiments showed that the engine suggested “little marco” and “lying ted”—two of Trump’s infamous derogatory monikers—but would not prompt “crooked hillary” or “corrupt kaine.”
Search engine algorithms are proprietary, so it’s impossible to know with certainty what controls the results. For example, a search engine could generate autocomplete suggestions based on everything from someone’s personal browsing history to their location (based on Internet Protocol address) or even the sum of everyone else’s search queries. In response to separate allegations of political bias in its autocomplete algorithm, Google wrote in a June blog post that it designed its autocomplete algorithm to avoid completing searches for personal names with “offensive or disparaging” words. “We made this change a while ago following feedback that Autocomplete too often predicted offensive, hurtful or inappropriate queries about people. This filter operates according to the same rules no matter who the person is.”
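Google’s description amounts to a uniform, name-aware filter. A minimal sketch of how such a rule could work follows; the word lists and the function are purely hypothetical, since Google’s actual implementation is not public:

```python
# Hypothetical sketch of a uniform autocomplete filter of the kind Google
# describes: drop any suggestion that pairs a person's name with a
# disparaging word, no matter who the person is. All lists are illustrative.
DISPARAGING = {"crooked", "lying", "corrupt", "little"}
KNOWN_NAMES = {"hillary", "trump", "ted", "marco", "kaine"}

def filter_suggestions(suggestions):
    """Return only the suggestions that do not disparage a named person."""
    kept = []
    for s in suggestions:
        words = set(s.lower().split())
        # Filter only when a name and a disparaging word co-occur.
        if words & KNOWN_NAMES and words & DISPARAGING:
            continue
        kept.append(s)
    return kept

print(filter_suggestions(["crooked hillary", "hillary clinton news",
                          "lying ted", "ted cruz iowa"]))
# → ['hillary clinton news', 'ted cruz iowa']
```

A filter like this would indeed operate “according to the same rules no matter who the person is,” which is precisely why Epstein finds the asymmetries he reports suspicious.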
Epstein, of course, doesn’t buy it. “The way their statement is worded is that they eliminate all negatives for everybody, and that is positively not true,” he says. Google declined to comment further, directing Science back to its blog post.
But whatever is actually going on, any bias—manually added or otherwise—could have a dramatic effect on undecided voters. According to Epstein’s unpublished research, negatively worded search suggestions attracted more than 15 times as many clicks from undecided voters as neutrally worded ones in matched control questions. So suppressing negative search suggestions might be one very simple way, he says, of shifting online opinion.
Is there any way to keep tech giants from biasing our online world?
Yes. But nearly every approach requires either clever technologies that haven’t yet been invented or cooperation from the government, the law, or the tech companies themselves.
Ben Edelman, a professor of business administration at Harvard, has been hunting for bias in Google results for years. He suggests that browser plugins could catch and correct biases in search results. One such plugin, Focus on the User, appeared after Google tweaked its algorithm in 2010 to prioritize restaurant reviews from Google+ over Yelp; its programmers used Google’s own indexing tools to add the missing Yelp links back into the search results. Developing other, similar tools may be possible, but it’s unlikely that a single plugin will ever be able to detect every instance of search result bias.
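The core idea behind such a plugin can be sketched abstractly: compare the results the engine displays against an independently gathered list, then splice back whatever is missing. The data and helper below are illustrative only, not the actual Focus on the User code:

```python
# Illustrative sketch of a plugin's core logic: merge independently sourced
# links (e.g., Yelp reviews) back into a result list that omits them.
def reinsert_missing(displayed_results, independent_results):
    """Append links the search engine omitted, preserving the shown order."""
    shown = set(displayed_results)
    merged = list(displayed_results)
    for link in independent_results:
        if link not in shown:
            merged.append(link)  # a real plugin would also rank by relevance
    return merged

biased = ["plus.google.com/rev1", "plus.google.com/rev2"]
independent = ["yelp.com/biz/cafe-a", "plus.google.com/rev1"]
print(reinsert_missing(biased, independent))
# → ['plus.google.com/rev1', 'plus.google.com/rev2', 'yelp.com/biz/cafe-a']
```

The hard part, as Edelman’s caveat suggests, is not the splicing but knowing which independent sources to consult for every possible kind of bias.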
Instead of auditing these tech companies against their will, it might be easier and more productive to have them work directly with regulators. Harvard’s Jonathan Zittrain and his colleague Jack Balkin, a professor of constitutional law at Yale University, have argued that tech companies whose algorithms function as gateways to the internet should act as “information fiduciaries.” In the same way a doctor or a broker must act in the best interest of a patient or client, Zittrain argues that Google and Facebook should take on a legal and ethical responsibility to act in the best interest of their users. “If at election time, [Facebook] chose to only put alerts in the feeds of one group of voters rather than another … I think that would be not fair to the users who they’re not alerting,” he says.
Zittrain says a bargain could entice tech companies to buy into the role of information fiduciary: protection from lawsuits over how they are allowed to use and sell users’ personal information. In the United States, these suits are typically handled at the state level, where laws governing privacy vary significantly by jurisdiction, a patchwork that poses considerable risk to their business. If privacy suits were instead handled by a lone regulatory authority, he says, the tech giants might take up the fiduciary role. “That’s an example of how to get it done with honey rather than vinegar.”
Other experts, such as Frank Pasquale, a law professor at the University of Maryland in Baltimore, suggest that the government should regulate or audit for bias only in specific areas, such as elections. “As a matter of election regulations, I would foresee a future where the Federal Election Commission was given authority to investigate this issue,” he says. “I do support the information fiduciary idea, but I don’t think it should be a matter of them having to buy in or not. The state is sovereign. We can regulate them.”
A final, extreme solution to the online information monopoly could be to build a public search engine owned by the people. “Google pretends to be the public library, but it isn’t,” Epstein says. “Public libraries don’t … track people [and] they don’t sell the history of your book borrowing to other companies. They simply help you find stuff. That’s what we need. We need to take Google’s search engine and make it public.”