Networks of people can coordinate better with bots in the mix.

Devrimb/iStock

Bad bots do good: Random artificial intelligence helps people coordinate

Unpredictable artificial intelligence (AI) doesn’t sound like a good thing. But a new study shows that computers that behave randomly can push us to better coordinate our actions with others and accomplish tasks more quickly. The approach could ease traffic flow, improve corporate strategy, and possibly even strengthen marriages.

If you want a project to do well, it’s not enough for its members to get along; they need to share a game plan. One solution is top-down control: A leader or governing body tells everyone what to do. A more counterintuitive idea is to make people randomly deviate from what myopically seems best for them. That could, according to complex systems theory, prod the whole system into global alignment. Two people at an impasse in a negotiation might hit on a new solution if one of them suggests something crazy, for example.

To figure out whether random AI can help people coordinate, Hirokazu Shirado, a sociologist and systems engineer, and Nicholas Christakis, a sociologist and physician, both at Yale University, asked volunteers to play a simple online game. Each person controlled one node among 20 in a network. The nodes were colored green, orange, or purple, and people could change their node color at any time. The goal was for no two adjacent nodes to share the same color, but players could see only their color and the colors of the nodes to which they were connected, so sometimes settling conflicts with neighbors raised unseen conflicts between those neighbors and their neighbors. If the network achieved the goal before the 5-minute time limit was up, all players in the network received extra payment. The researchers recruited 4000 players and placed them in 230 randomly generated networks.
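
The game is, in effect, a distributed graph-coloring problem. The sketch below is a minimal illustration of that objective, assuming a simple adjacency-list representation; the function names and the toy network are illustrative, not taken from the study.

```python
# Illustrative sketch of the game's goal (not the study's actual code):
# the network is "solved" when no two adjacent nodes share a color, but each
# player only ever sees conflicts with their own neighbors.

COLORS = ["green", "orange", "purple"]

def visible_conflicts(node, coloring, neighbors):
    """Number of the node's neighbors that currently share its color."""
    return sum(coloring[n] == coloring[node] for n in neighbors[node])

def solved(coloring, neighbors):
    """True when no edge joins two same-colored nodes (the global goal)."""
    return all(visible_conflicts(node, coloring, neighbors) == 0
               for node in coloring)

# Example: a tiny 3-node chain. Node 0 sees no conflict, but nodes 1 and 2
# clash, so the network as a whole is not yet solved.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
coloring = {0: "green", 1: "orange", 2: "orange"}
print(solved(coloring, neighbors))  # False
```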

Some of the networks had 20 people controlling the nodes; in others, three of the most central, best-connected nodes were already colored to fit one of the solutions. (Each network had multiple solutions.) And some of the networks had 17 people and three bots, or simple AI programs, in charge of the nodes. In some networks the bot-controlled nodes were placed centrally, in some peripherally, and in some randomly. The bots also varied in how much noise, or randomness, influenced their choice of node color. In some networks, every 1.5 seconds the bots picked the color that conflicted with the fewest of their neighbors, generally a good strategy for people playing the game. In others, they followed this strategy but picked randomly 10% of the time. And in some, they picked randomly 30% of the time.
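
Read that way, the bots’ decision rule amounts to a noisy greedy heuristic. The sketch below is one possible reading of the article’s description, not the authors’ code; the 1.5-second cadence and the noise values come from the paragraph above, while the function and variable names are assumed for illustration.

```python
import random

COLORS = ["green", "orange", "purple"]

def bot_choice(node, coloring, neighbors, noise):
    """One bot decision, made every 1.5 seconds in the experiment.

    With probability `noise` (0.0, 0.1, or 0.3 in the study), pick a color
    uniformly at random. Otherwise pick the color that clashes with the
    fewest visible neighbors, breaking ties at random.
    """
    if random.random() < noise:
        return random.choice(COLORS)
    neighbor_colors = [coloring[n] for n in neighbors[node]]
    fewest = min(neighbor_colors.count(c) for c in COLORS)
    best = [c for c in COLORS if neighbor_colors.count(c) == fewest]
    return random.choice(best)

# Example: with two orange neighbors and one green, a noiseless bot always
# answers purple; a 10%-noise bot occasionally answers something else.
neighbors = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
coloring = {0: "green", 1: "orange", 2: "orange", 3: "green"}
print(bot_choice(0, coloring, neighbors, noise=0.0))  # "purple"
```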

All of the networks with bots performed the same as the all-human networks, except for one type: networks in which the bots were placed centrally and randomized their decisions 10% of the time. Those networks solved the coordination game within the time limit more often (85% versus 67% of the time), and their median time on the task was 103 seconds versus 232 seconds, a significant difference, the researchers report today in Nature. Because bots with 0% or 30% noise offered no such advantage, there appears to be a Goldilocks zone of randomness.

What’s more, the bot-aided networks performed just as well as the networks that already had a head start—those with three nodes preset to fit a solution. But whereas the set-color networks required top-down control, the noisy bots achieved equal results with just a bit of local randomness. “We get the same bang,” Christakis says. “To me that was a beautiful result.”

Further analysis showed that the bots’ slightly noisy behavior benefited the networks in part by setting an example for others. Some people also showed “noise,” occasionally picking a color that conflicted with their neighbors. The bots’ noise level influenced the noise level of people, even those several nodes away, suggesting a ripple effect.

“The bots are helping humans to help themselves,” Christakis says. Without the bot-added noise, people often got stuck in ruts where each person had picked a color that conflicted with the smallest number of neighbors but the network as a whole still had conflicts. “In a way, these bots are serving a teaching function,” Christakis adds. If you see a neighbor (bot or human) change color frequently, you might decide to do so, too. He notes that highly sophisticated AI programs like AlphaGo may help people play Go better, but here, people learned even from “dumb AI.”

There’s some precedent for perturbation increasing harmony—random mutations enabled evolution to produce complex organisms, for example. “It’s a cool little study,” says Michael Richardson, a psychologist at the University of Cincinnati in Ohio. “The results are pretty consistent with what you’d expect from complex systems theory.”

Colin Camerer, an economist at the California Institute of Technology in Pasadena who reviewed the paper, likes that the researchers used a simple, rigorous method to produce a wealth of data demonstrating how randomness can bring about order in social interactions. But he notes that because of the model’s simplicity, “it’s hard to see a close analog to something that happens in an organization.” The researchers hope to extend the work to more complex and realistic collaborations, perhaps between people and robots used for military or manufacturing purposes.

Bots could help people help themselves in lots of ways besides introducing noise. One can imagine chatbots mediating relationship therapy by guiding couples toward compromise without succumbing to exasperation or boredom. A recent study found that Twitter bots posing as real people could shame racists into using fewer racial slurs. And Shirado says that bots make good dissenters because they can take the anger. Commending dissent generally, he notes that he used to work at Sony, where one leader pushed for a video game project the company didn’t want. It became the PlayStation.

Christakis mentions a dear friend who is wealthy but is considered by many to be difficult. “He told me he’s never been asked to be on a board of directors, because he’s very contrarian,” Christakis says. “When I showed him this paper, he said ‘This is fantastic! This vindicates my way of being! This is why jerks like me should be on every corporate board.’”