It’s over. Google’s AlphaGo computer program has beaten the human world champion, Lee Sedol, four games to one in the ancient game of Go. But will any of this matter in the long run? In this story, published last week, Science investigates what this match means for the future of artificial intelligence and even how we play our favorite games.
A last hurrah for humanity? Or the final triumph of our silicon overlords?
One of those storylines will surely emerge over the next week at the Four Seasons Hotel in Seoul, where a computer will play a match against the world’s top human player in the ancient Chinese board game of Go. On one side of the board will sit Lee Sedol of South Korea, 33, who has dominated the Go world for more than a decade. On the other will be AlphaGo, age 2, a neural network–based artificial intelligence from Google’s DeepMind subsidiary in London. The winner will receive $1 million, the largest prize in the history of Go.
Both of the prepackaged storylines are faulty, and to see why, you need look no further than the history of computer chess (see timeline, below).
In 1997, world chess champion Garry Kasparov stunningly lost a six-game match against IBM’s chess program, Deep Blue, after beating an earlier version of it the year before. Even in defeat, the media tried to spin the match as a victory for humanity, with comments about how computers still couldn’t write sonnets or hug a baby. “All true, and all beside the point,” The New York Times columnist Frank Rich wrote. The real story, he argued, was the thousands of little ways in which computers were changing our lives, and the challenge before us was not to defeat them but to find the right way to use them.
Even for the smaller world of chess, Rich got it right: Deep Blue never played another game. The real revolution arrived a few years later, when programs stronger than the world champion became available to anybody with a computer. That development has transformed chess. Every serious tournament player nowadays uses a computer to study and prepare. Some players use it as a crutch and forget how to think for themselves, whereas others use it to stimulate their own creative ideas. Like any tool, chess programs can be used for good or ill.
On the ill side: In 2010, French grandmaster Sebastien Feller was suspended by FIDE (the international chess federation) for receiving coded moves from a computer back home in France. At top-level tournaments, players now face airportlike security screening before their games. On the other hand, in correspondence chess, once played with postcards and now played on Web servers, it is now accepted that players will work with a computer or “chess engine.” The engine is like a high-performance automobile with a human driver at the wheel.
Compared with chess, the game of Go has been harder for artificial intelligence to crack. Its rules are much simpler than those of chess: Players take turns placing small black or white stones on a 19-by-19 grid and try to gain territory on the board by surrounding groups of one another’s stones. But the larger board and the astronomical number of possible arrangements of stones make the game far less amenable to exhaustive analysis. Whereas Deep Blue was able to substitute brute force for understanding, Go is too complicated for that approach. Top professional players are guided by an almost painterly sense of the shape of the stones and the interaction of all the parts of the board. “We pros have trouble defining how we make these decisions, let alone teaching a computer to do the same,” says Michael Redmond in Chiba, Japan, the first Western player to achieve the top rank in Go.
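A rough back-of-the-envelope calculation (not from the article; the chess figure is only a commonly cited estimate) makes that gap concrete:

```python
# Back-of-the-envelope comparison of why exhaustive search fails in Go.
# Each of Go's 361 intersections is empty, black, or white, giving a
# loose upper bound of 3**361 arrangements; most are illegal, but the
# legal count is still astronomical, on the order of 10**170.

import math

go_intersections = 19 * 19                         # 361 points on the board
go_upper_bound = 3 ** go_intersections

# Express the bound as a power of ten for comparison.
go_digits = int(go_intersections * math.log10(3))  # roughly 10**172

# Chess, by contrast, has an estimated 10**47 or so reachable positions,
# close enough to tractable for the pruning and brute force Deep Blue used.
chess_digits = 47

print(f"Go arrangements   < 10^{go_digits}")
print(f"Chess positions   ~ 10^{chess_digits}")
```

Even this crude upper bound shows the search space dwarfing chess by more than a hundred orders of magnitude.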
Computer Go took a step forward around 2005, when programmers started to use so-called Monte Carlo tree search. To evaluate a position on the board, the programs would simply play thousands of random games from that position, with no intelligence whatsoever. This randomized version of brute-force search enabled them to compete with strong amateurs. “But they were still losing to the weakest pro, taking sizable handicaps,” Redmond says.
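The rollout idea can be sketched in a few lines. The toy game below is an invented stand-in (players alternately take one or two stones from a pile; whoever takes the last stone wins), not Go, and the code is a minimal illustration of random-playout evaluation, not anything resembling an actual Go engine:

```python
# Toy sketch of Monte Carlo rollout evaluation: score a move by playing
# many purely random games from the resulting position and averaging wins.

import random

def random_rollout(pile, player_to_move):
    """Play uniformly random moves to the end; return the winner (0 or 1)."""
    while True:
        pile -= random.randint(1, min(2, pile))  # take 1 or 2 stones at random
        if pile == 0:
            return player_to_move                # took the last stone: wins
        player_to_move = 1 - player_to_move

def evaluate_move(pile, take, rollouts=5000):
    """Estimate player 0's win rate after taking `take` stones from `pile`."""
    wins = sum(random_rollout(pile - take, 1) == 0 for _ in range(rollouts))
    return wins / rollouts
```

From a pile of 10, taking one stone leaves the opponent a pile of 9, which is objectively lost for them (any multiple of 3 is), and with enough rollouts the sampled win rates separate that move from the inferior alternative, with no game knowledge beyond the rules. That is essentially how Monte Carlo programs reached strong-amateur level in Go.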
The next breakthrough, which remained invisible to most Go players until this year, came when Google and a rival team at Facebook started to apply deep neural networks to Go. In essence, this gives computers a way of learning by themselves from master games. Last October, in a match that was kept secret so the Google team could publish its work in Nature, AlphaGo trounced the European champion, Fan Hui, 5–0.
The win sent shock waves through the Go community. “When I saw the games, I was very surprised because the computer was playing like a human,” says Hajin Lee, secretary general of the International Go Federation in Tokyo. “If you had not told me, I could not tell from the moves which one was the computer.”
Then came a process of acceptance. “To me it was a relief,” says Frank Lantz, the director of the New York University Game Center in New York City and an enthusiastic Go amateur. “Go had become this mythic bulwark against artificial intelligence. People got used to this proverb: ‘Yes, computers can play chess but they can’t play Go.’ I don’t think it was ever true. We can finally put away the last self-delusion that there is some magical quality to Go that makes it intrinsically human.”
Now, on the eve of the match with Lee, nobody knows what to expect. Experts have picked apart the games with Fan and found mistakes AlphaGo made in spite of its dominating victory. And there is a huge gap between Fan, rated No. 370 in the world on www.goratings.org, and Lee, who is currently No. 4. Lee himself seems confident; at a press conference he said he expected to win 5–0 or 4–1. “The critical point for me is to make sure I do not lose one,” Lee said.
But AlphaGo has surely improved over the past 5 months; the question is by how much. “My sense is that the people making AlphaGo wouldn’t agree to the match if they didn’t have pretty high confidence,” Lantz says. He has bet $100 on AlphaGo with a colleague, and says “I think I’ve got slightly better than even odds.”
But as with chess, in a larger sense the outcome doesn’t matter. Computer programs will inevitably surpass humans sooner or later, most likely sooner. After they do, the big players like Google and Facebook will likely move on to other challenges, just as IBM did. They’re playing a larger game, the stakes of which are the pockets of every person on the planet. “Where they’re trying to get to is a [computerized personal assistant like] Siri that actually works,” says Dave Sullivan, an expert on deep neural networks and CEO of Ersatz Labs in Pacifica, California. “That will be a game-changer.” In this larger game, mastering Go is a small but daring gambit, like stone No. 103 in a game of 300.
In games, human exceptionalism may hold out a while longer in poker. Computers are already close to playing mathematically optimal strategies in some versions of two-handed poker. But good human poker players possess an extra skill: the ability to read opponents’ weaknesses—their deviations from an optimal strategy—and exploit them. Computers can’t do that yet, but in principle it’s just the sort of thing a deep neural net ought to be able to master, says Nikolai Yakovenko of Twitter in New York City, a former poker pro who is working on artificial-intelligence poker software. Then another bastion will fall.
After the Fortune 500 companies have put away their Go sets, Go players will still have to live with their legacy: programs that will gradually improve beyond human abilities. It will surely force some changes in their genteel, tradition-bound world. Perhaps future Go masters will be taught differently. Perhaps some players will move to larger boards to postpone computer dominance, or find ways to team up with the computer as chess players have done. Perhaps others will renounce the goal of winning, and return to the classical Chinese concept of Go as an art form, along with calligraphy and music.
In the end, the computer is what we make of it. Says Hajin Lee, “Even if we lose this match, I think that it’s up to us to use the programs to our advantage.”
Dana Mackenzie is a freelance science writer and a U.S. Chess Federation Life Master in chess.