The Artificial Intelligence That Solved Go

Nineteen years after Deep Blue won a chess match against a grandmaster, a Google team has created an A.I. that can beat professional players of the ancient Chinese strategy game of Go.

Last October, a quiet milestone passed for humanity. Over five days, European champion Fan Hui played a formal match of the ancient Chinese strategy game Go against an artificial intelligence program created by Google’s DeepMind team. Go is often compared to chess, but it is in fact far more difficult to master. There are 361 points on the 19-by-19 board where players may place their stones, and roughly 10^170 (a 1 followed by 170 zeros) possible configurations of a Go board. “It’s a very beautiful game with extremely simple rules that lead to profound complexity,” Demis Hassabis, a neuroscientist and DeepMind cofounder, said during a press conference. “In fact, Go is probably the most complex game ever devised by humans.”
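For a rough sense of where a number that large comes from (a back-of-the-envelope check, not part of the Nature paper): each of the 361 intersections can be empty, black, or white, which bounds the count at 3^361. A few lines of Python confirm the order of magnitude:

```python
import math

# Back-of-the-envelope check on Go's state space: each of the 361
# intersections can be empty, black, or white, so 3^361 is an upper
# bound on the number of board configurations. Only a fraction of
# those are legal positions, which is where the oft-cited ~10^170
# figure comes from (Tromp and Farneback's exact count of legal
# positions is about 2.08 x 10^170).
points = 19 * 19                    # 361 intersections on the board
digits = points * math.log10(3)     # log10 of 3^361
print(f"3^{points} is about 10^{digits:.0f}")   # prints ~10^172
```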

A Go-playing A.I. had never beaten a professional human player, even though IBM’s Deep Blue won a chess match against a grandmaster back in 1997. That changed last October, when DeepMind’s program, named AlphaGo, won all five games against Fan Hui. “It was really chilling to watch,” says Tanguy Chouard, an editor at the journal Nature who observed the match as part of the peer-review process. Nature published a paper yesterday describing AlphaGo’s clean sweep.

AlphaGo represents an important step forward in A.I., one that its creators say could show up within a year or two in products including smartphone assistants like Siri, recommendation engines like Netflix’s, and models that predict the effects of climate change. Although it’s just a Go player for now, AlphaGo’s creators built it to be a general problem-solver. Given a goal—Maximize points! Win at Go!—AlphaGo is supposed to learn how to reach that target on its own.
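To make that goal-driven idea concrete, consider a toy example, invented for this article and far simpler than anything DeepMind built: an agent that is told nothing but a reward signal and must learn, by trial and error, which of three actions pays off best.

```python
import random

# Toy illustration of learning from a goal alone (all names and
# numbers here are invented): the agent sees only a reward signal
# and must discover by trial and error which of three actions pays
# best. AlphaGo's training rests on the same principle, but with
# deep networks and the game of Go instead of a three-armed bandit.
true_payouts = [0.2, 0.5, 0.8]   # hidden from the agent; arm 2 is best
estimates = [0.0, 0.0, 0.0]      # the agent's learned value estimates

for step in range(5000):
    if random.random() < 0.1:    # occasionally explore at random
        arm = random.randrange(3)
    else:                        # otherwise exploit the best estimate
        arm = max(range(3), key=lambda a: estimates[a])
    reward = 1 if random.random() < true_payouts[arm] else 0
    estimates[arm] += 0.01 * (reward - estimates[arm])

print(estimates)  # the estimates drift toward the true payouts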

In particular, AlphaGo was built to deal with “any task that has a lot of data that is unstructured and you want to find patterns in the data and then decide what to do,” Hassabis says. At every turn, Go has an enormous number of possible moves. Even with recent advances, computers don’t have the power to evaluate all those possibilities. So, instead, AlphaGo learns smart moves by observing millions of top human games and by playing against itself. Then, when choosing a move during a game, it searches only within a narrower pool of possibilities that seem reasonable. It plays out those possibilities “in its imagination” to see which is best, says David Silver, another member of the DeepMind team. The team hopes this approach will generalize to real-world problems, Silver says.
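For readers who want the shape of that two-part recipe in code, here is a loose sketch. Everything in it (the function names, the random scoring, the coin-flip rollouts) is invented for illustration; the real AlphaGo uses deep neural networks and Monte Carlo tree search at enormous scale.

```python
import random

def policy_top_moves(state, legal_moves, k=3):
    """Stand-in for a learned policy: score every legal move and keep
    only the top k. AlphaGo learns such scores from millions of human
    games and from self-play; here we just assign random scores."""
    scored = [(random.random(), move) for move in legal_moves]
    scored.sort(reverse=True)
    return [move for _, move in scored[:k]]

def rollout_value(state, move, n_rollouts=10):
    """Stand-in for playing a move out 'in its imagination': estimate
    a move's value by simulating continuations and averaging the
    outcomes. Here each simulated game is just a coin flip."""
    return sum(random.choice([0, 1]) for _ in range(n_rollouts)) / n_rollouts

def choose_move(state, legal_moves):
    """Search only the narrow pool the policy proposes, not every
    possible move, then pick whichever candidate plays out best."""
    candidates = policy_top_moves(state, legal_moves)
    return max(candidates, key=lambda move: rollout_value(state, move))

# A 19-by-19 board has 361 points; pick among them at an empty board.
print(choose_move(state=None, legal_moves=range(361)))
```

The key design idea is the division of labor: the learned policy prunes the search space, and the lookahead evaluates only what survives the pruning.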

The team’s immediate next goal is still focused on Go, however. World Go champion Lee Sedol of South Korea has agreed to play a five-game match against AlphaGo in Seoul in March. The team is tweaking AlphaGo to prepare for that, Hassabis says.

“I would put my money, probably, on the human,” the British Go Association’s treasurer, Toby Manning, who refereed the match between Fan Hui and AlphaGo, said in a video produced by Nature. “But I wouldn’t put lots of money on the human.”
