DeepMind’s win at Go: What does it mean for AI?

The victory helps validate DeepMind’s machine-learning techniques and the neural-network architecture behind AlphaGo. Having proven its mettle in Go, the DeepMind team now has the confidence (and funding) to tackle more complex AI challenges.

FE Bureau

ARTIFICIAL INTELLIGENCE (AI) just cleared a new hurdle: learning to play Go, a game many orders of magnitude more complex than chess, well enough to beat one of the greatest human players at his own game. South Korean national Lee Se-dol, one of the world’s top Go players, won only one of the five matches against Google’s AlphaGo, missing out on the $1-million prize up for grabs in a recent ‘challenge’ held in Seoul.

AlphaGo, an AI system developed by Google DeepMind, just bested one of the best Go players alive. This was not supposed to happen. At least, not for a while: artificial intelligence capable of beating the best humans at the game was predicted to be 10 years away.

Go, a two-player game, is played with an unlimited supply of white and black pieces called stones, placed on the 361 intersections of a 19×19 grid. Players arrange the stones to stake out ‘territories’ by walling off parts of the board, and can capture their opponent’s pieces by surrounding them. The player controlling the most territory wins.
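
The capture rule, in particular, can be stated precisely: a connected group of stones is captured the moment it has no adjacent empty points, called liberties, left. Here is a minimal sketch of that rule, assuming a simple dictionary-based board; the representation and names are illustrative, not from any real Go library:

```python
# Minimal sketch of Go's capture rule: a group with no empty adjacent
# points (liberties) is captured. The board is a dict mapping (row, col)
# to 'B' or 'W'; points absent from the dict are empty.

def neighbours(point, size=19):
    r, c = point
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= r + dr < size and 0 <= c + dc < size:
            yield (r + dr, c + dc)

def group_and_liberties(board, point):
    """Flood-fill the group containing `point`; return (group, its liberties)."""
    colour = board[point]
    group, liberties, frontier = set(), set(), [point]
    while frontier:
        p = frontier.pop()
        if p in group:
            continue
        group.add(p)
        for n in neighbours(p):
            if n not in board:
                liberties.add(n)            # empty neighbour: a liberty
            elif board[n] == colour:
                frontier.append(n)          # same colour: part of the group
    return group, liberties

# A lone white stone hemmed in on all four sides by black is captured:
board = {(3, 3): 'W', (2, 3): 'B', (4, 3): 'B', (3, 2): 'B', (3, 4): 'B'}
group, libs = group_and_liberties(board, (3, 3))
print(len(libs) == 0)   # True: no liberties left, the stone is taken
```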

Although the rules are relatively simple, the number of possible positions is astronomically large: there are more ways to arrange the stones on the board than there are atoms in the observable universe.
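
The comparison is easy to sanity-check. Each of the 361 points can be empty, black or white, so 3^361 is a loose upper bound on the number of arrangements; the number of legal positions has been computed at about 2.08 x 10^170, while the observable universe contains roughly 10^80 atoms. A quick back-of-the-envelope check in Python:

```python
# Back-of-the-envelope: stone arrangements on a Go board vs atoms in the
# observable universe. Three states per point (empty, black, white) give a
# loose upper bound; the exact count of legal positions is ~2.08 x 10^170.

from math import log10

POINTS = 19 * 19                    # 361 intersections
exponent = POINTS * log10(3)        # log10(3 ** 361)

print(f"upper bound on arrangements: ~10^{exponent:.0f}")   # ~10^172
print("legal positions:             ~2.08 x 10^170")
print("atoms, observable universe:  ~10^80")
```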

The computer’s victory shocked Se-dol. It also astounded experts, who had thought that teaching computers to play Go well enough to beat a champion of his calibre would take another decade. AlphaGo got there by studying millions of board positions from human games and then playing millions more games against itself, much as Google’s algorithms learn to identify photos by looking at millions of examples.

So far, so good. But why should one care about Google’s AI winning a boardgame? If you remember Garry Kasparov’s chess matches against IBM’s Deep Blue, or Watson’s Jeopardy! appearance against Ken Jennings, you might well ask: haven’t we already done this a number of times? The answer is yes; this sort of publicity-stunt competition is a popular way to show the public how far AI platforms have advanced.

Then, why is this one any different? Picking a chess move or querying a massive database of trivia is something we can accomplish with brute-force computing power. Go, on the other hand, presents a far more complex challenge. The number of legal board positions in Go is around 2.08 x 10^170, and the tree of possible move sequences is vaster still. ‘Grokking’ those numbers and selecting the right move within a reasonable amount of time using conventional search-tree algorithms would be pretty much impossible for even our most powerful supercomputers.
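
To see why, compare game-tree sizes. A full search tree grows roughly as b^d, where b is the average branching factor and d the typical game length; the figures commonly cited in the AI literature are b ≈ 35 and d ≈ 80 for chess, against b ≈ 250 and d ≈ 150 for Go. A quick comparison of the resulting tree sizes:

```python
# Rough game-tree sizes, b**d, using commonly cited branching factors (b)
# and game lengths (d): chess b=35, d=80; Go b=250, d=150.

from math import log10

def tree_exponent(b, d):
    """Decimal exponent of b**d, i.e. log10 of the game-tree size."""
    return d * log10(b)

print(f"chess: ~10^{tree_exponent(35, 80):.0f}")    # ~10^123
print(f"go:    ~10^{tree_exponent(250, 150):.0f}")  # ~10^360
```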

If Go is so hard, how did Google solve the problem? AlphaGo, DeepMind’s software, doesn’t rely solely on search-tree algorithms. Rather, it brings its machine-learning chops to bear: deep neural networks, trained on an archive of human games and on millions of games played against itself, analyse the board and whittle the list of possible moves down to a manageable number.
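
In outline, the approach pairs tree search with two learned ingredients: a policy network proposes a small set of promising moves, so the search never has to consider most of the roughly 250 legal ones, and a value network estimates who is winning, so the search does not have to play every line out to the end. The sketch below illustrates only that pruning idea; the policy and value functions and the toy game are hypothetical stand-ins, not DeepMind’s networks or code:

```python
# Sketch of policy-guided tree search, the idea behind AlphaGo's move
# selection. `policy` and `value` stand in for trained neural networks;
# the toy 5x5 "game" exists only to make the sketch runnable.

import random

SIZE = 5
ALL_MOVES = [(r, c) for r in range(SIZE) for c in range(SIZE)]

def legal_moves(state):
    return [m for m in ALL_MOVES if m not in state]

def play(state, move):
    return state | {move}

def policy(state):
    """Stand-in policy network: a handful of candidate moves, not all of them."""
    moves = legal_moves(state)
    k = min(4, len(moves))                  # prune the move list drastically
    return random.sample(moves, k)

def value(state):
    """Stand-in value network: estimated win chance for the side to move."""
    return random.random()                  # a real network evaluates the position

def search(state, depth=3):
    """Depth-limited negamax over policy-pruned moves only."""
    if depth == 0 or not legal_moves(state):
        return value(state)
    return max(1.0 - search(play(state, m), depth - 1) for m in policy(state))

def best_move(state):
    """Pick the candidate whose resulting position looks worst for the opponent."""
    return max(policy(state), key=lambda m: 1.0 - search(play(state, m)))

print(best_move(frozenset()))               # e.g. (2, 3)
```

AlphaGo’s real search is Monte Carlo tree search, which additionally balances trying new candidate moves against deepening promising ones, but the core saving is the same: learned functions shrink both the width and the depth of the search.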

Basically, faced with an extremely high-level challenge, it had to develop its own intuition—and one strong enough to flummox a human world champ.

Finally, what does that mean for AI? This is a huge victory for DeepMind and, overall, a major milestone for AI. Prior to the rise of AlphaGo, most people in the supercomputing and AI fields figured we were at least a good 10 years from being able to assemble a system capable of playing Go on a par with a top professional human player.

The win also validates DeepMind’s machine-learning techniques and the neural networks behind AlphaGo. Having proven their mettle in Go, the team can now approach more complex AI challenges with confidence (and funding).
