I'm not sure how many of you are aware of the ancient Chinese game Go, or of the current best-of-five match between AlphaGo, Google's deep learning AI, and the world champion Lee Sedol of South Korea.
Late last year, AlphaGo knocked off Europe's champion 5-0. So what's the big deal this time around?
Simply put, the level of competition (against AlphaGo). Europe's champion, Fan Hui, is a 2-dan professional. Lee Sedol is 9-dan, the highest professional rank. The statistical probability of a 9-dan player beating a 2-dan player is over 95%.
Last night the first game was played, with AlphaGo taking the early lead 1-0. This marks an amazing point of progress for AI, and more specifically for deep learning. The reason AI hasn't been able to handily beat professional Go players until very recently is simply the complexity of the game. The number of possible positions in Go exceeds the number of atoms in the universe (and for added effect, by many orders of magnitude at that!). If you're a visual person, Go has about 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 possible positions. So it's pretty clear that computers can't win these games through brute-force computation. The computational power to do so simply doesn't exist, especially considering that these matches are played with an allotted total time for each player, similar to how professional chess matches are played.
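To put those magnitudes in perspective, here's a rough back-of-the-envelope sketch. The 3^361 upper bound and the ~10^80 atom estimate are my own assumed figures for illustration, not something from the post:

```python
from math import log10

# Each of the 361 points on a 19x19 Go board can be empty, black, or white,
# so 3^361 is a simple upper bound on board configurations (many of those
# are illegal, but the legal count is still astronomically large).
raw_arrangements = 3 ** 361

# A common order-of-magnitude estimate for atoms in the observable universe.
ATOMS_IN_UNIVERSE_EXP = 80

exponent = int(log10(raw_arrangements))
print(f"3^361 is about 10^{exponent}")
print(f"that is ~10^{exponent - ATOMS_IN_UNIVERSE_EXP} times the atom count")
```

Even pruning that upper bound down to only legal positions leaves a search space no amount of brute-force hardware can enumerate, which is why the time controls mentioned above make pure search hopeless.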
Google's AlphaGo has gotten to this point through deep learning. It's able to view recordings of professional matches and learn from them. Not only can it learn from the moves and mistakes of its own games, it also has the huge advantage of being able to play against itself, at speeds incomprehensible to us simple humans.
Anyways, I thought some of you might find it interesting. I don't play Go much myself, and I don't have the time to watch the live streams of the matches. But I think it's fascinating because at some point, there may not exist a game out there that some form of AI can't beat us at.
Here's a few extras (including live stream link for those of you interested):
Skynet begins with Go? Seriously though, I was impressed with Deep Blue all those years ago and am equally impressed now (Go, I think, would be much harder to program an AI for than chess, as you said, with the near-infinite possible permutations).
The algorithm that AlphaGo uses is Monte Carlo tree search. In this algorithm, you start with a few guesses at reasonable plays based on heuristics, then evaluate each of those guesses by picking random counter-moves for each candidate move and seeing what works and what doesn't. What AlphaGo has added since this algorithm first came into Go in 2006 is two neural networks, and the training techniques for each one: 1) a network for predicting good candidate moves in the present situation, and 2) a network for evaluating a board position.
These neural networks make the tree search algorithm much more effective by eliminating obviously bad moves from the search and allowing the algorithm to evaluate board positions without having to simulate more future plays.
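To make that concrete, here's a minimal sketch of that kind of tree search on a toy game (Nim: take 1-3 stones, whoever takes the last stone wins). The `policy_prior` and `value_estimate` functions are crude hand-written stand-ins for AlphaGo's two networks; everything here is illustrative, not DeepMind's actual code:

```python
import math

# Hand-written heuristics standing in for the two networks (NOT learned):
def legal_moves(stones):                 # Nim: take 1, 2, or 3 stones
    return [m for m in (1, 2, 3) if m <= stones]

def policy_prior(stones):                # "policy network": uniform prior here
    moves = legal_moves(stones)
    return {m: 1.0 / len(moves) for m in moves}

def value_estimate(stones):              # "value network": win chance for player to move
    return 0.0 if stones % 4 == 0 else 1.0   # multiples of 4 are lost in Nim

class Node:
    def __init__(self, stones):
        self.stones = stones
        self.children = {}               # move -> Node
        self.visits = 0
        self.total = 0.0                 # summed values, from this node's player's view

def select_move(node, c=1.4):
    """PUCT-style selection: exploit average value, explore moves the prior likes."""
    prior = policy_prior(node.stones)
    def score(move):
        child = node.children[move]
        # child.total is from the opponent's point of view, so invert it for us
        q = 1.0 - child.total / child.visits if child.visits else 0.5
        u = c * prior[move] * math.sqrt(node.visits) / (1 + child.visits)
        return q + u
    return max(node.children, key=score)

def simulate(root):
    """One iteration: walk down, expand a leaf, evaluate it, back the value up."""
    path, node = [root], root
    while node.children and node.stones > 0:
        node = node.children[select_move(node)]
        path.append(node)
    if node.stones > 0 and not node.children:
        for m in legal_moves(node.stones):
            node.children[m] = Node(node.stones - m)
    v = value_estimate(node.stones)      # the "value net" replaces a random rollout
    for i, n in enumerate(reversed(path)):
        n.visits += 1
        n.total += v if i % 2 == 0 else 1.0 - v   # flip perspective each ply

def best_move(stones, iterations=2000):
    root = Node(stones)
    for _ in range(iterations):
        simulate(root)
    return max(root.children, key=lambda m: root.children[m].visits)

print(best_move(10))   # taking 2 leaves 8, a losing multiple of 4 for the opponent
```

With real learned networks in place of those two heuristics, this same loop is what prunes obviously bad moves (via the prior) and skips full-game rollouts (via the value estimate), as described above.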
Idk about Calvinball, but after the latest Brood War AI conference, Brood War was estimated to take another 5-20 years to be 'humanly solved'. We've sent them an invitation though, just in case their brute-force methods won't work for Brood War.
On March 10 2016 04:26 RoyGBiv_13 wrote: The algorithm that AlphaGo used is a Monte Carlo tree search algorithm. [...]
The training of the various networks is the fun part. First comes supervised learning, to get a network that correctly predicts the next "human" move (they trained it on a database of games until it matched the human's move about 60% of the time). Then several copies of that network play against each other, with reinforcement learning applied over whole games (when a game is won, the weight changes from that game are valued more). The best network after ... lots of games was selected to predict the candidate moves.
The second network is then trained on the complete set of positions until it accurately predicts the outcome (which color wins) from a given position.
Combine the two with a standard Monte Carlo tree search and you get a very good engine.
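As a toy illustration of that two-stage pipeline (supervised imitation, then self-play reinforcement), here's a sketch on the same kind of simplified game: a tabular softmax "policy" over Nim moves (take 1-3 stones, last stone wins). Everything here is an illustrative assumption; AlphaGo's actual networks are deep convolutional nets trained on board positions, not lookup tables:

```python
import math
import random

random.seed(0)
MOVES = (1, 2, 3)

def softmax(prefs):
    z = [math.exp(p) for p in prefs]
    return [x / sum(z) for x in z]

# A tiny tabular "policy network": one preference per (stones, move) pair.
theta = {n: [0.0, 0.0, 0.0] for n in range(1, 21)}

def pick(n, greedy=False):
    probs = softmax(theta[n])
    legal = [m for m in MOVES if m <= n]
    if greedy:
        return max(legal, key=lambda m: probs[m - 1])
    return random.choices(legal, weights=[probs[m - 1] for m in legal])[0]

def update(n, move, advantage, lr=0.1):
    """Policy-gradient step: push probability of `move` up (or down) at state n."""
    probs = softmax(theta[n])
    for m in MOVES:
        grad = (1.0 if m == move else 0.0) - probs[m - 1]
        theta[n][m - 1] += lr * advantage * grad

# Stage 1: supervised imitation of an "expert" (here the known optimal Nim rule:
# leave a multiple of 4). AlphaGo's equivalent was a database of human pro games.
for _ in range(3000):
    n = random.randint(1, 20)
    if n % 4 in (1, 2, 3):
        expert = n % 4
    else:                                # lost position: no good move exists
        expert = random.choice([m for m in MOVES if m <= n])
    update(n, expert, advantage=1.0)

# Stage 2: self-play reinforcement. The winner's moves get positive advantage,
# the loser's negative, echoing "changes for won games are valued more".
for _ in range(2000):
    n, history, player = random.randint(5, 20), [], 0
    while n > 0:
        m = pick(n)
        history.append((player, n, m))
        n -= m
        player ^= 1
    winner = player ^ 1                  # whoever took the last stone wins
    for who, state, move in history:
        update(state, move, advantage=(1.0 if who == winner else -1.0), lr=0.02)

print(pick(10, greedy=True))   # a trained policy should take 2, leaving 8
```

The division of labor mirrors the posts above: imitation gets the policy into a sensible region quickly, and self-play then sharpens it beyond its teacher.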
The only surprising part for me is how well the convergence of the two networks seems to have worked. In my day, networks tended to spend a few days learning only to end up barely better than randomness (except on a select few problems). Then again, we worked with 3 layers and 50 neurons total (not including the programmer's).
For those looking for an (English) live video analysis of the game aimed at a more Go-educated crowd: the same channel will broadcast the future games as well. But as I said, you should have a decent understanding of Go to be able to follow that broadcast, while the regular stream is aimed more at beginners. For those just looking for a quick summary and analysis of the game in written form, you can look here: https://gogameguru.com/alphago-defeats-lee-sedol-game-1/ It should be noted that this summary is done by another 9p, but it actually summarizes a lot of different Korean media/professional opinions on the match.
On March 10 2016 06:51 Yurie wrote: Sad that the first game's recording is such low quality, with choppy audio and constant cuts to the wrong camera.
Check out the American Go Association's stream VOD: https://www.youtube.com/watch?v=6ZugVil2v4w It starts an hour into the match; the commentary is quicker and less oriented towards beginners.
Interesting thing about that xkcd comic: in Arimaa a bot beat the best human players just last year (in April). So the field of AI in games appears to be advancing rapidly.
I'm personally excited by the possibility of AI becoming superhumanly good at other tasks as well as games. Like how driverless cars (I hope the term "auto" catches on) will be better than human drivers, but for medicine or scientific research.
Still a close match, if I understood correctly. Some poor-quality news website says that Lee lost by only 2 points after a 5-hour game. Crazy shit. I believe that if Lee doesn't win the next one, it will be a clean sweep, 5-0. Otherwise 4-1, but I don't see him defeating the AI twice. Damn.
I wonder if they're going to continue developing AlphaGo, or if it was only a proof-of-concept type project? Chess engines, for instance, aren't that interesting for research programs any more. I kinda hope they won't; I felt like chess was more mysterious before computers.