On March 14 2016 11:51 evilfatsh1t wrote: if Lee Sedol wins the next game, I would think that if you turned this into a best of 7, Lee Sedol would come back to win 4-3. Successive wins from Lee Sedol could mean he has noticed some patterns within the program.
I think it's likely that Lee Sedol wins game 5 as well. The match is inherently imbalanced, as the commentators once said. AlphaGo has been trained on existing matches, including those from Lee Sedol, but this version of AlphaGo has never been shown before.
They mentioned in the press conference that all of the games that AlphaGo used to train were amateur matches pulled from online Go sites. No professional games, by Lee Sedol or otherwise, were included in its learning database.
Can anyone confirm this? It would be absolutely ridiculous if something (human or bot) could get good enough to beat Lee Sedol without learning from any high-level games. That idea is just ludicrous for any game.
Yes, it was said explicitly by Hassabis in the press conference after game 4. Just watch it on YouTube; it's during the question block for Lee Sedol, when Hassabis interjects.
On March 15 2016 13:52 Draconicfire wrote: Yea, they mentioned in the post-game interview (game 4) that they only used amateur games from the Internet; Lee Sedol's games were not used.
Ok, cool.
Also, I'm wondering if AlphaGo would play the exact same moves if a player repeated the same moves in two different games.
The policy and value networks would give the exact same results, but the Monte Carlo tree search is inherently random. There is always a chance that it randomly finds a better move at some point.
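That split can be sketched in a toy way: the "networks" below are deterministic functions of the position, while the rollout phase draws random numbers, so two searches from the same position can disagree. Every name here is a hypothetical illustration, not AlphaGo's actual code.

```python
import random

def policy_network(position):
    # Deterministic: the same position always yields the same move priors.
    return {move: 1.0 / len(position) for move in position}

def rollout_value(position, rng):
    # Stochastic: stands in for a random playout from this position.
    return rng.random()

def pick_move(position, seed):
    rng = random.Random(seed)
    priors = policy_network(position)
    # Score each move by its prior plus an averaged random rollout.
    scores = {m: priors[m] + sum(rollout_value(position, rng) for _ in range(5)) / 5
              for m in priors}
    return max(scores, key=scores.get)

position = ["A", "B", "C"]
# Identical inputs, different rollout randomness -> possibly different moves.
print(pick_move(position, seed=1))
print(pick_move(position, seed=2))
```

With a fixed seed the whole search is repeatable, which is why "random" engines can still be reproduced in testing.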
From 6:09:37:
"AlphaGo was not trained specifically to Lee Sedol's play. We train it in a general way, and in fact, the games that we used--the human games that we used to start the training were actually strong amateur [therefore, non-professional] games from Internet Go servers. So in fact, there are no games of Lee Sedol in our database, training database. And then as you know already, the way AlphaGo got stronger was to play itself. So in fact, I think it's quite equal in terms of the information, we didn't train it on Lee Sedol's games."
Considering that AlphaGo has played millions of games against itself and is able to meaningfully learn from each one, I don't think it's too ridiculous at all. I would be surprised if the total number of 9-dan-level professional Go games played in human history has reached a million.
I find it surprising, too. It's almost ridiculous what this program has achieved. In the game 5 pregame talks, the developers mentioned the next level of machine learning: building an AI with no human training. That seems far beyond what could be achieved right now.
The fact that AlphaGo is beating 9-dans while being trained on high-amateur games... this seems to be a step in that direction already. How soon before no training data is needed?
I would think it was already able to get to that level by itself; having a seed of initial games to work with just cut down on some computing time. Parsing specific game formats to collect pro games wasn't a goal of the DeepMind project; as they mentioned, they just pulled games from an Internet Go server.
But given that they created a framework for meaningful learning from each new game, I would give odds on the computer becoming more advanced than humanity once the number of games it can play pushes past the millions. A professional may play a couple thousand games in a lifetime.
I just wanna say that I've learned a ton about Go from these past 5 games through all the English commentary. Really fascinating game. Props to the AlphaGo team and Lee Sedol for getting me interested!
When you have people like Elon Musk saying that this basically moved AI research forward by about 10 years, that pretty much indicates how insane a jump this development has been.
Well, none of the AI techniques they used are new. Neural networks have been around for quite some years now; they just found a way to make them work well in this particular case.
On March 16 2016 22:40 Glacierz wrote: Well, none of the AI techniques they used are new. Neural networks have been around for quite some years now; they just found a way to make them work well in this particular case.
They have stated numerous times that the framework is going to be much more flexible than just this case and can be used in tons of other ways than just Go
On March 16 2016 22:40 Glacierz wrote: Well, none of the AI techniques they used are new. Neural networks have been around for quite some years now; they just found a way to make them work well in this particular case.
AlphaGo builds on an earlier DeepMind project that, as a single unchanged program, learned to play and master a whole suite of Atari games using only the raw pixels as input.
I have a feeling some of the AI techniques they're using are new...
What they have that's new is a way of combining policy and value networks with Monte Carlo tree search. So yes, they "just found a way," but it's novel.
Other Go programs have used neural networks before. But building a value network for evaluating board positions is very difficult. Even human players often cannot tell the value of a given board state, or which player is really ahead.
Also, using neural networks doesn't mean anything unless they are trained well. The training methods they used, as well as the deep layers of abstraction in the network, result in very low error when predicting moves. These networks are similar to what was developed for Deep Dream and other image-recognition AIs (see post above), and I think it's pretty cool.
Edit: should add that they threw a boatload of hardware at this problem, too. 1,202 CPUs and 176 GPUs with 40 search threads in the distributed version.
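The combination described above can be sketched numerically: per the published AlphaGo paper, a leaf position's evaluation blends the value network's estimate with the outcome of a fast rollout, with a mixing weight of 0.5. The function below is a hypothetical placeholder illustrating that blend, not DeepMind's implementation.

```python
LAMBDA = 0.5  # mixing parameter reported in the AlphaGo paper

def leaf_value(value_net_estimate, rollout_outcome, lam=LAMBDA):
    """Blend the value network's estimate with a rollout result.

    value_net_estimate: v in [-1, 1] from the value network
    rollout_outcome:    z in {-1, +1} from a fast random playout
    """
    return (1 - lam) * value_net_estimate + lam * rollout_outcome

# Example: the network thinks the position is slightly good (+0.2),
# but the rollout was lost (-1), so the blended value turns pessimistic.
print(leaf_value(0.2, -1))  # -0.4
```

The point of the blend is that neither signal alone is reliable: the network generalizes but can be systematically wrong, while a single rollout is noisy.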
On March 17 2016 09:41 meatpudding wrote: should add that they threw a boatload of hardware at this problem, too. 1,202 CPUs and 176 GPUs with 40 search threads in the distributed version.
Makes me wonder what we could make with more hardware, that's pretty chump change as far as computing power is concerned.
DeepMind stated that performance got worse when using even more hardware, which is not that uncommon.
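One common way to see why more hardware stops helping is Amdahl's law: if a fraction p of the work parallelizes, speedup on n workers is 1 / ((1 - p) + p / n). The 0.95 figure below is an arbitrary illustration, not a measured number for AlphaGo.

```python
def amdahl_speedup(p, n):
    # p: parallelizable fraction of the work, n: number of workers
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% parallel work, speedup caps near 1/(1-p) = 20x.
for n in (1, 8, 64, 512, 4096):
    print(n, round(amdahl_speedup(0.95, n), 1))
```

Tree search adds its own overheads on top of this (synchronization between search threads, stale statistics), so real engines can even lose strength past some scale.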