
Go - AlphaGo (Google) vs Lee Sedol (world champ) - Page 2
Andre
Slovenia3515 Posts
On March 10 2016 04:58 nepeta wrote: Idk about calvinball, but after the latest Brood War AI conference, Brood War was estimated to take another 5-20 years to be 'humanly solved'. We've sent them an invitation though, just in case their brute force methods won't work for Brood War. Really? That's quite fascinating. What level do the current AIs play at? Years ago when I watched, I think there was an AI that got to D+ by cheesing. But then again, iCCup rankings aren't what they used to be, so I assume it's easier now. As for Go, I only watched a little because I don't understand the game much. But is it true that some of the moves AlphaGo made in the midgame were considered 'bad', yet they were instrumental in winning it the game later on?
nimbim
Germany983 Posts
If Google poured resources into developing a StarCraft AI, I think it could be as successful as AlphaGo, but it would still take some time to pull that off.
trulojucreathrma.com
United States327 Posts
On March 10 2016 04:58 nepeta wrote: Idk about calvinball, but after the latest Brood War AI conference, Brood War was estimated to take another 5-20 years to be 'humanly solved'. We've sent them an invitation though, just in case their brute force methods won't work for Brood War. It is not an issue of time (or processing power). It is an issue of money. And the money is a function of how much promotion an AI beating a top human at Brood War would yield (which is near zero). You can brute force/Monte Carlo Go. You can't do the same for Brood War. Well, you can try, but it makes even less sense. But I guess in a way you can brute force anything eventually. Just record the mouse movements of all top SC games ever played and data mine them.
chocorush
694 Posts
On March 11 2016 04:07 trulojucreathrma.com wrote: It is not an issue of time (or processing power). It is an issue of money. And the money is a function of how much promotion an AI beating a top human at Brood War would yield (which is near zero). You can brute force/Monte Carlo Go. You can't do the same for Brood War. Well, you can try, but it makes even less sense. But I guess in a way you can brute force anything eventually. Just record the mouse movements of all top SC games ever played and data mine them. I think computers would get pretty confused by all the click spamming that pros do just to stay warmed up. There's just too much noise to reasonably understand what pros are doing just by mining their inputs.
trulojucreathrma.com
United States327 Posts
If you take tons of programmers and set up some good AI, you don't need to do crazy calculations.
sertas
Sweden878 Posts
Why are people saying that the computer uses brute force? It's not even near using brute force in chess or Go. It uses very smart parameters for deciding on what move to make, making its calculations much more accurate than brute force. Computers would still get absolutely destroyed in chess if they were using brute force, but they don't.
PhoenixVoid
Canada32737 Posts
AlphaGo pulled off a move that hasn't been tried despite hundreds of years of progress in the game. Quite fascinating how quickly AI has developed: from a game where people believed it would be incredibly difficult to beat even a medium-tier player, to now stumping a 9-dan pro.
{CC}StealthBlue
United States41117 Posts
Upon learning that Google DeepMind, Alphabet's artificial intelligence wing, won the first of five matches against the 33-year-old grandmaster of the ancient Chinese game Go with its AlphaGo program, Musk sent his congratulations via Twitter to the A.I. company, in which he was an early investor before Google bought it in 2014. Go champion Lee Sedol predicted he'd sweep the machine 5-0 in a Tyson-style knockout, but had to resign the first round following a three-and-a-half-hour stand-off. There are four more rounds to go, but this is the first time a computer program has ever been able to best such a skilled player at Go, a game conceived roughly 3,000 years ago and considered much harder to master than chess. If this comes off as a sign of the impending robot apocalypse, don't fret: Musk is worried about this too. While the billionaire tech mogul was quick to give praise, tweeting, "Experts in the field thought AI was 10 years away from achieving this," he's also highly concerned about the pitfalls of A.I. and the dystopian future it could breed. Source
Yacobs
United States846 Posts
On March 11 2016 05:05 PhoenixVoid wrote: AlphaGo pulled off a move that hasn't been tried despite hundreds of years of progress in the game. Quite fascinating how quickly AI has developed: from a game where people believed it would be incredibly difficult to beat even a medium-tier player, to now stumping a 9-dan pro. Details please?
ejozl
Denmark3326 Posts
How big is the advantage of being the starting player in Go?
Uvantak
Uruguay1381 Posts
On March 11 2016 09:02 ejozl wrote: How big is the advantage of being the starting player in Go? Pretty big, but Go is not chess: you are not really playing to capture enemy pieces but to control territory (points). At the end of the game, once both players agree it is over, they count their points, and White gets 7.5 extra points to offset the inherent advantage of Black moving first (these extra points are called komi).
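Komi is just arithmetic applied at scoring time. A toy sketch of the idea (the point totals are made up; 7.5 is the komi that was used in this match):

```python
# Toy illustration of komi: White receives extra points at scoring time
# to offset Black's first-move advantage.
KOMI = 7.5  # komi used in the AlphaGo vs Lee Sedol match

def winner(black_points, white_points, komi=KOMI):
    """Return the winner after adding komi to White's score."""
    white_total = white_points + komi
    return "Black" if black_points > white_total else "White"

# Black controls 184 points, White 177: White still wins after komi,
# since 177 + 7.5 = 184.5 beats 184. A fractional komi also rules out ties.
print(winner(184, 177))  # prints "White"
```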
trulojucreathrma.com
United States327 Posts
On March 11 2016 04:52 sertas wrote: Why are people saying that the computer uses brute force? It's not even near using brute force in chess or Go. It uses very smart parameters for deciding on what move to make, making its calculations much more accurate than brute force. Computers would still get absolutely destroyed in chess if they were using brute force, but they don't. I actually published a paper where I used Monte Carlo simulation for molecular modeling. I think I know what it is. In layman's terms it can be called brute forcing, especially if you compare human thinking to computer thinking. Imagine a human trying to run a Monte Carlo algorithm; you'd go insane. I can see that if the possibility space of Go is really that big, humans prefer a small segment of it. If the AI finds a region of that space alien to human players, but one that is solid in itself, whether by accident or by design, the human player will suddenly not have their usual game sense. I don't know if that is how Go works, but I can see how that may be possible.
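For what it's worth, "Monte Carlo" here just means estimating a quantity by averaging many random samples, e.g. random playouts of a position. A minimal sketch, with a hypothetical playout function standing in for actually playing a game out:

```python
import random

# Minimal sketch of Monte Carlo evaluation: estimate a win rate by
# averaging many random playouts. `simulate_playout` is a hypothetical
# stand-in for playing a position out to the end with random moves.

def estimate_win_rate(simulate_playout, n_playouts=10_000):
    """Average the results of many random playouts (True = win, False = loss)."""
    wins = sum(simulate_playout() for _ in range(n_playouts))
    return wins / n_playouts

# Toy playout: pretend the position is won about 60% of the time.
random.seed(0)
print(estimate_win_rate(lambda: random.random() < 0.6))
```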
raNazUra
United States10 Posts
Short summary of how AlphaGo works: it learns a deep neural net that takes board states as input and outputs a predicted move, trained on tens of thousands of recorded professional Go games. It actually learns two of these: one bigger and slower but more accurate, and one that is faster, which it can use for the playouts of MCTS. With these (and a bit of retraining to incentivize winning rather than accurate prediction), it then plays itself millions of times to generate a huge amount of data mapping board states to wins or losses, and learns another deep neural net that predicts the value of a board state. These two networks were respectively called the "policy net" and the "value net" by the DeepMind guy in the interview yesterday. All of that is trained offline before a game.

Using those two networks, AlphaGo does game tree search (MCTS) during a game to decide the best move. But it prunes the game tree using its policy net, so it only explores moves that an expert would be likely to play, based on what it has learned. That's why it's not really brute force in the way the term is usually applied: it only thinks about reasonable moves. The final move selected is a balance between the results of the game tree search and the value net's evaluation of the position after the move.

There are clearly a lot more complications than this, but that's the base approach. At a talk at a recent AI conference I was at, the CEO of DeepMind said that they actually evaluated fewer game states in their games against Fan Hui than Deep Blue did, and that was on 20-year-old technology. That means their search is most definitely pretty intelligent, and not brute forcing the game.
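A sketch of that search skeleton, under heavy assumptions: the "game" is a toy Nim variant (take 1-3 stones, taking the last stone wins), and the policy and value functions are simple heuristics standing in for trained networks. What mirrors the description above is the loop: select children by a prior-weighted score, expand with the policy's moves, evaluate the leaf with the value function, and back the result up with alternating sign.

```python
import math

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def policy_prior(stones):
    """Stand-in policy net: a prior over moves (uniform here; a real
    policy net would concentrate mass on expert-like moves)."""
    moves = legal_moves(stones)
    return {m: 1.0 / len(moves) for m in moves}

def value_estimate(stones):
    """Stand-in value net: value for the player to move
    (in this Nim variant, multiples of 4 are lost positions)."""
    return -1.0 if stones % 4 == 0 else 1.0

class Node:
    def __init__(self, stones, prior):
        self.stones, self.prior = stones, prior
        self.children = {}              # move -> Node
        self.visits, self.total_value = 0, 0.0

def select_child(node, c_puct=1.5):
    """PUCT-style selection: value plus prior-weighted exploration bonus.
    A child's stored value is from its own player's view, so negate it."""
    def score(move, child):
        q = -child.total_value / child.visits if child.visits else 0.0
        u = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return q + u
    return max(node.children.items(), key=lambda mc: score(*mc))

def search(root_stones, n_simulations=400):
    root = Node(root_stones, prior=1.0)
    for _ in range(n_simulations):
        node, path = root, [root]
        while node.children:            # selection: descend to a leaf
            move, node = select_child(node)
            path.append(node)
        if node.stones > 0:             # expansion with the policy's moves
            for move, p in policy_prior(node.stones).items():
                node.children[move] = Node(node.stones - move, p)
            leaf_value = value_estimate(node.stones)
        else:                           # terminal: player to move has lost
            leaf_value = -1.0
        v = leaf_value                  # backup, flipping sign each ply
        for n in reversed(path):
            n.total_value += v
            n.visits += 1
            v = -v
    # Play the most-visited move, as AlphaGo does.
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]

# From 5 stones the winning move is to take 1, leaving a multiple of 4.
print(search(5))
```

This is not DeepMind's code, just the generic shape of policy-guided MCTS; the real system also mixes in fast-rollout results alongside the value net when scoring leaves.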
trulojucreathrma.com
United States327 Posts
Btw, brute force doesn't mean it is stupid. As long as you randomly iterate, I think you can call it brute forcing.