Go - AlphaGo (Google) vs Lee Sedol (world champ) - Page 4
MyLovelyLurker
France756 Posts
If you want to find out how this was first exploited in chess, check out Matthew Lai's 'Giraffe' chess engine from September 2015; it's on arXiv.
stuchiu
Fiddler's Green42661 Posts
It's a narrative thing. People want the computer to be about brute force.
trulojucreathrma.com
United States327 Posts
On March 12 2016 00:42 stuchiu wrote: It's a narrative thing. People want the computer to be about brute force.
No. It is about context; it is not a technical term. Apparently, in cryptography, they call anything that's not exhaustive 'brute force'. In the physical sciences, we call everything that's computationally expensive but simple to implement 'brute force'. We can code for years and get all the laws of physics right, then get an answer at little computational cost. Or we can implement something simple and just run it for a relatively long time. Of course, calculating something from first principles is impossible; you fail at that as soon as you get to the level of a water molecule.
Hesmyrr
Canada5776 Posts
http://media.daum.net/m/channel/view/media/20160311060203778

Loooooooooooool at all the salt.
PhoenixVoid
Canada32737 Posts
On March 12 2016 02:52 Hesmyrr wrote: http://media.daum.net/m/channel/view/media/20160311060203778 Loooooooooooool at all the salt.
Hard to see the salt when it's in a language few people on this website can understand.
Hesmyrr
Canada5776 Posts
He argues that since AlphaGo is connected to the internet, the AI can basically overpower a human through sheer force of numbers. The fact that AlphaGo is using cloud computing goes directly against the principles of Baduk, where it's supposed to be a fair 1v1 with no external advice. "Google says AlphaGo does not use a brute force algorithm, but it's receiving advice from another program that is using brute force. This is blatant cheating. Because AlphaGo can run thousands of AlphaGos at the same time over the internet, and can add more computers to its resource network when running out of time, it's impossible for it to lose on time, unlike Lee Sedol," says this lawyer, adding that "Google offered a million dollars, but if Google wins, it will make much higher profits from being the frontrunner in AI technology." He concludes that Google should publicly apologize to Lee Sedol, Fan Hui, and the entire Baduk community, since the company is deceiving them with an AlphaGo that does not truly understand how Baduk works and cannot be considered a true AI.

The true gold is the 1000+ netizen comments underneath, which unambiguously blame Google for being a lying piece of shit, insist that what AlphaGo is doing is the same thing as bringing a textbook to an exam, and demand that it be disconnected from the internet for the rest of the match so it becomes Lee Sedol vs. a one-laptop program. It's beautiful.
Nakama
Germany584 Posts
Funny how some ppl in here think a machine can "play" GO......
But I guess it's normal when science and philosophy come close to each other and the scientist tries to be a philosopher or vice versa....
Chocolate
United States2350 Posts
On March 12 2016 03:47 Nakama wrote: Funny how some ppl in here think a machine can "play" GO...... But I guess it's normal when science and philosophy come close to each other and the scientist tries to be a philosopher or vice versa....
Well, the important part is that you managed to be pretentious without actually elaborating on your point.
mierin
United States4943 Posts
Gorsameth
Netherlands21336 Posts
The difference is that, as a computer, AlphaGo can carry out that same process far faster and more extensively than a human could, but the basic principle is the same.
Nakama
Germany584 Posts
On March 12 2016 05:26 Chocolate wrote: Well, the important part is that you managed to be pretentious without actually elaborating on your point.
Yes, I have to admit that, but hey, it's the internet, and there is no way to discuss this topic in any reasonable way in a forum like this without being so simplistic that it comes out wrong... and I was just baffled by the reaction and arguments of some folks in here when some other dude called the method AlphaGo uses "brute force", so I expressed it =)

And for me the best way to show my own opinion on this topic was to give the hint that we are talking about a "machine", and therefore words like "smart", "evaluation", "decision", "thinking" etc. can only be meant metaphorically, so in the end AlphaGo uses "brute force" to achieve/mimic what a human being does by thinking. I am sure there are light-years between trying out all possible options to solve a game or code (what you call brute force) and the method AlphaGo uses, and that's why some of you got mad about it, but if you think about it there is not much difference between those two methods, and I think brute force is an accurate way of describing the difference between the method AlphaGo uses and the one Lee Sedol is using.
Goolpsy
Denmark301 Posts
Glacierz
United States1244 Posts
http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html

For those who are interested in AI and statistics, it is definitely worth reading.

First and foremost, it's worth pointing out that, according to the paper, their setup only had 40 search threads, 1,202 CPUs and 176 GPUs. That is a sizable cluster, but nowhere near a supercomputer by today's standards.

One of the greatest challenges in Go is how to evaluate a given board state. The number of potential moves is huge, and it is incredibly difficult to assign a score to an arbitrary board position. The board gets easier to evaluate the deeper you go down the tree (fewer moves are possible towards the endgame), but the early/mid game, where the decision tree has many branches, makes any brute force algorithm unfeasible.

In layman's terms, neural networks allowed the program to develop a strong ability to predict moves by "guessing". This ability was first trained on a large library of recorded professional games and then reinforced by playing games against itself, assigning a probabilistic score to each simulated situation. This is how the program "learns" on its own. The use of these learned probabilities is what differentiates the program from the brute force engines already out there.

In a live game it evaluates only the moves with high values/payoffs; this reduces the number of branches on the search tree and allows the program to analyze them to much greater depth, which ultimately results in board values that are much more accurate. I think this process is very similar to what a human would do, which is to focus on only a handful of key possibilities. A brute force approach would have been to analyze all possible moves to a much shallower depth, resulting in less reliable evaluations. The key here is that the program carries what it learned in its "training" games into live play, so it spends much less time evaluating situations it has effectively seen before. The value network alone was trained on 30 million mini-batches of 32 positions, using 50 GPUs, for one week.

AlphaGo incorporates so many modern AI techniques, and the fact that it works this well is truly revolutionary.
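To make the pruning idea concrete, here is a minimal Python sketch of a policy-and-value-guided search, written under loose assumptions rather than as DeepMind's actual code: `policy_network`, `value_network`, `legal_moves` and `apply_move` are hypothetical placeholders (the real policy and value networks are deep convolutional nets, and the real search is Monte Carlo tree search with rollouts). The point it illustrates is the one above: instead of visiting every legal move, the search expands only the few moves the policy rates highly and lets a value estimate stand in for playing each line out to the end.

```python
import math
import random

random.seed(0)

# Toy stand-ins for the two learned networks (illustrative only):
def policy_network(state, moves):
    """Return a prior probability for each candidate move (random noise here)."""
    weights = [random.random() for _ in moves]
    total = sum(weights)
    return [w / total for w in weights]

def value_network(state):
    """Return an estimated win probability for the side to move (random here)."""
    return random.random()

def search(state, legal_moves, apply_move, depth, top_k=3):
    """Depth-limited search that expands only the top_k policy moves and
    scores leaf positions with the value network instead of exhaustive play-outs."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return value_network(state)
    priors = policy_network(state, moves)
    # Prune the tree: keep only the few moves the policy considers promising.
    ranked = sorted(zip(priors, moves), reverse=True)[:top_k]
    best = -math.inf
    for _prior, move in ranked:
        child = apply_move(state, move)
        # The opponent moves next, so their win probability is our loss.
        best = max(best, 1.0 - search(child, legal_moves, apply_move, depth - 1, top_k))
    return best

# Tiny demo on a toy 3x3 "board" where a state is just the tuple of points played.
if __name__ == "__main__":
    def legal_moves(state):
        return [p for p in range(9) if p not in state]

    def apply_move(state, move):
        return state + (move,)

    print("estimated win probability:", search((), legal_moves, apply_move, depth=3))
```

Even in this toy form you can see the trade-off the post describes: the branching factor drops from "all legal moves" to top_k, so the same compute budget buys a deeper, more reliable look at the lines that actually matter.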
ZigguratOfUr
Iraq16955 Posts
On March 12 2016 06:24 Nakama wrote: Yes, I have to admit that, but hey, it's the internet, and there is no way to discuss this topic in any reasonable way in a forum like this without being so simplistic that it comes out wrong... and I was just baffled by the reaction and arguments of some folks in here when some other dude called the method AlphaGo uses "brute force", so I expressed it =) And for me the best way to show my own opinion on this topic was to give the hint that we are talking about a "machine", and therefore words like "smart", "evaluation", "decision", "thinking" etc. can only be meant metaphorically, so in the end AlphaGo uses "brute force" to achieve/mimic what a human being does by thinking. I am sure there are light-years between trying out all possible options to solve a game or code (what you call brute force) and the method AlphaGo uses, and that's why some of you got mad about it, but if you think about it there is not much difference between those two methods, and I think brute force is an accurate way of describing the difference between the method AlphaGo uses and the one Lee Sedol is using.
Your definition of "brute force" seems to be so broad as to encompass all of human and machine thinking. When it comes down to it, no one understands how humans make decisions. There's no reason to consider AlphaGo's decision-making process inferior to the human process if it can obtain better results in this context.
mierin
United States4943 Posts
On March 12 2016 07:29 ZigguratOfUr wrote: Your definition of "brute force" seems to be so broad as to encompass all of human and machine thinking. When it comes down to it, no one understands how humans make decisions. There's no reason to consider AlphaGo's decision-making process inferior to the human process if it can obtain better results in this context.
I feel the same way, but it also makes me respect human brains even more. Everything in Go happens in one dimension, even though it's on a 2-D board; stones can only move in a 1-D fashion. In StarCraft, every unit on the map is represented in 2-D and has those degrees of freedom. So what if a computer can beat us at Go? It's a revolution, but I don't know enough about computers, unfortunately, to say how much harder it is for this type of AI to consider moves in more than one dimension.
Petrosidius
United States10 Posts
How exactly is it in one dimension? The board is two dimensions, and stones don't move at all. They are just placed in an (x, y) location each turn and potentially removed if captured. Units in StarCraft also exist in an (x, y) coordinate system and are removed if their HP reaches 0. Obviously StarCraft is more complex in that the units move and fire projectiles and such, but it's not a dimension higher than Go.
angrybacon
United States98 Posts
On March 12 2016 09:30 Petrosidius wrote: How exactly is it in one dimension? The board is two dimensions, and stones don't move at all. They are just placed in an (x, y) location each turn and potentially removed if captured. Units in StarCraft also exist in an (x, y) coordinate system and are removed if their HP reaches 0. Obviously StarCraft is more complex in that the units move and fire projectiles and such, but it's not a dimension higher than Go.
You can consider the Go board to be one-dimensional because the two dimensions are both discrete. You can take the 19 rows, place them end to end, and reduce them to a computationally equivalent single row of 361 points. This is called vectorization (or flattening), and it only applies when the dimensions are discrete and finite.
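A small Python sketch of the flattening described above (the encoding and names are just illustrative assumptions): each intersection (row, col) of the 19x19 grid maps to a single index in a length-361 vector, and the mapping is invertible, so the flat row carries exactly the same information as the 2-D board.

```python
BOARD_SIZE = 19

def to_index(row, col):
    """Map a 2-D intersection (row, col) to its position in the flat vector."""
    return row * BOARD_SIZE + col

def to_coords(index):
    """Recover (row, col) from the flat index."""
    return divmod(index, BOARD_SIZE)

# A whole position as one vector of 361 entries: 0 empty, 1 black, -1 white.
board = [0] * (BOARD_SIZE * BOARD_SIZE)
board[to_index(3, 15)] = 1    # a black stone at row 3, column 15
board[to_index(15, 3)] = -1   # a white stone at row 15, column 3

assert to_coords(to_index(3, 15)) == (3, 15)   # the mapping round-trips
```

Whether you store the position as 19x19 or as a single row of 361 makes no difference to a program, which is the sense in which the two representations are computationally equivalent.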
Nakama
Germany584 Posts
On March 12 2016 07:29 ZigguratOfUr wrote: Your definition of "brute force" seems to be so broad as to encompass all of human and machine thinking. When it comes down to it, no one understands how humans make decisions. There's no reason to consider AlphaGo's decision-making process inferior to the human process if it can obtain better results in this context.
My point is that AlphaGo has no "decision-making process" that is even comparable to what we as humans do... it's a machine, and if we talk about it as if it "makes decisions", "acts", etc., we mean it in a metaphorical way, or otherwise our speech about it makes no sense.
Petrosidius
United States10 Posts
On March 12 2016 10:11 angrybacon wrote: You can consider the Go board to be one-dimensional because the two dimensions are both discrete. You can take the 19 rows, place them end to end, and reduce them to a computationally equivalent single row of 361 points. This is called vectorization (or flattening), and it only applies when the dimensions are discrete and finite.
Every StarCraft map is also finite and discrete. It may be much bigger than a Go board, but it's finite, and there is a minimum x and y distance.
PhoenixVoid
Canada32737 Posts
http://uk.businessinsider.com/google-deepmind-could-play-starcraft-2016-3

"'StarCraft,' I think, is our likely next target," Google Senior Fellow Jeff Dean said at today's Structure Data event in San Francisco.
...
"The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers," Google DeepMind founder Demis Hassabis told The Verge recently. Meanwhile, games like "StarCraft" and its sequel keep your opponents' moves largely secret, at least until you come into conflict — skilled players watch closely for clues as to their opponents' strategy and try to anticipate their next move. "You have to keep track of things happening off the screen," Dean says. It means that Google's DeepMind would have a brand-new challenge of trying to outguess their opponent, and react if and when they come up with something totally crazy. It would test a new set of skills for artificial intelligence.

Though I wouldn't take it as an absolute promise until we get confirmation.