For those questioning the "brute force" part, I just checked Wikipedia to get some numbers.
"November 2006 match between Deep Fritz and world chess champion Vladimir Kramnik, the program ran on a personal computer containing two Intel Core 2 Duo CPUs, capable of evaluating only 8 million positions per second, but searching to an average depth of 17 to 18 plies in the middlegame thanks to heuristics"
Deep Fritz, in 2006, was superior to Deep Blue (from 1996!) and ran on a Core 2 Duo. 8 million positions per second is already completely out of human reach. Even in chess, heuristics were the key step, and in Go more is required. Anyway, a potato can evaluate millions of moves while the human is racing against time to check them one by one.
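This is obviously not Deep Fritz's actual code, but the idea the quote points at (cut off hopeless lines early and replace whatever you didn't explore with a heuristic guess) is roughly depth-limited alpha-beta search. A toy sketch on a made-up take-away game:

```python
# Toy sketch of depth-limited alpha-beta search with a heuristic at the horizon.
# NOT Deep Fritz's code -- just an illustration of why pruning plus a heuristic
# evaluation reaches far deeper than naively enumerating every line.

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Generic depth-limited alpha-beta over an abstract two-player game."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state, maximizing)   # heuristic guess instead of searching on
    if maximizing:
        best = float("-inf")
        for m in legal:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False, moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:                # prune: the opponent never allows this line
                break
        return best
    best = float("inf")
    for m in legal:
        best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                   alpha, beta, True, moves, apply_move, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

# Toy game: players alternately take 1-3 tokens; whoever takes the last token wins.
moves = lambda pile: [n for n in (1, 2, 3) if n <= pile]
take = lambda pile, n: pile - n
# Terminal positions get exact scores; cut-off positions get a crude 0 ("don't know").
score = lambda pile, maximizing: 0 if pile > 0 else (-1 if maximizing else 1)

print(alphabeta(10, 6, float("-inf"), float("inf"), True, moves, take, score))  # -> 1
```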
I don't even know why people make such a big deal. It's not a fair match, and I don't think it's supposed to be. Just don't be fooled into swallowing the idea that it's a battle of "minds" or that Google is actually challenging the man. The computer is not even emulating the human thought process. They know they will win; what is being done is a showcase of what they are capable of for the general public. But discussing the fairness of a match between a 1,202-CPU cloud-computing cluster and a human makes no sense.
Your cellphone can brute-force you in a simple arithmetic contest while running YouTube; the point is that, despite that, nobody ever did it with Go before, because "the technology just wasn't there yet". You cannot calculate your way to a win at Go. Simple calculation is not enough for a human OR a computer (at least for now).
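To put numbers on "you cannot calculate your way to a win at Go", here is the usual back-of-the-envelope comparison; the figures are rough, commonly cited estimates, not exact values:

```python
import math

# Back-of-the-envelope arithmetic behind "you cannot calculate your way to a win
# at Go". The figures are rough, commonly cited estimates, not exact values.
branching, game_length = 250, 150                 # typical Go branching factor / game length
tree_log10 = game_length * math.log10(branching)  # log10 of branching**game_length

positions_per_second = 1e8                        # a generously fast evaluator
age_of_universe_s = 4.3e17                        # ~13.8 billion years, in seconds
checkable_log10 = math.log10(positions_per_second * age_of_universe_s)

print(f"Go game tree:                 ~10^{tree_log10:.0f} lines of play")
print(f"Checkable since the Big Bang: ~10^{checkable_log10:.0f} positions")
# ~10^360 vs ~10^26: exhaustive calculation is hopeless; heuristics are mandatory.
```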
Almost all of the games where AI beats humans have symmetric information. I'll be impressed when AI can beat the best players of all time heads-up at asymmetric-information games like poker, Magic: The Gathering, etc. more than 50% of the time.
People questioning the use of the words "brute force" actually know what "brute force" means in computer science. How AlphaGo got very strong is down to its hyperbolic-time-chamber ability to train against itself and other AIs/emulated players very quickly, in a short amount of time. The computing power AlphaGo has been given isn't sufficient to make the program one of the best in the world by simply, stupidly calculating all possible moves on the board.
On March 12 2016 03:47 Nakama wrote: Funny how some people in here think a machine can "play" Go... But I guess it's normal when science and philosophy come close to each other and the scientist tries to be a philosopher, or vice versa...
Well, the important part is that you managed to be pretentious without actually elaborating on your point.
Yes, I have to admit that, but hey, it's the internet, and there is no way to discuss this topic in any reasonable way in a forum like this without being so simplistic that it comes out wrong... I was just baffled by the reaction and arguments of some folks in here when some other dude called the method AlphaGo uses "brute force", so I expressed it =)
And for me the best way to express my own opinion on this topic was to hint that we are talking about a "machine", and therefore words like "smart", "evaluation", "decision", "thinking", etc. can only be meant metaphorically; so in the end AlphaGo uses "brute force" to achieve/mimic what a human being does by thinking.
I am sure there are light-years between trying out all possible options to solve a game or crack a code (what you call brute force) and the method AlphaGo uses, and that's why some of you got mad about it, but if you think about it there is not much difference between those two methods, and I think "brute force" is an accurate way of describing the difference between the method AlphaGo uses and the one Lee Sedol is using.
Your definition of "brute force" seems to be so broad as to encompass all human and machine thinking. When it comes down to it, no one understands how humans make decisions. There's no reason to consider AlphaGo's decision-making process inferior to the human process if it obtains better results in this context.
My point is that AlphaGo has no "decision-making process" that is even suitable to compare to what we as humans do... it's a machine, and if we talk about it as if it "makes decisions", "acts", etc., we mean it in a metaphorical way, or otherwise our speech about it makes no sense.
And the point you are missing is that it really is pretentious to argue semantics about industry jargon as an outsider. I work in aerospace. We have our own acronyms and jargon and code words like every industry does. There are many terms that have a very specific meaning in the aerospace industry.
If I learned anything in reading 5 pages of this thread, it is that the term "brute force" has a specific meaning in the computer industry, a meaning that the people who sound like they work in the industry or follow it closely all use, a meaning that you are pointlessly trying to argue the semantics of.
It might be true that it is an "industry-specific" term, but it is still frustrating to see people calling AlphaGo brute force when that simply doesn't do it justice, because it just isn't "brute force" in a computer-science context (which this is, since we're talking about AI/ML).
In a sense, AlphaGo does have a "decision-making process", since it decides that some moves give a higher probability of victory than others. AlphaGo is basically doing what Lee Sedol's brain is doing, but at a far more precise level, and not to the point of brute force, since that would mean exploring all possible variations, which it just isn't doing. AlphaGo's algorithm is far more intelligent than a simple "brute force" mechanism.
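AlphaGo's real pipeline pairs Monte Carlo tree search with policy and value networks; as a rough sketch of the underlying idea (sample playouts to estimate which moves win more often, rather than enumerating every variation), here is a flat Monte Carlo move chooser on the same toy take-away game as above. It is only an illustration of the principle, not AlphaGo's algorithm:

```python
import random

# Sketch of "sample playouts to estimate which move wins more often", the core
# idea behind Monte Carlo tree search. AlphaGo adds a full search tree plus
# policy/value neural networks on top; this flat version is only the principle.

def legal_moves(pile):
    return [n for n in (1, 2, 3) if n <= pile]

def random_playout(pile, my_turn):
    """Finish the game with uniformly random moves; True if 'we' take the last token."""
    while pile > 0:
        pile -= random.choice(legal_moves(pile))
        if pile == 0:
            return my_turn            # whoever just moved took the last token
        my_turn = not my_turn
    return not my_turn                # empty on entry: the side not to move took it

def choose_move(pile, playouts=5000):
    """Rank moves by sampled win rate -- no exhaustive enumeration of variations."""
    best_move, best_rate = None, -1.0
    for move in legal_moves(pile):
        wins = sum(random_playout(pile - move, my_turn=False) for _ in range(playouts))
        rate = wins / playouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move, best_rate

# From a pile of 10 this usually settles on taking 2 (leaving a multiple of 4),
# which happens to be the game-theoretically correct reply.
print(choose_move(10))
```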
What makes AlphaGo's process so different from a human's?
On March 12 2016 14:10 Wegandi wrote: Almost all of the games where AI beats humans have symmetric information. I'll be impressed when AI can beat the best players of all time heads-up at asymmetric-information games like poker, Magic: The Gathering, etc. more than 50% of the time.
Poker is a game of percentages. It is trivial for a computer to calculate its chance of winning at any single point in the game and react "perfectly" to the information available. Over a sample size large enough to even out the element of chance, a computer will win, no doubt about it.
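For a flavour of what "calculating its chance of winning" means in practice, here's a toy Monte Carlo equity estimate for a single flush draw. The hole cards and flop are assumed for the example; a real poker bot would also need betting, bluffing and opponent modelling, which is the genuinely hard part:

```python
import random

# Toy Monte Carlo equity estimate: the chance of completing a flush draw by the
# river. The hand is assumed, not dealt; this is not a full poker solver.

def estimate_flush_chance(trials=200_000):
    # You hold two hearts and the flop shows two more: 9 hearts ("outs") remain
    # among the 47 unseen cards. Deal turn and river, count how often one lands.
    unseen = ["heart"] * 9 + ["other"] * 38
    hits = 0
    for _ in range(trials):
        turn, river = random.sample(unseen, 2)
        if turn == "heart" or river == "heart":
            hits += 1
    return hits / trials

print(f"Estimated flush-by-river probability: {estimate_flush_chance():.3f}")
# Exact value: 1 - (38*37)/(47*46) ≈ 0.350
```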
"'StarCraft,' I think, is our likely next target," Google Senior Fellow Jeff Dean said at today's Structure Data event in San Francisco.
...
"The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers," Google DeepMind founder Demis Hassabis told The Verge recently.
Meanwhile, games like "StarCraft" and its sequel keep your opponents' moves largely secret, at least until you come into conflict — skilled players watch closely for clues as to their opponents' strategy and try to anticipate their next move.
"You have to keep track of things happening off the screen," Dean says.
It means that Google's DeepMind would face a brand-new challenge: trying to outguess its opponent, and reacting if and when the opponent comes up with something totally crazy. It would test a new set of skills for artificial intelligence.
Though I wouldn't take it as an absolute promise until we get confirmation.
Very amazed that Jeff Dean, of all people, is talking about StarCraft as the next target.
"'StarCraft,' I think, is our likely next target," Google Senior Fellow Jeff Dean said at today's Structure Data event in San Francisco.
...
"The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers," Google DeepMind founder Demis Hassabis told The Verge recently.
Meanwhile, games like "StarCraft" and its sequel keep your opponents' moves largely secret, at least until you come in to conflict — skilled players watch closely for clues as to their opponents' strategy and try to anticipate their next move.
"You have to keep track of things happening off the screen," Dean says.
It means that Google's DeepMind would have a brand-new challenge of trying to outguess their opponent, and react if and when they come up with something totally crazy. It would test a new set of skills for artificial intelligence.
Though I wouldn't take it as an absolute promise until we get confirmation.
very amazed that jeff dean of all people is talking about starcraft as the next target.
Google trying to destroy Korean esports???
Give Flash a couple of months to get back to form, then a Bo5 against AlphaGo. Yes, please.
"'StarCraft,' I think, is our likely next target," Google Senior Fellow Jeff Dean said at today's Structure Data event in San Francisco.
...
"The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers," Google DeepMind founder Demis Hassabis told The Verge recently.
Meanwhile, games like "StarCraft" and its sequel keep your opponents' moves largely secret, at least until you come in to conflict — skilled players watch closely for clues as to their opponents' strategy and try to anticipate their next move.
"You have to keep track of things happening off the screen," Dean says.
It means that Google's DeepMind would have a brand-new challenge of trying to outguess their opponent, and react if and when they come up with something totally crazy. It would test a new set of skills for artificial intelligence.
Though I wouldn't take it as an absolute promise until we get confirmation.
very amazed that jeff dean of all people is talking about starcraft as the next target.
google trying to destroy korean esports???
Give Flash couple of months to get back to form and BO5 against AlphaGo. Yes please.
Who am I kidding. Even EffOrt or Bisu would be enough.
To be serious though, once DeepMind gets over the initial hurdles of limited information and studying build orders, it won't even be fair in either SC2 or BW because of the perfect-micro aspect. They'd have to give the AI a lot of handicaps.
"'StarCraft,' I think, is our likely next target," Google Senior Fellow Jeff Dean said at today's Structure Data event in San Francisco.
...
"The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers," Google DeepMind founder Demis Hassabis told The Verge recently.
Meanwhile, games like "StarCraft" and its sequel keep your opponents' moves largely secret, at least until you come in to conflict — skilled players watch closely for clues as to their opponents' strategy and try to anticipate their next move.
"You have to keep track of things happening off the screen," Dean says.
It means that Google's DeepMind would have a brand-new challenge of trying to outguess their opponent, and react if and when they come up with something totally crazy. It would test a new set of skills for artificial intelligence.
Though I wouldn't take it as an absolute promise until we get confirmation.
very amazed that jeff dean of all people is talking about starcraft as the next target.
google trying to destroy korean esports???
Give Flash couple of months to get back to form and BO5 against AlphaGo. Yes please.
Who am i kidding. Even EffOrt or Bisu would be enough
to be serious though, once deepmind gets over the initial hurdle of limited information and studying build orders it wont even be fair in either SC2 or BW because of the perfect micro aspect. theyd have to give a lot of handicaps to the AI
Very true. It always reminds me of the Automaton2000 videos about marine-split micro. Regardless, I think it would be entertaining to see what would happen.
"'StarCraft,' I think, is our likely next target," Google Senior Fellow Jeff Dean said at today's Structure Data event in San Francisco.
...
"The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers," Google DeepMind founder Demis Hassabis told The Verge recently.
Meanwhile, games like "StarCraft" and its sequel keep your opponents' moves largely secret, at least until you come in to conflict — skilled players watch closely for clues as to their opponents' strategy and try to anticipate their next move.
"You have to keep track of things happening off the screen," Dean says.
It means that Google's DeepMind would have a brand-new challenge of trying to outguess their opponent, and react if and when they come up with something totally crazy. It would test a new set of skills for artificial intelligence.
Though I wouldn't take it as an absolute promise until we get confirmation.
very amazed that jeff dean of all people is talking about starcraft as the next target.
google trying to destroy korean esports???
Give Flash couple of months to get back to form and BO5 against AlphaGo. Yes please.
Who am i kidding. Even EffOrt or Bisu would be enough
to be serious though, once deepmind gets over the initial hurdle of limited information and studying build orders it wont even be fair in either SC2 or BW because of the perfect micro aspect. theyd have to give a lot of handicaps to the AI
Very true. Always reminds me of the Automaton2000 videos about the marine split micro. Regardless, i think it would be entertaining to see what would happen.
"'StarCraft,' I think, is our likely next target," Google Senior Fellow Jeff Dean said at today's Structure Data event in San Francisco.
...
"The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers," Google DeepMind founder Demis Hassabis told The Verge recently.
Meanwhile, games like "StarCraft" and its sequel keep your opponents' moves largely secret, at least until you come in to conflict — skilled players watch closely for clues as to their opponents' strategy and try to anticipate their next move.
"You have to keep track of things happening off the screen," Dean says.
It means that Google's DeepMind would have a brand-new challenge of trying to outguess their opponent, and react if and when they come up with something totally crazy. It would test a new set of skills for artificial intelligence.
Though I wouldn't take it as an absolute promise until we get confirmation.
very amazed that jeff dean of all people is talking about starcraft as the next target.
google trying to destroy korean esports???
Give Flash couple of months to get back to form and BO5 against AlphaGo. Yes please.
Who am i kidding. Even EffOrt or Bisu would be enough
to be serious though, once deepmind gets over the initial hurdle of limited information and studying build orders it wont even be fair in either SC2 or BW because of the perfect micro aspect. theyd have to give a lot of handicaps to the AI
Very true. Always reminds me of the Automaton2000 videos about the marine split micro. Regardless, i think it would be entertaining to see what would happen.
What I'd love to see is whether the AI can find different build orders and create new strategies, e.g. like the fast corsair strategies in PvZ.
Definitely, and this would probably end up happening too. I noticed a comment on the Reddit thread about AlphaGo's third victory that sums this up well, I think:
Just remember, this is not the end of Go. As it was in chess, computers will gradually go from our nemesis to part of Go culture, assisting us and enhancing the game for human play.
"'StarCraft,' I think, is our likely next target," Google Senior Fellow Jeff Dean said at today's Structure Data event in San Francisco.
...
"The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers," Google DeepMind founder Demis Hassabis told The Verge recently.
Meanwhile, games like "StarCraft" and its sequel keep your opponents' moves largely secret, at least until you come in to conflict — skilled players watch closely for clues as to their opponents' strategy and try to anticipate their next move.
"You have to keep track of things happening off the screen," Dean says.
It means that Google's DeepMind would have a brand-new challenge of trying to outguess their opponent, and react if and when they come up with something totally crazy. It would test a new set of skills for artificial intelligence.
Though I wouldn't take it as an absolute promise until we get confirmation.
very amazed that jeff dean of all people is talking about starcraft as the next target.
google trying to destroy korean esports???
Give Flash couple of months to get back to form and BO5 against AlphaGo. Yes please.
Who am i kidding. Even EffOrt or Bisu would be enough
to be serious though, once deepmind gets over the initial hurdle of limited information and studying build orders it wont even be fair in either SC2 or BW because of the perfect micro aspect. theyd have to give a lot of handicaps to the AI
The one problem with the DeepMind-vs-SC-pro idea is that DeepMind should be limited to the input speed of a keyboard and mouse. It's incredibly dishonest to allow the computer to perform tasks that a player simply can't because of the interface. That's not really a competition at that point; it's simply allowing the computer to abuse parts of the game engine the human player doesn't have access to.
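If you did want to enforce that, a token-bucket-style actions-per-minute cap in the match harness would be one way. Everything named below is hypothetical, not any real StarCraft interface, just a sketch of the idea:

```python
import time

# Hypothetical sketch of capping a bot to a human-like APM with a token bucket.
# ApmLimiter and its numbers are made up for illustration; this is not any real
# StarCraft API, just the kind of fair-play harness the post is asking for.

class ApmLimiter:
    def __init__(self, max_apm=300, burst=10):
        self.rate = max_apm / 60.0       # allowed actions per second on average
        self.capacity = burst            # short bursts allowed, like human spikes
        self.tokens = float(burst)
        self.last = time.monotonic()

    def try_act(self):
        """True if one action may be issued now; False if it must be dropped or delayed."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

limiter = ApmLimiter(max_apm=300)
issued = sum(limiter.try_act() for _ in range(1000))  # a bot spamming 1000 commands at once
print(f"Commands allowed through: {issued}")          # roughly the burst size, not 1000
```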
But there'd also be some fairly serious technical issues to work through. It's one thing to have access to a direct API for SC:BW; it's wholly another to find a way to play a match near-instantly, which is what would be required for it to constantly run the simulations it needs to "learn" the game.
Lastly, on the Go matches: considering they're throwing a parallelized supercomputer at the problem, even if Moore's Law holds for the next decade and makes that processing power available to the home user, there's still the tens of millions in engineering money that went into the code to make this work. That's never going to be common.
"'StarCraft,' I think, is our likely next target," Google Senior Fellow Jeff Dean said at today's Structure Data event in San Francisco.
...
"The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers," Google DeepMind founder Demis Hassabis told The Verge recently.
Meanwhile, games like "StarCraft" and its sequel keep your opponents' moves largely secret, at least until you come in to conflict — skilled players watch closely for clues as to their opponents' strategy and try to anticipate their next move.
"You have to keep track of things happening off the screen," Dean says.
It means that Google's DeepMind would have a brand-new challenge of trying to outguess their opponent, and react if and when they come up with something totally crazy. It would test a new set of skills for artificial intelligence.
Though I wouldn't take it as an absolute promise until we get confirmation.
very amazed that jeff dean of all people is talking about starcraft as the next target.
google trying to destroy korean esports???
Give Flash couple of months to get back to form and BO5 against AlphaGo. Yes please.
Who am i kidding. Even EffOrt or Bisu would be enough
to be serious though, once deepmind gets over the initial hurdle of limited information and studying build orders it wont even be fair in either SC2 or BW because of the perfect micro aspect. theyd have to give a lot of handicaps to the AI
The one problem with the DeepMind vs SC pro thought is that DeepMind should be required to be limited to the input speed of the keyboard and mouse. It's incredibly dishonest to allow the computer to perform tasks that a Player simply isn't allowed to because of the interface. That's not really a competition at that point, it's simply allowing the computer to abuse parts of the game engine the human player doesn't have access to.
But there'd also be some fairly serious technical issues to work through. It's one thing to have access to the direct API of SC:BW, it's wholly another thing to find a way to be able to play a match instantly. That's what would be required for it to constantly run simulations it would need to "learn" the game.
Lastly, on the Go matches, considering they're throwing a parallelized super-computer at the problem, even if Moore's Law holds for the next decade, thus making the processing power available to the home user, there's still the issue of 10s of millions in programming money that went into the code to make this work. That's never going to be common.
Google would have to work something out with Blizzard to do it legally anyway, but if they really wanted to crack open BW to suit their needs, they could certainly do it.
And lastly, Google is making a huge advertisement for the power of cloud computing. Moore's Law will not hold up, at least as things stand; however, companies have found it lucrative to sell processing power through cloud computing. Perhaps one day big enough server farms and more efficient parallelism will make AlphaGo's improvements available to the average person.