|
On November 09 2016 04:59 LegalLord wrote:
On November 09 2016 04:57 Derpmallow wrote: The amount of discussion here of APM and how to make the showmatch between man and machine fair is sort of baffling. This shouldn't need to be said, but I'll say it anyway: Deepmind's goal is not to make the perfect Starcraft bot. Their goal is to find new algorithms and techniques to apply to the quest for Artificial General Intelligence - that is, a robot that can fight you in Starcraft, do your taxes and discuss your marital issues without changing the underlying code.
This means that worries about it being too mechanically powerful for the showmatch are silly because, again, that's not their goal. They don't need to put millions of dollars and thousands of man-hours into making an AI that can micro a pro to death. Their task is to find a clever way of beating people, since that helps their central goal. And yes, this also means the BW vs. SC2 talk is silly, because they aren't trying to prove that they can create the best Starcraft bot ever. The game is purely a testing ground to see how effective they can make the AI at thinking, planning and executing. There are a lot of cool AI projects for Brood War, yes, but they are not going to base their AI on those because, again, they're going for general. Neural-network AIs for Brood War, while super cool, are very narrow in what they can do. They play Starcraft. Deepmind wants to do a lot more than play Starcraft and other games.
The entire issue of speed is one of trivializing the problem. It's like building a football/soccer robot that is just a giant automated tank that will crush its human opposition. Yes, it will win, but it didn't do so by virtue of its AI but rather by virtue of an interface that gives it an unfair advantage.

Yes, but the reality is that the showmatch is just that - a showmatch. I'm pretty sure they're aware that winning purely through mechanics is boring, but they're not going to spend hours and hours figuring out the exact number of actions the average human is capable of in a given period of time in order to make the games perfectly fair. Odds are, they're going to do the same thing they did with Go and focus more on the AI playing against itself to see how effective the learning system is. Would it be nice if it became the most adept player of Starcraft 2, artificial or otherwise? Yeah, that'd be really cool. But that's not why they're doing this. This is about the journey, not the destination.
|
On November 09 2016 05:04 Derpmallow wrote: Yes, but the reality is that the showmatch is just that - a showmatch. [...] This is about the journey, not the destination.

Winning because you have 30000 APM is akin to winning because you have tanks against humans. Asymmetrical warfare is cheating.
Reasonable human-range AI limits on APM make perfect sense.
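For what it's worth, enforcing a human-range cap is mechanically trivial; the hard part is deciding the number. Here's a toy sketch of what an APM limiter in the bot's harness could look like - purely illustrative, nothing DeepMind has described, and the class name and the 300 APM figure are my own assumptions:

```python
import time
from collections import deque

class ApmLimiter:
    """Sliding-window action limiter (hypothetical harness code, not DeepMind's).

    Allows at most max_apm actions in any rolling 60-second window; anything
    beyond that budget is refused and the bot has to sit on its hands.
    """

    def __init__(self, max_apm=300):
        self.max_apm = max_apm
        self.timestamps = deque()

    def try_act(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop actions that have fallen out of the 60-second window.
        while self.timestamps and now - self.timestamps[0] > 60.0:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_apm:
            return False  # over budget this minute
        self.timestamps.append(now)
        return True

limiter = ApmLimiter(max_apm=300)  # roughly a strong human's sustained APM
if limiter.try_act():
    pass  # issue the next game command here
```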
|
On November 09 2016 04:59 LegalLord wrote: The entire issue of speed is one of trivializing the problem. It's like building a football/soccer robot that is just a giant automated tank that will crush its human opposition. Yes, it will win, but it didn't do so by virtue of its AI but rather by virtue of an interface that gives it an unfair advantage.

They said they are going to give it the same game interface we have, including simulating human-like unit selection, with possibly the only exception being that the AI gets the resource numbers directly.
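To picture what that kind of interface might look like, here is a purely speculative sketch. The real SC2 API had not been published at the time, and every field and name below is my own guess: the agent sees roughly what a player sees, must select units before commanding them, and resources are the one directly exposed number.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Every field below is a guess at a "human-like" observation; the actual
# SC2 API was not public when this thread was written.
@dataclass
class Observation:
    screen_pixels: bytes      # rendered camera view, as a human would see it
    minimap_pixels: bytes     # low-resolution minimap
    minerals: int             # the one "direct" number mentioned above
    vespene: int
    supply_used: int
    supply_cap: int

@dataclass
class Action:
    kind: str                 # e.g. "move_camera", "select_rect", "command"
    args: Tuple = ()

def act(obs: Observation) -> List[Action]:
    """Under such an interface the bot must select units before ordering them,
    instead of addressing every unit on the map by id."""
    return [
        Action("select_rect", ((10, 10), (50, 50))),
        Action("command", ("attack_move", (120, 80))),
    ]
```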
|
On November 09 2016 05:07 aQuaSC wrote: They said they are going to give it the same game interface we have, including simulating human-like unit selection, with possibly the only exception being that the AI gets the resource numbers directly.

Yeah, I know, and that's quite fair. But Derpmallow is asking why it matters, and this is why.
|
On November 09 2016 05:07 LegalLord wrote: Winning because you have 30000 APM is akin to winning because you have tanks against humans. Asymmetrical warfare is cheating. Reasonable human-range AI limits on APM make perfect sense.

I never said that APM limitations don't make sense for the showmatch; I just said that freaking out about all of this is silly. Deepmind's not going to make a program that is only impressive because it can micro perfectly - that doesn't give them any information to work with for the stuff they actually care about. I am glad they've thought about appropriate restrictions for the AI to make the showmatches fun to watch, but my core focus here is very similar to Deepmind's own: seeing how complex they can make the cognition and decision-making of this AI.
|
On November 09 2016 05:14 Derpmallow wrote: I never said that APM limitations don't make sense for the showmatch; I just said that freaking out about all of this is silly. [...] my core focus here is very similar to Deepmind's own: seeing how complex they can make the cognition and decision-making of this AI.

Again, the entire point of all this is to demonstrate successful, intelligent play by the AI that is comparable to what humans are capable of. To that end you have to have the AI succeed against a human at a high level in a fair match. And in that sense the idea of a "fair match" is very important to define rigorously.
|
On November 09 2016 05:17 LegalLord wrote: Again, the entire point of all this is to demonstrate successful, intelligent play by the AI that is comparable to what humans are capable of. To that end you have to have the AI succeed against a human at a high level in a fair match. And in that sense the idea of a "fair match" is very important to define rigorously.

I disagree with that being the core point of this project. This is a way for them to develop algorithms and techniques for using AI in a real-time environment with imperfect information. The showmatch is for publicity, but as far as the actual success of the project goes, the AI never has to play a single human being for Deepmind's goals to be met. It's really cool that they're going to do so, but it is in no way necessary for the development of their AI projects.
And thus, what game they're playing or how human-like they're playing it really doesn't matter, but I fully understand that for the sake of having the showmatch be engaging it's important for the AI to have strict limitations, and I'm sure they'll do a good job on that front.
|
On November 09 2016 05:27 Derpmallow wrote: I disagree with that being the core point of this project. This is a way for them to develop algorithms and techniques for using AI in a real-time environment with imperfect information. The showmatch is for publicity, but as far as the actual success of the project goes, the AI never has to play a single human being for Deepmallow's goals to be met. [...]

If they want to work on a simplified project, they should. But if the ultimate goal is to play humans, it should operate under physical constraints similar to a human's.
Although, I will add one newly perceived fault of SC2 relative to BW. I was never big into SC2, and I think my posting habits make it go without saying that I come mostly from BW. But as far as I can tell, custom maps never gained much traction in SC2, which means that a lot of custom scenarios (e.g. simplifications on which you can test components of the AI) will be less available. I know that in its brief foray into BW AI, the Facebook AI team used micro maps to train their system. Will an SC2 API have that capability? Will it have it soon after release? I'm skeptical.
|
On November 09 2016 05:34 LegalLord wrote: If they want to work on a simplified project, they should. But if the ultimate goal is to play humans, it should operate under physical constraints similar to a human's. [...] Will an SC2 API have that capability? Will it have it soon after release? I'm skeptical.

Like the person you quoted said, their ultimate goal is not to play good SC against humans, but to work towards general AI. I don't see how this is a "simplified project".
No clue why they're choosing SC2 over BW. I don't think this is PR for DeepMind/Google; I rather think this is PR for Blizzard. Dammit, DeepMind has the technology to cut global energy consumption by 30%. They really, really don't need any PR at this point.
And really, who cares? I'm excited to see what the geniuses behind DeepMind will deliver. We're alive and awake to witness one of the most important endeavours humankind has ever undertaken, the quest for general AI. And you're salty about them choosing SC2 over BW? Ah, come on...
And yet, I'll agree with you. BW would have been a more reasonable choice, but not by a large margin. So I'm indifferent on this point. I'm looking forward to what DeepMind can do here.
|
<3 @mendelfist and bottle
Mendelfist, I like your abstract reasoning, but I think you just misunderstand what bottle has been trying to say. He is talking about chunking the game state and inputs into something reasonably learnable. You are talking about the structure of the AI once the chunking has been figured out. He is saying the problem is very hard because the task of chunking the game (which, as he has explained, is essentially given for free by comparison in games like Chess, Go, and billiards) is far from straightforward, and probably needs a good deal of clever scripting/design and NN perception-layer voodoo. You are saying that NNs (and/or other methods) would be straightforwardly applicable given proper chunking, which is mostly true, though I see there is still a large open question about what a policy even looks like for SC2 given dexterity-limited input.
As to the broader question of how hard SC2 is compared to Go, I think your "modest effort" argument is interesting but draws the wrong conclusion. Yes, you can script great micro, decent build orders, and moderately okay lategame bots, but they would never beat a masters-level player in a Bo51 series. Probably even diamond or plat players could handle them once they learn their setup and how to exploit it. The ultimate difficulty in SC2 AI is creating a decision maker that can adjust to enemy adaptation, not winning by brute force with a killer strat and epic micro, however far those can get you. That readjusting decision making relies intimately on chunking the game state, both for learning and for runtime operation; they influence each other deeply in a game like SC2, and the added wrinkle of limited inputs makes it a real doozy of a problem.
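To make "chunking" a bit more concrete, here is one naive way it could be attempted. This is only my own toy illustration - the sizes and feature choices are made up, and it is not anything DeepMind has announced: rasterize what the agent currently knows into fixed-size spatial feature planes, the kind of input a convolutional network can digest.

```python
import numpy as np

MAP_W, MAP_H = 64, 64   # toy resolution; a real map would need far more detail
N_PLANES = 3            # own units, enemy units, visibility

def chunk_gamestate(own_units, enemy_units, visible_cells):
    """Rasterize a symbolic game state into spatial feature planes.

    own_units / enemy_units: iterables of (x, y, hp) with coordinates in [0, 1).
    visible_cells: iterable of (col, row) grid cells currently in vision.
    Returns an array of shape (N_PLANES, MAP_H, MAP_W), i.e. CNN-style input.
    """
    planes = np.zeros((N_PLANES, MAP_H, MAP_W), dtype=np.float32)
    for x, y, hp in own_units:
        planes[0, int(y * MAP_H), int(x * MAP_W)] += hp / 100.0
    for x, y, hp in enemy_units:
        planes[1, int(y * MAP_H), int(x * MAP_W)] += hp / 100.0
    for col, row in visible_cells:
        planes[2, row, col] = 1.0
    return planes

state = chunk_gamestate(own_units=[(0.1, 0.2, 45)],
                        enemy_units=[(0.8, 0.7, 100)],
                        visible_cells=[(5, 10), (6, 10)])
print(state.shape)  # (3, 64, 64)
```

Whether anything like this can be learned end to end, or whether the planes themselves have to be hand-designed, is exactly the open question being argued here.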
|
On November 09 2016 06:28 EatThePath wrote: [...] The ultimate difficulty in SC2 AI is creating a decision maker that can adjust to enemy adaptation, not winning by brute force with a killer strat and epic micro, however far those can get you. That readjusting decision making relies intimately on chunking the game state, both for learning and for runtime operation; they influence each other deeply in a game like SC2, and the added wrinkle of limited inputs makes it a real doozy of a problem.

Neither is really true. Given the "voodoo magic" that deep learning can apply to chunking images, it's just a matter of scale. I don't think chunking SC2 is any harder than chunking images, and deep learning methods are better than anything else we have come up with. It just sucks that we have no clue what they are doing. So from an "explaining AI" point of view, deep learning is a disaster. But for getting good results, it's marvellous. Oh, you also need not to care about lower bounds, because absolutely nothing is provable (at least as far as we know right now) about NNs as function approximators (past the simplest perceptron networks). It's quite possible that a very low-level, detailed description of what is known about the game state every X milliseconds is a good input for a deep learning algorithm, just as the brightness level of every pixel is a good input for deep learning applied to CV.
|
On November 09 2016 23:00 Acrofales wrote: [...] It's quite possible that a very low-level, detailed description of what is known about the game state every X milliseconds is a good input for a deep learning algorithm, just as the brightness level of every pixel is a good input for deep learning applied to CV.

It's quite possible, but my point is that it's totally unknown right now, and I assume figuring out what works well will be nontrivial. Image parsing was studied for decades before large-scale parallel processing, in the form of convolutional NNs, was thrown at it. And image perception (from what I understand) is based on fairly rudimentary low-level layers that correspond quite naturally to geometric interpretation (edge detection, vertical/horizontal, light vs. dark, etc.). I think the inscrutable part of deep learning comes from the piles of abstraction and network scale. The ultimate outcome of an effective chunking/perception scheme will appear not that hard, but I don't assume finding it will be straightforward.
Imo the perception design is the first crux of the challenge, and the difficulty is that you can't test it very well because it's such a convoluted road (no pun intended) to a playable AI. But who knows, maybe obvious chunking schemes will work well (maybe even self-learned ones).
Btw, by voodoo I meant the art/science of human-designed CNN structure, but of course NNs are pretty voodoo-seeming in general. XD
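For readers wondering what those "rudimentary low-level layers" amount to: the classic example is an edge-detecting filter, which trained CNNs tend to rediscover in their first layer. A tiny hand-rolled illustration (my own, nothing to do with DeepMind's code):

```python
import numpy as np

# A hand-built 3x3 Sobel-style filter for vertical edges: exactly the kind of
# low-level geometric feature the first layer of a trained CNN tends to learn.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)

def filter2d(img, kernel):
    """Naive 'valid' sliding-window filter (cross-correlation, as CNNs use)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A toy image whose right half is bright: the filter responds along the boundary.
img = np.zeros((8, 8), dtype=np.float32)
img[:, 4:] = 1.0
print(filter2d(img, sobel_x))  # nonzero only where the brightness changes
```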
|
On November 09 2016 23:00 Acrofales wrote: [...] I don't think chunking SC2 is any harder than chunking images, and deep learning methods are better than anything else we have come up with. It just sucks that we have no clue what they are doing. So from an "explaining AI" point of view, deep learning is a disaster. But for getting good results, it's marvellous. [...]

While I agree with most of the spirit of what you're saying, it's slightly less true where reinforcement learning is concerned, as various algorithms there have neuro-biological analogies (experience replay and its prioritized variants, intrinsic motivation, etc.).
Convolutional networks are indeed pretty black-box at the moment, but some bounds on the quality of their extrema are beginning to appear from the connection with statistical physics. Warning: this is hardcore. arxiv.org
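Since experience replay came up: the basic, non-prioritized version is simple enough to sketch in a few lines. This is a generic illustration of the idea, not DeepMind's implementation; the prioritized variants mentioned above sample transitions in proportion to how surprising they were rather than uniformly.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal uniform experience replay.

    Transitions (state, action, reward, next_state, done) are stored and later
    sampled uniformly at random, which breaks up the temporal correlations that
    destabilize online neural-network training.
    """

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=1000)
for step in range(50):
    buf.push(state=step, action=0, reward=0.0, next_state=step + 1, done=False)
batch = buf.sample(8)  # a learner would compute its update on this batch
```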
|
I read through about a quarter of the comments and am baffled at how many discuss APM restrictions and the like. In the last few pages I started seeing a lot more posts that understand what Deepmind's program is doing. There is no pre-programmed micro, nor are there any scripted rushes. The bot is supposed to learn by trial and error, as well as by watching replays of games. The bot doesn't know what it should do with its APM; it might just spam for the first six months of training (the bot will be self-taught). When it starts, the bot will know that the goal is to see the win screen, and not much more.
This bot will teach itself how to play. No one will teach it strategies or tactics. The bot will have access to replays and may play the game.
Deepmind want to see if their program can learn how to play StarCraft. They hope for it to eventually be able to beat a great player.
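In reinforcement-learning terms, "the goal is to see the win screen and not much more" means the only reward is terminal. A toy skeleton of that setup follows; everything here, including the fake coin-flip game and every name, is mine and purely illustrative.

```python
import random

def terminal_reward(outcome):
    """The only signal described above: did we reach the win screen or not."""
    return {"win": 1.0, "loss": -1.0}[outcome]

def random_policy(observation):
    return random.choice(["build_worker", "attack", "scout", "wait"])

def play_one_game(policy):
    """Stand-in for a full game; a coin flip decides the result so the
    skeleton actually runs. A real agent would gather a long trajectory of
    (observation, action) pairs before ever seeing the outcome."""
    trajectory = [("obs_%d" % t, policy("obs_%d" % t)) for t in range(10)]
    return trajectory, terminal_reward(random.choice(["win", "loss"]))

# Self-play-style loop: every action in a game gets credited with the final
# result. Turning that sparse, delayed credit into better play is the hard part.
for episode in range(5):
    trajectory, reward = play_one_game(random_policy)
    for observation, action in trajectory:
        pass  # a learner would nudge its policy toward or away from `action` by `reward`
```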
|