On March 30 2016 00:58 The Bottle wrote: No bot today in StarCraft is doing a strategy that it gained from supervised or reinforcement learning. It's all scripted. The most you could argue is that a bot with unlimited APM and much more sophisticated scripting could beat a pro.
That is not what I meant at all. I said that today's bots are already using input from "coarse-grained data", and there is no reason this data can't be used as input to a real self-learning AI instead. Re-read what I said with that in mind. The data already exists; no one has even tried to use it for self-learning.
I know that "some data exists". There is the entire build order of a player and the time each thing was built, which you see at the end game summary. If I were to guess, this is most likely the kind of data the AI uses, since it gets its data from an API. (But I don't claim to know how the current AI works, so if you have a better notion of what it's doing, tell me.) That kind of data is trivial to generate, and an AI that responds to this data from the API is easy to script. But it's useless for Deepmind's purpose. They want to construct a limited information AI algorithm. If, instead, we assumed that Deepmind decided to use an API and essentially map hack, then maybe they can train an algorithm on that "build order" data, and use scripting for the rest. But I think build order is just such a tiny and most uninteresting portion of the strategy involved in SC2 that the resulting "learned strategies" we'll see are extremely uninteresting. Is the "existing data" that you're thinking of anything besides the build order data? If so, what?
I'm not familiar with the script API either (or we should probably call it the map editor, because that's what I think it is), but I think it's pretty safe to assume that the API tells you whether something is hidden behind the fog of war, so you don't have to map-hack. The map editor seems to be very powerful, and I think it's also safe to assume that there is a wealth of information available, but you don't have to use all of it. It's more than just build order, anyway. The point is that the bot scripts use this information today with pretty good results, and I don't think they cheat that much unless you tell them to. Put this information into a DeepMind AI instead. Or why not have several: a self-learning micro AI, maybe, with a supervising macro AI. Maybe we should have one micro AI per unit. The possibilities are endless.
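As a rough illustration of that idea, here is a minimal sketch of feeding coarse-grained, fog-of-war-respecting bot state into a learning agent. Everything here (the Observation fields, the encoding, the macro/micro split in the comments) is a hypothetical example, not any real bot API:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Observation:
    """Coarse-grained game state a bot script can already extract,
    restricted to what is actually visible (no map hacking)."""
    minerals: int
    gas: int
    supply_used: int
    supply_cap: int
    own_unit_counts: Dict[str, int]
    visible_enemy_unit_counts: Dict[str, int]  # only units outside the fog of war

def encode(obs: Observation, unit_types: List[str]) -> List[float]:
    """Flatten the observation into a fixed-length feature vector that a
    self-learning agent (e.g. a neural-network policy) could consume."""
    vec = [obs.minerals / 1000.0, obs.gas / 1000.0,
           obs.supply_used / 200.0, obs.supply_cap / 200.0]
    for t in unit_types:
        vec.append(obs.own_unit_counts.get(t, 0) / 50.0)
        vec.append(obs.visible_enemy_unit_counts.get(t, 0) / 50.0)
    return vec

# The split suggested above, sketched as pseudocode: a learning "macro" agent
# picks a high-level goal from the encoded state, and a separate "micro" agent
# (or one per unit type) executes it.
#   goal = macro_agent.act(encode(obs, UNIT_TYPES))
#   for unit in own_units:
#       micro_agents[unit.type].act(unit, goal)
```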
Considering that much of the average APM of pros is produced by spamming keys as they warm up their fingers, even limiting an AI's APM to something like 400 (in SC2) would still be very overwhelming. If the computer can make 400 effective actions in a minute, I imagine it could do some crazy things. I'm not saying it will be able to beat the pros, but making 6-7 actual, effective decisions per second is quite incredible.
On March 30 2016 02:53 ClanRH.TV wrote: Considering that much of the average APM of pros is produced by spamming keys as they warm up their fingers, even limiting an AI's APM to something like 400 (in SC2) would still be very overwhelming. If the computer can make 400 effective actions in a minute, I imagine it could do some crazy things. I'm not saying it will be able to beat the pros, but making 6-7 actual, effective decisions per second is quite incredible.
It most certainly would beat the pros. Remember that AI test showing that if you perfectly micro Zerglings against a siege tank line, you can decimate it with fairly minimal losses because you can negate all the splash? The weird part about StarCraft is that a majority of the skill comes from mechanics, not really decision making. If the AI can execute perfectly microed blink all-ins, I don't really see a human reliably stopping it. It isn't that the AI is smarter than the human, it is just mechanically better, which makes sense: it's a computer.
On March 30 2016 02:53 ClanRH.TV wrote: Considering that much of the average APM of pros is produced by spamming keys as they warm up their fingers, even limiting an AI's APM to something like 400 (in SC2) would still be very overwhelming. If the computer can make 400 effective actions in a minute, I imagine it could do some crazy things. I'm not saying it will be able to beat the pros, but making 6-7 actual, effective decisions per second is quite incredible.
It most certainly would beat the pros. Remember that AI test showing that if you perfectly micro Zerglings against a siege tank line, you can decimate it with fairly minimal losses because you can negate all the splash? The weird part about StarCraft is that a majority of the skill comes from mechanics, not really decision making. If the AI can execute perfectly microed blink all-ins, I don't really see a human reliably stopping it. It isn't that the AI is smarter than the human, it is just mechanically better, which makes sense: it's a computer.
This, amongst hundreds of other ridiculously stupid things, becomes possible once you have a supercomputer doing the micro. For example, imagine if the only Marines doing stutter-step micro are the ones being targeted by the enemy units, so that the other Marines stand in place and eventually get a natural surround.
Imagine a Medivac that cycles 4 Marauders so that you never get to kill a single Marauder. Mutalisks that will NEVER hit an AI-controlled Phoenix. A bio ball where damaged units move to the back while the rest of the army keeps fighting, spreading the damage as widely as possible before any single unit is killed. Perfect focus fire, so that only the units within range of an enemy unit focus their attack.
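To make the kind of rules being described concrete, here is a rough sketch of one control tick of such a micro controller. The Unit fields and the string "orders" are hypothetical stand-ins for real game commands, not any actual game API:

```python
import math
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Unit:
    uid: int
    x: float
    y: float
    hp: int
    max_hp: int
    attack_range: float
    weapon_on_cooldown: bool = False
    target_uid: Optional[int] = None  # which enemy unit this one is shooting at
    order: str = "idle"               # last command issued (stand-in for a real API call)

def dist(a: Unit, b: Unit) -> float:
    return math.hypot(a.x - b.x, a.y - b.y)

def micro_step(own: List[Unit], enemy: List[Unit]) -> None:
    """One control tick implementing the micro rules described above."""
    targeted = {e.target_uid for e in enemy if e.target_uid is not None}

    for u in own:
        # 1. Only units currently being shot at stutter-step back; the rest
        #    hold position so the army drifts into a natural surround.
        if u.uid in targeted and u.weapon_on_cooldown:
            u.order = "kite"
            continue

        # 2. Badly damaged units rotate to the back so damage is spread
        #    across the whole ball before any single unit dies.
        if u.hp < 0.3 * u.max_hp:
            u.order = "retreat"
            continue

        # 3. Focus fire, but only among units actually in range, so no attack
        #    commands are wasted on units that are out of position.
        in_range = [e for e in enemy if dist(u, e) <= u.attack_range]
        if in_range:
            weakest = min(in_range, key=lambda e: e.hp)
            u.target_uid = weakest.uid
            u.order = f"attack:{weakest.uid}"
        else:
            u.order = "advance"
```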
On March 28 2016 14:35 Taf the Ghost wrote: The mouse-movement and keyboard-entry limitations would have to be imposed. Giving a computer full API access is cheating, as the API can do things no mouse is actually capable of. We've seen some of the custom AIs over the years, but those functionally produce a new unit-movement control scheme that goes around the game's actual functionality.
Thus, the first thing an "AlphaStar" would need to learn would be how to use the mouse. Haha.
I agree; I think the AI should be designed around the same limitations as a keyboard and mouse. Maybe program the AI to operate two hands that control the keyboard and mouse.
On March 30 2016 05:22 Xyik wrote: I don't know why everyone is still so fixated on micro; it's been repeated by almost everyone in this thread, and it isn't a very interesting discussion.
Because, in a way, it's a revealing comment about the nature of RTS games. Most of the talent comes from a player's ability to micro their units, workers, and buildings, not from strategy. So once you remove the limitation on control, no RTS player can actually compete.
I think the key is that the computer should not outperform a human mechanically in any way, at least micro-wise. It should be coded in such a way that it approximates the mechanical skill of its opponent. Otherwise it would just be unfair. It's not interesting unless it is a battle of wits.
On March 30 2016 00:58 The Bottle wrote: No bot today in StarCraft is doing a strategy that it gained from supervised or reinforcement learning. It's all scripted. The most you could argue is that a bot with unlimited APM and much more sophisticated scripting could beat a pro.
That is not what I meant at all. I said that today's bots are already using input from "coarse-grained data", and there is no reason this data can't be used as input to a real self-learning AI instead. Re-read what I said with that in mind. The data already exists; no one has even tried to use it for self-learning.
No, they're not. Please give me an example of a single bot for a StarCraft game that uses input that isn't given by an API to the game.
Why do you think I mean "not given from an API to the game"?
You are suggesting that existing bots today use the same input data as DeepMind would. DeepMind's data is just the pixel output of the game, not what the API provides.
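For reference, DeepMind's Atari agents consumed downsampled, stacked screen frames rather than structured game state. Whether it would do the same for StarCraft is speculation at this point; the resolution and preprocessing below are illustrative assumptions, roughly following the Atari setup:

```python
from collections import deque
import numpy as np

FRAME_H, FRAME_W = 84, 84  # downsampled resolution, similar to the Atari DQN work
STACK = 4                  # consecutive frames stacked into one observation

def preprocess(rgb_frame: np.ndarray) -> np.ndarray:
    """Convert a raw screen capture (H, W, 3) into a small grayscale image."""
    gray = rgb_frame.mean(axis=2)  # naive grayscale
    ys = np.linspace(0, gray.shape[0] - 1, FRAME_H).astype(int)
    xs = np.linspace(0, gray.shape[1] - 1, FRAME_W).astype(int)
    return gray[np.ix_(ys, xs)].astype(np.float32) / 255.0  # crude nearest-neighbour resize

frames = deque(maxlen=STACK)

def observe(rgb_frame: np.ndarray) -> np.ndarray:
    """Stack the last few frames so the agent can infer motion from pixels alone,
    with no structured state handed to it by an API."""
    frames.append(preprocess(rgb_frame))
    while len(frames) < STACK:
        frames.append(frames[-1])
    return np.stack(list(frames), axis=0)  # shape (STACK, FRAME_H, FRAME_W)
```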
On March 28 2016 14:35 Taf the Ghost wrote: The mouse-movement and keyboard-entry limitations would have to be imposed. Giving a computer full API access is cheating, as the API can do things no mouse is actually capable of. We've seen some of the custom AIs over the years, but those functionally produce a new unit-movement control scheme that goes around the game's actual functionality.
Thus, the first thing an "AlphaStar" would need to learn would be how to use the mouse. Haha.
I agree; I think the AI should be designed around the same limitations as a keyboard and mouse. Maybe program the AI to operate two hands that control the keyboard and mouse.
Depends on whether you constrain it to using a robotic arm to move the mouse or just have something to move a mouse around. Hardware like laser cutters could be adapted to hold a mouse, with speed and accuracy far higher than a human's. This one, for example, can do ±15 µm repeatably.
The keyboard too: depending on how you set it up, it will hit keys so much faster than a human that it's not even funny.
It'd be easier to just set a keypress rate limit (10 keys/sec, with 10 seconds of burst up to 20 keys/sec every minute) and an acceleration/speed limit on the mouse.
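One way to read that keypress limit, sketched as a gatekeeper sitting between the AI and the game (the caps come straight from the numbers above; the interface and the 700-per-minute interpretation are my assumptions):

```python
from collections import deque

class KeypressLimiter:
    """Enforces a sustained 10 keys/sec with up to 10 seconds of burst at
    20 keys/sec per rolling minute, interpreted here as: at most 20 presses
    in any 1 s window and at most 10*60 + 10*10 = 700 in any 60 s window."""

    PER_SECOND_CAP = 20                        # burst ceiling
    PER_MINUTE_CAP = 10 * 60 + 10 * (20 - 10)  # 600 sustained + 100 burst

    def __init__(self):
        self.presses = deque()  # timestamps of accepted keypresses

    def try_press(self, now: float) -> bool:
        """Return True and record the press if it fits under both caps."""
        while self.presses and now - self.presses[0] >= 60.0:
            self.presses.popleft()
        in_last_second = sum(1 for t in self.presses if now - t < 1.0)
        if in_last_second >= self.PER_SECOND_CAP or len(self.presses) >= self.PER_MINUTE_CAP:
            return False  # press is dropped; the AI has to wait, like fingers would
        self.presses.append(now)
        return True
```

A speed/acceleration cap on the mouse could be handled the same way, by clamping how far the cursor is allowed to move per tick.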
On March 30 2016 10:20 Iodem wrote: I thought they were supposed to have Flash play DeepMind. Is he gonna play SC2 again or what?
Then the matchup should be TvT, and if Flash loses he will have to eat his golden mouse! If Flash wins, however, he gets to eat AlphaGo/AlphaStar!! You heard it here first, people, it's a bet!
He won his golden mouse in BW. If anything it should be played there. I would love to see an A.I. abuse all the nifty tricks we came up with to break the game.
On March 30 2016 11:13 StarStruck wrote: He won his golden mouse in BW. If anything it should be played there. I would love to see an A.I. abuse all the nifty tricks we came up with to break the game.
If DeepMind doesn't have to use input devices like a mouse and keyboard, then it is at a tremendous advantage. Humans have to spend brainpower not only on deciding what to do within the game, but also on moving their own hands and fingers to make those things happen on screen. Just capping DeepMind's APM at that of a high-level player will still put it at an advantage, because it will be able to use 100% of its brainpower on the game itself. Humans have to spend a ton of their brainpower on manipulating input devices (I don't know what percentage, but I would be interested to find out).
I think it will be difficult to create a true apples-to-apples comparison/competition here, short of building a robot that has a camera to watch the screen and an arm to move the mouse. AlphaGo had to have a human surrogate to place its stones on the Go board.
I'll be very interested to see what terms Blizzard and Google decide on for the match.
Honestly, I think the interesting part of the AI will be how it responds and gets things done depending on its input. AIs can already play engagements as perfectly as possible given APM constraints; the decision making that gets them there is much more interesting.
On March 30 2016 00:58 The Bottle wrote: No bot today in StarCraft is doing a strategy that it gained from supervised or reinforcement learning. It's all scripted. The most you could argue is that a bot with unlimited APM and much more sophisticated scripting could beat a pro.
That is not what I meant at all. I said that today's bots are already using input from "coarse-grained data", and there is no reason this data can't be used as input to a real self-learning AI instead. Re-read what I said with that in mind. The data already exists; no one has even tried to use it for self-learning.
No, they're not. Please give me an example of a single bot for a StarCraft game that uses input that isn't given by an API to the game.
Why do you think I mean "not given from an API to the game"?
You are suggesting that existing bots today use the same input data as DeepMind would. DeepMind's data is just the pixel output of the game, not what the API provides.
There have been no details revealed about this, so you are just guessing.
Would it be more interesting if we got DeepMind set up, then had a pro and a programmer decide what "strategy" DeepMind should execute perfectly, and had them face off against another DeepMind + pro combo?
Flash with DeepMind micro vs Jaedong with DeepMind micro?