As DeepMind's AlphaGo artificial intelligence continues to shock the Baduk (Go) community with consecutive victories against top pro player Lee Se-Dol, StarCraft has made an unexpected appearance in the spotlight. Google's Jeff Dean singled out StarCraft as a future challenge for DeepMind.
When interviewed by SBS News, Flash responded with guarded confidence.
"Honestly I think I can win. The difference with Baduk(Go) is both sides play in a state where you don't know what's happening, and you collect information—I think that point is a bit different."
this is really cool! i've been following the go match a bit. competitive and learning AI are super fascinating. i also used to LOVE making custom games and watching AIs battle it out in BW and SC2, and the idea of watching strategically intelligent AI play the game excites me
if this gets serious and the AI is legit, i think showmatches would be an awesome way to generate interest in the game
AIs are able to have unlimited APM which allows for not only the obvious things such as impeccable micro, but also a significant boost in economy by micro-managing their workers (http://www.teamliquid.net/forum/brood-war/484849-improving-mineral-gathering-rate-in-brood-war).
Furthermore, when you think about it, AIs won't have as much of a problem with the "veiled information". Based on unit build times and worker gathering rates, an AI should be able to predict what an opponent could reasonably have at any point in the game.
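For illustration, that kind of inference can be sketched in a few lines. Everything here is an assumption for the sake of the example: the build times, the facility timings, and the function itself are made up, not real Brood War values.

```python
# Hypothetical sketch: bound what an opponent could field from elapsed time.
# Build times and facility timings below are illustrative, not real values.

BUILD_TIME = {"marine": 24, "vulture": 30, "tank": 50}          # seconds (assumed)
PRODUCTION_READY = {"marine": 90, "vulture": 150, "tank": 210}  # earliest facility finished (assumed)

def max_possible(unit: str, elapsed: float, facilities: int = 1) -> int:
    """Upper bound on how many of `unit` the opponent could have by `elapsed` seconds."""
    window = elapsed - PRODUCTION_READY[unit]
    if window <= 0:
        return 0
    return facilities * int(window // BUILD_TIME[unit])

# At 5 minutes with one factory, how many tanks are even possible?
print(max_possible("tank", 300))
```

Combined with scouting information, bounds like this are exactly the "what could he reasonably have" reasoning the post describes.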
What will be the most difficult in my opinion is to have the AI make decisions such as where to attack, when to attack, multi-pronged attacks, when to get certain units and how to use spells such as dark swarm properly. It seems to me like it would be fairly easy to trick and abuse the behavior of the AI.
if this is done on bw, there is even less chance to win; the AI would essentially be playing without the buggy unit AI and mechanical barriers that made bw hard.
On March 10 2016 23:36 Pandemona wrote: Yea, i think AI would struggle in an RTS game. Yet i am still open to be surprised. Imagine God losing a bw series to an AI !!!
skepticism is natural, but i'm sure Go players were saying the same thing, just like chess players :D
i had read somewhere that these researchers thought an ai that could beat the best humans in sc1 would take 5-10 years. if that's true it's unfortunate, since the level of top play likely won't hold up till then (not enough interest, pros getting too old, other responsibilities, wrist problems, etc)
i hope they are going to do this. I used to say something like the following: Starcraft AIs will easily crush all opposition once it is taken seriously as a research project by someone other than bachelor students. I imagine that is still the case, even if some people curiously claimed that because it is more difficult to quantify states in Starcraft, it can't be done.
Honestly, the outcome will come down to the date: today? Flash. In a couple of years? AI, 100%. Given that Google won't show up until they are ready, they have already won; it just hasn't happened yet.
Even with an APM cap a human will have very little chance. Given it's an RTS with fog of war and such, you can say the AI might technically drop a game once in a while, but I don't expect it to lose a long series once it is ready to show off.
"Time has been on my side, [...] fortunately there is no death sentence in this country"
I'm quite confident FlaSh/a top BW player would win, at least if they do this before 2030 or so. It's much harder to determine the optimal move in an RTS than in a board strategy game like chess or Go.
This is not just a bot, mind you; it's a thinking AI that responds to situations according to its accumulated experience. A bot is something that just has a decision-making tree predetermined by its developer. This one develops its own decision-making tree. On its own.
I don't think this will become a strategy game. It is more likely that the impeccable micro attack will just dominate the game. How will Flash counter a perfect Drone hit-and-run?
This is a pretty interesting proposition. I think it depends on how much Google will let the AI go beyond human capabilities. There already exist stupid micro AIs, and if Google lets the AI micro flawlessly it shouldn't be a contest. Of course, if they limit it to human capabilities (like a hard cap of 400 APM), then I think a human will always have the ability to win.
On March 10 2016 23:57 WiSaGaN wrote: I don't think this will become a strategy game. It is more likely that the impeccable micro attack will just dominate the game. How will Flash counter a perfect Drone hit-and-run?
well, the same can be said about harassment by top terrans. how can you counter it? have vision, see it coming and have enough to fight it. even if the mechanics are "perfect," in sc2 not every fight is won by micro, and the kinds of micro that are important (casters, lurker positions etc.) are often quite difficult for AIs to grasp.
it's a fascinating experiment. no one is saying it will be the same as human 1v1. but how can you not be intrigued by the challenge? it's science!
On March 11 2016 00:00 Monochromatic wrote: This is a pretty interesting proposition. I think it depends on how much Google will let the AI go beyond human capabilities. There already exist stupid micro AIs, and if Google lets the AI micro flawlessly it shouldn't be a contest. Of course, if they limit it to human capabilities (like a hard cap of 400 APM), then I think a human will always have the ability to win.
i think that even an AI with perfect mechanics has the capability to fail strategically by allocating its resources poorly. if it's spending 800 APM on making an overlord patrol in the corner of the map (and i've seen SC2 AI do things like this), APM isn't the issue. as long as it's not "cheater AI" with vision or extra resources it should be a very interesting idea.
i definitely see what you are saying, but there are a lot of nuances to this concept
Of course, if they limit it to human capabilities (Like a hard cap of 400 APM)
I guess the AI doesn't need to "check" x different keybinds every second; it should be drastically more efficient with its APM allocation. I'd rather limit cursor movement speed, so the AI can't micro completely unrealistically but only at "human" speed. Even with this limitation, though, the AI will be more efficient in deciding what exactly to prioritize, given a sufficient amount of practice time.
Following the Go match a little bit right now, I already asked myself what would happen in real-time games like Starcraft. Perfect mechanics alone would allow mediocre strategy to win games, I would imagine. Interesting for sure!
I believe it would be hard for humans to keep winning; perfectly microed units are enough to basically guarantee success. Imagine harassing a computer that pulls probes perfectly as soon as your oracle/mutas/dropship is in vision, and perfectly micros every individual probe.
I believe a super safe opening into blink stalkers would win against almost anything: imagine perfect blink micro, perfect macro, and just retreating to regenerate shields when needed. The AI could split perfectly against AoE, split its army up perfectly, and on top of that micro and macro perfectly. Let's be honest, if you can micro perfectly there are many ways to guarantee the game: reapers against zerg early game, stalkers against terran while they have only marines, or at least pre-Concussive Shells. This is barely a discussion; Flash can win now, but he can't win forever.
I realized I was automatically thinking sc2; I don't think the argument is quite as valid for sc1, though.
In the end it is a fight of strategy against 10,000 APM.
Even if you limited the APM, the computer could do inhuman things. That's the difference between a board game and an RTS, where your input is extremely minimal while your strategy is maximized. An AI that learned enough to be strategically on par with very decent players will always win thanks to its unlimited multitasking and inputs.
And oh yeah, what Flash is saying here was also said by the best Go player in the world. He thought it was just a quick $1M to win, that a computer would never beat him. Now he has lost 2 out of 2 games so far.
On March 11 2016 00:42 BisuDagger wrote: hero can operate at about 450-500 apm on a consistent basis. To make this fair, the actions per minute should be clamped to <=600
That looks like an interesting proposal.
It is more like you should allow the engine 200 APM. An engine will know not to waste actions, while most of what hero does is spamming. I'm pretty sure you can split marines perfectly with 600 APM; allowing the engine 10,000 APM won't make that much difference.
On March 11 2016 00:42 BisuDagger wrote: hero can operate at about 450-500 apm on a consistent basis. To make this fair, the actions per minute should be clamped to <=600
That would make things "more fair", but the machine's reaction time will still be 0 ms, while the human player has a real reaction time.
I don't think it is possible to make a "fair" battle between humans and programs in Starcraft, as Starcraft has not only a strategic part but also a mechanical part. When a car runs the 100 m faster than Mr. Bolt, we would also call that unfair.
The computer is a self-learning AI in this case. As soon as the computer finds out that it will win with its stellar micro, it will go for an SCV rush every game.
We are also not talking about some program running on your local Core 2.
Deep Thought is a neural network running on a supercomputer. It constantly became better by playing itself; the same would apply to Starcraft. The moment the machine finds out that humans can't micro like it can, it will use its stellar micro to win.
If the AI is godlike and reacts as soon as an enemy unit shows a single pixel, there isn't much a human can do. As long as the AI can out-micro and out-macro the human, strategic thinking won't bring much to the table in an RTS like Starcraft.
However, it'd be cool if Google built a robot with hands and eyes to give it the same constraints as the player. Idk if it's feasible, but the AI could only see what the robot's eyes see, and the robot would have to hit the keyboard to make things happen instead of just making them happen out of nowhere like current AIs do. It'd be a nice challenge for robot builders to build a humanoid robot that can beat a Starcraft player with equal weapons (keyboard + mouse).
When the opponent micros like that, there is no room to outplay him strategically, I think.
There are compositions where you can't micro that much, for example roach vs roach ZvZ or roach/ravager vs bio. In those situations perfect micro doesn't give you that much of an advantage.
Imagine 50 roaches, individually microed to create a perfect arc, pulling back before they die and burrowing, then rejoining the battle after regenerating.
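The "pull back before they die" behavior being imagined here is, at its core, just a threshold rule per unit. A minimal sketch, with the health thresholds and command names entirely made up:

```python
# Minimal sketch of the "retreat before dying, burrow, rejoin" rule.
# Thresholds and command names are invented for illustration.

RETREAT_HP = 0.35   # retreat below 35% health (assumed)
REJOIN_HP = 0.9     # unburrow once mostly regenerated (assumed)

def micro_command(hp: float, max_hp: float, burrowed: bool) -> str:
    """Decide one roach's next action from its health fraction."""
    frac = hp / max_hp
    if burrowed:
        return "unburrow" if frac > REJOIN_HP else "stay"
    if frac < RETREAT_HP:
        return "retreat_and_burrow"
    return "attack"

print(micro_command(30, 145, burrowed=False))
```

The hard part for a bot is not this rule; it's running it on 50 units at once while also handling pathing and the concave, which is exactly where unlimited APM pays off.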
On March 10 2016 23:36 Pandemona wrote: Yea, i think AI would struggle in an RTS game. Yet i am still open to be surprised. Imagine God losing a bw series to an AI !!!
No way. Once AIs become sophisticated enough, they will crush a human.
On March 11 2016 00:11 Temporary Happiness wrote: I think these 2 videos tell who's gonna win if this is done in Sc2:
Video 1
Video 2
When opponent microes like that there is no room for outplay him strategically i think..
Reasons like this are why they're considering Brood War and not SC2. BW is the only one stable enough to produce a real result. Nobody with a strategically adept mind would pick SC2 for this.
Also, there should be absolutely no limits on the AI. Strategic depth should win out over pure mechanics/speed, because in Brood War it's much harder to completely counter someone's micro, unlike its sequel, where things are much more tic-tac-toe and then it's over, instead of constantly stacking areas you have to battle and defend.
I honestly wonder if Flash losing to DeepMind will result in simpler RTS games in the future. The problem with Starcraft is not that it is mechanically demanding, but that mechanics are disproportionately effective compared to actual strategy. A player skilled in mind games will lose to a player with better mechanics, since he probably won't have the mechanics to execute his trickery in the first place.
Also, the fact that Starcraft is so mechanics-focused may give rise to new hacks that automate the game without making it obvious. If you remember the CS:GO fiasco last year, a progaming team was VAC'd for using subtle hacks. The hacks were not obvious and amounted to a tiny boost in that team's effectiveness. At those levels, even the slightest advantage makes a big difference. It'd be like Flash vs. Flash, except one of them has a hack that automates SCV production in one command center. In the grand scheme of things, that's not a huge deal, but when both players are equally skilled, even the slightest advantage can tip the scales.
Games where mechanics matter less and strategy matters more might be the result of a human progamer vs. AI matchup.
On March 11 2016 00:11 Temporary Happiness wrote: I think these 2 videos tell who's gonna win if this is done in Sc2:
Video 1
Video 2
When opponent microes like that there is no room for outplay him strategically i think..
Reasons like this are why they're considering Brood War and not SC2. BW is by far the only one that displays enough stability to produce a real result. Nobody with a strategically adept mind would pick SC2 for this.
Also, there should be absolutely no limits on the AI. Strategic depth should win out over pure mechanics/speed because in Brood War its much harder to completely counter someone's micro unlike its sequel where things are much more tic-tac-toe and then its over instead of constantly stacking areas you have to battle and defend.
You could do similarly ridiculous stuff in BW with infinite APM... Also LOL at Comparing SC2 to tic-tac-toe.
When opponent microes like that there is no room for outplay him strategically i think..
There are compositions where you can't micro that much for example roach vs roach zvz or roach ravager vs bio. In those situations perfect micro doesnt give you that much of an advantage.
Imagine 50 roaches, individually microed to create a perfect arc, pulling back before they die and burrowing, joining the battle again after regenerating.
Yea... no :D
This is what I've always said. It's usually countered by people saying burrow micro is soooo much less efficient than say, blink micro. Which is true, but if a bot pushed burrow micro to its limit I bet we'd see pros do it a bit more often after we see that it DOES work.
Reasons like this are why they're considering Brood War and not SC2. BW is by far the only one that displays enough stability to produce a real result. Nobody with a strategically adept mind would pick SC2 for this.
Also, there should be absolutely no limits on the AI. Strategic depth should win out over pure mechanics/speed because in Brood War its much harder to completely counter someone's micro unlike its sequel where things are much more tic-tac-toe and then its over instead of constantly stacking areas you have to battle and defend.
You could do similarly ridiculous stuff in BW with infinite APM... Also LOL at Comparing SC2 to tic-tac-toe.
Indeed. In fact in BW I think there was one point where a pro microed a single marine to kill a lurker. I could be wrong on that but I think it was a thing.
I actually thought it would be pretty easy to get a super good BW AI.
You can teach it all the optimal pro build orders. You can show it whatever Flash replays you can find, and have it learn to play exactly like Flash as a template skill level, as much as is possible. Then implement management and micro subroutines, i.e. for a given strategy it will always be producing out of X rax (marines), Y factories (tanks/vults), Z ports (vessels), plus SCVs if relevant. Then for micro:
TvZ: make sure it can destroy lurkers with marine micro, react instantly to mutas coming into range, always run from swarm, micro perfectly behind minerals from a dropship, with perfect irradiate splitting and scourge dodging.
TvP: perfect vulture kiting and target-firing on zealots, mine placement that doesn't put your own units in danger, perfect target-firing of tanks on dragoons while ignoring zealots if you'd friendly-fire.
TvT: perfect range calculation for tank placement, scans, etc.
Sure, you might fall behind on decision making and playing against some obscure strats, but with perfect mechanics and copying the best players' style, it shouldn't be that hard for a big project team to handle. The fact that replays exist gives a template for reaching a high level of play instantly. It could play itself, or versus Flash or Jaedong, 1000 times a day with a learning algorithm, for example.
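The production-subroutine idea above could be as simple as a lookup table mapping each idle facility to the unit it should build under the current strategy template. A sketch, with hypothetical facility and unit names:

```python
# Sketch of a production subroutine for a fixed strategy template:
# every idle facility always queues the unit the template assigns it.
# Facility and unit names here are illustrative assumptions.

TEMPLATE = {"barracks": "marine", "factory": "tank", "starport": "vessel"}

def production_orders(idle_facilities: dict) -> list:
    """Return (facility, unit) orders for every idle facility covered by the template."""
    orders = []
    for facility, idle_count in idle_facilities.items():
        if facility in TEMPLATE:
            orders.extend([(facility, TEMPLATE[facility])] * idle_count)
    return orders

print(production_orders({"barracks": 2, "factory": 1}))
```

Swapping strategies then just means swapping the template, while the micro subroutines stay the same.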
I think to make it fair you'd have to limit it so it has to use a cursor and keyboard, each limited to a certain speed/APM. That way the AI is under the same PHYSICAL limitations as a (very, very fast) human and has to figure out how to win "mentally" from there, with the same limits on spending attention as a human, plus algorithms to decide how to spend that attention. You could also change the limits and see how certain strategies become better for slower players. (hue hue 100 apm bonjwa DeepProtoss)
It really depends on how you limit the computer. If you give it infinite APM and near perfect micro I think an AI can win easily just by using its superior mechanics. If the AI is constrained to more human mechanical capabilities, then it is a very difficult problem to solve.
The other Flash plays Brood War with an unlimited unit group cap, with hotkey groups for buildings, with 0 ms reaction time and pixel-perfect minimap awareness. Who wins?
An AI in Brood War plays the game as if the engine limitations do not exist. Its macro is perfect without spending a moment in its base, its control will be stellar, its awareness will be unmatched. The only thing the AI might not be perfect at is strategy and decision making. But first of all, Deep Thought in Go has shown us that an AI can improve from "I beat some Euro scrub" to "GSL champion" in only 6 months by playing itself and learning from those games. And second, the machine learns. It would soon learn that it wins the games that do not go into macro, unlike Go, where Deep Thought becomes better and better with each stone on the board.
I don't see any player winning either BW or SC2 against a neural network AI without HARD limitations on its input.
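The self-play loop described in this post can be caricatured in a few lines: play the current policy against a mutated copy and keep whichever wins more. This is a toy sketch only; every detail (the single "aggression" parameter, the win model, the mutation step) is invented and bears no resemblance to DeepMind's actual training.

```python
# Toy caricature of a self-play loop: a one-parameter "policy" plays a
# mutated copy of itself, and the winner is kept. Every detail here is
# invented; it only shows the shape of learning-by-self-play.

import random

def play_game(policy_a: float, policy_b: float) -> str:
    """Stand-in for a full game; the more 'aggressive' policy wins more often."""
    return "a" if random.random() < policy_a / (policy_a + policy_b) else "b"

def self_play(policy: float, generations: int = 100) -> float:
    for _ in range(generations):
        challenger = max(0.01, policy + random.uniform(-0.1, 0.1))  # mutate
        wins = sum(play_game(challenger, policy) == "a" for _ in range(50))
        if wins > 25:                                               # keep what wins
            policy = challenger
    return policy

random.seed(0)
print(self_play(0.5))   # the surviving policy after 100 generations
```

The point the post makes falls out of this shape: whatever trait reliably wins games (here "aggression", in Starcraft perhaps micro-heavy all-ins) is what self-play reinforces.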
The only way for it to be fair is to make a robot + AI actually play with mouse and keyboard; otherwise, with perfect micro and such, it'll eventually win really easily, but it's cheating.
On March 11 2016 00:42 BisuDagger wrote: hero can operate at about 450-500 apm on a consistent basis. To make this fair, the actions per minute should be clamped to <=600
The sad thing is it doesn't take 600 apm to have perfect micro. In the end it boils down to accuracy and efficiency.
On March 11 2016 02:03 Goolpsy wrote: Even with 400 apm limit, Neural network learning can easily teach it where to focus its micro most efficiently.
Game might seem like it, but Starcraft has far fewer strategies than chess.
I disagree. Starcraft may have fewer strategies than chess, but because fog of war limits information, the AI can only guess half the time. Furthermore, many strategies look the same, and things such as drops could still catch the AI off guard. A Starcraft game has many, many more possibilities than a chess game.
The real problem, though, is that chess is turn based while Starcraft is real time, which means the computer has much less time to think. Additionally, people make mistakes in execution in Starcraft that differentiate the same strategy. Take a rally point, for instance: depending on where it is set, units arrive at different places at different times.
Chess (and Go, for that matter) are turn based, so making a move is the same for a grandmaster or a new player: the pawn still goes to the square you want it to, perfectly, every time.
Finally, Starcraft has orders of magnitude more board states than a chess board, which makes it overwhelmingly harder to brute-force.
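A back-of-envelope calculation makes the "orders of magnitude" claim concrete. The chess and Go branching factors below are commonly cited approximations; the StarCraft figure is a crude guess of my own (20 units with 10 possible orders each, about 24 frames per second, a 10-minute game):

```python
# Back-of-envelope game-tree sizes. Chess (~35 branching, ~80 plies) and
# Go (~250, ~150 moves) use commonly cited approximations; the StarCraft
# line is a crude guess (20 units x 10 orders each, ~24 fps, 10 minutes).

import math

def log10_tree_size(branching: float, depth: int) -> float:
    """Order of magnitude (log10) of branching**depth."""
    return depth * math.log10(branching)

print(f"chess: 10^{log10_tree_size(35, 80):.0f}")          # ~10^124
print(f"go:    10^{log10_tree_size(250, 150):.0f}")        # ~10^360
print(f"sc:    10^{log10_tree_size(10**20, 24 * 600):.0f}")
```

However the StarCraft numbers are guessed, the joint-action-per-frame structure puts it far beyond both board games, which is why brute force is off the table.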
But that's the point. Deep Thought is not brute-forcing its way into the game like Deep Blue did; it takes a much softer approach by learning to play. The neural network architecture is not meant to brute-force the game and "solve" it. It learns the game from millions of games against itself and from replays, and draws conclusions. Deep Blue just tried out millions of next moves when playing chess; Deep Thought learned to play the game and knows what to do where.
There are virtually endless scenarios in Starcraft, and I am not sure it's possible to teach the bot everything. It would have to play/analyze thousands (maybe even more) of games to actually learn how the units interact with each other and how building certain units at certain moments affects the game. There is simply much greater complexity. Also, as Flash pointed out, this is a game with incomplete information.
So what's the difference between a normal game AI and Google's work?
I mean, the default AI can cheat. Does the functioning of the AI we've got rely on gimmicky tricks not suitable for a human-wisdom-versus-human-creation scenario?
Go and chess are games of complete information, whereas Starcraft is a game of incomplete information. Furthermore, getting more information usually means spending more resources on getting it.
Sure, a computer can calculate possible situations from given incomplete information, but to match what we call the metagame and the intuition of pro players will take a really long time of self-learning and data collecting on the computer's part.
I mean, the problem is whether Flash's wrist can keep up with him. He's playing BW "casually" and he already says that his arms hurt; returning to his former bonjwa self would be hard considering his injury.
btw does broodwar have a hard limit on commands accepted per frame? like 12 per frame or something.
I wouldn't be surprised if there was, or at least a limit on buffering.
"not like Deep Blue brute forcing its way into the game" vs "It learns the game by millions of games against itself and replays and learns from that". I don't think you quite know what "brute force" means.
I'm only poking you a little. The technology and approach are rather different between the two systems, but they both still work by being able to call up almost the entire game-state possibility space and knowing how to process the information. This is what computers are supremely good at, but it also shows their limitations. Though I'd be remiss if I didn't point out that most of these public games against top pros benefit the computer in one very specific manner: it can analyze all of the publicly known games of the master, allowing adaptation to the master's play style.
Just the unit efficiency and micro possibilities for the AI are insane. That alone would make it insanely difficult to beat because you're guaranteed to always be outmacroed and viciously destroyed in any engagements involving similar numbers. There are videos out there of 12 goon vs 12 goon fights where the AI wins with 12 goons alive.
Some form of APM cap seems fair in my book, or else the AI isn't winning because it is making better decisions than the human, it's winning because it's exploiting micro in a way no human could ever imagine.
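If an APM cap were ever enforced on the bot's side, one simple mechanism would be a rate limiter that only releases an action when the budget allows. A sketch, with the cap value and the interface purely assumed:

```python
# Sketch of one way an APM cap could be enforced on a bot: an action is
# only allowed if enough time has passed since the last one. The 400
# figure and the interface are assumptions for illustration.

class ApmLimiter:
    def __init__(self, apm_cap: int = 400):
        self.interval = 60.0 / apm_cap   # minimum seconds between actions
        self.next_allowed = 0.0

    def try_act(self, now: float) -> bool:
        """Return True (and consume budget) if an action is allowed at `now`."""
        if now >= self.next_allowed:
            self.next_allowed = now + self.interval
            return True
        return False

limiter = ApmLimiter(apm_cap=400)                   # one action per 0.15 s
print([limiter.try_act(t / 10) for t in range(5)])  # polled at 0.0, 0.1, ... 0.4 s
```

Note this limits quantity only; as several posters point out, every remaining action would still be perfectly chosen, which is where the real advantage lies.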
It depends on what they want to do; capping the APM would be a lot more interesting. If not, of course you would win a game where you dodge every single stalker shot by hopping in and out of a medivac; in fact, just doing a worker rush would probably work, but that is not really extraordinary.
there's no way a human would beat an AI in starcraft 2. take the micro bot that someone wrote here on TL as an example. SC2 has a theoretical skill ceiling that is beyond human reach. dunno about bw.
And that's why there should be a cooldown on each and every action you can do in SC2: to have predictable unit behavior independent of the player's ability to exploit certain things. It would be so much easier and more satisfying to balance, for instance, banelings if marines were unable to split.
This match would have a number of interesting implications, chief among which are build-order decisions. If the AI is strictly superior at microing units, there's no reason not to assume it will take advantage of this and go for 1-base all-ins most of the time, in order to force micro-intensive early games. It would be cool if the AI had some doubt about its opponent's skill and actually needed to confirm its superiority in micro in order to feel confident going for all-ins. Humans already do that, as we all know from Bisu being as annoying as possible with his scouting probe. Now imagine an AI controlling this; it will never die by mistake.
I recall some specific AI techniques used for mutalisk micro in one of the BWAI competitions. I wonder whether the point of the DeepMind project is to find applications for neural-net-type algorithms, or whether they would be okay with discarding neural networks if other AI techniques worked better. I guess it is the former, since that must be why Google funds it, so that eventually they can have self-driving cars and smarter search results.
I think neural networks were used by the Planetary Annihilation AI designer; I used to read some of his explanations for why that was the future of RTS AI, but I don't know if he is still working on similar things or where to find information on this. iirc the AI could learn how to micro by playing against itself over and over.
On March 10 2016 23:35 B-royal wrote: What will be the most difficult in my opinion is to have the AI make decisions such as where to attack, when to attack, multi-pronged attacks, when to get certain units and how to use spells such as dark swarm properly. It seems to me like it would be fairly easy to trick and abuse the behavior of the AI.
I've already played against a bot in BW, and engagements were actually the bot's biggest strength. It's constantly scouting with zerglings and has an incredible overlord spread for perfect map awareness, which allows it to always perfectly surround your army or set up the best concave possible.
As for things like swarm: how many times did your defilers die before casting it because you weren't selecting/clicking fast enough?
See, the problem I have with this is that the AI will have such a big advantage through 'mechanics' alone. It's much more interesting in Go because there is no difference in execution; tactics and strategy are all that matter there.
Even though there are a lot more possible "board states" in SC2, I am not sure that really matters in the end if you theoretically have a player with unlimited APM and attention.
But hey, I obviously have very little idea about it, and when Google says StarCraft would be the next step, maybe it's harder than I think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactics/strategy, which would be weird though)
Learning by iteration and brute-forcing are not the same thing. On the same note: chess is far from solved.
There is a continuous evolution and progression of chess engines, the best one currently being 'Komodo'.
From the Wiki: "Komodo heavily relies on evaluation rather than depth". I've been following the TCEC tournaments and heard the developers talk about certain aspects of the making of AI.
Evaluation is derived not from knowing that a certain position wins 58% of the games, but from piece values and piece placement plus structures.
If you have a chain of pawns (x = pawn, o = empty space):

oxo   oox   oxo
xoo   xxo   xox
They do not have an evaluation of 3 (1 per pawn), but more than 3. In the same way, an isolated pawn (a pawn with no friendly pawn on either adjacent diagonal) is usually very weak and worth less than 1.
Why is this important? Being able to evaluate a situation by dividing up the area/map is almost directly transferable to StarCraft. It can be used for unit placement when scouting, defending chokepoints, building placement, or engagement angles (without too much processing, even).
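To make the structure idea concrete, here's a toy evaluation in Python. The bonus/penalty numbers are invented for illustration (they are not Komodo's actual terms): connected pawns score above their raw material value, isolated pawns below it.

```python
# Toy structure-aware pawn evaluation. Values are illustrative only:
# +0.15 for a pawn with a neighbor on an adjacent file, -0.25 if isolated.
def pawn_structure_score(files_with_pawns):
    """files_with_pawns: list of file indices (0-7) holding our pawns."""
    occupied = set(files_with_pawns)
    score = 0.0
    for f in files_with_pawns:
        value = 1.0  # raw material value of a pawn
        if (f - 1) in occupied or (f + 1) in occupied:
            value += 0.15  # connected-pawn bonus
        else:
            value -= 0.25  # isolated-pawn penalty
        score += value
    return round(score, 2)
```

So two connected pawns evaluate above 2 (`pawn_structure_score([3, 4])` gives 2.3), while two isolated pawns evaluate below 2 (`pawn_structure_score([2, 5])` gives 1.5), which is the post's point: the structure, not just the piece count, drives the number.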
As for incomplete information: it is not all that hard. You don't need to know all units produced to calculate the possible strategies or tech trees available at a given time. I know it seems like it, but you don't have complete information in chess or Go either (formally they are games of complete information, but you can't hold all possible iterations and their payoffs in memory).
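A rough sketch of what that inference could look like, with made-up tech names, build times, and prerequisites (real timings would come from the game data): given the elapsed game time, prune everything the opponent cannot possibly have finished yet.

```python
# Hypothetical tech-tree pruning from build times. The table is invented
# for illustration; entries are (prerequisites, build time in seconds).
TECH_PREREQS = {
    "spawning_pool": ([], 65),
    "lair": (["spawning_pool"], 100),
    "spire": (["lair"], 120),
}

def earliest_completion(tech, prereqs=TECH_PREREQS):
    """Earliest second at which `tech` can possibly be finished."""
    deps, build_time = prereqs[tech]
    return build_time + max((earliest_completion(d, prereqs) for d in deps),
                            default=0)

def feasible_tech(elapsed_seconds, prereqs=TECH_PREREQS):
    """Everything the opponent could conceivably have at this point."""
    return {t for t in prereqs
            if earliest_completion(t, prereqs) <= elapsed_seconds}
```

With these toy numbers, at 200 seconds the set of possible tech is `{"spawning_pool", "lair"}`: the spire chain needs at least 65 + 100 + 120 = 285 seconds, so it can be ruled out without scouting anything.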
For this to be an interesting challenge for DeepMind, the AI would have to be limited to 400 APM, and it would have to interact through a virtual keyboard and mouse, i.e. it would have to actually drag the cursor to box units, etc., so it can't do micro that is (in principle) impossible for humans to do.
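One simple way such an APM cap could be enforced is a token bucket; this is a sketch with arbitrary cap and burst numbers, not anything DeepMind has described. The agent may only act when a token is available, and tokens refill at the capped rate.

```python
# Token-bucket APM limiter (illustrative). At apm_cap=400, tokens refill
# at 400/60 per second; `burst` bounds how many queued actions can fire
# back-to-back.
class ApmLimiter:
    def __init__(self, apm_cap=400, burst=10):
        self.rate = apm_cap / 60.0   # tokens per second
        self.burst = burst
        self.tokens = float(burst)
        self.last = 0.0

    def try_act(self, now):
        """Return True if an action is allowed at time `now` (seconds)."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

For example, a limiter built with `apm_cap=60, burst=1` allows one action per second: an action at t=0 succeeds, a second attempt at t=0.5 is refused, and by t=1.0 a token has refilled.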
On March 11 2016 03:44 The_Red_Viper wrote: See the problem i have with this is that the Ai will have such a big advantage through 'mechanics' alone. It's much more interesting in GO because there is no difference in execution, tactics and strategy is all that matters here.
Even though there are a lot more possible "board states" in sc2, i am not sure if that really matters in the end if you theoretically have a player with unlimited APM and attention.
But hey, i have obviously very little idea about it and when google says starcraft would be the next step, maybe it's harder than i think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactis/strategy, which would be weird though)
How would it be weird to limit the mechanics? The goal is to be 'smarter' than a human; without limiting mechanics it wouldn't really prove anything or be an accomplishment. I imagine they would even want to limit the mechanics so that they're slightly below the absolute best players mechanically. Attention is a resource in SC2, and I think it'll be hard to give the AI imperfect mini-map awareness or imperfect mouse accuracy without creating too complicated a model, but things like actual keypresses per second and cursor speed will be easy to limit.
I suspect there would be a variety of cheese tactics where the AI could simply out-micro the players to such an extreme degree that it could always win in the first 10 minutes.
A conservative lower bound on the state space of Brood War is 10^1685. This is many orders of magnitude above the state space of Go, which is 10^170. What's more, the branching factor is 10^50 to 10^200, compared to <360 for Go.
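For a sense of scale, the gap between those two bounds in plain arithmetic:

```python
# Exponents as quoted above: Brood War >= 10^1685, Go ~ 10^170.
bw_exponent, go_exponent = 1685, 170
gap = bw_exponent - go_exponent  # the BW bound is 10^gap times the Go state space
print(gap)
```

That is, even the conservative bound puts Brood War at 10^1515 times as many states as Go.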
On March 11 2016 02:30 Cuce wrote: Flash hit the key point I think.
Go and chess are games of complete information, whereas StarCraft is a game of incomplete information. Furthermore, getting more information usually means spending more resources on getting it.
Sure, a computer can calculate possible situations from incomplete information, but matching what we call the metagame and the intuition of pro players will take a really long time of self-learning and data collecting on the computer's part.
StarCraft is what we call a POMDP (partially observable Markov decision process). There are algorithms for tackling these types of problems, for example recurrent neural networks, but no one has tried applying them to something as complex as a full game of StarCraft.
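As a minimal illustration of the recurrent idea (toy fixed weights, not a trained network): the hidden state summarizes the observation history, standing in for the unobservable parts of the game state, and the policy reads that memory rather than the raw observation.

```python
import math

# Toy recurrent policy for a POMDP. The hidden state h is a running
# summary of past observations; weights are fixed illustrative values.
def step(h, obs, w_h=0.5, w_o=1.0):
    """One recurrent update: fold the new observation into the memory h."""
    return math.tanh(w_h * h + w_o * obs)

def act(h, threshold=0.0):
    """Policy head: choose an action from the memory, not the observation."""
    return "attack" if h > threshold else "scout"

def run(observations):
    h = 0.0
    actions = []
    for obs in observations:
        h = step(h, obs)
        actions.append(act(h))
    return actions
```

The point is only structural: because h carries history, the same current observation can lead to different actions depending on what was seen earlier, which is exactly what a partially observable game demands.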
It would make much more sense for DeepMind to have a go at DotA; in a game where mechanics mean as much as they do in StarCraft, an AI will of course be able to crush humans, it's just a question of time.
On March 11 2016 03:44 The_Red_Viper wrote: See the problem i have with this is that the Ai will have such a big advantage through 'mechanics' alone. It's much more interesting in GO because there is no difference in execution, tactics and strategy is all that matters here.
Even though there are a lot more possible "board states" in sc2, i am not sure if that really matters in the end if you theoretically have a player with unlimited APM and attention.
But hey, i have obviously very little idea about it and when google says starcraft would be the next step, maybe it's harder than i think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactis/strategy, which would be weird though)
How would it be weird to limit the mechanics? The goal is to be 'smarter' than a human, without limiting mechanics it wouldn't really prove anything or be an accomplishment. I imagine they would want to even limit the mechanics so that they're slightly below the absolute best players mechanically. Attention is a resource in SC2 and I think it'll be hard to give the AI imperfect mini-map awareness or imperfect mouse-accuracy without creating too complicated of a model, but things like actual keypresses a second and cursor speed will be easy to limit.
Because mechanics are such a big part of StarCraft, by far the biggest. So how do we really make sure the AI didn't win through mechanics? It's impossible (imo) to build it exactly at the sweet spot. Attention is probably an even bigger deal than APM itself. The only real way to make sure "it is fair" is to make the AI use the same hardware: mouse, keyboard and monitor. If you don't do that, then the result is questionable at best as far as I can tell.
edit: and even then you will get a device which is superior to human flesh, so I dunno. AI vs AI would be interesting to watch though; I would imagine tactics and strategy would be a way bigger deal there because the mechanical part could be made exactly even.
Imho, I don't know if we humans actually stand a chance here. After watching the Go games, AlphaGo's play style feels like something next-level to me. In the two games played, the bot fell behind pretty badly in the early-mid game, but it won anyway by out-calculating Lee Sedol in small skirmishes. It feels like playing someone with perfect blink stalker micro: no matter how bad his position is, as soon as his blink is ready you start to trade badly here and there. Soon you find yourself in an awkward position where you cannot walk out of your base and cannot expand either.
If the DeepMind team goes full try-hard mode, a micro bot can out-micro human players pretty hard, which is nothing challenging for them. Personally, I would love to see a bot that plays like a human, fetching information from the game through the output image instead of from the computer's memory; that might make the game fair.
On March 11 2016 03:09 disciple wrote: This match would have number of interesting implications chief among witch are BO decisions. If AI is strictly superior microing units theres no reason not to assume that it will try taking advantage of this and go for 1 base all-ins most of the time in order to force micro intensive early games. It would be cool if the AI has some doubt about his opponents skill and actually needs to confirm its superiority in micro in order to feel confident in winning and going for all ins. Humans already do that as we all know from Bisu being annoying as much as possible with his scouting probe. Now imagine AI controlling this, it will never die by mistake.
I think the AI should go for the late game instead. It has not only perfect micro but also perfect mechanics (maybe not intuitive, predictive macro, but still), perfect multitasking, and a perfect minimap. More stuff to do means more advantages for the AI.
Yes, more time means the player will have more options and tricks to pull off a win, but perfect micro can shut down quite a lot of stuff.
Like many people here, I think it's weird to compare an AI that can bypass the physical mechanics of the game to a chess or go computer.
What I love about Starcraft, and what makes it my favorite esport, is that it's a *physical sport* in addition to a strategy game. If you take away the need to physically manipulate the mouse and keyboard, it isn't really the same game. That's why it's different from chess, or go, or poker, or hearthstone.
The AI-vs-AI competitions are still kinda cool though.
1. They should do it with BWAPI because SC2 is lame like that (it doesn't have an API to interface code<->game).
2. There's been TONS of theorycrafting on RTS AI and their limitations. link Two big differences between turn-based games and RTS are real-time computational optimization (which figures far less in turn-based AI) and, as Flash rightly states, finite information.
1. They should do it with BWAPI because SC2 is lame like that (it doesn't have an API to interface code<->game).
2. There's been TONS of theorycrafting on RTS AI and their limitations. link Two big differences between turn-based games and RTS, are real-time computational optimizations (which figure far-less in turn-based AI), and, as Flash rightly states, finite information.
Their limitations NOW, you mean. A sophisticated AI built to play SC2 (when it is ready) will destroy any player easily. Regardless, you don't factor in the crazy levels of micro you can pull off with infinite APM. Dropping three areas at once while still macroing perfectly WHILE stutter-step microing each drop is something a human will never be able to do, yet it is feasible that a computer could do those things.
1. They should do it with BWAPI because SC2 is lame like that (it doesn't have an API to interface code<->game).
2. There's been TONS of theorycrafting on RTS AI and their limitations. link Two big differences between turn-based games and RTS, are real-time computational optimizations (which figure far-less in turn-based AI), and, as Flash rightly states, finite information.
Their limitations NOW you mean. A sophisticated AI built to play SC2 (when it is ready) will destroy any player easily. Regardless you don't factor in the crazy levels of micro you can pull off with infinite APM. Dropping three areas at once while still macro'ing perfectly WHILE stutter step micro'ing each drop is something a human will never be able to do yet it is feasible that a computer could potentially do those things.
I feel like you can solve that by only allowing things that are actually possible, like the computer can't be looking at 3 screens at once.
On March 11 2016 03:44 The_Red_Viper wrote: See the problem i have with this is that the Ai will have such a big advantage through 'mechanics' alone. It's much more interesting in GO because there is no difference in execution, tactics and strategy is all that matters here.
Even though there are a lot more possible "board states" in sc2, i am not sure if that really matters in the end if you theoretically have a player with unlimited APM and attention.
But hey, i have obviously very little idea about it and when google says starcraft would be the next step, maybe it's harder than i think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactis/strategy, which would be weird though)
How would it be weird to limit the mechanics? The goal is to be 'smarter' than a human, without limiting mechanics it wouldn't really prove anything or be an accomplishment. I imagine they would want to even limit the mechanics so that they're slightly below the absolute best players mechanically. Attention is a resource in SC2 and I think it'll be hard to give the AI imperfect mini-map awareness or imperfect mouse-accuracy without creating too complicated of a model, but things like actual keypresses a second and cursor speed will be easy to limit.
Because mechanics are such a big part about starcraft. By far the biggest. So how do we really make sure that the Ai didn't win through mechanics? It's impossible (imo) to build it exactly at the sweet spot. Attention is probably even a bigger deal than apm itself. The only real way to make sure "it is fair" is to make the AI use the same hardware, mouse, keyboard and monitor. If you don't do that then the result is questionable at best as far as i can tell
edit: and even then you will get a device which is superior to human flesh, so i dunno.. AI vs AI would be interesting to watch though, i would imagine tactis and strategy would be a way bigger deal there because the mechanical part could be made exactly even
I don't get what you mean by making the AI use a mouse, keyboard and monitor. The AI would still be able to move them with precision and speed far beyond a human's. Speed and precision are big parts of SC2, but at the top level they aren't usually what makes players win. If an AI restricted to the mechanics of an average progamer beats a top-level progamer, then it wouldn't be its mechanics that made it win.
On March 11 2016 03:44 The_Red_Viper wrote: See the problem i have with this is that the Ai will have such a big advantage through 'mechanics' alone. It's much more interesting in GO because there is no difference in execution, tactics and strategy is all that matters here.
Even though there are a lot more possible "board states" in sc2, i am not sure if that really matters in the end if you theoretically have a player with unlimited APM and attention.
But hey, i have obviously very little idea about it and when google says starcraft would be the next step, maybe it's harder than i think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactis/strategy, which would be weird though)
How would it be weird to limit the mechanics? The goal is to be 'smarter' than a human, without limiting mechanics it wouldn't really prove anything or be an accomplishment. I imagine they would want to even limit the mechanics so that they're slightly below the absolute best players mechanically. Attention is a resource in SC2 and I think it'll be hard to give the AI imperfect mini-map awareness or imperfect mouse-accuracy without creating too complicated of a model, but things like actual keypresses a second and cursor speed will be easy to limit.
Because mechanics are such a big part about starcraft. By far the biggest. So how do we really make sure that the Ai didn't win through mechanics? It's impossible (imo) to build it exactly at the sweet spot. Attention is probably even a bigger deal than apm itself. The only real way to make sure "it is fair" is to make the AI use the same hardware, mouse, keyboard and monitor. If you don't do that then the result is questionable at best as far as i can tell
edit: and even then you will get a device which is superior to human flesh, so i dunno.. AI vs AI would be interesting to watch though, i would imagine tactis and strategy would be a way bigger deal there because the mechanical part could be made exactly even
I don't get what you mean by making the AI use a mouse, keyboard and monitor. The AI would still be able to move them with precision and speed far beyond a human. Speed and precision are big parts of SC2 but at the top-level they aren't what makes players usually win. If an AI that is restricted to the mechanics of an average progamer beats a top-level progamer then wouldn't be its mechanics that made it win.
I mean that the AI would have the same mechanical restrictions as the typical human. We can only interact with the game through the hardware: mouse, keyboard and monitor. The AI probably wouldn't; it could be everywhere at once (you as a human cannot, because the monitor simply doesn't make it possible, just as the mouse doesn't make it possible to control different groups at once, etc.). If the human had another interface (controlling the game directly with the brain or something similar), this maybe wouldn't be a limiting factor anymore.
But yeah, if you can somehow make it so the AI doesn't have better mechanics/multitasking/attention than the average pro player, then maybe this would be interesting (even though I'm not so sure about that either: even though StarCraft might have more possible "board states", I would imagine most of them are completely irrelevant and the actual depth of the game isn't anywhere near Go, for example). It being a game with limited information is the only interesting aspect of all of this that I can see, tbh.
On March 11 2016 03:44 The_Red_Viper wrote: See the problem i have with this is that the Ai will have such a big advantage through 'mechanics' alone. It's much more interesting in GO because there is no difference in execution, tactics and strategy is all that matters here.
Even though there are a lot more possible "board states" in sc2, i am not sure if that really matters in the end if you theoretically have a player with unlimited APM and attention.
But hey, i have obviously very little idea about it and when google says starcraft would be the next step, maybe it's harder than i think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactis/strategy, which would be weird though)
How would it be weird to limit the mechanics? The goal is to be 'smarter' than a human, without limiting mechanics it wouldn't really prove anything or be an accomplishment. I imagine they would want to even limit the mechanics so that they're slightly below the absolute best players mechanically. Attention is a resource in SC2 and I think it'll be hard to give the AI imperfect mini-map awareness or imperfect mouse-accuracy without creating too complicated of a model, but things like actual keypresses a second and cursor speed will be easy to limit.
Because mechanics are such a big part about starcraft. By far the biggest. So how do we really make sure that the Ai didn't win through mechanics? It's impossible (imo) to build it exactly at the sweet spot. Attention is probably even a bigger deal than apm itself. The only real way to make sure "it is fair" is to make the AI use the same hardware, mouse, keyboard and monitor. If you don't do that then the result is questionable at best as far as i can tell
edit: and even then you will get a device which is superior to human flesh, so i dunno.. AI vs AI would be interesting to watch though, i would imagine tactis and strategy would be a way bigger deal there because the mechanical part could be made exactly even
I don't get what you mean by making the AI use a mouse, keyboard and monitor. The AI would still be able to move them with precision and speed far beyond a human. Speed and precision are big parts of SC2 but at the top-level they aren't what makes players usually win. If an AI that is restricted to the mechanics of an average progamer beats a top-level progamer then wouldn't be its mechanics that made it win.
I mean that the AI would have the same restrictions mechanically as the tpyical human. We only can interact with the game with the help of the hardware, mouse, keyboard and monitor. The AI probably wouldn't do that, it could be everywhere at once (you as human cannot because the monitor simply doesn't make it possible, just as the mouse doen't make it possible to control different groups at once, etc) If the human had another device (control the game directly with the brain or something similar) this maybe wouldn't be a limiting factor anymore.
But yeah if you can somehow make it so that the AI doesn't have better mechanics/multitasking/attention than the average pro player, then maybe this would be interesting (even though i am not so sure about that either, even though starcraft might have more possible "board states", i would imagine that most of them are completely irrelevant and that the actual depth of the game isn't anywhere near GO for example) It being a game with limited information is the only interesting aspect about all of this i can see tbh
The number of game states in StarCraft is several orders of magnitude higher than in Go. Even if you somehow got rid of the irrelevant ones, like obviously stupid openings (which really is something the AI would have to work out for itself), there would still be several orders of magnitude more game states in StarCraft. Regardless of what you think about the strategic depth of the game, the sheer number of game states makes things far more complicated for an AI to figure out.
On March 11 2016 03:44 The_Red_Viper wrote: See the problem i have with this is that the Ai will have such a big advantage through 'mechanics' alone. It's much more interesting in GO because there is no difference in execution, tactics and strategy is all that matters here.
Even though there are a lot more possible "board states" in sc2, i am not sure if that really matters in the end if you theoretically have a player with unlimited APM and attention.
But hey, i have obviously very little idea about it and when google says starcraft would be the next step, maybe it's harder than i think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactis/strategy, which would be weird though)
How would it be weird to limit the mechanics? The goal is to be 'smarter' than a human, without limiting mechanics it wouldn't really prove anything or be an accomplishment. I imagine they would want to even limit the mechanics so that they're slightly below the absolute best players mechanically. Attention is a resource in SC2 and I think it'll be hard to give the AI imperfect mini-map awareness or imperfect mouse-accuracy without creating too complicated of a model, but things like actual keypresses a second and cursor speed will be easy to limit.
Because mechanics are such a big part about starcraft. By far the biggest. So how do we really make sure that the Ai didn't win through mechanics? It's impossible (imo) to build it exactly at the sweet spot. Attention is probably even a bigger deal than apm itself. The only real way to make sure "it is fair" is to make the AI use the same hardware, mouse, keyboard and monitor. If you don't do that then the result is questionable at best as far as i can tell
edit: and even then you will get a device which is superior to human flesh, so i dunno.. AI vs AI would be interesting to watch though, i would imagine tactis and strategy would be a way bigger deal there because the mechanical part could be made exactly even
I don't get what you mean by making the AI use a mouse, keyboard and monitor. The AI would still be able to move them with precision and speed far beyond a human. Speed and precision are big parts of SC2 but at the top-level they aren't what makes players usually win. If an AI that is restricted to the mechanics of an average progamer beats a top-level progamer then wouldn't be its mechanics that made it win.
I mean that the AI would have the same restrictions mechanically as the tpyical human. We only can interact with the game with the help of the hardware, mouse, keyboard and monitor. The AI probably wouldn't do that, it could be everywhere at once (you as human cannot because the monitor simply doesn't make it possible, just as the mouse doen't make it possible to control different groups at once, etc) If the human had another device (control the game directly with the brain or something similar) this maybe wouldn't be a limiting factor anymore.
But yeah if you can somehow make it so that the AI doesn't have better mechanics/multitasking/attention than the average pro player, then maybe this would be interesting (even though i am not so sure about that either, even though starcraft might have more possible "board states", i would imagine that most of them are completely irrelevant and that the actual depth of the game isn't anywhere near GO for example) It being a game with limited information is the only interesting aspect about all of this i can see tbh
The number of game states in StarCraft is several magnitudes higher than Go, even if you somehow got rid of the irrelevant ones like obviously stupid openings (which really is something the AI would have to work out for itself), there would still be several magnitudes more game states for StarCraft. Regardless of what you think about the strategic depth of the game, the sheer number of game states makes things far more complicated for AI to figure out.
Just to be clear: if you place building X at place Y or at place Z, those are two different "board states", right? Even though placing your first supply depot in the enemy base probably isn't all that smart?
I get that it isn't "intuitive" for the AI like it is for a human being, but there surely are tons and tons of these things in SC2. Even something like moving my army (or even a single marine) a few tiles to the left probably won't be the biggest deal, but it surely counts as a different "board state"? If we want to play 100% perfectly these things have to be considered, but overall they probably don't matter at all, I would imagine. I don't think the same is true for Go? (I have no idea about Go, though.) My statement was probably just this: a high-level Go player surely possesses more tactical/strategic understanding than a StarCraft professional; you don't have to be highly intelligent to play StarCraft at a high level, and the same probably isn't true for Go/chess, I think? (I can see why this isn't all that relevant to the main topic though ^^)
The number of game states doesn't really matter anymore, since we aren't using brute-force calculation and there are clear ways to evaluate strength of play (economic advantage, supply advantage).
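A hypothetical evaluator in that spirit (the weights and field names are invented for illustration): score a position directly from economic and supply advantage instead of enumerating states.

```python
# Toy StarCraft position evaluator. Positive means we're ahead.
# The 0.6/0.4 weights are arbitrary illustrative choices.
def evaluate(state):
    econ = state["my_income"] - state["their_income"]
    supply = state["my_supply"] - state["their_supply"]
    return 0.6 * econ + 0.4 * supply
```

The design point mirrors the chess discussion earlier in the thread: a handful of measurable features substitutes for searching an astronomically large state space.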
On March 11 2016 03:44 The_Red_Viper wrote: See the problem i have with this is that the Ai will have such a big advantage through 'mechanics' alone. It's much more interesting in GO because there is no difference in execution, tactics and strategy is all that matters here.
Even though there are a lot more possible "board states" in sc2, i am not sure if that really matters in the end if you theoretically have a player with unlimited APM and attention.
But hey, i have obviously very little idea about it and when google says starcraft would be the next step, maybe it's harder than i think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactis/strategy, which would be weird though)
How would it be weird to limit the mechanics? The goal is to be 'smarter' than a human, without limiting mechanics it wouldn't really prove anything or be an accomplishment. I imagine they would want to even limit the mechanics so that they're slightly below the absolute best players mechanically. Attention is a resource in SC2 and I think it'll be hard to give the AI imperfect mini-map awareness or imperfect mouse-accuracy without creating too complicated of a model, but things like actual keypresses a second and cursor speed will be easy to limit.
Because mechanics are such a big part about starcraft. By far the biggest. So how do we really make sure that the Ai didn't win through mechanics? It's impossible (imo) to build it exactly at the sweet spot. Attention is probably even a bigger deal than apm itself. The only real way to make sure "it is fair" is to make the AI use the same hardware, mouse, keyboard and monitor. If you don't do that then the result is questionable at best as far as i can tell
edit: and even then you will get a device which is superior to human flesh, so i dunno.. AI vs AI would be interesting to watch though, i would imagine tactis and strategy would be a way bigger deal there because the mechanical part could be made exactly even
I don't get what you mean by making the AI use a mouse, keyboard and monitor. The AI would still be able to move them with precision and speed far beyond a human. Speed and precision are big parts of SC2 but at the top-level they aren't what makes players usually win. If an AI that is restricted to the mechanics of an average progamer beats a top-level progamer then wouldn't be its mechanics that made it win.
I mean that the AI would have the same restrictions mechanically as the tpyical human. We only can interact with the game with the help of the hardware, mouse, keyboard and monitor. The AI probably wouldn't do that, it could be everywhere at once (you as human cannot because the monitor simply doesn't make it possible, just as the mouse doen't make it possible to control different groups at once, etc) If the human had another device (control the game directly with the brain or something similar) this maybe wouldn't be a limiting factor anymore.
But yeah if you can somehow make it so that the AI doesn't have better mechanics/multitasking/attention than the average pro player, then maybe this would be interesting (even though i am not so sure about that either, even though starcraft might have more possible "board states", i would imagine that most of them are completely irrelevant and that the actual depth of the game isn't anywhere near GO for example) It being a game with limited information is the only interesting aspect about all of this i can see tbh
The number of game states in StarCraft is several magnitudes higher than Go, even if you somehow got rid of the irrelevant ones like obviously stupid openings (which really is something the AI would have to work out for itself), there would still be several magnitudes more game states for StarCraft. Regardless of what you think about the strategic depth of the game, the sheer number of game states makes things far more complicated for AI to figure out.
Just to be clear, let's say you place building X at place Y or Z, that are two different "board states" right? Even if it means that placing your first supply depot in the enemy base probably isn't all that smart?
I get that it isn't "intuitive" for the AI like for a human being, but there surely are tons and tons of these things in sc2. Even something like: I move my army (or even single marine) a few tiles on the left, it probably won't be the biggest deal but it surely is considered a different "board state" ? If we want to play 100% perfectly these things have to be considered, but overall it probably doesn't matter at all i would imagine. I don't think the same is true for GO? (i have no idea about GO though) My statement was probably just simply this: A high lvl GO players surely possesses more tactical/strategical understanding than a starcraft professional, you don't have to be highly intelligent to play starcraft at a high lvl, the same probably isn't true for GO/chess. i think? (i can see why this isn't all that relevant to the main topic though ^^)
Well, your first depot position is a bad example because it's actually very important (and even if it wasn't, the AI would probably still figure out the best place for it). I get what you're saying, though: if you place your 4th Gateway one space to the left, it's a trivially different game state, of a kind I'm sure features in Go too, seeing as the board is symmetric. Even if you remove stuff like that and simplify the model as much as possible, you're still going to have a ridiculous number of game states. StarCraft BW and SC2 both even have some random factors (more so in BW); even though they are minor, they also increase the complexity. How much 'human' strategy is needed is up for debate, but for an AI with mechanical limits, conquering StarCraft will be far, far more difficult than Go.
Honestly, I'm pretty sure it would still obliterate any player even with a strong APM cap. That's the whole point of machine learning: cap it at 100 APM, and it will still find the single most optimal use for every one of those actions. Add zero reaction time and perfect decision-making, and I can't even imagine how Flash is supposed to win.
Actually, an interesting challenge would probably be to find the minimum APM it needs to win...
On March 11 2016 07:11 disciple wrote: Considering the careers Savior and Stork had, I think an APM somewhere between 80 and 120 will be sufficient
Lots of pro APM isn't really effective APM. Hyuk once had all of 4 zerglings to defend when Flash caught him unaware with a rush. He was hitting 800 apm doing god knows what.
On March 11 2016 08:37 Shinokuki wrote: Why is this in sc2 section if alpha go is playing bw and flash mostly played bw and won lots of championships..
Because everybody knows OP so telling him he did something wrong is bad in TL's eyes
like other people said, apm would have to be limited. otherwise, the ai would win on apm alone. It could macro perfectly while nonstop microing 100 different units in 100 different spots all at once.
On March 11 2016 08:37 Shinokuki wrote: Why is this in sc2 section if alpha go is playing bw and flash mostly played bw and won lots of championships..
Because it's very relevant to SC2 as well, and more people will see it in the SC2 section.
Thankfully for team humans, Terran is probably best equipped to fight an AI that would seek to abuse unit control and early builds, and Flash's strength is prediction and forcing games to go longer.
Of course "playing StarCraft optimally" would be really cool - I would love it if the game could be 'solved'. Similar to watching some speed-runs that play full-tilt, ultra-risky, and after the 500th try they finally get that perfectly lucky run (such as the Deus Ex 1 or Jedi Knight: Jedi Outcast runs on SDA).
The problem is, DeepMind playing Go and DeepMind playing Starcraft is not a valid comparison. 'Solving' Go doesn't guarantee you have 'solved' Starcraft, because Starcraft presents bigger, different types of challenges. Here are the two big ones (I'm no comp-sci or AI expert, btw, these are the obvious ones).
1. Limited information (which is the point Flash is making). In Go, both players can see the entire game state; in SC you can't. You have to react pre-emptively, you have to get observers just in case. Working with limited information is hard (and don't get me started on mind-games, series strategy, or going on tilt). For example, there are two places your opponent can expand to, and you can only afford to scan one of them. You can use the process of elimination, but how the hell will the computer teach itself to do that? It's a new category of idea. It's not just 'push the buttons slightly faster and more precisely'; it needs to THINK, it needs to teach itself this new 'process-of-elimination' mechanic. Sure, you can hard-code it to play assuming the opponent is using a rational build order, but that predictability is straightforward to subvert.
2. Computational throughput. The whole fun of RTS over turn-based is the real-time trade-off: "Do I commit to my current decision?" OR "Do I hold out for a better decision?" Computing more takes more resources (time), and if you wait too long to act, they'll kill you. (I don't enjoy Chess for this reason.) In SC, if I send my units to the far side of the map and then change my mind, I can undo the 'badness' of the situation if I change my mind soon enough. In Chess there is no undoing your move: you commit, and that's it. Now how the hell do you program an AI to adjust its computational depth on the fly, so that sometimes it thinks a lot and other times it knows to just act? (Flash, for example, can sim-city when he needs to, but other times just throws down depots messily so he doesn't get mentally slowed down.) The way humans manage this balance is to practice so much that they delegate 'thinking' to 'instinct'; they don't think about the right move, they act according to how they feel in the moment. And it works, because 'how they feel' is trained to instinctively make the right decisions. They don't compute, they act on impulse. THAT is HARD for an AI!
Of course AI could multi-task and micro better than humans, but the real challenge is dealing with limited information (scouting, assuming, and adjusting, as opposed to sticking to your cookie-cutter 'optimal strategy'), and having good decision-making fast enough (rather than searching DEEP, which takes time). Oh and needless to say, Go's mechanics are vastly simpler than that of SC. Economy, defense, attack, tech switching, positioning, harassment ... it's a whole 'nother level of difficulty to program for!
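The "process-of-elimination" idea from point 1 can be sketched as a toy belief update; all spot names and probabilities below are made up for illustration, and a real bot would track far more than two candidate locations:

```python
# Hypothetical sketch: keep a probability for each candidate expansion spot,
# zero out (eliminate) any spot a scan has shown to be empty, and renormalize.

def eliminate(beliefs, scouted_empty):
    """beliefs: dict of spot -> probability; scouted_empty: spots seen empty."""
    updated = {s: (0.0 if s in scouted_empty else p) for s, p in beliefs.items()}
    total = sum(updated.values())
    if total == 0:
        return updated  # everything eliminated: no hidden base exists
    return {s: p / total for s, p in updated.items()}

# Two possible expansions, one scan available: scanning "left" and finding
# nothing means the hidden base must be at "right".
beliefs = {"left": 0.5, "right": 0.5}
beliefs = eliminate(beliefs, {"left"})
```

The interesting part is exactly what the post says: the update rule itself is trivial, but a self-taught agent has to discover that this kind of inference is worth doing at all.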
On March 11 2016 08:49 travis wrote: like other people said, apm would have to be limited. otherwise, the ai would win on apm alone. It could macro perfectly while nonstop microing 100 different units in 100 different spots all at once.
On March 11 2016 00:42 BisuDagger wrote: hero can operate at about 450-500 apm on a consistent basis. To make this fair, the actions per minute should be clamped to <=600
The sad thing is it doesn't take 600 apm to have perfect micro. In the end it boils down to accuracy and efficiency.
btw, does Brood War have a hard limit on commands accepted per frame? Like 12 per frame or something.
I wouldn't be surprised if there was, or at least a limit to buffering.
There is a buffer, yes. When you exceed it, StarCraft: Brood War won't process any further commands, so you cannot simply spam APM all the time.
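The buffering behaviour described here can be sketched as a toy rate limiter; the 12-per-frame figure is just the guess from the post above, not a confirmed BW constant:

```python
class CommandBuffer:
    """Toy per-frame command limiter: commands beyond the cap are dropped,
    mimicking the behaviour described above (exact BW limits are assumed)."""

    def __init__(self, max_per_frame=12):
        self.max_per_frame = max_per_frame
        self.accepted_this_frame = 0
        self.dropped = 0

    def submit(self, command):
        if self.accepted_this_frame >= self.max_per_frame:
            self.dropped += 1
            return False  # buffer full: command is not processed
        self.accepted_this_frame += 1
        return True

    def next_frame(self):
        self.accepted_this_frame = 0  # the cap resets every frame

buf = CommandBuffer(max_per_frame=12)
results = [buf.submit(("move", i)) for i in range(20)]  # 20 commands, one frame
```

Under such a cap, spamming 20 commands in one frame gets only 12 processed; the AI's advantage would come from never wasting any of those 12, not from raw volume.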
In general, for the people who think the bot can win based on its high APM alone: if that were true, then the Berkeley Overmind would have defeated Flash already. As a matter of fact, micro-management is currently the biggest issue in the top StarCraft AI bots. This has to do with the fact that micro-management is in the complexity class EXPTIME. So the main issue is deciding where to attack/move based on the information you have; high APM isn't going to help you if you don't know what to do with it.
Go is already a game with an impossibly big search tree for brute force. Even chess is. The classical approach of heuristics coupled with brute force solved chess, but it was never even Platinum in Go.
The only reason AIs are starting to beat Go players is a somewhat recent innovation in AI: deep learning. Over the last 10 years or so, there were several advancements in machine learning that made a gigantic leap possible in many fields where computers had always sucked. For instance: character recognition used to be a PitA, but nowadays you can write Python code that gets it right 99% of the time in a few minutes (a key ingredient is an optimization technique called backpropagation).
Even if you cap micro a lot, StarCraft isn't too much different from a combination of Go and a bunch of pattern recognition. That is precisely what machine learning solves. It's not easy though, there is a lot of clever training and parametrization to be done... But if they put it in their roadmap (with enough money), it will happen.
Oh, and imperfect information is not a problem at all. Even with a more standard (backtracking / brute force) approach, you only need to throw some probabilities around.
It's rather easy to write programs that play Poker well, for instance (discount the poker face though).
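As a toy illustration of the backpropagation technique mentioned above (XOR instead of character recognition, and with arbitrary hyperparameters), a minimal numpy network trained by backprop might look like this:

```python
import numpy as np

# Minimal backpropagation demo: a 2-8-1 sigmoid network learning XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # backward pass: gradients of mean squared error through the sigmoids
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates (learning rate 1.0, chosen arbitrarily)
    W2 -= 1.0 * (h.T @ d_out); b2 -= 1.0 * d_out.sum(0)
    W1 -= 1.0 * (X.T @ d_h);   b1 -= 1.0 * d_h.sum(0)
```

The loss should fall steadily as the network learns the XOR mapping; the same mechanism, scaled up massively, is what drives the recognition systems mentioned above.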
Imba Ai goes 3 rax reaper every game no matter what and wins every game
Don't say "solved". Chess is not solved, Go is not solved.
But the point that the breakthrough is learning is very interesting. SC2 may not be too different from Go. While it's not solved, chess can be played very well with brute-force processing power. It's also interesting that chess engines have a big game database; especially in the early game, the computer checks known positions instead of trying to work out everything from scratch. This way it can go deeper into positions that make sense and not bother looking at silly moves.
It's only a matter of time until artificial intelligence can defeat the greatest of us at all games. After that, they will only lack the ability to artistically express and describe things that are valuable to humanity, because they don't know what it's like to be human.
Eventually, they may be able to understand that, and create art, too.
That is why I support cybernetic enhancement for humanity. If we do not find a way to give our brains the ability to do what computers can do, then we will be doomed to exist as an inferior, weaker species while artificial intelligence takes care of everything for us.
On March 11 2016 10:54 vOdToasT wrote: It's only a matter of time until artificial intelligence can defeat the greatest of us at all games. After that, they will only lack the ability to artistically express and describe things that are valuable to humanity, because they don't know what it's like to be human.
Eventually, they may be able to understand that, and create art, too.
They could describe art, and they could create good art. The day they can enjoy it, I quit.
I think DeepMind wins this no contest with a few months' training. Nearly perfect micro and macro will make up for a lot of tactical errors and build order mistakes, especially in Brood War. After the AI builds a medic and marines it gets tough; once a dropship comes out, gg.
On March 11 2016 11:08 chipmonklord17 wrote: Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. It would cost less and probably be better received.
EDIT: Not saying this is poorly received, but imagine the hype if it was announced Google was getting into esports
On March 11 2016 11:08 chipmonklord17 wrote: Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. It would cost less and probably be better received.
EDIT: Not saying this is poorly received, but imagine the hype if it was announced Google was getting into esports
But that's the cool thing about Google... They're not doing things to polish their image, but to innovate. They're pushing the boundaries.
Sponsoring a team wouldn't really do that, hm? Sponsoring a team is just for PR.
On March 11 2016 10:35 rockslave wrote: Everyone is missing the point (including Flash).
Go is already a game with an impossibly big search tree for brute force. Even chess is. The classical approach of heuristics coupled with brute force solved chess, but it was never even Platinum in Go.
The only reason AIs are starting to beat Go players is a somewhat recent innovation in AI: deep learning. Over the last 10 years or so, there were several advancements in machine learning that made a gigantic leap possible in many fields where computers had always sucked. For instance: character recognition used to be a PitA, but nowadays you can write Python code that gets it right 99% of the time in a few minutes (a key ingredient is an optimization technique called backpropagation).
Even if you cap micro a lot, StarCraft isn't too much different from a combination of Go and a bunch of pattern recognition. That is precisely what machine learning solves. It's not easy though, there is a lot of clever training and parametrization to be done... But if they put it in their roadmap (with enough money), it will happen.
Oh, and imperfect information is not a problem at all. Even with a more standard (backtracking / brute force) approach, you only need to throw some probabilities around.
It's rather easy to write programs that play Poker well, for instance (discount the poker face though).
Deep learning needs a dataset for the AI to be trained though. For AlphaGo they trained two separate networks (one designed to predict the next move, and the other designed to predict the final winner) on 30 million discrete moves from games played by human experts. After that it trained itself by actually playing Go against itself a ridiculous number of times.
A Go game can be perfectly modelled by a simple list recording which intersection had a stone placed on it each turn; a StarCraft game can't be captured anywhere near that compactly, and it's going to be very hard to get enough useful data (replays) to significantly help with the training. And without the initial training it's going to have to learn mostly by playing against itself, which will be difficult because of the ridiculous number of game states. At least that's my understanding of things, I could be wrong, but it seems to be a lot harder than Go.
Just imagine an AI that follows Flash's timing builds advancing towards you. It would siege the exact number of tanks at the exact range needed to destroy your army, while advancing with the remaining unsieged units as you back off. Kind of like a tidal wave slowly advancing on you, but so beautifully smooth that you'd piss your pants trying to look for an opening. Gives me chills just thinking about that possibility. That said, I don't know enough about how DeepMind is programmed to comment on its ability, but I do know that Go is at its roots a game that could in theory be solved by maths. The only advantage pros had over AI in past years was that, until recently, there was no AI that could calculate every possible move. I'm not sure if this is how DeepMind works now, but if the AI is able to calculate every single variable in a game that follows mathematical rules, then a human shouldn't be able to win. StarCraft doesn't follow these rules, though, so I don't see AI being able to defeat the decision-making of a pro for a long time.
On March 11 2016 10:35 rockslave wrote: Everyone is missing the point (including Flash).
Go is already a game with an impossibly big search tree for brute force. Even chess is. The classical approach of heuristics coupled with brute force solved chess, but it was never even Platinum in Go.
The only reason AIs are starting to beat Go players is a somewhat recent innovation in AI: deep learning. Over the last 10 years or so, there were several advancements in machine learning that made a gigantic leap possible in many fields where computers had always sucked. For instance: character recognition used to be a PitA, but nowadays you can write Python code that gets it right 99% of the time in a few minutes (a key ingredient is an optimization technique called backpropagation).
Even if you cap micro a lot, StarCraft isn't too much different from a combination of Go and a bunch of pattern recognition. That is precisely what machine learning solves. It's not easy though, there is a lot of clever training and parametrization to be done... But if they put it in their roadmap (with enough money), it will happen.
Oh, and imperfect information is not a problem at all. Even with a more standard (backtracking / brute force) approach, you only need to throw some probabilities around.
It's rather easy to write programs that play Poker well, for instance (discount the poker face though).
The thing about Sc2 though is that it is different.
In Poker, or Go or Chess, when you move, you move. That's it. And a computer can process that. SC2 is different.
If I load up a drop and sit it outside your base, I don't have to drop. But I might. But the dropship might actually be empty. What do you do? What does the AI do? I might show extreme aggression, but be taking a hidden expansion. I could also show an expansion, but then cancel it or not make it and attack.
Unless the computer wins with perfect micro and macro, I think it would struggle against non-traditional builds, timing attacks and mind games.
The APM will most likely be restricted to around 200. An AI's APM is equal to its EPM; it does not waste clicks like progamers who spam boxing or clicking to increase their APM. So for guys like EffOrt who can go to around 450~500 APM, what is their actual EPM? Does it go beyond 200? That is what we need to consider for the AI.
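The APM-vs-EPM distinction can be sketched crudely: count an action as "effective" only if it differs from the previous one (a deliberately naive stand-in, since real EPM metrics are more involved):

```python
def effective_apm(actions, minutes):
    """Crude EPM estimate: drop consecutive repeats (selection spam, re-clicks).
    `actions` is a chronological list of hashable action descriptions."""
    effective = 0
    prev = None
    for a in actions:
        if a != prev:
            effective += 1
        prev = a
    return effective / minutes

# ~500 raw actions in one minute, but most are repeated selection spam:
spam = ["select_group_1"] * 10
real = ["move", "attack", "build_scv", "select_group_2"]
log = (spam + real) * 36  # 504 actions total
```

Here 504 raw APM collapses to 180 "effective" actions per minute, which is the gap between a spamming progamer and a bot whose every click counts.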
All whilst Blizzard has absolutely no interest in making their AI even remotely strategic or interesting in any way. Once again, thank god for community interest.
I would love to see an AI that dropped in different places, tried to deceive opponents, did real different build orders, and played map specific strategies, just as a person would.
On March 11 2016 14:47 beg wrote: @ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.
AlphaGo was fed 30 million moves, and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible, and even if it weren't, I'm quite sure you'd need drastically more replays to get training results on par with AlphaGo's initial training set. I don't think VODs would even be useful because they show so little information about the game state at any point in time; I think a replay is needed so it can observe the entire game state at every point in time.
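Sanity-checking the arithmetic above (both figures are the rough numbers quoted in the thread):

```python
# 30 million moves at ~200 moves per game implies the size of the
# supervised training set in whole games.
total_moves = 30_000_000   # AlphaGo's reported supervised training moves
moves_per_game = 200       # rough average length of a Go game
games_needed = total_moves // moves_per_game
```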
On March 11 2016 14:47 beg wrote: @ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.
AlphaGo was fed 30 million moves, and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible, and even if it weren't, I'm quite sure you'd need drastically more replays to get training results on par with AlphaGo's initial training set. I don't think VODs would even be useful because they show so little information about the game state at any point in time; I think a replay is needed so it can observe the entire game state at every point in time.
It would be nice if wherever Koreans play BW automatically saved the replay, scrambled the names, and sent it off to Google. Or imagine people at Google becoming frustrated because, for once, they do not have big data sets available for everything.
On March 11 2016 14:47 beg wrote: @ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.
AlphaGo was fed 30 million moves, and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible, and even if it weren't, I'm quite sure you'd need drastically more replays to get training results on par with AlphaGo's initial training set. I don't think VODs would even be useful because they show so little information about the game state at any point in time; I think a replay is needed so it can observe the entire game state at every point in time.
AlphaGo got off the ground with a big bank of games, but recently it's been improving purely through self-play.
I think if the DeepMind team put their effort into BW, they'll be able to achieve superhuman performance in a few years' time.
There are some ways that the problem is harder than Go - partial information, real time and a much more complex raw game state. On the other hand, there are some clear advantages an AI will have over people (APM, multitasking) which are not present in Go. It seems to me that if you can get an AI that makes decisions like a half decent human player, it will be able to press its advantages well beyond human competition.
On March 11 2016 10:35 rockslave wrote: Everyone is missing the point (including Flash).
Go is already a game with an impossibly big search tree for brute force. Even chess is. The classical approach of heuristics coupled with brute force solved chess, but it was never even Platinum in Go.
The only reason AIs are starting to beat Go players is a somewhat recent innovation in AI: deep learning. Over the last 10 years or so, there were several advancements in machine learning that made a gigantic leap possible in many fields where computers had always sucked. For instance: character recognition used to be a PitA, but nowadays you can write Python code that gets it right 99% of the time in a few minutes (a key ingredient is an optimization technique called backpropagation).
Even if you cap micro a lot, StarCraft isn't too much different from a combination of Go and a bunch of pattern recognition. That is precisely what machine learning solves. It's not easy though, there is a lot of clever training and parametrization to be done... But if they put it in their roadmap (with enough money), it will happen.
Oh, and imperfect information is not a problem at all. Even with a more standard (backtracking / brute force) approach, you only need to throw some probabilities around.
It's rather easy to write programs that play Poker well, for instance (discount the poker face though).
Deep learning needs a dataset for the AI to be trained though. For AlphaGo they trained two separate networks (one designed to predict the next move, and the other designed to predict the final winner) on 30 million discrete moves from games played by human experts. After that it trained itself by actually playing Go against itself a ridiculous number of times.
A Go game can be perfectly modelled by a simple list recording which intersection had a stone placed on it each turn; a StarCraft game can't be captured anywhere near that compactly, and it's going to be very hard to get enough useful data (replays) to significantly help with the training. And without the initial training it's going to have to learn mostly by playing against itself, which will be difficult because of the ridiculous number of game states. At least that's my understanding of things, I could be wrong, but it seems to be a lot harder than Go.
On the other hand, evaluating a stone in Go is a very hard problem: it may depend on the position of every other stone on the board. For StarCraft, the value of a base or a zealot is pretty simple to evaluate in comparison, and while zealots in a good position are better than zealots in a bad position, the positional relationships aren't anywhere near as complex as in Go.
Point being, you maybe can get away with a simplified game state representation.
On March 11 2016 15:02 ETisME wrote: Actually it makes me wonder what would two deepmind do if they were to play against each other. We may even see a whole new meta developing
Exactly this. With the way the AI learns, the most interesting development will be in the fact that it will not be constrained to any conventional build orders. It could semi-randomly develop completely new builds for specific match-ups on specific maps. I'm really looking forward to that.
Other than that, Deepmind should eventually win with stellar macro and micro, just by going 3 rax every game
You can have that today with human players if you remove the mechanical stress, leaving more room for actual thinking.
That's the core problem, and it's why StarCraft is boring to play and boring to watch for most people: mechanics play an overwhelming part in winning. You can get to GM just by cannon rushing or 4-gating mechanically well, and I'm sure a bot would win GSL just by worker rushing. That means players have to completely know their maps and choose more or less static build orders, because there's no time in the game to think.
The question is at what level the AI can access the game. Normally in AI research, the software cannot access the internal state of the game (or 3D scene). For instance, it should not be able to just read off the positions of the (visible) units. So for a true test, the AI should also move the camera, trying to figure out what it sees on screen, with a chance of missing some information (which happens all the time to humans in SC2). If it can simply access the game state like the current SC2 AI does, it is not a true test from my point of view.
Why are people talking about insane micro? Give it some unique quirks, perhaps, but "hurrr insane micro AI" is fucking stupid: it completely defeats the point if you give it perfect mechanics where it macros exactly on point and micros 10 stacks of 11 mutas at once. Pointless and stupid. I'm cringing reading comments discussing the micro mechanics and it being unstoppable.
Make it play like a human. Don't restrict the APM; that's not how these algorithms operate. Their EAPM would be close to 100% of their APM, so restrict that instead, to a human level. Make it an actual contest of natural ability: see if it can micro better through logical splitting, positioning, and general human-tier control, rather than by just maneuvering ridiculously. Make it execute build orders, rather than a 2-hatch muta all-in every game with impossible micro. Making it play like a human and compete in a way that's human-esque is what makes it interesting; otherwise no human can stop even a perfect 4-pool.
Besides that, I think if it went up against Flash soon, it'd be close at this stage, with Flash pulling ahead. However, if the bot were fed Brood War for 2 years, then hypothetically, as mentioned before, even a peak Flash would have no chance. And it'd be fascinating to watch how it plays.
On March 11 2016 18:18 heqat wrote: The question is at what level the AI can access the game. Normally in AI research, the software cannot access the internal state of the game (or 3D scene). For instance, it should not be able to just read off the positions of the (visible) units. So for a true test, the AI should also move the camera, trying to figure out what it sees on screen, with a chance of missing some information (which happens all the time to humans in SC2). If it can simply access the game state like the current SC2 AI does, it is not a true test from my point of view.
Well the game played will be Brood War, but even if it were SC2, the AI could control everything without moving the screen. It could simply hotkey every unit as it is produced, remember its location, and from that hotkey select and give commands to each individual unit. Isn't there also a "Select Army" button?
On March 11 2016 18:18 heqat wrote: The question is at what level the AI can access the game. Normally in AI research, the software cannot access the internal state of the game (or 3D scene). For instance, it should not be able to just read off the positions of the (visible) units. So for a true test, the AI should also move the camera, trying to figure out what it sees on screen, with a chance of missing some information (which happens all the time to humans in SC2). If it can simply access the game state like the current SC2 AI does, it is not a true test from my point of view.
Well the game played will be Brood War, but even if it were SC2, the AI could control everything without moving the screen. It could simply hotkey every unit as it is produced, remember its location, and from that hotkey select and give commands to each individual unit. Isn't there also a "Select Army" button?
Sorry, yes, it would be BW. Regarding your point, what I mean is that for a perfect test, the AI should use the same user interface as a human. It should make decisions using a flat 2D picture and control the game using hotkeys, scrolling, etc. (it doesn't need a physical robot, just wire the data to the AI software). In regular game AI (such as the SC2 AI), the software has access to the complete internal game state and can make decisions at every step by simply checking unit positions, states, etc., with some specific rules to avoid cheating (like preventing the AI from accessing non-visible units).
Now I guess it would become much more difficult for the AI if it had to play through the exact same user interface as a human (which makes sense for a true SC human/machine match, contrary to Go/chess, where the user interface does not change the result of the performance). It would require some very advanced real-time visual recognition algorithms, for instance.
After reading some interviews, I think the DeepMind team just used StarCraft as a point of reference because it is a famous strategy game, not being aware that mechanics play a huge part in it.
Anyway, I really don't think it is going to pose any challenge for the AI. I am not an expert, but surely it can just scout every once in a while, deduce the most probable and threatening strategy/timing coming, and then win through perfect attention to everything, perfect micro, perfect reactionary decisions, etc.
Each harass/engagement just removes more and more uncertainty for the AI.
On March 11 2016 13:01 ZAiNs wrote: Deep learning needs a dataset for the AI to be trained though. For AlphaGo they trained two separate networks (one designed to predict the next move, and the other designed to predict the final winner) on 30 million discrete moves from games played by human experts. After that it trained itself by actually playing Go against itself a ridiculous number of times.
A Go game can be perfectly modelled by a simple list recording which intersection had a stone placed on it each turn; a StarCraft game can't be captured anywhere near that compactly, and it's going to be very hard to get enough useful data (replays) to significantly help with the training. And without the initial training it's going to have to learn mostly by playing against itself, which will be difficult because of the ridiculous number of game states. At least that's my understanding of things, I could be wrong, but it seems to be a lot harder than Go.
That is a fair point. But I think you can break a game into several mini-games, with a little algorithm to guess who has the advantage based on material, positioning, etc. (just as you said they did for Go).
While Go can be perfectly modelled, the number of possible states is intractable. Just as you need heuristics to cut the search tree in table games, you can also "cheat" in SC by having sort of a hash function on states. That's what I meant by parametrization earlier: a lot of the work involved in building neural nets is choosing which are the inputs.
By the way: I don't really know anything about what I'm saying. I just played with machine learning, never studied it seriously.
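The "hash function on states" idea above might look something like this toy featurizer, which collapses a raw game state into a coarse tuple so that similar states land in the same bucket; every field name here is hypothetical:

```python
def featurize(state):
    """Collapse a raw BW-like game state into a coarse feature tuple.
    All field names are made-up illustrations, not real game data."""
    return (
        state["minerals"] // 100,    # bucketed economy
        state["supply_used"] // 10,  # bucketed army/worker size
        min(state["bases"], 4),      # base count, capped
        frozenset(state["tech"]),    # which tech is finished
    )

# Two slightly different states that a player would treat as "the same":
a = {"minerals": 523, "supply_used": 87, "bases": 2, "tech": ["stim"]}
b = {"minerals": 578, "supply_used": 84, "bases": 2, "tech": ["stim"]}
```

Choosing these buckets is exactly the "parametrization" work mentioned in the post: decide which distinctions matter as inputs and which can be thrown away.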
People are severely underestimating the difficulty of achieving an effective AI for BW. As someone has pointed out, it's not going to have access to the game state beyond seeing an 800x600 2D image in real time. It may see dots on the mini-map, but it's not going to know what they are or how to properly react without moving its "screen" there. Obviously it will have speed but...
...stuff like: how does the AI react to a map (building placement, etc.) it's never played on before? What if there's no immediate natural and it typically fast expands? When it sends its scout out onto the map, goes down the ramp and sees no natural... does it start looking for one? Scout for the enemy first? Does it change its build order to a one-base play when it may just not have scouted an expansion spot yet? The clock is ticking and supply is going up. How does it play on Monty Hall or some crazy shit for the first time?
Etc etc. That stuff will make an "all around" BW AI that beats top humans on the level chess engines do, or as AlphaGo is very likely to continue doing, very difficult.
Now if they make the AI just a one-base BBS or 4-pool-plus-drones killing machine on standard maps it recognizes, then I see success being plausible quickly... probably even now. But I don't think Google is trying to win that way. I'm guessing they have loftier ideas for their AI and what they want it to symbolize/accomplish.
All that said, it's more than possible and it would be cool to see it happen someday sooner than expected.
I've been watching Brood War for 15+ years, and programming reinforcement learning engines for a few. Here are a few thoughts on why the specific DeepMind approach is going to be very hard for SC, although it might well happen in around 10 years' time:
1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all, because *there is no score in StarCraft*. This is unlike the Atari 2600 titles tackled so far, which have mostly been arcade games with a clear numerical objective (the score) to be maximized by the playing agent. The act of playing thereby becomes a calculus problem (to first order, set the gradients of the score function to zero). Not impossible, but harder in StarCraft.
2. Starcraft II is an imperfect information game, as opposed to chess or go where the board contains the whole information available to both players. Whilst it is possible to do reinforcement learning in that setting, it is a relatively new field and adds to the difficulty - articles are being published now on the subject.
3. The 60 or 120 APM barrier will not be broken easily. Right now the Atari 2600 DeepMind simulations rely on one or two actions per frame, which means your APM is limited by the FPS you render. Even with two policy networks - one for the keyboard and one for the mouse - you are headbutting against 120 APM pretty much. It is not impossible to think about operating several policy networks in parallel in order to enable strong (think Korean multiple drops) multitasking, but it is a new area that needs to be explored - the connections between networks and their interaction would need to be thought through carefully. Some cutting-edge research with asynchronous networks goes in a similar direction.
4. Point-and-click games have not been tackled yet by RL; the games tried so far are joystick- or keyboard-based, ergo with binary 'push or don't push' states, and no mouse game has been tackled by a policy network as far as I know. This brings its own set of challenges (the AI will have to figure out by itself, for instance, how to move the mouse in optimal ways, which includes making straight lines, positioning the cursor close to a Nexus or a pylon, etc.).
5. Starcraft is also 'multi-screen' - it requires frequently changing views with your F keys ( move to different bases and engagement battles ) in order to correctly represent the full state of the game. So far, Atari 2600 games have been mono-screen only. Again, it is not impossible to imagine this will be overcome in the future, it is just harder to do right now.
6. Combinatorial explosion in the number of unit compositions is also hard to tackle - every time you add a potential unit to the mix, the possibilities for army composition multiply, which is why the campaign mode you learn to play from introduces units pretty much one at a time. It would objectively be much, much harder to start playing full games from laddering and without an instruction manual, which is what the DeepMind approach amounts to.
7. The meta in SC rotates on a regular basis - it is 'non stationary', which adds to the list of problems encountered by a machine that would learn by playing on ladder, as some of the strats and playstyles learned earlier could well be obsolete - and hard-countered - by the time they are assimilated. This happens with human players too ; they have to make a conscious effort to get out of a slump, learn more new information, and forget about the old. Some work on policy distillation or optimal brain damage in neural networks goes, very tentatively, in that direction. Again, this is hard.
For all those reasons, it would already be an incredible achievement to have a Starcraft deep reinforcement learning AI that can teach itself to beat a very easy computer AI in a setting with only workers, and maybe a unit list restricted to just a couple, like zealots and dragoons.
If you look at the performance of reinforcement learning in 2D games such as Atari, 'mechanical' games like Pong or Breakout reach much higher skill levels than games requiring planning, such as Pacman. It is hence entirely possible - though one can only speculate - that a Starcraft DeepMind would play mechanically correctly but overall pretty poorly. If you add up all the objection points above, you can get a feel for why there is quite a long way to go.
Happy to provide reference articles list if required.
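To make point 1 concrete, here is a minimal tabular Q-learning toy (nothing like the scale of DeepMind's DQN, and not their code) where the only reward is a terminal "win", as it would be in StarCraft: learning still works, but the value of early actions has to be backed up from the end over many episodes.

```python
import random

# Tabular Q-learning on a 5-cell corridor with a sparse, win-only reward:
# reward is 1.0 at the far-right cell and 0.0 everywhere else.
N, ALPHA, GAMMA, EPS = 5, 0.5, 0.9, 0.1
ACTIONS = (-1, +1)                      # step left / step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
random.seed(0)

for episode in range(500):
    s = 0
    while s != N - 1:
        # epsilon-greedy selection; break exact ties randomly
        if random.random() < EPS or Q[(s, -1)] == Q[(s, +1)]:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0  # no score signal until the "win"
        target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

greedy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(greedy)   # the learned policy steps right in every cell
```

Scaling this idea from a five-state corridor to something with StarCraft's state space is exactly where the difficulty lies.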
On March 11 2016 22:47 MyLovelyLurker wrote: I've been watching Broodwar for 15+ years, and programming reinforcement learning engines for a few. [...]
could anyone answer this?: what is the significance of AI's ability to master the game of Go in relation to its ability to play BW at a high enough level?
in other words, before and after the developments required to beat Sedol, what tools has AI gained in relation to its ability to play SC?
1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all [...]
I don't think your first hypothesis is true: the AI would be able to read the data in the replay files and judge plays accordingly (only in the training phase).
Also, there is a natural language to describe the moves: the one people use to describe AIs in BW (stuff like GTAI).
On March 10 2016 23:36 Pandemona wrote: Yea, i think AI would struggle in an RTS game. Yet i am still open to be surprised. Imagine God losing a bw series to an AI !!!
I think a lot of programming would be required to make it work, but it is definitely possible.
On March 11 2016 22:47 MyLovelyLurker wrote: I've been watching Broodwar for 15+ years, and programming reinforcement learning engines for a few. [...]
could anyone answer this?: what is the significance of AI's ability to master the game of Go in relation to its ability to play BW at a high enough level? [...]
The advancements from AlphaGo are mainly relevant to point 6. Combinatorial explosion is something that you have to deal with in Go as well.
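That multiplication can be made tangible with a back-of-the-envelope count: treat an army composition as a multiset of supply slots filled from k available unit types, so the number of distinct compositions is the multiset coefficient C(n + k - 1, k - 1). The 20-slot framing below is just an illustration:

```python
from math import comb

def compositions(slots, unit_types):
    """Number of distinct armies: multisets of `slots` units drawn from
    `unit_types` types (order irrelevant, repetition allowed)."""
    return comb(slots + unit_types - 1, unit_types - 1)

for k in (2, 4, 8):
    print(f"{k} unit types, 20 slots: {compositions(20, k)} compositions")
```

Going from 2 to 8 unit types takes the count from 21 to 888,030, which is the sense in which each new unit "multiplies" the possibilities.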
Just saying that because he wants to hype the event. I highly doubt anyone could be cocky enough to even think about beating an AI made by Google unless you take some brain enhancement supplement or have some kind of brain chip. (By the way, in case you were wondering, we are already able to read another being's thoughts with brain implants.)
Google is the biggest and most successful transhumanist firm; their AI would potentially even be able to "read" Flash's mind.
He's gonna get his ass handed to him in a not-so-pretty fashion.
As someone who follows transhumanism very closely, I can't help but laugh at how much of an idiot he is (but that's probably because he never really looked into Google's projects - he would shit his pants).
Flash thinks he would win? Well so did Lee Sedol who even went as far as to say he would win 4-1 or 5-0 and now trails 0-3, seemingly unable to win a single game.
If Google actually proceeds with a serious project to make an AI that can beat Flash he won't have a chance. The only possibility is if they lower its efficient APM to realistic high level human standards. Then maybe there's a way to win. Although in hindsight I suppose that's exactly what they would do if they were to challenge him since everyone knows it's pointless if it can play with thousands of APM spent on useful things. They would want to test the intelligence not the brute force. It would also be important to make it unable to do more than one thing at the same time since humans can't do that.
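One plausible way to enforce such a cap (a sketch, not anything Google has described) is a token-bucket limiter sitting between the agent and the game, earning "action tokens" at the target APM and allowing only short human-like bursts:

```python
# Hypothetical token-bucket APM limiter; parameter values are illustrative.
class ApmLimiter:
    def __init__(self, apm, burst=10):
        self.rate = apm / 60.0          # action tokens earned per second
        self.burst = burst              # cap on saved-up tokens (burst micro)
        self.tokens = float(burst)
        self.last = 0.0

    def try_act(self, now):
        """Return True if an action is allowed at time `now` (in seconds)."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

limiter = ApmLimiter(apm=120)           # 2 actions per second on average
# Try to act every 100 ms for one minute; count what gets through.
allowed = sum(limiter.try_act(t * 0.1) for t in range(600))
print(allowed)  # close to 120 plus the initial burst
```

Restricting the agent to one "attention area" at a time, as suggested above, would be a separate and much harder constraint to formalize.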
1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all [...]
I don't think your first hypothesis is true, the AI would be able to read the data in the replay files and judge plays accordingly (only in the training phase).
Also, there is a natural language to describe the moves: the one people use to describe AIs in BW (stuff like GTAI).
This is the approach taken so far by the Deepmind team when they came up with their general algorithm to play 2D Atari games. In particular the same algorithm was used to play 40 or so different games simply from pixels on the screen and score as an input. This precludes looking at any game-specific files. Learning was done from self-play only.
'We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.'
On March 11 2016 22:47 MyLovelyLurker wrote: I've been watching Broodwar for 15+ years, and programming reinforcement learning engines for a few. [...]
could anyone answer this?: what is the significance of AI's ability to master the game of Go in relation to its ability to play BW at a high enough level? [...]
The Lee Sedol match is showcasing, in a Go context, an AI technique for learning to play a game through self-play, using only the data of a board game or screen pixels. This has already been applied to the case of quasi-8-bit Atari 2600 games; see the relevant Nature article: www.nature.com
Much more research is required to generalize that algorithm enough to make it play Broodwar efficiently (Jeff Dean from Google is already singling it out as the next goal). My guess would be 3 to 10 years' time. My post earlier was about the specific sticking points that will need to be improved in the current algorithm before we get to that level. I believe we ultimately will.
is anyone really debating whether AI will be able to do something better than a human? i don't think anyone is naive enough to believe humans will be able to defeat AI at anything in the future. what flash and boxer are probably saying is that if alphago could play starcraft NOW, the humans would win. of course, if you gave google as much time as they wanted, the AI would win. it's literally only a matter of time given the speed at which technology is advancing
On March 13 2016 13:22 evilfatsh1t wrote: is anyone really debating whether ai will be able to do something better than a human? [...]
I think people are discussing how hard it'll be. Don't think anyone is seriously arguing that it is impossible if you give skilled people unlimited time.
People also discuss exactly what restriction to set on the computer, if any.
And some discuss if these announcements are just publicity stunts, riding on the alphaGo wave.
On March 11 2016 22:47 MyLovelyLurker wrote: I've been watching Broodwar for 15+ years, and programming reinforcement learning engines for a few. [...]
I think the learning algorithm might also need some thought. So far the computer has played itself and learned from that, but certain tactics are more effective against someone with delayed reaction time. For example: a human player might not be able to beat an AI-microed rush/all-in, but the AI might be able to hold it by itself, thus discarding this line of play.
On March 13 2016 01:11 DuckloadBlackra wrote: Flash thinks he would win? Well so did Lee Sedol who even went as far as to say he would win 4-1 or 5-0 and now trails 0-3, seemingly unable to win a single game.
It's funny to me that people think the human could win. Even with capped APM the AI would use its APM in the most efficient way (no spamming); it could probably win with something like 90-100 APM easily. It could probably win with any type of game as well: worker rush, 3 marines-1 medic-1 dropship, and in the late game, microing a big army, the AI would crush a human with almost no losses, all while keeping perfect macro (going back to its base for a split second at the perfect time, every time). Add perfect mini-map awareness and reaction time, being able to tell which units it sees from their speed on the mini-map and pick the best response without delay, and spending its minerals/gas in the most efficient way. All this with perfectly timed and positioned scouting while extrapolating the opponent's build from the opponent's unit composition and timing. IMHO the AI would utterly crush any human, even if it told the human ahead of time when it would do it.
"Now I will do a mid-game 2-or-3-base attack." "This time I will attempt a maxed-out army build while keeping you pinned in your base with continuous harass. GLHF"
I would like to see the AI learn to BM, that would probably be the only real challenge
Can you imagine the pros streaming themselves playing the AI all the time? It'd be so exciting to watch them get completely outplayed, as if they were the noobs.
I'd love to see an AI vs pro series, but in SC1/BW the mechanical burden would heavily handicap a human against an AI.
Even though I still believe SC1 was the superior game technically, SC2 might be a better candidate: the (imo anyway) slightly lower APM requirement would give the human less of a technical disadvantage, and make it more of a challenge for the AI programmers to get the strategy side good enough to beat the best humans (obviously the AI's macro would be better, and its micro perfect, but spellcasting and strategy would be an issue for any AI).
I also think, say, Maru or sOs - the master strategist/metagame/micro types - would be a better pick for the human side than a technical grinder like herO or Innovation, who make fewer mistakes and have better technical play but less strategic genius.
I'd love to see Maru, sOs and a zerg like maybe Solar play mirrored matchups against the AI with all 3 races in a best-of-x series; it would be a fantastic PPV-type event if it happened.
Flash and Jaedong in SC1 would be great too, but I feel like the humans would struggle more due to the 'lol enjoy only controlling one small group of units at a time' micro, plus the extra time needed to manage your economy in SC1.
They have already commented on how it would have to be set up "realistically": an APM cap, fog of war, limited attention across areas, a limit on hotkeys, etc. So no absurd marine/medivac vs stalker micro like those AI bots pull off while also microing perfectly back home. It'll certainly be a challenge for the developers... P.S. Didn't they say SC1? Either Brood War or vanilla, and NOT SC2?
A real team of pro researchers like the ones at Google could make a bot that plays a level of Brood War no one has really seen before. A bot with perfect micro has a pretty large advantage over any human being. Still, StarCraft is a pretty complicated game; even the best current bots have lots of flaws and patterns. It would be mighty tough to beat Flash or Jaedong at Brood War, no doubt about it, especially in long sets where they could exploit weaknesses. I think the AI will struggle with decisions and with not falling into predictable patterns. Humans will exploit its habits, like expand timings and such, and it might pick illogical or inefficient builds that humans have ruled out over decades of play. The bot would need a rock-solid early game against all kinds of rushes, with perfect responses so it doesn't fall behind; it would need to handle every possible build the right way, even ones you can't prepare for. It would be a tough challenge for AI, no doubt about it. The fact that their AI beat a pro at Go, though, makes me think they could do it. Go was a very tough game, and it's an amazing feat to beat a pro player at it. I think they can build a pro-level bot capable of beating any human if they try
On March 15 2016 09:43 Moose1 wrote: but what race would the ai choose?
I think this is a very good topic to discuss. Which race would benefit most if you had "unlimited apm"?
Dragoons are amazing units with perfect micro. There are existing bots, like the Berkeley Overmind, that played Zerg, and it's pretty much certain that stacked Mutalisks are crazy strong. Properly micro'd Carriers might be OP too; some of the best current bots actually just go Carriers. Can their AI beat Flash with all three races? It would be even more impressive if it could learn to win with all three races and learn all the matchups
I wouldn't be surprised if computers were better at video games. after all it's all just about numbers... 10110... I'll continue following this discussion if/when computers start beating humans at basketball
On March 17 2016 17:43 MrMischelito wrote: I wouldn't be surprised if computers were better at video games. after all it's all just about numbers... 10110... I'll continue following this discussion if/when computers start beating humans at basketball
most ghetto post of this thread
I think some things go over the head of some people.
A conservative lower bound on the state space of Brood War is 10^1685. This is many orders of magnitude above the state space of Go, which is about 10^170. What's more, the branching factor is estimated at 10^50 to 10^200, compared to fewer than 360 for Go.
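To put those exponents in perspective, here's a quick back-of-the-envelope check. The per-unit action count below is an assumption chosen for illustration, not a measured figure:

```python
import math

# Exponents quoted above, all base 10.
GO_STATES_EXP = 170      # Go: ~10^170 states
BW_STATES_EXP = 1685     # Brood War lower bound: ~10^1685 states

# Brood War's state space is larger by this many orders of magnitude:
print(BW_STATES_EXP - GO_STATES_EXP)  # 1515

# Where a branching factor like 10^50..10^200 can come from: if each of
# ~100 controllable units independently picks one of ~10 actions per
# decision step, the joint action space is roughly 10^100.
units = 100
actions_per_unit = 10
print(units * math.log10(actions_per_unit))  # 100.0
```

So even a modest army puts the number of joint moves per step far beyond Go's ~360, which is why you can't just plug Brood War into a Go-style tree search.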
Holy shit, thanks for that info. You seem to be knowledgeable in that field. I've wondered about the complexity of that game my entire life tbh. This blows my mind.