Go and chess are games of complete information, whereas StarCraft is a game of incomplete information. Furthermore, getting more information usually means spending more resources to acquire it.
Sure, a computer can calculate possible situations from incomplete information, but matching what we call the metagame and the intuition of pro players will take a really long time of self-learning and data collection on the computer's part.
I mean, the problem is whether Flash's wrist can keep up with him. He's playing BW "casually" and he already says that his arms hurt; returning to his former bonjwa self would be hard considering his injury.
On March 11 2016 00:42 BisuDagger wrote: hero can operate at about 450-500 APM on a consistent basis. To make this fair, the actions per minute should be clamped to <=600
The sad thing is that it doesn't take 600 APM to have perfect micro. In the end it boils down to accuracy and efficiency.
By the way, does Brood War have a hard limit on commands accepted per frame? Like 12 per frame or something.
I wouldn't be surprised if there was, or at least a limit to buffering.
On March 11 2016 02:03 Goolpsy wrote: Even with a 400 APM limit, neural network learning can easily teach it where to focus its micro most efficiently.
The game might seem like it, but StarCraft has far fewer strategies than chess.
I disagree. StarCraft may have fewer strategies than chess, but due to fog of war limiting information, the AI can only guess half the time. Furthermore, many strategies look the same, and things such as drops could still catch the AI off guard. A StarCraft game has many, many more possibilities than a chess game.
The real problem, though, is that chess is turn-based while StarCraft is real-time. This means the computer would have much less time to think. Additionally, people make mistakes in execution in StarCraft that differentiate the same strategy. Take a rally point, for instance: depending on where it is set, there is a difference in when units get to different places.
Chess (and Go, for that matter) are turn-based, so making a move is the same for a grandmaster or a new player -- the pawn still goes to the square you want it to, perfectly, every time.
Finally, StarCraft has many orders of magnitude more board states than a chess board. This makes it overwhelmingly harder to brute-force.
But that's the point. DeepMind is not like Deep Blue brute forcing its way into the game. It takes a much softer approach by learning to play. The neural network architecture is not meant to brute-force the game and "solve it". It learns the game by millions of games against itself and replays and learns from that, and draws conclusions. Deep Blue just tried out millions of next moves when playing chess. DeepMind learned to play the game and knows what to do where.
"not like Deep Blue brute forcing its way into the game" vs "It learns the game by millions of games against itself and replays and learns from that". I don't think you quite know what "brute force" means.
I'm only poking you a little. The technology and approach are rather different between the two systems, but they both still work by being able to call up almost the entire game-state space and knowing how to process the information. This is what computers are supremely good at, but it also shows their limitations. Though I'd be remiss if I didn't point out that most of these public games against top pros benefit the computer in one very specific way: it can analyze all of the publicly known games of the master, allowing adaptation to the master's play style.
Just the unit efficiency and micro possibilities for the AI are insane. That alone would make it insanely difficult to beat, because you're guaranteed to always be outmicroed and viciously destroyed in any engagement involving similar numbers. There are videos out there of 12-goon vs. 12-goon fights where the AI wins with 12 goons alive.
Some form of APM cap seems fair in my book, or else the AI isn't winning because it is making better decisions than the human, it's winning because it's exploiting micro in a way no human could ever imagine.
It depends on what they want to do; capping the APM would be a lot more interesting. If not, of course it would win a game where it dodges every single stalker shot by hopping in and out of a medivac; in fact, just doing a worker rush would probably work, but that is not really extraordinary.
There's no way a human would beat an AI in StarCraft 2. Take the micro bot that someone wrote here on TL as an example. SC2 has a theoretical skill ceiling that is beyond human reach. Dunno about BW.
And that's why there should be a cooldown on each and every action you can do in SC2: to make unit behavior predictable, independent of the player's ability to exploit certain things. It would be so much easier and more satisfying to balance banelings, for instance, if marines were unable to split.
This match would have a number of interesting implications, chief among which are build-order decisions. If the AI is strictly superior at microing units, there's no reason not to assume it will take advantage of this and go for 1-base all-ins most of the time in order to force micro-intensive early games. It would be cool if the AI had some doubt about its opponent's skill and actually needed to confirm its superiority in micro before feeling confident enough to go for all-ins. Humans already do that, as we all know from Bisu being as annoying as possible with his scouting probe. Now imagine an AI controlling that probe; it will never die by mistake.
I recall some specific AI techniques used for mutalisk micro in one of the BWAI competitions. I wonder whether the point of the DeepMind project is to find applications for neural-net-type algorithms, or whether they would be okay with discarding neural networks if other AI techniques worked better. I guess it is the former, since that must be why Google funds it: so that eventually they can have self-driving cars and smarter search results.
I think neural networks are used by the Planetary Annihilation AI designer. I used to read some of his explanations of why that was the future of RTS AI, but I don't know if he is still working on similar things or where one can find information on this. IIRC the AI could learn how to micro by playing against itself over and over.
On March 10 2016 23:35 B-royal wrote: What will be the most difficult in my opinion is to have the AI make decisions such as where to attack, when to attack, multi-pronged attacks, when to get certain units and how to use spells such as dark swarm properly. It seems to me like it would be fairly easy to trick and abuse the behavior of the AI.
I've already played against a bot in BW, and engagements were actually the bot's biggest strength. It constantly scouts with zerglings and has an incredible overlord spread for perfect map awareness, which allows it to always surround your army perfectly or set up the best possible concave.
As for things like Dark Swarm: how many times did your defilers die before casting the swarm because you weren't selecting/clicking fast enough?
See, the problem I have with this is that the AI will have such a big advantage through 'mechanics' alone. It's much more interesting in Go because there is no difference in execution; tactics and strategy are all that matter there.
Even though there are a lot more possible "board states" in SC2, I am not sure that really matters in the end if you theoretically have a player with unlimited APM and attention.
But hey, I obviously have very little idea about it, and when Google says StarCraft would be the next step, maybe it's harder than I think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactics/strategy, which would be weird though).
Learning by iteration and brute-forcing are not the same thing. On the same note: chess is far from solved.
There is continuous evolution and progression of chess engines, the best one currently being 'Komodo'.
From the wiki: "Komodo heavily relies on evaluation rather than depth". I've been following the TCEC tournaments and have heard the developers talk about certain aspects of making an AI.
Evaluation is derived not from knowing that a certain position wins 58% of the games, but from piece values and piece placement plus structures.
If you have a chain of pawns (x = pawn, o = empty space):

oxo   oox   oxo
xoo   xxo   xox

they do not have an evaluation of 3 (1 per pawn), but more than 3. In the same way, an isolated pawn (a pawn with no friendly pawn on either adjacent file) is usually very weak and worth less than 1.
Why is this important? Being able to evaluate a situation by dividing up the area/map is almost directly transferable to StarCraft. It can be used for unit placement when scouting, defending chokepoints, building placement, or engagement angles (without too much processing, even).
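To make the idea concrete, here is a minimal sketch of that kind of structural evaluation: a pawn's worth depends on its neighbors, not just its base value. The bonus and penalty numbers are illustrative guesses, not Komodo's actual weights.

```python
# Toy pawn-structure evaluation: base value, plus a bonus for being
# defended by a friendly pawn (a chain), minus a penalty for being
# isolated (no friendly pawn on an adjacent file). Numbers are made up.

PAWN_BASE = 1.0
CHAIN_BONUS = 0.15       # defended by a friendly pawn on a rear diagonal
ISOLATED_PENALTY = 0.3   # no friendly pawn on either adjacent file

def evaluate_pawns(pawns):
    """Score a set of (file, rank) squares occupied by our pawns."""
    total = 0.0
    files = {f for f, _ in pawns}
    for f, r in pawns:
        value = PAWN_BASE
        # Chain bonus: a friendly pawn one square behind on a diagonal.
        if (f - 1, r - 1) in pawns or (f + 1, r - 1) in pawns:
            value += CHAIN_BONUS
        # Isolated pawn: no friendly pawn on an adjacent file at all.
        if (f - 1) not in files and (f + 1) not in files:
            value -= ISOLATED_PENALTY
        total += value
    return total

# Two connected pawns are worth more than their base values...
print(evaluate_pawns({(4, 2), (5, 3)}))  # 2.15
# ...while a lone pawn is worth less than 1.
print(evaluate_pawns({(0, 2)}))          # 0.7
```

The same shape of function would work on a StarCraft map divided into regions: score each region by the units in it plus positional terms such as chokepoint control.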
As for incomplete information: it is not all that hard. You don't need to know all the units produced to calculate the possible strategies or tech trees available at a given time. It seems like you have complete information in chess or Go, but in practice you don't (not that they aren't formally games of complete information; rather, you can't hold all possible iterations and their payoffs in memory anyway).
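A toy version of that inference: from a couple of observed facts (game time, tech the scout has ruled out), prune the set of strategies the opponent could still be on. The strategy names, timings, and tech requirements below are hypothetical placeholders, not real Brood War build data.

```python
# Strategy inference under incomplete information: keep only strategies
# consistent with what has been observed so far. All entries are
# illustrative placeholders.

STRATEGIES = {
    # name: (latest game time in seconds it can still hit, tech it requires)
    "4-pool rush":  (240, set()),
    "2-base muta":  (900, {"spire"}),
    "lurker drop":  (900, {"hydralisk_den"}),
    "3-base macro": (9999, set()),
}

def possible_strategies(game_time, tech_ruled_out):
    """Strategies still consistent with the clock and with scouting."""
    return [name for name, (deadline, required) in STRATEGIES.items()
            if game_time <= deadline and not (required & tech_ruled_out)]

# At 5 minutes, with the spire scouted out: the rush window has passed
# and muta play is excluded, so only two candidates remain.
print(possible_strategies(300, {"spire"}))  # ['lurker drop', '3-base macro']
```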
For this to be an interesting challenge for DeepMind, the AI would have to be limited to 400 APM, and it would have to interact through a virtual keyboard and mouse, i.e. it would have to actually drag the cursor to box units, etc., so it can't do micro that is (in principle) impossible for humans to do.
On March 11 2016 03:44 The_Red_Viper wrote: See, the problem I have with this is that the AI will have such a big advantage through 'mechanics' alone. It's much more interesting in Go because there is no difference in execution; tactics and strategy are all that matter there.
Even though there are a lot more possible "board states" in SC2, I am not sure that really matters in the end if you theoretically have a player with unlimited APM and attention.
But hey, I obviously have very little idea about it, and when Google says StarCraft would be the next step, maybe it's harder than I think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactics/strategy, which would be weird though).
How would it be weird to limit the mechanics? The goal is to be 'smarter' than a human; without limiting mechanics it wouldn't really prove anything or be an accomplishment. I imagine they would even want to limit the mechanics so that they're slightly below the absolute best players mechanically. Attention is a resource in SC2, and I think it'll be hard to give the AI imperfect minimap awareness or imperfect mouse accuracy without creating too complicated a model, but things like actual keypresses per second and cursor speed will be easy to limit.
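One simple way a harness could enforce such a keypress limit is a token bucket refilled at the target APM. The 400 APM figure, the burst allowance, and the drop-on-empty policy are assumptions for illustration, not how DeepMind's setup actually works.

```python
# Sketch of an action-rate cap via a token bucket: tokens refill at the
# target APM, each action spends one token, and actions attempted with
# an empty bucket are dropped (a real harness might queue them instead).

class ApmLimiter:
    def __init__(self, apm_cap=400, burst=10.0):
        self.tokens_per_second = apm_cap / 60.0
        self.capacity = burst     # small burst allowance
        self.tokens = burst
        self.last_time = 0.0

    def try_act(self, now):
        """Return True if an action is allowed at timestamp `now` (seconds)."""
        elapsed = now - self.last_time
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.tokens_per_second)
        self.last_time = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

limiter = ApmLimiter(apm_cap=400)
# Attempt 1000 actions within one second; only the burst plus
# roughly 400/60 refilled tokens get through.
allowed = sum(limiter.try_act(t / 1000.0) for t in range(1000))
print(allowed)
```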
I suspect there would be a variety of cheese tactics where the AI could simply out-micro the players to such an extreme degree that it can always win in the first 10 minutes.
A conservative lower bound on the state space of Brood War is 10^1685. This is many orders of magnitude above the state space of Go, which is 10^170. What's more, the branching factor is 10^50 to 10^200, compared to fewer than 360 for Go.
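A quick back-of-envelope calculation shows what those figures mean for the game tree: with branching factor b and d decisions, the tree has roughly b^d leaves. The depth values below are illustrative assumptions, not measurements.

```python
import math

# Order-of-magnitude comparison of game-tree sizes. log10(b**d) is the
# number of digits in the tree size, computed without building huge ints.

def tree_size_log10(branching, depth):
    """log10 of branching**depth."""
    return depth * math.log10(branching)

# Go: ~250 legal moves per turn, ~150 moves per game (assumed depth).
print(round(tree_size_log10(250, 150)))      # 360

# Brood War, using the low-end branching factor of 10^50 quoted above
# and an assumed 1000 decision points per game.
print(round(tree_size_log10(10**50, 1000)))  # 50000
```

Even with the most conservative numbers, the Brood War tree is not just bigger than Go's, it is bigger by tens of thousands of orders of magnitude.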
On March 11 2016 02:30 Cuce wrote: Flash hit the key point, I think.
Go and chess are games of complete information, whereas StarCraft is a game of incomplete information. Furthermore, getting more information usually means spending more resources to acquire it.
Sure, a computer can calculate possible situations from incomplete information, but matching what we call the metagame and the intuition of pro players will take a really long time of self-learning and data collection on the computer's part.
StarCraft is what we call a POMDP (partially observable Markov decision process). There are algorithms for addressing these types of problems, for example recurrent neural networks, but no one has tried applying them to something as complex as a full game of StarCraft.
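The structural idea behind using a recurrent network on a POMDP can be shown in a few lines: the hidden state acts as a memory that summarizes the history of partial observations, so the policy can act on more than the current frame. The weights below are random and untrained, and the dimensions are arbitrary; this is a sketch of the architecture, not a working StarCraft agent.

```python
import math
import random

# Toy recurrent policy for a partially observable setting: each step the
# agent sees only a local observation vector; the hidden state carries
# information forward across steps. Untrained random weights.

random.seed(0)

OBS_DIM, HIDDEN_DIM, NUM_ACTIONS = 4, 8, 3

def rand_matrix(rows, cols):
    return [[random.gauss(0, 0.5) for _ in range(cols)] for _ in range(rows)]

W_in = rand_matrix(HIDDEN_DIM, OBS_DIM)      # observation -> hidden
W_rec = rand_matrix(HIDDEN_DIM, HIDDEN_DIM)  # hidden -> hidden (memory)
W_out = rand_matrix(NUM_ACTIONS, HIDDEN_DIM) # hidden -> action scores

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def step(hidden, obs):
    """One RNN step: fold the new observation into the hidden state,
    then pick the highest-scoring action greedily."""
    pre = [a + b for a, b in zip(matvec(W_in, obs), matvec(W_rec, hidden))]
    new_hidden = [math.tanh(x) for x in pre]
    scores = matvec(W_out, new_hidden)
    return new_hidden, scores.index(max(scores))

# Run over a short sequence of partial observations (placeholder data).
hidden = [0.0] * HIDDEN_DIM
for obs in ([1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]):
    hidden, action = step(hidden, obs)
    print("chose action", action)
```

Training such a network (e.g. by reinforcement learning over self-play, as the thread discusses) is the hard part; the recurrence itself is cheap.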