|
Please stop saying StarCraft is more complex than Chess or Go.
If you only look at strategy, StarCraft is absurdly simple. The number of possible tech paths and "general strategies" you can do is veeeeery small compared to these board games.
The complexity for the AI would be more on input/output, and that is assuming it won't run through an API like other AIs in BW (GTAI or the like).
|
I gotta agree with lordsaul. We've all played the computer on the hardest setting (Insane) and felt it was about Gold league at best. As programmers continue to add features to the AI that increase its scope, depth, and understanding of the game, it will keep getting harder, and things like insane macro mixed with good harass and micro will raise the bar each time. Especially if the programmer is an accomplished SC player who watches the pro-vs-AI games and keeps making little adjustments accordingly, it's only gonna get harder.
|
rockslave, you are insane if you think StarCraft isn't more complex than any game in existence other than the ones the bankers and corporations are playing with global economies and governments. SC2 has infinitely more tech paths than chess or Go. Did you forget to take your medication or something?
|
On March 17 2016 02:29 rockslave wrote: [quoted above]
I think you are extremely underestimating the difference between a real-time and a turn-based game.
For example, an entire chess game can be written down very compactly: 1. e4 e5 2. Nf3 d6 3. d4 Bg4 4. dxe5 Bxf3 5. Qxf3 dxe5 6. Bc4 Nf6 7. Qb3 Qe7 8. Nc3 c6 9. Bg5 b5 10. Nxb5 cxb5 11. Bxb5+ Nbd7 12. O-O-O Rd8 13. Rxd7 Rxd7 14. Rd1 Qe6 15. Bxd7+ Nxd7 16. Qb8+ Nxb8 17. Rd8#
This represents an entire game of chess, and you can read through it easily. There are only so many moves to simulate.
Compare this to StarCraft: if each player has 100 APM, that means there are 200 inputs to evaluate every minute. A 20-minute game has 4,000 actions, over a hundred times the number of moves in the chess game above.
Progamers with higher APM push that number up significantly, and longer games mean there could be over 15,000 actions in a single game.
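To put rough numbers on that (the 100 APM per player and the 20-minute game length are just assumed round figures):

```python
# Back-of-the-envelope action count; every input here is an assumption.
apm_per_player = 100          # assumed actions per minute for each player
players = 2
game_minutes = 20

sc_actions = apm_per_player * players * game_minutes   # 4,000 actions
chess_plies = 17 * 2                                   # the 17-move game above = 34 half-moves

print(f"StarCraft actions: {sc_actions}")
print(f"Chess half-moves:  {chess_plies}")
print(f"Ratio: {sc_actions / chess_plies:.0f}x")       # roughly 118x
```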
Add in the fact that there are three races and multiple maps, and the amount a program would need to learn is immensely more than for chess. Not to mention that randomness factors into StarCraft, with spawning positions and build orders.
I'm not sure how machine learning works, but I'm willing to bet the time it takes to analyze a game increases exponentially with the number of possible moves. In that sense, StarCraft is even harder than Go.
The final nail in the AI's coffin is that it has to analyze in real time. It can only look so far ahead before the future arrives, so it has to analyze extremely quickly. The amount of processing power required would be massive.
|
On March 17 2016 03:19 Monochromatic wrote: [quoted above]
You can't really use chess to describe an AlphaGo-type AI, however. You can't even compare chess with Go, since Go is millions of times more complex than chess; comparing them is like comparing the difficulty of landing on the Moon with landing on a planet in another solar system.
I feel like many people misunderstand what an AlphaGo-type AI is or how it works. AlphaGo is not an AI that is hard-coded to respond to a human's moves or to memorize patterns. If that were all it could do, it would not be able to defeat Lee Sedol in even a single game of Go, a game with more possible positions than there are atoms in the universe. The opening move alone has 361 options, so the first five moves already give 5,962,870,725,840 possible sequences. There is no way an AI can brute-force that in a reasonable amount of time, yet AlphaGo produced (at least) two moves that may go down in Go history as divine moves, moves that left 9-dan professionals in awe, each found in under five minutes. One of them it chose to play even though it estimated the chance of a human playing that move at less than 1 in 10,000; it decided the human consensus would be wrong.
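For what it's worth, that branching-factor figure checks out; multiplying out the first five move choices on a 19x19 board:

```python
import math

# Distinct 5-move opening sequences on a 19x19 Go board:
# 361 empty points for the first move, one fewer for each move after (captures ignored).
first_five = math.prod(range(361, 356, -1))
print(first_five)   # 5962870725840
```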
The AI is not coded to just copy human moves; it is coded to learn the game instead. This means it learns to play the way a human would, by experimenting and learning from its mistakes. It learns to use heuristics to simplify its "thought process" and then reinforces its decision making by practicing moves (or, in StarCraft terms, build orders, army movement, positioning, etc.) millions of times, to reduce the time it needs to find a good move. It is built to learn how to play, and with every game it improves its decision making. After the games, even DeepMind's people couldn't say why AlphaGo made some of its moves, because nobody programmed those moves in; the system found them on its own.
So if DeepMind decides to build an AI system that learns StarCraft, it would not be hard-coding it to just respond to a build order or blindly build something. It would learn how to scout, how to count buildings and workers, and how to predict what the build order will be. It would learn to recognize scouting patterns, and use the timing at which the scout arrives at the base to decide whether the opponent might be proxying. Then it would play out the possible scenarios (which are a lot less complicated than in Go) ahead of time. And it would do this by practicing against itself millions of times per day to find out which responses lead to a win, learning from every single game. That's the scary part of this type of AI.
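To make "practicing with itself millions of times" a bit more concrete, here is a tiny self-play sketch on a toy game (Nim). This is not AlphaGo's actual method, just the general flavour: play yourself, then reinforce the moves of whichever side won.

```python
import random
from collections import defaultdict

# Toy self-play learner for Nim (21 stones, take 1-3, taking the last stone wins).
# Not AlphaGo's algorithm; just the flavour of learning purely from self-play.
q = defaultdict(float)                  # (stones_left, take) -> estimated value
EPSILON, LEARNING_RATE = 0.1, 0.05

def choose(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:                       # explore sometimes
        return random.choice(moves)
    return max(moves, key=lambda m: q[(stones, m)])     # otherwise play the best known move

def self_play_episode():
    stones, player, history = 21, 0, {0: [], 1: []}
    while stones > 0:
        move = choose(stones)
        history[player].append((stones, move))
        stones -= move
        winner = player                                 # whoever moved last took the final stone
        player = 1 - player
    return history, winner

for _ in range(50_000):
    history, winner = self_play_episode()
    for player, moves in history.items():
        reward = 1.0 if player == winner else -1.0
        for state_action in moves:                      # nudge values toward the game outcome
            q[state_action] += LEARNING_RATE * (reward - q[state_action])

# Optimal play is to leave a multiple of 4 stones; see what the agent picked up.
for stones in (5, 6, 7, 9):
    best = max((1, 2, 3), key=lambda m: q[(stones, m)])
    print(f"{stones} stones left -> take {best}")
```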
|
On March 17 2016 02:29 rockslave wrote: [quoted above]
what?
StarCraft is way bigger than these games: just count the possible "squares" (tiles) on the smallest StarCraft map and compare that with a Go or chess board.
"Tech paths"? You know that fighting with 4 marines vs 5 marines is a totally different situation from fighting with 5 marines vs 5 marines?
Now add the input: each marine needs a tile to stand on, and each marine has to move to that tile without blocking the path of another marine.
Just a simple 1 rax vs 1 rax situation is way more complex than the whole game of chess. /edit: 1 rax vs 1 rax would be around the same complexity as Go, just on a bigger "board".
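Rough numbers for that tile comparison (the map sizes are assumptions; common BW maps run from about 96x96 to 128x128 build tiles):

```python
# "Board size" comparison; the StarCraft map dimensions are assumed values.
chess_squares = 8 * 8        # 64
go_points     = 19 * 19      # 361
bw_small_map  = 96 * 96      # ~9,216 build tiles on a smallish BW map
bw_big_map    = 128 * 128    # ~16,384 on a standard 128x128 map

print(chess_squares, go_points, bw_small_map, bw_big_map)
print(f"a small BW map has roughly {bw_small_map // go_points}x the points of a Go board")
```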
|
On March 17 2016 04:53 necaremus wrote: [quoted above]
1 rax vs 1 rax is far from the complexity of Go, because the position of each unit is not as important and can be simplified into areas that give similar results. A marine being one pixel away does not mean much in the local fight, let alone in the bigger picture of the whole game. In contrast, a single stone placed one point away in Go can make the difference between winning and losing the game.
And an AlphaGo-type AI is taught to think with heuristics similar to a human's thought process, so it can use shortcuts that make its decision process fast enough.
|
On March 17 2016 04:25 Veldril wrote: [quoted above]
The thing is, StarCraft is based in real time, and has to be played as such. If an AI plays against itself, it has to learn by itself what the win condition is. Assuming it eventually manages to play games in 10 minutes on average, it can play 144 games each day, so 52,560 games in one year. Let's just say arbitrarily that it has to play 1 million games before it reaches pro level; that would take about 19 years. Realistically it would probably take billions or trillions of games, which would be millions of years.
Therefore this trial-and-error approach just won't work in StarCraft, unlike Go or chess where the AI can play games against itself in fractions of a second.
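The arithmetic behind that estimate (every input is a guess):

```python
# Serial, real-time self-play estimate; all inputs are guesses.
minutes_per_game = 10
games_per_day    = 24 * 60 // minutes_per_game     # 144 games played back to back
games_per_year   = games_per_day * 365             # 52,560
games_needed     = 1_000_000

print(f"{games_needed / games_per_year:.1f} years at real-time speed")   # ~19 years
```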
|
Yea, I was wondering if there was a way to speed up SC:BW games, so DeepMind could have faster games ... Maybe they have a reasonable approach with many games running in parallel. Don't know if that would work. It'd probably be a novelty, eh?
Wish they gave some insight. This is so exciting.
|
On March 17 2016 05:03 Veldril wrote: [quoted above]
The position of each unit is not as important? If you have the same number of marines (for example 5 vs 5), but one "player" has his marines positioned so that all 5 can focus fire a target in the same instant, while the other one has 3 marines in front and 2 marines behind -> only 3 marines can fire at the start of the fight, and the other 2 join the "2nd round" of the fight. Who do you think wins this fight?
Clearly the one who uses all 5 marines the instant the fight starts. (I would guess he would be left with 2-3 marines, while the other one has none left.)
And this is only 1 tile of difference in position. Imagine you used a marine to scout! He would never be able to join the fight, making it essentially a "4v5" although both players would have the same number of marines.
/edit: and I didn't even consider the layout of the map, concretely: line of sight. Put 5 marines on top of a ramp and try to break through with 5 marines... good luck.
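A crude round-based model of the 5v5 example above, with assumed BW-ish marine stats (40 HP, 6 damage per shot), damage pooled per side, focus fire for both, and the back pair joining one volley late:

```python
import math

# Side A: all 5 marines fire from the first volley.
# Side B: only 3 are in range for the first volley; the back 2 join afterwards.
# All numbers are assumptions; this ignores range, movement, and overkill.
HP, DMG, COUNT = 40, 6, 5

def alive(pool):
    return max(0, math.ceil(pool / HP))

a_pool = b_pool = HP * COUNT
volley = 0
while a_pool > 0 and b_pool > 0:
    volley += 1
    a_shooters = alive(a_pool)
    b_shooters = min(3, alive(b_pool)) if volley == 1 else alive(b_pool)
    a_pool, b_pool = a_pool - b_shooters * DMG, b_pool - a_shooters * DMG

print(f"after {volley} volleys: A has {alive(a_pool)} marines left, B has {alive(b_pool)}")
```

In this toy model the side that gets all five shots off from the first volley comes out ahead, which is the point being made, even if the exact margin depends on the assumed stats.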
|
On March 17 2016 05:26 loppy2345 wrote: [quoted above]
This is not entirely true. To be honest, it is far from reality: ever heard of parallel processing? The AI could play multiple games at the same time (just like they did with Go), and they could easily adjust the game speed to something faster if they wished.
/edit: they only need a smart way to merge the data they get out of the games :edit/
The question is: do they have the resources to do so? For Go they used roughly the energy of a mid-sized city over a few months.
if they want to compete in starcraft, they would have to expand this.
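How much parallelism and faster-than-real-time simulation change the earlier 19-year estimate (worker counts and speed multipliers are made-up illustration numbers):

```python
# Scale the ~19-year serial estimate by parallel workers and a game-speed multiplier.
serial_years = 1_000_000 / (144 * 365)          # ~19 years, from the earlier post
for workers in (100, 1_000, 10_000):
    for speedup in (1, 8):
        days = serial_years * 365 / (workers * speedup)
        print(f"{workers:>6} parallel games at {speedup}x speed: {days:8.2f} days")
```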
|
Seems like most people are already echoing my thoughts too. Unless the AI is limited with the APM/mechanics, no way a person could win.
|
On March 17 2016 08:25 Quesadilla wrote: Seems like most people are already echoing my thoughts too. Unless the AI is limited with the APM/mechanics, no way a person could win. Agree, but with restrictions the test wouldn't make sense.
|
People who think that the AIs playing chess and Go are doing it differently from each other are hilarious. As is the insecurity that all Go players seem to display whenever they feel compelled to tell the world just how many possible combinations there are.
Everyone thinking that StarCraft is unsolvable for an AI has completely missed the point that these AIs aren't memorising every possible option in chess/Go (completely impossible with current technology); they're simply learning the game. StarCraft is significantly less complex than both Go and chess, and to assume that asinine things like all possible positions of a marine on a map are going to make a significant impact is laughable. In fact, given how quickly machine learning operates, I'd bet that positioning is probably the very first thing it masters in the grand scheme of strategy.
|
I'd be impressed, and I hope to see it within my lifetime.
Hell, even I'm confident I can beat whatever AI is available at the moment, but obviously this is talking about the future.
I just can't fathom how the AI would work to understand a game like StarCraft and be able to decide what to do.
|
Also, I think the map choice is going to seriously mess up the AI. If it were a plain map with no obstacles, etc., then it would obviously be a lot easier for the AI than a map with lots of cliffs. It would be very easy to design maps that completely screw over the AI, whereas humans would obviously be able to understand the map a lot quicker.
I think it's definitely possible to develop an AI that could beat the best pros consistently, but it would probably take a team of 10 world-class programmers 20 years or so to do it, and that's not really worth the money and effort.
|
On March 17 2016 08:39 bo1b wrote: [quoted above]
Well, the AIs that play chess and AlphaGo are doing completely different things, though. In chess, AIs can search over combinations of board positions without anything like AlphaGo's policy techniques, but Go is too complex for that. If the same kind of AI used in chess worked for Go, the top Go players would have been defeated by AI a long time ago (no human has been able to beat the best chess AIs since around 2006-2007). It is not really insecurity if it is a fact backed up by concrete evidence.
On March 17 2016 06:46 necaremus wrote: [quoted above]
As long as one pixel or tile of difference does not lead to a different result, those differences do not matter and can be heuristically grouped into clusters of positions instead. When pro players play, they don't think about positioning each unit on each tile; they think about positioning units in a general area, as long as the units are where they should be. An AlphaGo-type AI is also taught to think this way, like a human, so it will learn to emulate how human pros think, just faster.
Besides, losing a group of marines does not matter at all in the bigger picture. If sacrificing a group of units leads to a better game position (i.e. strengthening the overall board position in Go, or opening up a counter-attack path to a base in StarCraft), then the AI would be willing to sacrifice units. It will also learn how to react, and what it can do to maximize its chance to win, if units are caught out of position.
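A tiny sketch of that kind of position clustering: collapsing exact pixel coordinates into coarse grid cells so that nearby positions count as the same state (the cell size and coordinates are arbitrary):

```python
# Map exact pixel positions to coarse cells so positions a few pixels apart
# become the same abstract state. The 64-pixel cell size is an arbitrary choice.
CELL = 64

def cell_of(x, y):
    return (x // CELL, y // CELL)

marines = [(1023, 512), (1015, 518), (1100, 700)]
print({cell_of(x, y) for x, y in marines})
# the first two marines land in the same cell; the third is in a different one
```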
On March 17 2016 05:26 loppy2345 wrote: The thing is Starcraft is based in real time, and has to be played as such. If an AI tries to play itself, it will have to learn by itself what the win condition is. Assuming it eventually manages to play games in 10 minutes on average, it will be able to play 144 games each day, so 52,560 games in one year. Let's just say arbitarily that it has to play 1 million games before it reaches pro level, it will take 18 years. Realistically, it will take probably billions or trillions of games, which would be millions of years.
Therefore this trial and error approach just won't work in starcraft, unlike go or chess where it can play games against itself in fractions of seconds.
That's true but I would say that if the AI can use parallel processing to learn, then it could play more than 144 games a day.
On March 17 2016 09:18 loppy2345 wrote: Also I think the map choice is going to seriously mess up the AI, if it was a plain map with no obstacles, etc..., then it would obviously be a lot easier for the AI than a map with lots of cliffs. It would be very easy to design maps that would completely screw over the AI, whereas obviously humans would be able to understand the map a lot quicker.
I think it's definitely possible to develop an AI that could beat the best pro's consistently, but would probably take a team of 10 world class programmers 20 years or so to do it, and that's not really worth the money and effort.
You can say that about any game, though. What was the point of making an AI that beats people at chess and Go? What was the point of spending billions building AlphaGo? The point is research into how the human learning process works, and that is why people do it, like what happened with AlphaGo. If beating pro StarCraft players with an AI would tell us something about how people make decisions and learn when using asymmetric information, or would improve the decision-making process of AI itself, then people will do it.
|
On March 13 2016 03:24 Charoisaur wrote:
On March 13 2016 03:17 lordsaul wrote: I think people massively underestimate what perfect mechanics does to the game. It depends on the rules/limitations placed on the AI, but imagine:
* Every Medivac always picking up units about to be hit by a stalker and immediately dropping them for the next shot
* Marines that always maintain their range advantage on roaches
* Tanks that always target the banelings first
* Marines that always perfectly split vs banelings (you can find that online already)
* Weak units that always rotate out of the front line
* Medivacs healing the most important target in range, rather than the closest
* Perfect charges vs tank lines (single units charging ahead of the main attack)
* ...to name a very few basic micro tricks
And while all this happens, perfect macro? Humans overestimate themselves. Computers won't even need "good" strategy to beat humans, just a large number of difficult-to-handle micro tricks and beastly macro. The "AI" that will need to be added is just to stop the computer glitching out against weird tricks (e.g. somehow tricking the AI into permanent retreat based on units trying to find perfect range). Edit: Humans are actually at an advantage in Chess and Go, because they are put under far less real-time pressure.
People don't underestimate that. They know the AI would have to be limited for it to be a fair challenge. The point is to show that bots are more intelligent than humans, not that they have better mechanics.
This was never the point? Certainly as far as chess engines go, they are superior simply because they can brute-force calculate in a way that humans can't. Humans have to "teach" engines strategy by assigning values to various strategic aspects.
The brute-force calculation power of machines in chess/Go is, I would say, roughly equivalent to mechanics in SC2. It's part of the deal.
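That "assigning values" point is literally what a classical evaluation function does. A toy, material-only sketch (traditional piece values; real engines add many more weighted terms):

```python
# Toy material-only chess evaluation in the spirit of classical engines.
# Positive scores favour White; uppercase = White pieces, lowercase = Black.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}   # king not scored here

def evaluate(pieces):
    score = 0
    for piece in pieces:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

print(evaluate("RNBQKBNRPPPPPPPP" + "rnbqkbnrpppppppp"))  # balanced start -> 0
print(evaluate("RNBQKBNRPPPPPPPP" + "rnbqkbnrppppppp"))   # Black down a pawn -> 1
```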
|
On March 17 2016 08:38 StarscreamG1 wrote: [quoted above] Agree, but with restrictions the test wouldn't make sense.
Why not? I'm pretty sure it's easier to create an AI that can beat the best SC players with humanly impossible mechanics than it is to make an AI that can beat them strategically with human-like mechanics. But the second one is way more interesting.
|