|
I'm not talking about build orders, and I'm not saying AIs won't eventually be able to out-strategize players. I'm saying that they're already capable of winning without mechanical limitations, and that winning with those limitations imposed is much further away. I also think SC2 is gonna be much easier than BW, because SC2 units move in a much more streamlined way and you don't have the ridiculously OP stuff like dark swarm + lurker that I think an AI would have a harder time with. In SC2, AIs should be able to win even with mechanical limits in the foreseeable future, but there's stuff in Brood War that I think is seriously hard as fuck to program - much harder than anything we've seen from any turn-based games. I guess maybe what the AI does is turn Brood War into a turn-based game with 20 turns happening every second, though.
Anyway, say for example you have an engagement, 3 templar 12 zealot 9 goon vs 30 hydra. Sure, you can make the AI spread perfectly and dance perfectly and constantly target the weaker units and perfectly dodge the storms, and then it'd win. But say there's a protoss AI against the zerg AI, and the zerg AI is dancing perfectly and spreading perfectly; how does the protoss AI decide what the perfect time to throw down the storm is? Will it even use storm, seeing how perfect storm-dodge makes it much worse? Will it cover a perfect 3-storm area at the same time so that dodging becomes semi-impossible? Will the zerg AI adjust to this? In a ZvT late-game battle, how does the AI calculate whether to plague 4 science vessels vs throw down a dark swarm saving 2 lurkers? Once again, if the AI has perfect mechanics, so it can consume with 4 different defilers in 4 different map screens while harassing with mutas, sending scourge vs vessels, perfectly macroing and lurker+ling+defiler dropping empty spaces, then it'll be invincible through mechanical ability. But if there are mechanical limitations and the AI needs to calculate which operations it should skip, it becomes incredibly, incredibly complex. As a human player, I approach a battle differently depending on the composition that I have and that my opponent has. If I have a muta+hydra army against a protoss goon+templar army, then the calculation of whether I want to suicide my mutas into the templar is much more complex than 'I have 10 mutas, which one-shot a templar, so I can one-shot the templars then attack with hydras'. There are hundreds of small calculations like 'okay, now the templar deviated 2 cm to the left side of the goon army, I have a chance to snipe it NOW'.
Marine vs lurker+ling with human APM is the same thing. With infinite APM, it becomes ridiculously easy for the terran. But if it's stuck with, say, 6 actions per second and has to calculate which marine the lurker spine is most likely to target so it can move that one away from the rest of the group, if it has to calculate whether to focus fire on the flanking lings in the back or the lurkers in the front, if it has to calculate whether to build marines at home or micro the battle - if the AI actually has to make all the decisions that humans have to make, because we limit its ability to simply do 20 times more than a human can do - then I think we're looking at something that is ridiculously complex. Or reverse it: look at lurker+ling vs marine+medic. As a zerg, that's the kind of scenario where I try to distract the terran by attacking some other area (just to make him look there) before engaging, because if the terran doesn't pre-stim and pre-position, then the lurkers running in and burrowing next to the marines kill them all. How will the AI deal with that, if there's a limitation to how many places it can be at the same time? Will it evaluate that 'this attack is a distraction' (if you play PvZ against Mondragon, you actually learn that you should ignore the first attack, because the first attack is always a distraction) and focus on the second one?
There are literally hundreds of small scenarios like this where I think an AI is gonna have an incredibly hard time if the number of operations it can execute is limited to match that of the human player it faces off against. Of course, it can be programmed, sometime in the future, by a programmer who has progamer knowledge. But if you look at all the possible positionings of all the possible unit combinations on all the possible maps against all the different possible unit combinations, then we're looking at a Go-squared-squared amount of possible options.
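The "which operations do I skip" problem described above can be pictured as budgeted action selection: each tick the bot scores every candidate operation but may only execute a few. A toy sketch of that framing - the action names and scores below are invented placeholders, and producing good scores is of course the actual hard part:

```python
# Sketch: a bot with a hard actions-per-second budget must rank every
# candidate operation and drop the rest. The ranking itself is the hard
# part the post describes; scores here are invented placeholders.

def act_under_budget(candidates, budget):
    """Pick the `budget` highest-value actions; everything else is skipped."""
    ranked = sorted(candidates, key=lambda a: a["value"], reverse=True)
    return [a["name"] for a in ranked[:budget]]

# One game tick: far more useful actions available than the budget allows.
candidates = [
    {"name": "dodge_storm",     "value": 9.0},
    {"name": "snipe_templar",   "value": 7.5},
    {"name": "macro_at_home",   "value": 6.0},
    {"name": "consume_defiler", "value": 5.5},
    {"name": "spread_hydras",   "value": 4.0},
]

chosen = act_under_budget(candidates, budget=2)
print(chosen)  # -> ['dodge_storm', 'snipe_templar']
```

With an unlimited budget the bot simply does everything; the complexity only appears once `budget` is small and the scoring function has to trade a storm dodge against macro at home.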
|
There are also built-in limits for humans in games like Brood War - the 12-unit selection cap, for example. These would not apply to a limitless-APM bot, thus giving it an unfair advantage, right? I mean, if the AI is to beat a human fairly, it should also be subject to human reaction times, time for moving fingers, and all the other things limiting a human from doing exactly what they think when they think it.
I'm curious if people with more knowledge about modern AI can answer this: if a bot can extract data from 10,000 replays in a day, is that equivalent to a human playing 10,000 games per day?
|
So is it really confirmed? Because there's no news at all.
I don't really see how anyone can beat the AI. Yes, there are lots of possibilities because of the real-time factor and the fog of war, but most possibilities are inefficient and/or unimportant.
I don't know the extent of the AI's capability, but it can get its timings right down to the second for any important decisions.
|
Interesting read and video: https://deepmind.com/blog/deepmind-and-blizzard-release-starcraft-ii-ai-research-environment/
And here are some quotes of what people have said on reddit:
Deepmind will limit its APM to 200 from what I have heard
They said they will limit both input and output APM.
What do you mean by "input APM"?
They said something like only refreshing game state 15 times a second (to simulate reaction time) instead of 60 times per second.
Oh, and apparently they reworked the AI since the Lee Sedol match last year. The new version was the one that kept playing against itself to learn from itself.
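The "input APM" limit quoted above is just a cap on how often the agent receives a fresh observation. A minimal sketch of a 60 fps game with a 15 Hz observation rate (the 60 and 15 come from the quote; the agent itself is a stub):

```python
# Sketch of "input APM" limiting: the agent only *sees* a fresh game
# state every few frames, simulating human reaction time.

GAME_FPS = 60                            # game simulation rate
OBS_RATE = 15                            # observations per second the agent gets
FRAMES_PER_OBS = GAME_FPS // OBS_RATE    # agent sees every 4th frame

def run(frames):
    observations = 0
    for frame in range(frames):
        if frame % FRAMES_PER_OBS == 0:
            observations += 1            # agent gets a state update and may act
    return observations

print(run(60))  # one second of game time -> 15 observations
```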
|
The interesting thing about DeepMind is that they take a very hands-off approach, where they try to get the AI to learn on its own instead of using expert data. That's why their research is so interesting.
|
One problem with AI learning a game like SC2 is that it can't speed up the learning process without access to the game code (i.e. it cannot play 1,000,000,000 games against itself to learn optimal strategies). So it can only play games at the same pace as humans to gather information. The good part is that it can play 24/7 where a normal player can only play 12/7, and the amount of information it can gather per match is much higher than a human could manage. It could technically make more precise calculations about engagements than any player.
However, it may also be possible for the AI to play games in its "head", if it plays enough games to understand the game mechanics well enough. So even if it can't physically play a lot of games, it can simulate them at a much faster rate than a human can play them. Technically, if its mental model is accurate enough, it could bypass playing games altogether and rely solely on its understanding of the game to run "game experiments" in its head. But the flipside is that, unlike Go, there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them. So the AI would have to be very well optimized to only look at "worthwhile" simulations, and ignore the billions of not-so-valuable simulations (e.g. how to deal with mass reapers or a similarly weak strategy) that would only waste processing time.
EDIT: Thinking about this more, I see one way that humans can still win, and that is through "sub-optimal" play that the AI would not expect, or would be willing to gamble losing to because most players wouldn't do it. This would be something like a DT rush against someone going Robo tech with observers, or a transition into mass muta after the Terran has gone vikings. If the AI doesn't scout it, it would not expect this kind of play. On average it will still likely win most games because of the balance of probabilities, but it would not win 100% due to these kinds of oddball games where it would have to predict stupidity. Though this is more reflective of the game itself than of the AI, since there are always going to be games that lead to inevitable build order losses. So the real test isn't whether AI can always beat human players, or even whether it can beat the best players, but whether it can do so with a higher winrate than any existing player, i.e. greater than Flash's 70% in all matchups.
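The "worthwhile simulations" point above amounts to spending a limited simulation budget only on promising lines. A toy sketch of that idea - the strategies, the cheap prior, and the rollout winrates are all invented numbers standing in for a learned model:

```python
# Toy sketch of "simulate only worthwhile lines": a cheap prior filters
# out obviously weak strategies so the (pretend-)expensive rollouts are
# spent on the few promising ones. All values are invented.

CHEAP_PRIOR = {"macro": 0.6, "two_base_timing": 0.5,
               "mass_reapers": 0.1, "proxy_gate": 0.3}

ROLLOUT_WINRATE = {"macro": 0.62, "two_base_timing": 0.57,
                   "mass_reapers": 0.12, "proxy_gate": 0.33}

def best_strategy(strategies, sim_budget):
    # Keep only the `sim_budget` strategies the prior likes best ...
    shortlist = sorted(strategies, key=CHEAP_PRIOR.get, reverse=True)[:sim_budget]
    # ... and run the expensive simulation just on those.
    return max(shortlist, key=ROLLOUT_WINRATE.get)

result = best_strategy(list(CHEAP_PRIOR), sim_budget=2)
print(result)  # -> macro  (mass reapers never gets simulated at all)
```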
|
It's funny that many think we will outsmart the AI with gimmicks and unorthodox play, when in both chess and Go it was the AI that showed us the best of both.
Google has since managed to take Ke by surprise: “There was a cut that quite shocked me, because it was a move that would never happen in a human-to-human Go match,” he said.
lol:
Earlier this year, Google secretly let the improved AlphaGo play unofficially on an online Go platform. The AI won 50 out of 51 games, and its only loss was due to an internet connection timeout.
|
On May 27 2017 21:55 XenoX101 wrote: One problem with AI learning a game like SC2 is that it can't speed up the learning process without access to the game code. [...] So the AI would have to be very well optimized to only look at "worthwhile" simulations [...] So the real test isn't whether AI can always beat human players, or even if it can beat the best players, but whether it can do so with a higher winrate than any existing player, i.e. greater than Flash's 70% in all matchups.
DeepMind works closely with Blizzard, so they will probably have some way to speed up the game. Sub-optimal play won't work either, since even in BW there are bots that can narrow down the possible build orders and unit compositions based purely on the time of the game (there are only so many minerals you can gather in a certain amount of time).
The main issue for bots right now is actually micromanagement. Even with unlimited APM you still have to make tactical decisions, which bots aren't good at yet.
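The mineral-count argument can be sketched as a feasibility filter: total minerals mined by a given time puts an upper bound on what the opponent can possibly have built, ruling out many "surprise" builds. The income rate and build costs below are rough placeholders, not exact game values:

```python
# Sketch: bound the opponent's possible builds by maximum income.
# The mining rate and costs are rough placeholder numbers.

MINERALS_PER_WORKER_PER_MIN = 40   # placeholder income rate

def max_minerals(workers, minutes):
    # Upper bound, assuming a constant worker count (real income ramps up).
    return workers * minutes * MINERALS_PER_WORKER_PER_MIN

def feasible_builds(builds, workers, minutes):
    budget = max_minerals(workers, minutes)
    return [name for name, cost in builds.items() if cost <= budget]

# Hypothetical build costs in minerals:
builds = {"12_pool": 400, "2_base_muta": 1800, "5_base_ultra": 6000}
print(feasible_builds(builds, workers=12, minutes=4))
# -> ['12_pool', '2_base_muta']  (5-base ultras are impossible this early)
```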
|
On May 28 2017 00:16 LetaBot wrote: Deepmind works closely together with Blizzard, so they will probably have some way to speed up the game. Sub-optimal play won't work either, since even in BW there are bots that can consider the possible build orders and unit compositions based purely on the time of the game (there are only so many minerals you can gather in a certain amount of time). The main issue of bots right now is actually micromanagement. Even with unlimited apm you still have to make tactical decisions, which bots aren't good at yet.
Wouldn't "speeding up the game" be considered cheating? It's something normal players don't have access to, so I would think they wouldn't allow the AI to do it. My thinking is the AI should only access the visual pixel information, the same as a real person, as this would put it on equal footing with a human.
As for sub-optimal play, this is an opportunity cost issue. You can only choose a build that is viable against some subset of builds, meaning you are guaranteed to be vulnerable to the complement of that subset. Ideally the AI would always pick the build that is viable against the most probable opposing builds, which are almost always the "best" builds for the other player to choose. The issue is that there will always be the risk of the human player choosing a "not so good" build outside the subset that the AI's build does well against. The AI is technically making the right choice; it is just that the right choice still carries a build-order loss probability. A simpler way to say this is that the AI will always lose some games, since there is no build without a BO loss probability.
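The opportunity-cost argument can be written out as a tiny expected-value calculation; the builds, probabilities, and winrates below are invented for illustration. Even the build with the best expected winrate loses sometimes, which is the leftover "BO loss probability":

```python
# Sketch: pick the build with the best *expected* winrate against an
# assumed distribution over the opponent's builds. All numbers invented.

OPP_BUILD_PROB = {"standard": 0.7, "all_in": 0.2, "oddball": 0.1}

# WINRATE[my_build][opp_build]
WINRATE = {
    "safe":   {"standard": 0.50, "all_in": 0.80, "oddball": 0.60},
    "greedy": {"standard": 0.65, "all_in": 0.20, "oddball": 0.40},
}

def expected_winrate(my_build):
    return sum(p * WINRATE[my_build][opp] for opp, p in OPP_BUILD_PROB.items())

best = max(WINRATE, key=expected_winrate)
print(best, round(expected_winrate(best), 3))  # -> safe 0.57
```

"safe" is the right choice here (0.57 expected vs 0.535 for "greedy"), yet it still loses half its games against "standard" - the right choice does not mean a 100% winrate.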
|
It's interesting that in the turn-based games it plays these godlike moves, but in a real-time game we have efforts to make it seem more human-like. That's good though; then it can actually teach *us* something about the game.
|
On May 28 2017 01:04 XenoX101 wrote: Wouldn't "speeding up the game" be considered cheating? [...] As for sub-optimal play, this is an opportunity cost issue. You can only choose a build that is viable against some subset of builds, meaning you are guaranteed to be vulnerable to the complement of that subset. [...] A simpler way to say this is that the AI will always lose some games, since there is no build without a BO loss probability.
Well, the AI is trained by playing against itself, but I assume it's also tested against various build orders which the AI itself wouldn't necessarily deploy. This should include a bunch of inefficient or unorthodox builds as well, in my opinion. Also, if the AI doesn't perform well enough against suboptimal play, then it may start deploying those very strategies against itself (upon seeing that they are effective), which would lead to it playing against them quite often and learning to respond appropriately. A proper neural network should also, while learning, make the same kinds of generalizations that you or I make when playing the game. So even if you present it with something unexpected that it hasn't played against a lot - which is unlikely in itself - it probably won't "break down" and start doing completely stupid stuff. It will probably do what it learned is best when it doesn't know what it's up against: scout, try to identify enemy tech and react accordingly, keep producing workers and units, and probably play a bit safer. Ultimately, suboptimal play is suboptimal, so in most cases it can be countered just by playing safe.
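The idea of the AI "deploying these very strategies against itself" is essentially self-play with an opponent pool: periodically freeze snapshots of the agent and keep matching new versions against old or off-meta ones so that responses to unusual play stay trained. A minimal sketch of the bookkeeping only - the actual game and weight update are omitted, and all names and ratios are invented:

```python
# Sketch of self-play with an opponent pool: mostly play the newest
# agent, but sometimes sample an old snapshot, and periodically freeze
# the current agent into the pool.

import random

random.seed(0)

pool = ["latest"]  # snapshots of past selves (could also hold scripted oddballs)

def train_step(step):
    # Mostly face the newest agent, occasionally an old snapshot.
    opponent = "latest" if random.random() < 0.7 else random.choice(pool)
    # ... play a game against `opponent`, update weights (omitted) ...
    if step % 100 == 0:            # periodically freeze a copy into the pool
        pool.append(f"snapshot_{step}")
    return opponent

for step in range(1, 301):
    train_step(step)

print(pool)  # pool grows with frozen snapshots: snapshot_100, 200, 300
```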
|
On May 28 2017 00:16 LetaBot wrote: Deepmind works closely together with Blizzard, so they will probably have some way to speed up the game. [...] The main issue of bots right now is actually micromanagement. Even with unlimited apm you still have to make tactical decisions, which bots aren't good at yet.
Blizzard has already confirmed that the API will allow AIs to play the game as slowly or as fast as they want, and obviously, unless someone is watching the game, no rendering is necessary, which removes a major part of the workload for every tick of the game. So the only limit is computing power, which we know Google has heaps of.
Btw the API's expected functionalities have been documented here for anyone caring to take a look : Specs
Update 1
Update 2
From the specs, one of the most interesting parts is this: "The ability to load a replay and examine the state of the game as it plays."
I'm counting on AIs to point out mistakes in my play. Actually, I'm actively working on that kind of system.
|
Well, we don't know. Maybe Alpha can figure out a build that beats all the cheeses/all-ins and also does well enough that it can win macro games too.
|
On May 27 2017 21:55 XenoX101 wrote: But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them. You do know Go is also too complex for any computer to brute force, just like StarCraft?
FYI, Go has 2.08168199382×10^170 legal positions: 208 168 199 381 979 984 699 478 633 344 862 770 286 522 453 884 530 548 425 639 456 820 927 419 612 738 015 378 525 648 451 698 519 643 907 259 916 015 628 128 546 089 888 314 427 129 715 319 317 557 736 620 397 247 064 840 935.
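As a back-of-the-envelope check on that number: a 19×19 board has 3^361 possible colourings (each point empty, black, or white), and the quoted figure is the roughly 1.2% of those colourings that are legal positions. Python's arbitrary-precision integers can verify the magnitudes directly:

```python
# Sanity-check the quoted count of legal Go positions against the
# 3^361 upper bound of all board colourings.

upper_bound = 3 ** 361  # every point empty/black/white, legal or not
legal = 208168199381979984699478633344862770286522453884530548425639456820927419612738015378525648451698519643907259916015628128546089888314427129715319317557736620397247064840935

print(len(str(legal)))                # -> 171 digits, i.e. ~2.08e170
print(round(legal / upper_bound, 4))  # -> 0.012 (about 1.2% of colourings are legal)
```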
|
On May 28 2017 03:51 sertas wrote: Well we dont know. Maybe Alpha can figure out a build that beats all the cheese/allins and also does well enough where it can win macro games too
This, in the grand scheme of things, will actually be the least impressive part of it all if they manage to create a world-class StarCraft bot.
|
On May 28 2017 04:23 sabas123 wrote: You do know GO is also to complex for any computer to brute force just like starcraft? [...]
It's kinda amazing that even on this forum, which is supposed to be of quality, so many people are clueless about such basic things. Even in chess they don't use pure brute force.
|
In a few years the best Starcraft player in the world will be an AI.
Some years later the smartest person on the planet will not be a person.
This is the end for humanity.
|
On May 28 2017 05:24 MockHamill wrote: In a few years the best Starcraft player in the world will be an AI.
Some years later the smartest person on the planet will not be a person.
This in the end for humanity.
No? Every time there is a bit of improvement in AI, people oversell it, but it never fails to disappoint, and then nobody wants to hear anything about it for the next few years. Don't be fooled by the hype.
|
On May 28 2017 04:52 Poopi wrote: It's kinda amazing that even on this forum that is supposed to be of quality, so many people are clueless about such basic things. Even in chess they don't use pure brute force.
This comment is puzzling. Chess engines were beating the best chess players while still being very bad at Go, exactly because of the number of game states/combinations.
A chess engine doesn't naively start to sample possible moves, but it does find its move by calculating a lot of moves, including stupid moves that humans would instinctively reject.
The way you use 'brute force' suggests that an algorithm that isn't brute-forcing a solution is not significantly affected by the size of the state space. I was not trained as a computer scientist, so to me a brute-force algorithm is one that naively explores part of the possible solutions and relies on huge computational power to get to a meaningful result. That is not what is technically known as a brute-force algorithm, but your comment suggests that, like me, you want to use a wider definition.
This is opposed to an algorithm that uses a neural network already trained to come quickly to some solution using 'instinct'/pattern recognition, rather than partial naive sampling of a huge area of the solution space.
The fact is that a chess engine uses many tricks to limit the number of moves it considers, yet it still relies on sheer computational power. Likewise, a Monte Carlo search uses tricks so it quickly converges on a good solution, provided the phase space is small enough that the available computational power can reach a solution on acceptable time scales. To me that still makes them brute-force methods, though like I said, I realize I am not agreeing with the accepted definition.
But you seem oblivious to this entire debate, while accusing everyone else of being oblivious. Puzzling indeed.
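The classic chess-engine "trick to limit the number of moves considered" is alpha-beta pruning: it returns the same value as exhaustive minimax while provably skipping branches that cannot affect the result. A minimal sketch over a toy two-ply game tree (nested lists of leaf evaluations):

```python
# Minimal alpha-beta search: same result as full minimax, but branches
# that cannot change the outcome are cut off early.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:            # opponent won't allow this line
                break                    # prune remaining children
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Tiny two-ply game tree; minimax value is max(min(3,5), min(2,9), min(0,1)) = 3.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree, maximizing=True))  # -> 3
```

In this tree the 9 and the 1 are never evaluated: once the first branch guarantees 3, any reply line already refuted below 3 is abandoned.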
As for the number of game states in an RTS, I wonder if splitting them into tiers and coarse-graining them will work, and if that is done, how many significantly distinguishable game states there really are. Because to me, in an RTS, many games follow a similar general pattern. On the macro level, once the opening has stabilized, there are only so many different game states: you can be ahead in economy, equal, or behind, and the same goes for tech and army size. The fine details of unit positions often won't matter. You can spread the units those players have across the map in many different ways, but those are pointless game states. After a siege expand in TvP, both the terran and the protoss will have a certain number of units, protoss will have map control, and terran will have a limited number of spots where the tanks can be, either sieged or unsieged. Given a slight deviation, either player can probably move their units into the optimal position without any penalty. It is as if a TvP siege-expand game will almost always move through the same game states.
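That coarse-graining idea could be sketched as mapping exact states to a small tuple of relative tiers (economy/tech/army: ahead, even, behind), so that many different exact unit spreads collapse into the same abstract state. The cutoffs and counts below are arbitrary illustrations, not real game values:

```python
# Sketch: coarse-grain an exact game state into (economy, tech, army)
# tiers relative to the opponent. Cutoffs are arbitrary illustrations.

def tier(mine, theirs, slack=0.1):
    ratio = mine / max(theirs, 1)        # guard against division by zero
    if ratio > 1 + slack:
        return "ahead"
    if ratio < 1 - slack:
        return "behind"
    return "even"

def abstract_state(me, opp):
    return (tier(me["workers"], opp["workers"]),
            tier(me["tech"], opp["tech"]),
            tier(me["army"], opp["army"]))

me  = {"workers": 50, "tech": 2, "army": 60}
opp = {"workers": 40, "tech": 2, "army": 75}
print(abstract_state(me, opp))  # -> ('ahead', 'even', 'behind')
```

Under this abstraction, every post-siege-expand TvP position with roughly equal economy and tech lands in the same bucket, which is exactly the "same game state, almost always" intuition.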
|
On May 28 2017 07:39 Ernaine wrote: [...]
I didn't get what you didn't get from my post. One user said: "But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them," which seems really, really weird, because Go actually has a shitton of possible positions, yet it was still handled. So it seems like this user thinks we "solve" games using magic or by just evaluating every possible move, which is a rather impressive view in 2017.
|