BoxeR: "AlphaGo won't beat humans in StarCraft" - Page 24

Forum Index > SC2 General
568 Comments
Liquid`Drone
Profile Joined September 2002
Norway28733 Posts
May 27 2017 12:11 GMT
#461
I'm not talking about build orders, and I'm not saying AIs won't eventually be able to out-strategize players. I'm saying that they're already capable of winning when there are no limits on their mechanics, and that winning with human-like mechanical limitations is much further away. I also think sc2 is gonna be much easier than bw, because sc2 units move in a much more streamlined way and you don't have the ridiculously OP stuff like dark swarm+lurker that I think an AI would have a harder time with. I think in SC2, AIs should be able to win even with mechanical limits in the foreseeable future, but there's stuff in brood war that I think is seriously hard as fuck to program - much harder than anything we've seen from any turn-based games. I guess maybe what the AI does is that it turns brood war into a turn-based game with 20 turns happening every second, though.

Anyway, say for example you have an engagement, 3 templar 12 zealot 9 goon vs 30 hydra. Sure, you can make the AI spread perfectly and dance perfectly and constantly target the weaker units and perfectly dodge the storms, and then it'd win. But say there's a protoss AI against the zerg AI and the zerg AI is dancing perfectly and spreading perfectly; how does the protoss AI decide what the perfect time to throw down the storm is? Will it even use storm, seeing how perfect storm-dodge makes it much worse? Will it cover a perfect 3-storm area at the same time so that dodging becomes semi-impossible? Will the zerg AI adjust to this? In a ZvT late-game battle, how does the AI calculate whether to plague 4 science vessels vs throw down a dark swarm saving 2 lurkers? Once again, if the AI has perfect mechanics, so it can consume with 4 different defilers in 4 different map screens while harassing with mutas, sending scourge vs vessels, perfectly macroing and lurker+ling+defiler dropping empty spaces, then it'll be invincible through mechanical ability. But if there are mechanical limitations and the AI needs to calculate which operations it should skip, it becomes incredibly, incredibly complex. For me, as a human player, I approach a battle differently depending on the composition that I have and that my opponent has. If I have a muta+hydra army against a protoss goon+templar army, then the calculation of whether I want to suicide my mutas into the templar is much more complex than 'I have 10 mutas, which one-shot templars, so I can one-shot the templars and then attack with hydras'; there are hundreds of small calculations like 'okay, now the templar deviated 2 cm to the left side of the goon army, I have a chance to snipe it NOW'.

Marine vs lurker+ling with human APM is the same thing. With infinite APM, it becomes ridiculously easy for the terran. But if he's stuck with, say, 6 actions per second, and has to calculate which marine the lurker spine is more likely to target to move that one away from the rest of the group, if he has to calculate whether to focus fire on the flanking lings in the back or the lurkers in the front, if he has to calculate whether to build marines at home or whether to micro the battle - if the AI actually has to make all the decisions that humans have to make because we limit its ability to simply do 20 times more than a human can do - then I think we're looking at something that is ridiculously complex. Or reverse it, look at lurker+ling vs marine+medic. As a zerg, that's the kind of scenario where I try to distract the terran by attacking some other area (just to make him look there) before engaging, because if the terran doesn't pre-stim and pre-position, then the lurkers running in and burrowing next to the marines kills them all. How will the AI deal with that, if there's a limitation to how many places it can be at the same time? Will it evaluate that 'this attack is a distraction' (if you play PvZ against Mondragon, you actually learn that you should ignore the first attack, because the first attack is always a distraction) and focus on the second one?

There are literally hundreds of small scenarios like this where I think an AI is gonna have an incredibly hard time if the number of operations it can execute is limited to match that of the human player it faces off against. Of course, it can be programmed, sometime in the future, by a programmer who has progamer knowledge. But if you look at all the possible positionings of all the possible unit combinations on all the possible maps against all the different possible unit combinations, then we're looking at a 'Go squared, squared' amount of possible options.
Moderator
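The core tension in the post above, an agent that must decide which operations to skip under an action budget, can be sketched as a toy scheduler. All the values and action names here are hypothetical, purely for illustration, and nothing from an actual bot:

```python
import heapq

def choose_actions(candidates, budget):
    """Return the `budget` highest-value actions from a list of
    (estimated_value, action) pairs; everything else gets skipped.
    A toy model of an APM-limited agent deciding what to do this tick."""
    top = heapq.nlargest(budget, candidates, key=lambda c: c[0])
    return [action for _, action in top]

# Hypothetical candidate actions for one game tick
tick = [
    (0.9, "dodge storm with mutas"),
    (0.7, "consume with defiler 1"),
    (0.4, "rally new lings"),
    (0.2, "move overlord"),
    (0.8, "burrow lurkers"),
]
# With a budget of 3, the two lowest-value actions are skipped this tick
print(choose_actions(tick, budget=3))
```

The hard part, of course, is not the selection itself but producing good value estimates for each candidate, which is exactly the "hundreds of small calculations" the post describes.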
Hemling
Profile Joined March 2010
Sweden93 Posts
May 27 2017 12:16 GMT
#462
There are also built-in limits for humans in games like brood war, the 12-unit selection max for example; these would not apply to a limitless-APM bot, thus giving it an unfair advantage, right? I mean, if the AI is to beat a human fairly, it should also include human reaction times, time for moving fingers, and all these things limiting a human from doing exactly what they think when they think it.

I'm curious whether people with more knowledge about modern AI can answer this: if a bot can extract data from 10,000 replays in a day, is that equivalent to a human playing 10,000 games per day?

http://eu.battle.net/sc2/en/profile/246845/1/Hemligt/
ETisME
Profile Blog Joined April 2011
12632 Posts
May 27 2017 12:28 GMT
#463
So is it really confirmed? Because there's no news at all.

I don't really see how anyone can beat the AI. Yes, there are lots of possibilities because of the real-time factor and the fog of war, but most possibilities are inefficient and/or unimportant.

I don't know the extent of the AI's capability, but it can get timings right down to the second for making any important decisions.
Swift as the wind, silent as a forest, fierce as fire, immovable as a mountain, inscrutable as shadow, sudden as thunder.
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
Last Edited: 2017-05-27 13:11:16
May 27 2017 12:40 GMT
#464
Interesting read and video:
https://deepmind.com/blog/deepmind-and-blizzard-release-starcraft-ii-ai-research-environment/


And here are some quotes from what people on reddit have said:

Deepmind will limit its APM to 200 from what I have heard

They said they will limit both input and output APM.

What do you mean by "input APM"?

They said something like only refreshing game state 15 times a second (to simulate reaction time) instead of 60 times per second.


Oh, and apparently they reworked the AI since the Lee Sedol match last year. The new version was the one that kept playing against itself to learn from itself.

Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
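The "input APM" idea quoted above, refreshing the observed game state 15 times a second instead of 60, amounts to holding the last seen state for a few frames. A minimal sketch with made-up frame data, not actual DeepMind code:

```python
def throttled_observations(frames, game_fps=60, obs_fps=15):
    """Yield (frame_index, observed_state) pairs: the agent only refreshes
    its view at obs_fps, holding the last seen state in between, which
    simulates a human-like reaction delay on top of the real game_fps."""
    step = game_fps // obs_fps          # observe every 4th frame: 60 -> 15 Hz
    last = None
    for i, state in enumerate(frames):
        if i % step == 0:
            last = state                # refresh the observation
        yield i, last

# Eight made-up frames of game state
frames = [f"state{i}" for i in range(8)]
print([seen for _, seen in throttled_observations(frames)])
```

Anything that happens between two refreshes is simply invisible to the agent until the next one, which is one way to cap effective reaction time without capping intelligence.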
loginn
Profile Blog Joined January 2011
France815 Posts
May 27 2017 12:53 GMT
#465
The interesting thing about DeepMind is that they take a very hands-off approach with AI, where they try to get it to learn on its own instead of using expert data. That's why their research is so interesting.
Stephano, Taking skill to the bank since IPL3. Also Lucifron and FBH
XenoX101
Profile Joined February 2011
Australia729 Posts
Last Edited: 2017-05-27 13:07:52
May 27 2017 12:55 GMT
#466
One problem with an AI learning a game like SC2 is that it can't speed up the learning process without access to the game code (i.e. it cannot play 1,000,000,000 games against itself to learn optimal strategies). So it can only play games at the same pace as humans to gather information. The good part is that it can play 24/7 where a normal player can only play 12/7, and the amount of information it can gather per match is much higher than a human's. It could technically make more precise calculations about engagements than any player.

However it may also be possible for the AI to play games in its "head", if it plays enough games to be able to understand the game mechanics well enough. So then even if it can't physically play a lot of games, it can simulate them at a much faster rate than a human can play them. Technically, if its mental model is accurate enough, it could bypass playing games altogether and rely solely on its understanding of the game to run "game experiments" in its head. But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them. So the AI would have to be very well optimized to only look at "worthwhile" simulations, and ignore the billions of not so valuable simulations (e.g. how to deal with mass reapers or a similarly weak strategy) that would only waste processing time.

EDIT: Thinking about this more, I see one way that humans can still win, and that is through "sub-optimal" play that the AI would not expect, or would be willing to gamble on losing to because most players wouldn't do it. This would be something like a DT rush against someone going Robo tech with observers, or a transition into mass Muta after Terran has gone Viking. If the AI doesn't scout it, it would not expect this kind of play. On average it will still likely win most games because of the balance of probabilities, but it would not win 100% due to these kinds of oddball games where it would have to predict stupidity. Though this is more reflective of the game itself than the AI, where there are always going to be games that lead to inevitable build order losses. So the real test isn't whether AI can always beat human players, or even if it can beat the best players, but whether it can do so with a higher winrate than any existing player, i.e. greater than Flash's 70% in all matchups.
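The point above about pruning "not worthwhile" simulations can be illustrated with a toy rollout loop that skips strategies whose prior value estimate is below a threshold. All strategy names and numbers here are hypothetical, and this is a sketch of the pruning idea, not any real planner:

```python
import random

def best_strategy(strategy_priors, n_rollouts=1000, prune_below=0.2):
    """Toy 'games in its head' loop: sample strategies to simulate, but
    skip (prune) rollouts whose prior estimated value is below a
    threshold, so compute is spent only on worthwhile simulations."""
    random.seed(0)                      # deterministic for the example
    best, best_score, evaluated = None, -1.0, 0
    for _ in range(n_rollouts):
        strat, prior = random.choice(strategy_priors)
        if prior < prune_below:         # e.g. mass reapers: not worth simulating
            continue
        evaluated += 1
        score = prior + random.uniform(-0.05, 0.05)  # noisy rollout outcome
        if score > best_score:
            best, best_score = strat, score
    return best, evaluated

# Hypothetical strategies with prior value estimates
strategies = [("mass reaper", 0.1), ("2-base timing", 0.5), ("macro into late game", 0.7)]
best, n_simulated = best_strategy(strategies)
print(best, n_simulated)
```

The pruning threshold is exactly the double-edged sword the thread keeps circling: it saves compute, but anything written off as "weak" is also what the agent never practices against.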
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
Last Edited: 2017-05-27 13:11:47
May 27 2017 13:10 GMT
#467
It's funny that many think we will be outsmarting the AI with gimmicks and unorthodox play, when in both chess and Go it was the AI that showed us the best of both.

Google has since managed to take Ke by surprise: “There was a cut that quite shocked me, because it was a move that would never happen in a human-to-human Go match,” he said.



lol:

Earlier this year, Google secretly let the improved AlphaGo play unofficially on an online Go platform. The AI won 50 out of 51 games, and its only loss was due to an internet connection timeout.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
LetaBot
Profile Blog Joined June 2014
Netherlands557 Posts
May 27 2017 15:16 GMT
#468
On May 27 2017 21:55 XenoX101 wrote:
One problem with AI learning a game like SC2 is that it can't speed up the learning process without access to the game code. […]



DeepMind works closely with Blizzard, so they will probably have some way to speed up the game. Sub-optimal play won't work either, since even in BW there are bots that can narrow down the possible build orders and unit compositions based purely on the elapsed game time (there are only so many minerals you can gather in a certain amount of time).

The main issue of bots right now is actually micromanagement. Even with unlimited apm you still have to make tactical decisions, which bots aren't good at yet.
If you cannot win with 100 apm, win with 100 cpm.
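The point above, that elapsed game time bounds what an opponent can possibly have built, can be sketched as a feasibility filter. The economy numbers below are illustrative guesses, not exact Brood War values:

```python
def max_minerals(t_seconds, start_workers=4, rate_per_worker=0.7, build_time=13):
    """Rough upper bound on minerals gathered by t_seconds, assuming the
    player does nothing but make workers. All numbers are illustrative,
    not exact Brood War economy values."""
    minerals, workers, t = 0.0, start_workers, 0
    next_worker = build_time
    while t < t_seconds:
        minerals += workers * rate_per_worker  # one second of mining
        t += 1
        if t >= next_worker:                   # a new worker pops and mines
            workers += 1
            next_worker += build_time
    return minerals

def feasible_builds(builds, t_seconds):
    """Keep only builds whose mineral cost could have been paid by t_seconds."""
    cap = max_minerals(t_seconds)
    return [name for name, cost in builds if cost <= cap]

# Hypothetical builds with rough total mineral costs
builds = [("12-pool", 400), ("2-base hydra", 1200), ("5-base late game", 6000)]
print(feasible_builds(builds, t_seconds=180))
```

Anything filtered out this way is physically impossible for the opponent, no scouting required, which is why "sub-optimal surprise" builds are less surprising to a bot than they feel to a human.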
XenoX101
Profile Joined February 2011
Australia729 Posts
May 27 2017 16:04 GMT
#469
On May 28 2017 00:16 LetaBot wrote:
Deepmind works closely together with Blizzard, so they will probably have some way to speed up the game. Sub-optimal play won't work either, since even in BW there are bots that can consider the possible build orders and unit compositions based purely on the time of the game (there are only so many minerals you can gather in a certain amount of time).

The main issue of bots right now is actually micromanagement. Even with unlimited apm you still have to make tactical decisions, which bots aren't good at yet.


Wouldn't "speeding up the game" be considered cheating? It's something normal players would not have access to, so I would think they wouldn't allow the AI to do it. My thinking is that the AI should only have access to the visual pixel information, the same as a real person, as this would put it on equal footing with a human.

As for sub-optimal play, this is an opportunity-cost issue. You can only choose a build that is viable against some subset of builds, meaning you are guaranteed to be vulnerable to the complement of that subset. The AI would ideally always pick the build that is viable against the most probable opposing builds, which are almost always the "best" builds for the other player to choose. The issue is that there will always be the risk of the human player choosing a "not so good" build, one outside the subset that the AI will do well against. This is because the AI is technically making the right choice; it is just that the right choice still has a build order loss probability. A simpler way to say this is that the AI will always lose some games, since there is no build without a BO loss probability.
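The opportunity-cost argument above is essentially an expected-value calculation over an assumed distribution of opponent builds. A toy version, with entirely hypothetical winrates and build names:

```python
def best_build(payoff, opp_probs):
    """Pick the build with the highest expected winrate against an assumed
    probability distribution over opponent builds."""
    def expected_winrate(my_build):
        return sum(p * payoff[my_build][opp] for opp, p in opp_probs.items())
    return max(payoff, key=expected_winrate)

# payoff[my_build][opp_build] = my winrate; all values are hypothetical
payoff = {
    "safe macro":  {"standard": 0.55, "cheese": 0.45},
    "greedy econ": {"standard": 0.65, "cheese": 0.15},
}
print(best_build(payoff, {"standard": 0.9, "cheese": 0.1}))  # cheese is rare
print(best_build(payoff, {"standard": 0.5, "cheese": 0.5}))  # cheese is a real threat
```

Against the likely distribution the greedy build maximizes expected winrate, yet it still loses badly whenever the rare cheese actually shows up; that residual risk is exactly the BO-loss probability the post describes.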
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 27 2017 17:13 GMT
#470
It's interesting that in turn-based games it plays these godlike moves, but in a real-time game we have efforts to make it seem more human-like. That's good though; then it can actually teach *us* something about the game.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
Sholip
Profile Blog Joined March 2014
Hungary422 Posts
Last Edited: 2017-05-27 17:47:55
May 27 2017 17:46 GMT
#471
On May 28 2017 01:04 XenoX101 wrote:
Wouldn't "speeding up the game" be considered cheating? […]


Well, the AI is trained by playing against itself, but I assume it's also tested against various build orders which the AI itself wouldn't necessarily deploy. This should include a bunch of inefficient or unorthodox builds as well, in my opinion. Also, if the AI doesn't perform well enough against suboptimal play, then it may start deploying those very strategies against itself (upon seeing that they are effective), which would lead to it playing against them quite often and learning to respond appropriately.
Also, a proper neural network should, while learning, make the generalizations that you or I make when playing the game. So even if you present it with something unexpected which it hasn't played against a lot – which is unlikely in itself – it probably won't "break down" and start doing completely stupid stuff. It will probably do what it has learned is best when it doesn't know what it's up against: scout, try to identify enemy tech and react accordingly, while creating workers and units, and probably playing a bit safer. Ultimately, suboptimal play is suboptimal, so in most cases it can be countered just by playing safe.
"A hero is no braver than an ordinary man, but he is brave five minutes longer. Also, Zest is best." – Ralph Waldo Emerson
loginn
Profile Blog Joined January 2011
France815 Posts
Last Edited: 2017-05-27 17:59:21
May 27 2017 17:55 GMT
#472
On May 28 2017 00:16 LetaBot wrote:
Deepmind works closely together with Blizzard, so they will probably have some way to speed up the game. Sub-optimal play won't work either, since even in BW there are bots that can consider the possible build orders and unit compositions based purely on the time of the game (there are only so many minerals you can gather in a certain amount of time).

The main issue of bots right now is actually micromanagement. Even with unlimited apm you still have to make tactical decisions, which bots aren't good at yet.


Blizzard already confirmed that the API will allow AIs to play the game as slowly or as fast as they want, and obviously, unless someone is watching the game, no rendering is necessary, which removes a major part of the workload for every tick of the game. So the only limit is compute power, which we know Google has heaps of.

Btw, the API's expected functionality has been documented here for anyone who cares to take a look:
Specs

Update 1

Update 2

From the specs one of the most interesting parts is this : The ability to load a replay and examine the state of the game as it plays.

I'm counting on AIs to point out mistakes in my play. Actually, I'm actively working on that kind of system.


Stephano, Taking skill to the bank since IPL3. Also Lucifron and FBH
sertas
Profile Joined April 2012
Sweden889 Posts
May 27 2017 18:51 GMT
#473
Well, we don't know. Maybe Alpha can figure out a build that beats all the cheeses/all-ins and also does well enough that it can win macro games too.
sabas123
Profile Blog Joined December 2010
Netherlands3122 Posts
Last Edited: 2017-05-27 19:23:40
May 27 2017 19:23 GMT
#474
On May 27 2017 21:55 XenoX101 wrote:
But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them.

You do know Go is also too complex for any computer to brute-force, just like StarCraft?

FYI, Go has 2.08168199382×10^170 legal positions:
208 168 199 381 979 984 699 478 633 344 862 770 286 522 453 884 530 548 425 639 456 820 927 419 612 738 015 378 525 648 451 698 519 643 907 259 916 015 628 128 546 089 888 314 427 129 715 319 317 557 736 620 397 247 064 840 935
The harder it becomes, the more you should focus on the basics.
sabas123
Profile Blog Joined December 2010
Netherlands3122 Posts
May 27 2017 19:36 GMT
#475
On May 28 2017 03:51 sertas wrote:
Well, we don't know. Maybe Alpha can figure out a build that beats all the cheese/all-ins and also does well enough that it can win macro games too.

This, in the grand scheme of things, will actually be the least impressive part if they manage to create a world-class StarCraft bot.
The harder it becomes, the more you should focus on the basics.
Poopi
Profile Blog Joined November 2010
France12906 Posts
May 27 2017 19:52 GMT
#476
On May 28 2017 04:23 sabas123 wrote:
On May 27 2017 21:55 XenoX101 wrote: But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them.

You do know Go is also too complex for any computer to brute-force, just like StarCraft?

FYI, Go has 2.08168199382×10^170 legal positions.

It's kinda amazing that even on this forum, which is supposed to be of quality, so many people are clueless about such basic things.
Even in chess they don't use pure brute force.
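The classic example of what Poopi is pointing at is alpha-beta pruning: an engine gets the same answer as exhaustive minimax while skipping branches that provably cannot change the result. A minimal sketch on a toy tree (not an actual engine):

```python
# Minimal alpha-beta sketch: a game tree as nested lists, leaves are
# static evaluations. Alpha-beta returns the same value as plain minimax
# while cutting off siblings that cannot affect the result -- the sense
# in which chess engines "don't use pure brute force".

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):       # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                # remaining siblings pruned
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

tree = [[3, 5], [6, 9], [1, 2]]              # depth-2 toy tree
print(alphabeta(tree))                       # 6 (the last subtree is cut off)
```

On this tiny tree the leaf `2` is never evaluated; on a real chess tree the same cutoff rule discards the vast majority of positions.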
WriterMaru
MockHamill
Profile Joined March 2010
Sweden1798 Posts
May 27 2017 20:24 GMT
#477
In a few years, the best StarCraft player in the world will be an AI.

Some years later the smartest person on the planet will not be a person.

This is the end for humanity.
Poopi
Profile Blog Joined November 2010
France12906 Posts
May 27 2017 20:38 GMT
#478
On May 28 2017 05:24 MockHamill wrote:
In a few years the best Starcraft player in the world will be an AI.

Some years later the smartest person on the planet will not be a person.

This is the end for humanity.

No?
Every time there is a bit of improvement in AI, people oversell it, but it never fails to disappoint, and then nobody wants to hear anything about it for the next few years.
Don't be fooled by the hype.
WriterMaru
Ernaine
Profile Joined May 2017
60 Posts
Last Edited: 2017-05-27 22:51:17
May 27 2017 22:39 GMT
#479
On May 28 2017 04:52 Poopi wrote:
On May 28 2017 04:23 sabas123 wrote:
On May 27 2017 21:55 XenoX101 wrote: But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them.

You do know Go is also too complex for any computer to brute-force, just like StarCraft?

FYI, Go has 2.08168199382×10^170 legal positions.

It's kinda amazing that even on this forum, which is supposed to be of quality, so many people are clueless about such basic things.
Even in chess they don't use pure brute force.



This comment is puzzling. Chess engines were beating the best chess players while being very bad at Go, precisely because of the number of game states/combinations.

A chess engine doesn't naively sample possible moves. But it does find its move by calculating a lot of moves, including stupid moves that humans would instinctively reject.

The way you use 'brute force' suggests that an algorithm that is not brute-forcing a solution is not significantly affected by the size of the state space. I was not trained as a computer scientist, so to me a brute-force algorithm is one that naively explores part of the possible solutions and relies on huge computational power to get a meaningful result. That is not what is technically known as a brute-force algorithm, but your comment suggests that, like me, you want to use a wider definition.

This is opposed to an algorithm that uses a neural network already trained to quickly come to some solution, using 'instinct'/pattern recognition, rather than partial naive sampling of a huge area of the solution space.

The fact is that a chess engine uses many tricks to limit the number of moves it considers, and it still relies on sheer computational power. The fact is that a Monte Carlo method uses tricks so it quickly converges on the correct solution, given that the phase space is small enough that the available computational power can reach a solution on acceptable time scales. To me that still makes them brute-force methods, though, as I said, I realize I am not agreeing with the accepted definition.

But you seem oblivious to this entire debate, while accusing everyone else of being oblivious. Puzzling indeed.



As for the number of game states in an RTS, I wonder if coarse-graining them into tiers would work, and, if it were used, how many significantly distinguishable game states there really are. To me, many RTS games follow a similar general pattern. On the macro level, once the opening has stabilized, there are only so many different game states: you can be ahead in economy, even, or behind, and the same goes for tech and army size. The fine details of unit positions often won't matter. You can spread the units the players have across the map in many different ways, but those are pointless game states.

After a siege expand in TvP, both the Terran and the Protoss will have a certain number of units, the Protoss will have map control, and the Terran will have a limited number of spots where the tanks can be, either sieged or unsieged. Given a slight deviation, either player can probably move their units into the optimal position without any penalty. It is as if a TvP siege-expand game will almost always move through the same game states.
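The coarse-graining idea above can be sketched as a toy state abstraction. Everything here is hypothetical for illustration (the feature names, the 10% tolerance, and the example numbers), but it shows how countless concrete game states collapse into a handful of abstract ones:

```python
# Toy coarse-graining of an RTS game state: collapse continuous
# quantities (economy, tech, army) into behind/even/ahead buckets,
# so many micro-variations map to the same abstract state.

def bucket(mine, theirs, tol=0.1):
    """Classify a ratio as behind / even / ahead within a tolerance."""
    if mine < theirs * (1 - tol):
        return "behind"
    if mine > theirs * (1 + tol):
        return "ahead"
    return "even"

def abstract_state(me, opp):
    return (
        bucket(me["workers"], opp["workers"]),
        bucket(me["tech"], opp["tech"]),
        bucket(me["army"], opp["army"]),
    )

me  = {"workers": 44, "tech": 2, "army": 60}
opp = {"workers": 39, "tech": 2, "army": 75}
print(abstract_state(me, opp))   # ('ahead', 'even', 'behind')
```

With three features and three buckets each there are only 27 abstract states, which is the kind of reduction the post is gesturing at, at the cost of discarding the positional detail that sometimes does matter.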

Poopi
Profile Blog Joined November 2010
France12906 Posts
May 27 2017 22:50 GMT
#480
On May 28 2017 07:39 Ernaine wrote:
On May 28 2017 04:52 Poopi wrote:
On May 28 2017 04:23 sabas123 wrote:
On May 27 2017 21:55 XenoX101 wrote: But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them.

You do know Go is also too complex for any computer to brute-force, just like StarCraft?

FYI, Go has 2.08168199382×10^170 legal positions.

It's kinda amazing that even on this forum, which is supposed to be of quality, so many people are clueless about such basic things.
Even in chess they don't use pure brute force.

...

I don't see what you didn't get from my post.
One user said: "But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them," which seems really weird, because Go actually has a shitton of positions, yet was still handled. So it seems this user thinks we "solve" games using magic, or by just evaluating every possible move, which is a rather impressive view to hold in 2017.
WriterMaru