BoxeR: "AlphaGo won't beat humans in StarCraft" - Page 29

Forum Index > SC2 General
Grumbels
Profile Blog Joined May 2009
Netherlands, 7031 Posts
December 15 2017 09:58 GMT
#561
I thought this was funny, from the paper:
Convolutional networks for reinforcement learning [..] usually reduce spatial resolution of the input with each layer and ultimately finish with a fully connected layer that discards it completely. This allows for spatial information to be abstracted away before actions are inferred. In StarCraft, though, a major challenge is to infer spatial actions (clicking on the screen and minimap). As these spatial actions act within the same space as inputs, it might be detrimental to discard the spatial structure of the input.


I read somewhere that AlphaZero used the last seven moves as input for its network. This might seem odd, since theoretically in Go you only need to know the board position to come up with a correct move. The reason given was that it serves as an "attention mechanism", i.e. if you know the last couple of moves you get some information about what parts of the board are more significant. This is actually a very human way of approaching the game.

In both of these examples the researchers basically have to guess what information to feed their pet network for it to be able to grow effectively. Since StarCraft is a game where spatial relationships are important, the assumption is that the network needs input that does not mask them. It's like nurturing an alien organism you know nothing about.
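The quoted passage is essentially the motivation for a fully-convolutional policy: keep the screen resolution all the way to the output so that "click here" actions live in the same coordinate frame as the input features. Below is a minimal sketch of that idea in PyTorch; it is not DeepMind's actual architecture, and the channel counts, screen size and head sizes are made-up placeholders.

```python
# Minimal sketch (not DeepMind's architecture): a policy head that never
# collapses the spatial dimensions, so the "where to click" output is a
# distribution over the same 64x64 grid the feature layers come in as.
import torch.nn as nn
import torch.nn.functional as F

class FullyConvPolicy(nn.Module):
    def __init__(self, in_channels=17, screen_size=64, n_non_spatial=10):
        super().__init__()
        # Stride-1 convolutions with padding keep the full screen resolution.
        self.conv1 = nn.Conv2d(in_channels, 16, kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        # 1x1 convolution: one logit per screen pixel (the spatial action head).
        self.spatial_logits = nn.Conv2d(32, 1, kernel_size=1)
        # The non-spatial head (e.g. which action function to use) may still
        # flatten, but only after the spatial head has branched off.
        self.fc = nn.Linear(32 * screen_size * screen_size, 256)
        self.non_spatial_logits = nn.Linear(256, n_non_spatial)

    def forward(self, screen):                       # screen: (B, C, 64, 64)
        h = F.relu(self.conv1(screen))
        h = F.relu(self.conv2(h))                    # still (B, 32, 64, 64)
        spatial = self.spatial_logits(h)             # (B, 1, 64, 64)
        spatial = spatial.flatten(1).softmax(dim=1)  # distribution over pixels
        non_spatial = F.relu(self.fc(h.flatten(1)))
        non_spatial = self.non_spatial_logits(non_spatial).softmax(dim=1)
        return spatial, non_spatial
```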
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Grumbels
Profile Blog Joined May 2009
Netherlands, 7031 Posts
December 15 2017 10:20 GMT
#562
On December 15 2017 17:12 Archiatrus wrote:
Maybe an interesting addition to the "micro tasks are simple for AIs and it is only the strategy part" discussion: Table 1 of this paper. Of course the paper is now four months old, but I would have thought that, for example, CollectMineralShards should be easy for the Atari-net.


I think micro tasks have a vastly reduced action space: you basically only have to attack and move. Particularly for the mini-games where marines face off against roaches or zerglings/banelings, you have so few units that you only need to keep them all selected (though you need to reselect every so often); you probably don't need to spread your units. I found it curious that DeepMind's tester could not keep up with the grandmaster on the former but actually performed better on the latter, whereas the AIs did better on the former but worse on the latter. What is the difference? What were the correct strategies, and why couldn't the AI figure them out?
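To make the "vastly reduced action space" point concrete, here is a rough scripted-agent sketch for one of these combat mini-games that only ever selects the army and attack-moves toward the mean enemy position. It assumes the pysc2 API (base_agent.BaseAgent, actions.FUNCTIONS, the player_relative feature layer); treat the details as approximate rather than a tested agent.

```python
# Rough sketch, assuming the pysc2 API: a scripted agent for a combat
# mini-game such as DefeatRoaches that uses only two actions, select-army
# and attack-move toward the mean enemy position on screen.
from pysc2.agents import base_agent
from pysc2.lib import actions, features

class AttackMoveAgent(base_agent.BaseAgent):
    def step(self, obs):
        super().step(obs)
        if actions.FUNCTIONS.Attack_screen.id in obs.observation.available_actions:
            relative = obs.observation.feature_screen.player_relative
            ys, xs = (relative == features.PlayerRelative.ENEMY).nonzero()
            if len(xs) == 0:
                return actions.FUNCTIONS.no_op()
            # Attack-move toward the centroid of visible enemy units.
            target = [int(xs.mean()), int(ys.mean())]
            return actions.FUNCTIONS.Attack_screen("now", target)
        if actions.FUNCTIONS.select_army.id in obs.observation.available_actions:
            return actions.FUNCTIONS.select_army("select")
        return actions.FUNCTIONS.no_op()
```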
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Archiatrus
Profile Joined June 2014
Germany, 64 Posts
December 15 2017 10:54 GMT
#563
On December 15 2017 19:20 Grumbels wrote:
On December 15 2017 17:12 Archiatrus wrote:
Maybe an interesting addition to the "micro tasks are simple for AIs and it is only the strategy part" discussion: Table 1 of this paper. Of course the paper is now four months old, but I would have thought that, for example, CollectMineralShards should be easy for the Atari-net.



I think micro tasks have a vastly reduced action space: you basically only have to attack and move. Particularly for the mini-games where marines face off against roaches or zerglings/banelings, you have so few units that you only need to keep them all selected (though you need to reselect every so often); you probably don't need to spread your units. I found it curious that DeepMind's tester could not keep up with the grandmaster on the former but actually performed better on the latter, whereas the AIs did better on the former but worse on the latter. What is the difference? What were the correct strategies, and why couldn't the AI figure them out?


Now that you mention it, it is indeed odd. Here are replays of a GM getting 849.7 over 25 games. Maybe the GM in the paper slept through a few instances :D
PlayerofDota
Profile Joined May 2017
29 Posts
December 15 2017 19:38 GMT
#564
It will depend on the APM limitations, if they impose any at all. I feel there should be APM limits on the AI, because otherwise it would be unfair to humans. Imagine if we had a brain interface and could control units with our thoughts alone.

But we have to think it in our brain, visualize it with our eyes, move the mouse, click, use the keyboard, and wait for the input to register and appear on screen. The AI is directly wired in, and thus has an inherent advantage.

Games like chess and Go are very linear, and while there may be a certain 'intuition' to them, it is not actually that deep. It's like Diablo 3 build combinations: wasn't the number Blizzard gave something like 44 million, or some other crazy figure? In reality there were maybe 50 builds that were different enough to matter, while the rest were extremely minor modifications of those 50.

So Go does involve some "intuition", but the realistic choices are far fewer than the set of all possible moves; in a given position only 3-4 moves are really worth playing.

So mastering a real-time strategy game will require a lot more thinking power. The AI has to scout consistently, make adjustments based on that scouting, and weigh those adjustments against the strategy it is already committed to as a result of previous scouting.

Then there are the decisions of when to sacrifice, for example, an army or a base in order to win the larger battle, which units to build at which times, where to position them, and when to attack, retreat, harass, and so on.

And again, I feel that in order to have a fair competition and not a mechanical auto-win, the bot will have to have its APM limited to roughly the average of pro players. Otherwise, if it can always pick up a Reaver at the last millisecond, perfectly spread Marines and Medics so that no more than two units are ever hit by Lurkers, and dance with Dragoons indefinitely, it can never lose.

I feel the onus has to be on its "thinking" power and whether it can outsmart and out-strategize humans.
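One simple way such an APM cap could be enforced is a sliding-window rate limiter in the bot's action loop. This is purely illustrative: the 300 APM budget, the class name and the idea of substituting a no-op when over budget are assumptions, not anything DeepMind has described.

```python
# Illustrative sketch only: a sliding-window APM cap for a bot's action loop.
# The 300 APM budget and the no-op fallback are assumptions for the example.
from collections import deque

class ApmLimiter:
    def __init__(self, max_apm=300, window_seconds=60.0):
        self.max_actions = max_apm      # actions allowed per rolling window
        self.window = window_seconds
        self.timestamps = deque()

    def allow(self, now):
        """Return True if the bot may issue a real action at time `now` (seconds)."""
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()   # forget actions outside the window
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True
        return False                    # over budget: the bot should no-op
```

The agent would call limiter.allow(game_time) before issuing each real action and fall back to a no-op whenever it returns False.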
Grumbels
Profile Blog Joined May 2009
Netherlands, 7031 Posts
Last Edited: 2017-12-17 11:25:02
December 17 2017 11:24 GMT
#565
On December 15 2017 07:21 Excludos wrote:
On December 14 2017 23:50 GoloSC2 wrote:
On December 14 2017 07:16 Grumbels wrote:
https://www.cs.mun.ca/~dchurchill/starcraftaicomp/2017/

These were the results of the recent BW AI competition. Note that the AI created by a hobbyist that just rushes you every game won the competition, and four of the top six finishers had really short average game times. Also, afaik, in 10 years of AI research there hasn't been a single bot that could compete on any level with a pro.

In the other article someone said they did not expect an AI to be able to beat a human player in the next five years. So honestly, if you're only casually interested in this I would go to sleep and wake up in two years before checking if there was any significant progress. SC2 is just vastly more complex than Chess or Go, and it's not even clear if a single AI based on a general learning algorithm is capable of learning it to the point of posing any sort of challenge to a pro player.


A few months before AlphaGo was initially revealed, there was an article in which members of the computer Go community stated they believed a Go program that could beat professional players was at least a decade away. The reasoning sounded a lot like what you are saying: basically, that Go was far more complex than chess, and that this was shown by the fact that the best Go programs at the time were playing at a low intermediate level.

Note that I'm not trying to say you're necessarily wrong; the games are very different. I just want to point out that I've read something similar before, and therefore doubt that we, as outsiders not involved in the development, can make very reliable guesses.


You're not wrong; people consistently underestimate things like this. Again, have people already forgotten about OpenAI's bot beating top Dota players in 1v1 literally months ago, after only two weeks of training? Yes, you can argue that SC2 is more complex for sure, but it's not "decades away", or even "several years" away. AI research has absolutely skyrocketed these last few years. We are going to see an AI beat top SC2 players within 2018. Whether it's months or a year away I don't know, but it's right around the corner for sure.

I checked DeepMind's site, and they scarcely mention SC2. For instance, the only recent mention of SC2 on Twitter is a short promotion of Blizzard's AI workshop, where they explain the environment.
twitter

And if you look at the papers presented at their recent conference, most of them certainly have nothing to do with SC2 research and none of them seem to mention SC2. link

So personally I would not expect any sort of breakthrough in the next year.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
HomoDeus
Profile Joined July 2017
Netherlands, 12 Posts
December 17 2017 13:54 GMT
#566
It's not a matter of "if" but a matter of "when" an AI will beat a human professional.
ProMeTheus112
Profile Joined December 2009
France, 2027 Posts
Last Edited: 2017-12-17 14:57:55
December 17 2017 14:19 GMT
#567
They will have a lot of trouble with SC2 unless they can build some unbeatable, explosive micro-based timing attack. In Dota 1v1 the game is pretty simple, so the bot can lean on the frame-level precision where it has the biggest advantage. In SC2, though, the number of possibilities makes it very hard for an AI to play a good game: it can't calculate all of it, so it has to operate with an error margin that humans can respond to much better, because humans understand the game instead of calculating it. The AI can only calculate; it isn't intelligent at all, it's just a calculator program that runs faster or with more memory. It doesn't understand concepts, it only calculates. You have to implement yourself the concepts you want the CPU to calculate, so if you play around the machine's calculation method, it will be thrown off by your understanding of what it is doing versus what it doesn't know you are doing in that particular game. Even supposing someone could fully map SC2 mathematically, or that the AI could do so itself with these methods, there would likely be flaws in that map due to the sheer complexity compared to Dota 1v1 (we're talking something like millions of times more complicated), and handling that enormous amount of data during gameplay would probably require hardware nobody has built yet. Maybe I'm wrong and SC2 can be reduced to some kind of baneling or adept all-in with perfect, unstoppable micro, but I don't believe we're going to see an AI consistently beat the best human players in the more complex RTS games for a long time; there is too much show-off talk from the owners of AI patents. They still can't make an AI that handles language properly, and that requires, I think, a lot less data than mapping StarCraft.
In short, I think the AIs developed so far may give the appearance of being somewhat evolved, but I haven't seen anything actually impressive beyond being very fast or very accurate. The most impressive things I've seen are those robots that can jump obstacles and stabilize themselves on two or four legs, but they're still so shaky about it, right? Sure, they don't have the hundreds of different muscles that animals have, but animals seem a lot smarter than robots.
That's because they are. Computers are stupid, completely stupid: only fast and accurate. There is no intelligence there, only calculation; it's not a brain, just circuitry responding to data, code, and instructions, far more limited in the range of things it can do than a brain. It's only fast at the particular calculation it's told to do, and that's all. You can make that calculation complex, but it is still limited to that; it cannot apprehend things differently or manipulate concepts, just run the calculation.

My math teacher used to say that in the first computer science lesson: computers are STUPID. She stressed that, and she was right.
mishimaBeef
Profile Blog Joined January 2010
Canada, 2259 Posts
Last Edited: 2018-02-07 20:05:43
February 07 2018 20:00 GMT
#568
Seems they are making progress: https://deepmind.com/blog/impala-scalable-distributed-deeprl-dmlab-30/

In our most recent work, we explore the challenge of training a single agent on many tasks.

Today we are releasing DMLab-30, a set of new tasks that span a large variety of challenges in a visually unified environment with a common action space. Training an agent to perform well on many tasks requires massive throughput and making efficient use of every data point. To this end, we have developed a new, highly scalable agent architecture for distributed training called IMPALA (Importance Weighted Actor-Learner Architectures) that uses a new off-policy correction algorithm called V-trace.

...

Thanks to the optimised model of IMPALA, it can process one-to-two orders of magnitude more experience compared to similar agents, making learning in challenging environments possible. We have compared IMPALA with several popular actor-critic methods and have seen significant speed-ups. Additionally, the throughput using IMPALA scales almost linearly with increasing number of actors and learners which shows that both the distributed agent model and the V-trace algorithm can handle very large scale experiments, even on the order of thousands of machines.

When it was tested on the DMLab-30 levels, IMPALA was 10 times more data efficient and achieved double the final score compared to distributed A3C. Moreover, IMPALA showed positive transfer from training in multi-task settings compared to training in single-task setting.
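For readers curious what the V-trace correction mentioned above actually computes, here is a minimal NumPy sketch of the value targets for a single trajectory. It follows my reading of the IMPALA paper's recursion, with per-step discounts and terminal handling simplified away, so treat the paper as the authoritative reference.

```python
# Hedged sketch of V-trace targets (IMPALA), simplified to a single
# trajectory with a constant discount and no terminal handling.
import numpy as np

def vtrace_targets(rewards, values, bootstrap_value, rhos,
                   gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """rewards, values, rhos: length-T arrays for one trajectory, where
    rhos[t] = pi(a_t|x_t) / mu(a_t|x_t) is the target/behaviour policy ratio.
    bootstrap_value is V(x_T), the value estimate after the last step.
    Returns the length-T array of V-trace targets v_s."""
    T = len(rewards)
    clipped_rhos = np.minimum(rho_bar, rhos)
    clipped_cs = np.minimum(c_bar, rhos)
    values_next = np.append(values[1:], bootstrap_value)
    # Importance-weighted temporal-difference terms.
    deltas = clipped_rhos * (rewards + gamma * values_next - values)

    v_targets = np.zeros(T)
    acc = 0.0
    # Backward recursion: v_s - V(x_s) = delta_s + gamma * c_s * (v_{s+1} - V(x_{s+1}))
    for t in reversed(range(T)):
        acc = deltas[t] + gamma * clipped_cs[t] * acc
        v_targets[t] = values[t] + acc
    return v_targets
```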
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
lestye
Profile Blog Joined August 2010
United States, 4186 Posts
February 09 2018 01:57 GMT
#569
I was thinking about this the other day, and (keep in mind I've read next to nothing about how the underlying AI actually works, I'm just guessing) I think one of the things a perfect AI would do is simulate the income and resources an opponent would ideally have and has already spent.

The AI would then be able to calculate a running total of the resources the player has spent and do a risk assessment based on those contrasting values. For instance, say the player is on two bases and has probably generated 8k minerals and 2k gas (sorry if those numbers are nonsensical, I'm just throwing them out), the AI sees that a drop the player committed earlier cost around 1k minerals and 300 gas, and a scan shows that 6k of those minerals' worth is sitting at the player's natural. It could then use that information to logically conclude where the player is most vulnerable, taking into account how many resources might be left defending the main.

Also, obviously, if it detects that the player has spent even one mineral more than the projected amount, it knows immediately that there's an expansion the AI doesn't know about.
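As a toy illustration of the bookkeeping described above, an agent could maintain an upper bound on the opponent's mined minerals and compare it with the cost of everything it has actually scouted; the mining rate, class and method names below are invented for the example.

```python
# Toy illustration of the idea above; the mining rate, names and the
# "hidden income" inference are invented for the example, not a real model.
APPROX_MINERALS_PER_WORKER_PER_MIN = 55  # rough ballpark figure, assumed

class OpponentEconomyModel:
    def __init__(self):
        self.estimated_mined = 0.0   # upper bound on minerals mined so far
        self.observed_spent = 0.0    # cost of everything actually scouted

    def tick(self, seen_worker_count, seconds_elapsed):
        # Assume every scouted worker mined at the ballpark rate since last tick.
        self.estimated_mined += (seen_worker_count *
                                 APPROX_MINERALS_PER_WORKER_PER_MIN *
                                 seconds_elapsed / 60.0)

    def record_spending(self, mineral_cost):
        # Call whenever a previously unseen enemy unit or building is scouted.
        self.observed_spent += mineral_cost

    def hidden_income_likely(self):
        # Spending more than the visible bases could have produced implies
        # income that has not been scouted, e.g. a hidden expansion.
        return self.observed_spent > self.estimated_mined
```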
"You guys are just edgelords. Embrace your inner weeb desu" -Zergneedsfood