BoxeR: "AlphaGo won't beat humans in StarCraft" - Page 29

Grumbels
Joined May 2009
Netherlands, 7031 Posts
December 15 2017 09:58 GMT
#561
I thought this was funny, from the paper:
Convolutional networks for reinforcement learning [..] usually reduce spatial resolution of the input with each layer and ultimately finish with a fully connected layer that discards it completely. This allows for spatial information to be abstracted away before actions are inferred. In StarCraft, though, a major challenge is to infer spatial actions (clicking on the screen and minimap). As these spatial actions act within the same space as inputs, it might be detrimental to discard the spatial structure of the input.


I read somewhere that AlphaZero used the last seven moves as input for its network. This might seem odd, since theoretically in Go you only need to know the board position to come up with a correct move. The reason given was that it serves as an "attention mechanism", i.e. if you know the last couple of moves you get some information about what parts of the board are more significant. This is actually a very human way of approaching the game.
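
A toy illustration of that kind of input encoding (not AlphaZero's actual preprocessing, just the general idea in numpy, with a made-up helper name): stack the last k board states into the channel dimension so the network can see where play has recently been concentrated.

```python
import numpy as np

def stack_history(board_history, k=8, board_shape=(19, 19)):
    """Toy sketch: stack the k most recent board states as input channels.
    If the game is younger than k moves, pad with empty boards."""
    boards = list(board_history[-k:])
    while len(boards) < k:
        boards.insert(0, np.zeros(board_shape))
    return np.stack(boards, axis=0)  # shape: (k, 19, 19)
```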

In both of these examples the researchers basically have to guess what information to feed their pet network for it to learn effectively. Since StarCraft is a game where spatial relationships are important, let's assume the network requires input which does not mask them. It's like nurturing an alien organism you know nothing about.
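
To make the "don't discard spatial structure" point concrete, here is a minimal PyTorch sketch (my own illustration, not the architecture from the paper; channel counts and sizes are made up): the screen features stay at full resolution all the way to a per-pixel logit map, so "where to click" is predicted in the same space as the input instead of being squeezed through a flat vector.

```python
import torch.nn as nn

class SpatialPolicy(nn.Module):
    """Sketch of a policy that keeps spatial resolution for spatial actions."""
    def __init__(self, in_channels=17, screen_size=64, n_actions=10):
        super().__init__()
        self.trunk = nn.Sequential(  # padding keeps H x W intact
            nn.Conv2d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.spatial_head = nn.Conv2d(32, 1, kernel_size=1)  # one logit per pixel: where to click
        self.flat_head = nn.Sequential(                      # which base action to take
            nn.Flatten(),
            nn.Linear(32 * screen_size * screen_size, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, screen):              # screen: (B, in_channels, H, W)
        h = self.trunk(screen)              # (B, 32, H, W) -- resolution preserved
        return self.flat_head(h), self.spatial_head(h)
```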
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Grumbels
Joined May 2009
Netherlands, 7031 Posts
December 15 2017 10:20 GMT
#562
On December 15 2017 17:12 Archiatrus wrote:
Maybe an interesting addition to the "micro tasks are simple for AIs, and it's only the strategy part that's hard" discussion: Table 1 of this paper. Of course the paper is now four months old, but I would have thought that, for example, CollectMineralShards should be easy for the Atari-net.


I think the micro tasks have a vastly reduced action space; you basically only have to attack and move. Particularly for the mini-games where marines face off against roaches or zerglings/banelings, you have so few units that you only need to keep all of them selected (though you need to reselect every so often), and you probably don't need to spread them. I found it curious that DeepMind's tester could not keep up with the grandmaster on the former but actually performed better on the latter, whereas the AIs did better on the former and worse on the latter. What is the difference? What were the correct strategies, and why couldn't the AI figure them out?
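
To make "you basically only have to attack and move" concrete, here is a hedged sketch assuming pysc2 (the Python environment released alongside the paper) is installed; the target coordinate is a placeholder that a real agent would compute from the screen features.

```python
from pysc2.lib import actions

def micro_step(available_actions, target_xy=(32, 32)):
    """Tiny hand-written micro policy: keep the army selected, attack-move.
    `available_actions` is the list of action ids pysc2 exposes each step."""
    if actions.FUNCTIONS.Attack_screen.id in available_actions:
        return actions.FUNCTIONS.Attack_screen("now", target_xy)
    if actions.FUNCTIONS.select_army.id in available_actions:
        return actions.FUNCTIONS.select_army("select")
    return actions.FUNCTIONS.no_op()
```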
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Archiatrus
Joined June 2014
Germany, 64 Posts
December 15 2017 10:54 GMT
#563
On December 15 2017 19:20 Grumbels wrote:
On December 15 2017 17:12 Archiatrus wrote:
Maybe an interesting addition to the "micro tasks are simple for AIs, and it's only the strategy part that's hard" discussion: Table 1 of this paper. Of course the paper is now four months old, but I would have thought that, for example, CollectMineralShards should be easy for the Atari-net.



I think the micro tasks have a vastly reduced action space; you basically only have to attack and move. Particularly for the mini-games where marines face off against roaches or zerglings/banelings, you have so few units that you only need to keep all of them selected (though you need to reselect every so often), and you probably don't need to spread them. I found it curious that DeepMind's tester could not keep up with the grandmaster on the former but actually performed better on the latter, whereas the AIs did better on the former and worse on the latter. What is the difference? What were the correct strategies, and why couldn't the AI figure them out?


Now that you mention it, it is indeed odd. Here are replays of a GM getting 849.7 over 25 games. Maybe the GM in the paper slept through a few instances :D
PlayerofDota
Joined May 2017
29 Posts
December 15 2017 19:38 GMT
#564
It will depend on the APM limitations, if they put any on at all. I feel there should be APM limits on the AI, because otherwise it would be unfair to humans; imagine if we had a brain interface and could control units with our brain power alone.

But we have to think the action in our brain, visualize it with our eyes, move the mouse, click, and use the keyboard, and then have it register and appear on screen. The AI is directly wired in and thus has an inherent advantage.

Games like chess and Go are very linear, and while there might be a certain 'intuition' involved, it's not actually that deep. It's like Diablo 3 build combinations: wasn't the number Blizzard gave something like 44 million or some other crazy figure, when in reality only about 50 were meaningfully different, with the rest being extremely minor modifications of those 50?

So Go does involve some "intuition", but the realistic choices are far fewer than ALL possible combinations; in practice a given position only has 3-4 reasonable moves.

So mastering a real-time 3D strategy game will require a lot more thinking power. The AI has to scout consistently, make adjustments based on that scouting, and weigh those adjustments against the strategy it is running or has been running as a result of previous scouting.

Then there are the decisions of when to sacrifice, say, an army or a base in order to win the larger battle, what units to build at what time and where to position them, when to attack, retreat, harass, etc.

And again, I feel that in order for this to be a fair competition and not a mechanical auto-win, the bot will have to have its APM limited to the average of pro players. Otherwise, if it can always pick up a reaver at the last millisecond, perfectly spread marines and medics so that no more than two units are ever hit by lurkers, and dance with dragoons indefinitely, it can never lose.

I feel the onus has to be on its "thinking" power and on whether it can outsmart and out-strategize humans.
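
An APM cap like that is straightforward to express in code. A minimal sketch, assuming SC2's "faster" speed of roughly 22.4 game loops per real-time second; the class and numbers are illustrative, not taken from any actual bot. The bot would call try_act() before issuing each command and simply idle for that step when it returns False.

```python
from collections import deque

class ApmLimiter:
    """Cap a bot's actions per minute with a sliding window over game time."""
    GAME_LOOPS_PER_MINUTE = int(22.4 * 60)   # assumes 'faster' game speed

    def __init__(self, max_apm=300):
        self.max_apm = max_apm
        self.recent = deque()                # game loops at which actions were issued

    def try_act(self, game_loop):
        # Forget actions that have fallen out of the one-minute window.
        while self.recent and game_loop - self.recent[0] >= self.GAME_LOOPS_PER_MINUTE:
            self.recent.popleft()
        if len(self.recent) >= self.max_apm:
            return False                     # over budget: skip this step
        self.recent.append(game_loop)
        return True
```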
Grumbels
Joined May 2009
Netherlands, 7031 Posts
Last Edited: 2017-12-17 11:25:02
December 17 2017 11:24 GMT
#565
On December 15 2017 07:21 Excludos wrote:
On December 14 2017 23:50 GoloSC2 wrote:
On December 14 2017 07:16 Grumbels wrote:
https://www.cs.mun.ca/~dchurchill/starcraftaicomp/2017/

These were the results of the recent BW AI competition. Note that an AI created by a hobbyist that just rushes you every game won the competition, and four of the top six finishers had really short average game times. Also, afaik, in 10 years of AI research there hasn't been a single bot that could compete on any level with a pro.

In the other article someone said they did not expect an AI to be able to beat a human player in the next five years. So honestly, if you're only casually interested in this I would go to sleep and wake up in two years before checking if there was any significant progress. SC2 is just vastly more complex than Chess or Go, and it's not even clear if a single AI based on a general learning algorithm is capable of learning it to the point of posing any sort of challenge to a pro player.


A few months before AlphaGo was initially announced, there was an article in which members of the computer-Go community stated they believed a Go program that could beat professional players was at least a decade away. The reasoning sounded a lot like what you are saying: basically that Go is far more complex than chess, as supposedly shown by the fact that the best Go programs at the time were playing at a low intermediate level.

Note that I'm not trying to say you're necessarily wrong; the games are very different. I just want to point out that I've read something similar before, and therefore I doubt we can make very reasonable guesses as outsiders not involved in the development.


You're not wrong; people consistently underestimate things like this. Have people already forgotten about the OpenAI bot beating top Dota players in 1v1 literally months ago, after only training for two weeks? Yes, you can argue that SC is more complex for sure, but it's not "decades away", or even "several years" away. AI research has absolutely skyrocketed these last few years. We are going to see an AI beat top SC players within 2018. Whether it's months or a year away I don't know, but it's right around the corner for sure.

I checked DeepMind's site, though, and they scarcely mention SC2. For instance, the only recent mention of SC2 on Twitter is a short promotion of Blizzard's AI workshop, where they explain the environment.
twitter

And if you look at the papers presented at their recent conference, most of them certainly have nothing to do with SC2 research, and none of them seem to mention SC2. link

So personally I would not expect any sort of breakthrough in the next year.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
HomoDeus
Joined July 2017
Netherlands, 12 Posts
December 17 2017 13:54 GMT
#566
It's not a matter of "if" but a matter of "when" an AI will beat a human professional.
ProMeTheus112
Joined December 2009
France, 2027 Posts
Last Edited: 2017-12-17 14:57:55
December 17 2017 14:19 GMT
#567
They will have a lot of trouble with SC2 unless they can make some unbeatable, explosive micro timing attack. With Dota 1v1 the game is pretty simple, so the AI can play at the frame-level accuracy where it has the biggest advantage. In SC2, though, the number of possibilities makes it very hard for an AI to play a good game: it can't calculate all of it, so it has to run with a margin of error that humans can exploit much better, since humans understand the game rather than calculate it. The AI can only calculate; it isn't intelligent at all, it's just a calculator program that can run faster or with more memory. It doesn't understand concepts, it only calculates. You have to implement yourself the concepts you want your CPU to calculate, so if you play around the CPU's calculation method, it gets thrown off by your understanding of what it is doing versus what it doesn't know you are doing in this particular game. Say someone managed to map SC2 fully mathematically, or the AI did it itself with their methods: there would likely be flaws in that mathematical map because of the very high complexity compared to Dota 1v1 (we're talking something like millions of times more complicated), and handling that enormous amount of data during gameplay would probably require hardware nobody has built yet. Maybe I'm wrong and SC2 can be reduced to some kind of baneling or adept all-in with perfect micro that is unblockable, but I don't believe we're going to see an AI consistently beat the best human players in the more complex RTS games for a long time; there is too much show-off talk from owners of AI patents. They still can't make an AI that handles language properly, and that requires, I think, a lot less data than mapping StarCraft.

In short, I think the AIs developed so far may give the appearance of being somewhat evolved, but I haven't seen anything genuinely impressive beyond being very fast or very accurate. The most impressive things I've seen are those robots that can jump obstacles and stabilize themselves on two or four legs, and even they are still shaky about it, right? Sure, they don't have the hundreds of different muscles that animals have, but animals seem far smarter than robots.

That's because they are. Computers are stupid, completely stupid: only fast and accurate. There is no intelligence there, only calculation. It's not a brain, just circuitry responding to data, code, and instructions, and it's very limited in the range of things it can do compared to a brain; it's only fast at the particular calculation it's told to do, that's all. You can make that calculation complex, but it is still limited to that: it cannot apprehend things differently or manipulate concepts, it just runs the stupid calculation.

My math teacher used to say this in her first computer-science lesson: computers are STUPID. She stressed that, and she was right.
mishimaBeef
Joined January 2010
Canada, 2259 Posts
Last Edited: 2018-02-07 20:05:43
February 07 2018 20:00 GMT
#568
Seems they are making progress: https://deepmind.com/blog/impala-scalable-distributed-deeprl-dmlab-30/

In our most recent work, we explore the challenge of training a single agent on many tasks.

Today we are releasing DMLab-30, a set of new tasks that span a large variety of challenges in a visually unified environment with a common action space. Training an agent to perform well on many tasks requires massive throughput and making efficient use of every data point. To this end, we have developed a new, highly scalable agent architecture for distributed training called IMPALA (Importance Weighted Actor-Learner Architecture) that uses a new off-policy correction algorithm called V-trace.

...

Thanks to the optimised model of IMPALA, it can process one to two orders of magnitude more experience than similar agents, making learning in challenging environments possible. We have compared IMPALA with several popular actor-critic methods and have seen significant speed-ups. Additionally, the throughput of IMPALA scales almost linearly with an increasing number of actors and learners, which shows that both the distributed agent model and the V-trace algorithm can handle very large-scale experiments, even on the order of thousands of machines.

When it was tested on the DMLab-30 levels, IMPALA was 10 times more data efficient and achieved double the final score compared to distributed A3C. Moreover, IMPALA showed positive transfer from training in a multi-task setting compared to training in single-task settings.
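
For reference, the V-trace targets mentioned there amount to only a few lines. Below is a hedged numpy transcription of the recursion published in the IMPALA paper; the function and argument names are mine, and episode boundaries and per-step discounts are ignored for brevity.

```python
import numpy as np

def vtrace_targets(rewards, values, bootstrap_value, rhos,
                   gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """Compute V-trace targets v_s backwards using
    v_s - V(x_s) = delta_s + gamma * c_s * (v_{s+1} - V(x_{s+1})),
    where delta_s = rho_s * (r_s + gamma * V(x_{s+1}) - V(x_s)) and
    rho_s, c_s are the clipped importance ratios pi/mu."""
    rewards = np.asarray(rewards, dtype=float)
    values = np.asarray(values, dtype=float)
    rhos = np.asarray(rhos, dtype=float)

    clipped_rhos = np.minimum(rho_bar, rhos)
    clipped_cs = np.minimum(c_bar, rhos)
    values_next = np.append(values[1:], bootstrap_value)
    deltas = clipped_rhos * (rewards + gamma * values_next - values)

    vs = np.zeros_like(values)
    acc = 0.0
    for t in reversed(range(len(rewards))):
        acc = deltas[t] + gamma * clipped_cs[t] * acc
        vs[t] = values[t] + acc
    return vs
```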
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
lestye
Joined August 2010
United States, 4163 Posts
February 09 2018 01:57 GMT
#569
I was thinking about this the other day (keep in mind I've read next to nothing about how the underlying AI actually works, I'm just guessing), and I think one of the things a perfect AI would do is simulate the income/resources an opponent would ideally have and what they have already spent.

The AI would then be able to calculate the sum total of the resources a player has spent and do a risk assessment by contrasting those values. For instance, say the player is on two bases and has probably generated around 8k minerals and 2k gas (sorry if those numbers are nonsensical, I'm just throwing them out), the AI sees that a drop the player committed to earlier cost around 1k minerals and 300 gas, and it scans and sees that 6k of those minerals are sitting at the player's natural. It could then use that information to logically conclude where the player is most vulnerable, taking into account how many resources might be defending the main.

Also, if it detects that the player has spent even one mineral more than the projected amount, it knows immediately that there's an expansion it doesn't know about.
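
That kind of bookkeeping is easy to sketch. A toy version, with made-up helper names and an illustrative mining rate (roughly 55 minerals per worker-minute is a commonly quoted ballpark, not an exact figure):

```python
MINERALS_PER_WORKER_SECOND = 55 / 60.0   # illustrative ballpark, not an exact game value

def projected_minerals(worker_seconds_on_minerals):
    """Rough projection of minerals the opponent could have mined from known bases."""
    return worker_seconds_on_minerals * MINERALS_PER_WORKER_SECOND

def audit_opponent(observed_mineral_spending, worker_seconds_on_minerals):
    """If visible spending exceeds what the known bases could have produced,
    something unseen (e.g. a hidden expansion) must be feeding the opponent."""
    surplus = observed_mineral_spending - projected_minerals(worker_seconds_on_minerals)
    return surplus > 0, surplus
```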
"You guys are just edgelords. Embrace your inner weeb desu" -Zergneedsfood