BoxeR: "AlphaGo won't beat humans in StarCraft" - Page 29

Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
December 15 2017 09:58 GMT
#561
I thought this was funny, from the paper:
Convolutional networks for reinforcement learning [..] usually reduce spatial resolution of the input with each layer and ultimately finish with a fully connected layer that discards it completely. This allows for spatial information to be abstracted away before actions are inferred. In StarCraft, though, a major challenge is to infer spatial actions (clicking on the screen and minimap). As these spatial actions act within the same space as inputs, it might be detrimental to discard the spatial structure of the input.


I read somewhere that AlphaZero used the last seven moves as input for its network. This might seem odd, since theoretically in Go you only need to know the board position to come up with a correct move. The reason given was that it serves as an "attention mechanism", i.e. if you know the last couple of moves you get some information about what parts of the board are more significant. This is actually a very human way of approaching the game.

In both these examples researchers basically have to guess what information to feed their pet network for it to be able to grow effectively. Since StarCraft is a game where spatial relationships are important, let's assume the network requires input which does not mask this. It's like nurturing an alien organism you know nothing about.
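A minimal sketch of the idea in that quoted passage, assuming a toy PyTorch setup (the 17 input channels, kernel sizes, and ten-way function head are made up for illustration; this is not DeepMind's actual FullyConv agent): keep spatial resolution through the trunk and emit one logit per screen pixel for the "where to click" head, so the spatial structure of the input is never flattened away.

```python
import torch
import torch.nn as nn

class FullyConvStylePolicy(nn.Module):
    """Toy policy that keeps spatial resolution, so logits for spatial
    actions (where to click) live in the same HxW space as the screen."""

    def __init__(self, in_channels=17, hidden=32, n_functions=10):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1 conv -> one logit per screen pixel (the spatial-action head).
        self.spatial_head = nn.Conv2d(hidden, 1, kernel_size=1)
        # The non-spatial head (which action function to call) still pools.
        self.function_head = nn.Linear(hidden, n_functions)

    def forward(self, screen):                      # screen: [B, C, H, W]
        h = self.trunk(screen)
        spatial_logits = self.spatial_head(h)       # [B, 1, H, W], no flattening
        function_logits = self.function_head(h.mean(dim=(2, 3)))
        return function_logits, spatial_logits

# Usage: feed a 64x64 feature screen and sample a click location.
net = FullyConvStylePolicy()
_, spatial_logits = net(torch.randn(1, 17, 64, 64))
click = torch.multinomial(spatial_logits.flatten(1).softmax(-1), 1).item()
y, x = divmod(click, 64)
print("sampled click:", (x, y))
```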
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
December 15 2017 10:20 GMT
#562
On December 15 2017 17:12 Archiatrus wrote:
Maybe an interesting addition to the "micro tasks are simple for AIs, and it is only the strategy part" discussion: Table 1 of this paper. Of course, the paper is now four months old. But I would have thought that, for example, CollectMineralShards should be easy for the Atari-net.


I think the micro tasks have a vastly reduced action space; you basically only have to attack and move. Particularly for the mini-game where marines face off against roaches or zerglings/banelings, you have so few units that you only need to keep all of them selected (though you need to reselect every so often), and you probably don't need to spread your units. I found it curious that DeepMind's tester could not keep up with the grandmaster on the former, but actually performed better on the latter, whereas the AIs did better on the former but worse on the latter. What is the difference? What were the correct strategies, and why couldn't the AI figure them out?
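How small the required action space is shows up clearly in code. Below is a rough scripted baseline in the style of the pysc2 example agents, assuming the pysc2 scripted-agent API (BaseAgent, FUNCTIONS.select_army, FUNCTIONS.Attack_screen); it is only meant to illustrate that these mini-games need a tiny slice of the full action space, not to match the paper's agents.

```python
import numpy as np
from pysc2.agents import base_agent
from pysc2.lib import actions, features

FUNCTIONS = actions.FUNCTIONS
_ENEMY = features.PlayerRelative.ENEMY

class AttackMoveAgent(base_agent.BaseAgent):
    """Keep the whole army selected and attack-move at the enemy blob."""

    def step(self, obs):
        super().step(obs)
        if FUNCTIONS.Attack_screen.id in obs.observation.available_actions:
            player_relative = np.array(obs.observation.feature_screen.player_relative)
            enemy_ys, enemy_xs = (player_relative == _ENEMY).nonzero()
            if enemy_xs.size == 0:
                return FUNCTIONS.no_op()
            # Attack-move at the mean enemy position (pysc2 points are (x, y)).
            return FUNCTIONS.Attack_screen(
                "now", [int(enemy_xs.mean()), int(enemy_ys.mean())])
        if FUNCTIONS.select_army.id in obs.observation.available_actions:
            return FUNCTIONS.select_army("select")
        return FUNCTIONS.no_op()
```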
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Archiatrus
Profile Joined June 2014
Germany64 Posts
December 15 2017 10:54 GMT
#563
On December 15 2017 19:20 Grumbels wrote:
On December 15 2017 17:12 Archiatrus wrote:
Maybe an interesting addition to the "micro tasks are simple for AIs, and it is only the strategy part" discussion: Table 1 of this paper. Of course, the paper is now four months old. But I would have thought that, for example, CollectMineralShards should be easy for the Atari-net.



I think the micro tasks have a vastly reduced action space; you basically only have to attack and move. Particularly for the mini-game where marines face off against roaches or zerglings/banelings, you have so few units that you only need to keep all of them selected (though you need to reselect every so often), and you probably don't need to spread your units. I found it curious that DeepMind's tester could not keep up with the grandmaster on the former, but actually performed better on the latter, whereas the AIs did better on the former but worse on the latter. What is the difference? What were the correct strategies, and why couldn't the AI figure them out?


Now that you mention it, it is indeed odd. Here are replays of a GM getting 849.7 over 25 games. Maybe the GM in the paper slept through a few instances :D
PlayerofDota
Profile Joined May 2017
29 Posts
December 15 2017 19:38 GMT
#564
It will depend on the APM limitations, if they put any in at all. I feel like there should be an APM limit on the AI, because otherwise it would be unfair to humans; imagine if we had a brain interface and could control units with our thoughts alone.

But we have to think it, visualize it with our eyes, move the mouse, click, press keys, and wait for it all to register and appear on screen. The AI is directly wired into the game and thus has an inherent advantage.

Games like Chess and Go are very linear, and while there might be a certain "intuition" involved, it's not actually that deep. It's like Diablo 3 build combinations: wasn't the number Blizzard gave something like 44 million, or some similarly crazy figure? In reality only about 50 were meaningfully different, while the rest were extremely minor variations of those 50.

So Go does involve some "intuition", but the realistic choices are far fewer than ALL the possible combinations; in practice a given position only has 3-4 reasonable moves.

So mastering a real-time 3D strategy game will require a lot more thinking power. The AI has to constantly scout, make adjustments based on that scouting, and weigh those adjustments against the strategy it is running or has been running as a result of its previous scouting.

Then there are the decisions about when to sacrifice, say, an army or a base in order to win the larger battle, which units to build at what time and where to position them, when to attack, retreat, harass, etc...

And again, I feel that in order for this to be a fair competition and not a mechanical auto-win, the bot will have to have its APM limited to the average of pro players. Otherwise, if it can always pick up a reaver at the last millisecond, perfectly spread marines and medics so that no more than two units are ever hit by a lurker, and dance with dragoons indefinitely, it can never lose.

I feel the onus has to be on its "thinking" power, and on whether it can outsmart and out-strategize humans.
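A cap like that is easy to bolt on in front of whatever issues the actions. A minimal sketch, assuming APM is measured over a sliding 60-second window and that anything over the budget is simply replaced by a no-op (the 350 APM budget is an arbitrary, roughly pro-level number; a real evaluation would also have to decide how to handle burst speed and spam versus effective actions):

```python
from collections import deque
import time

class ApmLimiter:
    """Refuse actions that would push the agent over a fixed APM budget,
    measured over a sliding 60-second window."""

    def __init__(self, max_apm=350, window_seconds=60.0):
        self.max_apm = max_apm
        self.window = window_seconds
        self.timestamps = deque()

    def try_issue(self, now=None):
        now = time.monotonic() if now is None else now
        # Forget actions that have fallen out of the window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_apm:
            return False              # over budget: caller must no-op this frame
        self.timestamps.append(now)
        return True

# Usage: gate every candidate action through the limiter.
limiter = ApmLimiter(max_apm=350)
action = "attack"                     # placeholder for whatever the agent wants to do
if not limiter.try_issue():
    action = "no_op"
```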
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
Last Edited: 2017-12-17 11:25:02
December 17 2017 11:24 GMT
#565
On December 15 2017 07:21 Excludos wrote:
On December 14 2017 23:50 GoloSC2 wrote:
On December 14 2017 07:16 Grumbels wrote:
https://www.cs.mun.ca/~dchurchill/starcraftaicomp/2017/

These were the results for the recent BW AI competition. Note that the AI created by a hobbyist that just rushes you every game won the competition, and four out of the first six winners had really short average game times. Also, afaik, in 10 years of AI research, there hasn't been a single one that could compete on any level with a pro.

In the other article someone said they did not expect an AI to be able to beat a human player in the next five years. So honestly, if you're only casually interested in this I would go to sleep and wake up in two years before checking if there was any significant progress. SC2 is just vastly more complex than Chess or Go, and it's not even clear if a single AI based on a general learning algorithm is capable of learning it to the point of posing any sort of challenge to a pro player.


A few months before AlphaGo was initially revealed, there was an article in which members of the computer-Go community stated they believed a Go program that could beat professional players was at least a decade away. The reasoning sounded a lot like what you are saying: basically, that Go was far more complex than chess, as shown by the fact that the best Go programs at the time were playing at a low intermediate level.

Note that I'm not trying to say you're necessarily wrong, the games are very different; I just want to point out that I've read something similar before, and therefore doubt we can make very reasonable guesses as outsiders not involved in the development.


You're not wrong, people consistently underestimate things like this. Again, have people already forgotten about the OpenAI bot beating top Dota players in 1v1 literally months ago, after only training for two weeks? Yes, you can argue that SC is more complex for sure, but it's not "decades away", or even "several years" away. AI research has absolutely skyrocketed these last few years. We are going to see an AI beat top SC players within 2018. Whether it's months or a year away I don't know, but it's right around the corner for sure.

I checked DeepMind's site, though, and they scarcely mention SC2. For instance, the only recent mention of SC2 on Twitter is a short promotion of Blizzard's AI workshop, where they explain the environment.
twitter

And if you look at the papers presented at their recent conference, most of them certainly have nothing to do with SC2 research, and none of them seem to mention SC2. link

So personally I would not expect any sort of breakthrough in the next year.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
HomoDeus
Profile Joined July 2017
Netherlands12 Posts
December 17 2017 13:54 GMT
#566
It's not a matter of "if" but of "when" an AI will beat a human professional.
ProMeTheus112
Profile Joined December 2009
France2027 Posts
Last Edited: 2017-12-17 14:57:55
December 17 2017 14:19 GMT
#567
They will have a lot of trouble with SC2 unless they can come up with some unbeatable, explosively micro-intensive timing attack. With Dota 1v1 the game is pretty simple, so the AI can lean on frame-level accuracy where it has the biggest advantage, but in SC2 the number of possibilities makes it very hard for an AI to play a good game. It can't calculate all of it, so it has to run with an error margin so large that humans can respond much better, because humans understand the game instead of calculating it. The AI can only calculate; it isn't intelligent at all, it's just a calculator program that can run faster or with more memory. It doesn't understand concepts, it only calculates. You have to implement yourself the concepts you want your CPU to calculate, so if you play around its calculation method, it will be thrown off by your understanding of what it is doing versus what it doesn't know you are doing in that particular game. Even if someone managed to fully map SC2 mathematically, or the AI did so itself with their methods, there would likely be flaws in that map because of the very high complexity compared to Dota 1v1 (we're talking millions of times more complicated), and handling that enormous amount of data during gameplay would probably require hardware that nobody has built yet. Maybe I'm wrong and SC2 can be reduced to some kind of baneling or adept all-in with perfect, unblockable micro, but I don't believe we're going to see an AI consistently beat the best human players in the more complex RTS games for a long time; there is too much show-off talk from owners of AI patents. They still can't make an AI that handles language properly, and that requires, I think, a lot less data than mapping StarCraft...

In short, I think the AIs developed so far may give the appearance of being somewhat evolved, but I haven't seen anything actually impressive beyond being very fast or very accurate. The most impressive things I've seen are those robots that can jump obstacles and stabilize themselves on two or four legs, but they're still so shaky about it, right? Sure, they don't have the hundreds of different muscles that animals do, but animals seem far smarter than robots.

That's because they are. Computers are stupid, completely stupid: only fast and accurate. There is no intelligence there, only calculation; it's not a brain, just circuitry responding to data, code and instructions, and it is very limited in the range of things it can do compared to a brain. It's only fast at the particular calculation it is told to do, that's all. You can make that calculation complex, but it is still limited to it; it cannot apprehend things differently or manipulate concepts, it just runs the stupid calculation.

My math teacher used to say this in the very first computer-science lesson: computers are STUPID. She stressed that, hahaha. She was right.
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
Last Edited: 2018-02-07 20:05:43
February 07 2018 20:00 GMT
#568
Seems they are making progress: https://deepmind.com/blog/impala-scalable-distributed-deeprl-dmlab-30/

In our most recent work, we explore the challenge of training a single agent on many tasks.

Today we are releasing DMLab-30, a set of new tasks that span a large variety of challenges in a visually unified environment with a common action space. Training an agent to perform well on many tasks requires massive throughput and making efficient use of every data point. To this end, we have developed a new, highly scalable agent architecture for distributed training called IMPALA (Importance Weighted Actor-Learner Architecture) that uses a new off-policy correction algorithm called V-trace.

...

Thanks to the optimised model of IMPALA, it can process one-to-two orders of magnitude more experience compared to similar agents, making learning in challenging environments possible. We have compared IMPALA with several popular actor-critic methods and have seen significant speed-ups. Additionally, the throughput using IMPALA scales almost linearly with increasing number of actors and learners which shows that both the distributed agent model and the V-trace algorithm can handle very large scale experiments, even on the order of thousands of machines.

When it was tested on the DMLab-30 levels, IMPALA was 10 times more data efficient and achieved double the final score compared to distributed A3C. Moreover, IMPALA showed positive transfer from training in multi-task settings compared to training in single-task setting.
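For reference, the V-trace correction mentioned above is compact enough to write out. A minimal numpy sketch of the target computation from the IMPALA paper for a single unbatched trajectory (real implementations batch this, work from log-probabilities throughout, and include an extra lambda factor on the c coefficients, which is omitted here):

```python
import numpy as np

def vtrace_targets(rewards, values, bootstrap_value,
                   behaviour_log_probs, target_log_probs,
                   gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """V-trace value targets v_s for one trajectory of length T.

    deltas:  d_t = rho_t * (r_t + gamma * V(x_{t+1}) - V(x_t))
    targets: v_s = V(x_s) + d_s + gamma * c_s * (v_{s+1} - V(x_{s+1}))
    where rho_t and c_t are the clipped importance weights pi/mu.
    """
    rhos = np.exp(target_log_probs - behaviour_log_probs)
    clipped_rhos = np.minimum(rho_bar, rhos)
    clipped_cs = np.minimum(c_bar, rhos)

    next_values = np.append(values[1:], bootstrap_value)
    deltas = clipped_rhos * (rewards + gamma * next_values - values)

    # Backward recursion for v_s - V(x_s).
    acc = 0.0
    vs_minus_v = np.zeros_like(values)
    for t in reversed(range(len(rewards))):
        acc = deltas[t] + gamma * clipped_cs[t] * acc
        vs_minus_v[t] = acc
    return vs_minus_v + values

# Tiny example with made-up numbers.
targets = vtrace_targets(
    rewards=np.ones(4), values=np.zeros(4), bootstrap_value=0.0,
    behaviour_log_probs=np.log(np.full(4, 0.5)),
    target_log_probs=np.log(np.full(4, 0.6)))
print(targets)   # discounted corrections accumulated backwards
```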
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
lestye
Profile Blog Joined August 2010
United States4149 Posts
February 09 2018 01:57 GMT
#569
I was thinking about this the other day (keep in mind I've read next to nothing about how the underlying AI actually works, I'm just guessing): one of the things a perfect AI would do is simulate the income/resources an opponent would ideally have and has already spent.

The AI would then be able to calculate the sum total of the resources a player has spent and do a risk assessment based on the contrast between those values. For instance, say the player is on two bases and has probably generated 8k minerals and 2k gas or something (sorry if those numbers are nonsensical, just throwing them out), the AI sees that a drop the player committed to earlier cost around 1k minerals and 300 gas, and it scans and sees that 6k of those minerals are sitting at the player's natural. It could then use that information to conclude where the player is most vulnerable, taking into account how many resources might be left defending the main.

Also, if it detects that the player has spent even one mineral more than the projected amount, it knows immediately that there is an expansion it doesn't know about.
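That bookkeeping is simple to sketch: project what the opponent's known bases could have mined, count everything actually scouted, and flag when observed spending exceeds the projection. A toy sketch with made-up numbers; the ~55 minerals per worker per minute figure is a rough community estimate, and every other constant here is just an assumption for illustration:

```python
from dataclasses import dataclass, field

MINERALS_PER_WORKER_PER_MIN = 55.0   # rough community figure, an assumption

@dataclass
class OpponentLedger:
    """Track projected mining vs. observed spending for one opponent."""
    projected_minerals: float = 0.0
    observed_spent: float = 0.0
    seen_counts: dict = field(default_factory=dict)

    def tick(self, known_workers: int, minutes_elapsed: float):
        # Income projection from the workers/bases we know about.
        self.projected_minerals += (
            known_workers * MINERALS_PER_WORKER_PER_MIN * minutes_elapsed)

    def observe(self, item: str, mineral_cost: float, count: int):
        # Count each scouted unit/structure only once toward spending.
        new = max(0, count - self.seen_counts.get(item, 0))
        self.seen_counts[item] = max(self.seen_counts.get(item, 0), count)
        self.observed_spent += new * mineral_cost

    def hidden_income_suspected(self, slack: float = 400.0) -> bool:
        # Spending beyond the projection (plus slack for estimation error)
        # implies income we have not scouted, e.g. a hidden expansion.
        return self.observed_spent > self.projected_minerals + slack

# Example: 12 known workers mining for 6 minutes (~3960 projected minerals),
# but we have scouted roughly 4900 minerals' worth of units and buildings.
ledger = OpponentLedger()
ledger.tick(known_workers=12, minutes_elapsed=6.0)
ledger.observe("marine", 50, count=50)            # 2500
ledger.observe("command_center", 400, count=3)    # 1200
ledger.observe("barracks", 150, count=8)          # 1200
print(ledger.hidden_income_suspected())           # True -> look for a hidden base
```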
"You guys are just edgelords. Embrace your inner weeb desu" -Zergneedsfood