AlphaStar released: Deepmind Research on Ladder

214 Comments
Musicus
Profile Joined August 2011
Germany23576 Posts
Last Edited: 2019-07-10 18:20:33
July 10 2019 17:47 GMT
#1
[Image taken from Starcraft2.com]


AlphaStar is ready to steal your points on the EU server!

Starting today, players will be able to opt in under the versus tab and battle the AI on the 1v1 ladder (if you are lucky). Multiple agents will be playing anonymously to test their skills against the human race.

The AI is now able to play all matchups and can play the current patch. Matches against AlphaStar will affect your MMR. You can alter your opt-in selection at any point!

Get the full info and read the FAQ here.
Maru and Serral are probably top 5.
MockHamill
Profile Joined March 2010
Sweden1798 Posts
Last Edited: 2019-07-10 17:57:28
July 10 2019 17:56 GMT
#2
This is cool beyond belief. I wonder whether it will only learn the gameplay part or pick up the ladder BM part as well.

Imagine an AI that not only will manner mule you but tells you to go fuck yourself as well.
Rantech
Profile Joined April 2010
Chile527 Posts
July 10 2019 17:59 GMT
#3
Great news. I hope it's not the older version that beat MaNa, which could see every spot on the map where it had vision at all times, and could even spot the blur of invisible units the same way.
Musicus
Profile Joined August 2011
Germany23576 Posts
July 10 2019 18:02 GMT
#4
On July 11 2019 02:59 Rantech wrote:
Great news. I hope it's not the older version that beat MaNa, which could see every spot on the map where it had vision at all times, and could even spot the blur of invisible units the same way.


Nope, they adjusted it and it seems more fair now.

Q. How does AlphaStar perceive the game?

A. Like human players, AlphaStar perceives the game using a camera-like view. This means that AlphaStar doesn’t receive information about its opponent unless it is within the camera’s field of view, and it can only move units to locations within its view. All limits on AlphaStar’s performance were designed in consultation with pro players.
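The camera restriction in the quoted FAQ can be sketched roughly as an observation mask — a toy illustration with invented field names and an arbitrary camera size, not DeepMind's actual interface:

```python
from dataclasses import dataclass

@dataclass
class Unit:
    x: float
    y: float
    owner: int  # 1 = agent, 2 = opponent

def visible_units(units, cam_x, cam_y, cam_w=24.0, cam_h=14.0):
    """Return only the units inside the camera rectangle; everything
    outside the agent's current view is simply not observable."""
    return [u for u in units
            if abs(u.x - cam_x) <= cam_w / 2 and abs(u.y - cam_y) <= cam_h / 2]

units = [Unit(10, 10, 2), Unit(100, 100, 2)]
print(len(visible_units(units, cam_x=12, cam_y=12)))  # → 1, the distant unit is hidden
```

Under a restriction like this, the agent also has to spend actions moving the camera before it can act elsewhere, which is part of what makes the setup closer to human play.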
Maru and Serral are probably top 5.
ImmortalGhost
Profile Joined April 2019
United States57 Posts
July 10 2019 18:03 GMT
#5
In what MMR range can the anonymous AlphaStar players be found? I doubt the researchers care about wins against beginner players.
Musicus
Profile Joined August 2011
Germany23576 Posts
Last Edited: 2019-07-10 18:07:17
July 10 2019 18:06 GMT
#6
On July 11 2019 03:03 ImmortalGhost wrote:
In what MMR range can the anonymous AlphaStar players be found? I doubt the researchers care about wins against beginner players.

I guess they will rank up naturally, starting from placement matches.

But officially there is no information about this afaik.
Maru and Serral are probably top 5.
DrunkenSCV
Profile Joined November 2016
76 Posts
July 10 2019 18:13 GMT
#7
So the war against the machines has begun...
pvsnp
Profile Joined January 2017
7676 Posts
Last Edited: 2019-07-10 18:18:16
July 10 2019 18:15 GMT
#8
Great to hear that AlphaStar can play all 9 matchups now, it sounds like the team is making some nice progress. Hopefully there will be a big showcase or showmatch at Blizzcon.
Denominator of the Universe
TL+ Member
deacon.frost
Profile Joined February 2013
Czech Republic12129 Posts
July 10 2019 18:21 GMT
#9
Sarah Connor needs to tweet this

SkyNet is coming!
I imagine France should be able to take this unless Lilbow is busy practicing for Starcraft III. | KadaverBB is my fairy ban mother.
RandomPlayer
Profile Joined April 2012
Russian Federation390 Posts
July 10 2019 18:27 GMT
#10
wow this is awesome!! I'd love to see some of the games
fLyiNgDroNe
Profile Joined September 2005
Belgium4001 Posts
July 10 2019 18:37 GMT
#11
I wonder how they adjust AlphaStar's performance when matching it vs a Bronze and vs a GM
Drone is a way of living
slimbo1
Profile Joined May 2011
Germany228 Posts
July 10 2019 18:49 GMT
#12
I don't like this and chose the opt-out option. I like playing against other humans.
sugarmuffinpuff
Profile Joined October 2014
Canada38 Posts
July 10 2019 18:52 GMT
#13
I only play opponents whose mothers I can insult when I lose.
WombaT
Profile Blog Joined May 2010
Northern Ireland25130 Posts
July 10 2019 19:04 GMT
#14
Brb changing my ID to AlphaStar for intimidation factor
'You'll always be the cuddly marsupial of my heart, despite the inherent flaws of your ancestry' - Squat
Quasarrion
Profile Joined July 2018
60 Posts
July 10 2019 19:06 GMT
#15
Imagine if they made a tournament, AlphaStar vs the World, with a format similar to GSL vs the World. Pick the best 8 players to fight for the human race; if they win, they get $50k.
ZigguratOfUr
Profile Blog Joined April 2012
Iraq16955 Posts
July 10 2019 19:06 GMT
#16
Very interesting. And they've adjusted the APM cap, so we hopefully shouldn't see AlphaStar spike to 1000 APM and simultaneously micro stalkers all over the map during big fights anymore.
egrimm
Profile Joined September 2011
Poland1199 Posts
July 10 2019 19:13 GMT
#17
Bot/hacking report threads will be swarmed with new posts
sOs TY PartinG
argonautdice
Profile Joined January 2013
Canada2718 Posts
July 10 2019 19:47 GMT
#18
I thought MaxPax has been terrorizing the ladder for a while now
very illegal and very uncool
ClaudeSc2
Profile Joined May 2014
United States73 Posts
July 10 2019 20:52 GMT
#19
I would be all about this, but against MaNa it looked like a primitive micro bot - controlling 50 stalkers across 4 screens at once - and it ended up losing a game to him walking its army in and out of its base over and over again because it didn't have vision of his camping warp prism.

I also don't play on EU...so I guess I don't even have the option.
alexanderzero
Profile Joined June 2008
United States659 Posts
July 10 2019 21:22 GMT
#20
I would be all about this, but against MaNa it looked like a primitive micro bot - controlling 50 stalkers across 4 screens at once - and it ended up losing a game to him walking its army in and out of its base over and over again because it didn't have vision of his camping warp prism.


There was definitely something wrong with the AI in that last match, but overall the description of AlphaStar as just a primitive micro bot is very unfair. I think the matches against TLO prove handily that AlphaStar has high-level reasoning and decision-making ability, and understands how to play StarCraft 2 in a comprehensive manner.

It will be interesting to see how the new version does on the ladder. It apparently has more mechanical limitations than the version that went 10-0 against TLO and Mana in the non-public matches.
I am a tournament organizazer.
Riner1212
Profile Joined November 2012
United States337 Posts
July 10 2019 21:48 GMT
#21
would be nice seeing marine splits vs lings and banes.
Sjow "pretty ez life as protoss"
imJealous
Profile Joined July 2010
United States1382 Posts
July 10 2019 22:25 GMT
#22
And so the end of the world begins!
... In life very little goes right. "Right" meaning the way one expected and the way one wanted it. One has no right to want or expect anything.
ShambhalaWar
Profile Joined August 2013
United States930 Posts
July 10 2019 22:34 GMT
#23
Cool option... but get ready to contribute to the fall of mankind.

Is there a version of the ladder in which we can help it design effective weapons to use against us?
Xamo
Profile Joined April 2012
Spain877 Posts
Last Edited: 2019-07-10 22:45:43
July 10 2019 22:45 GMT
#24
Clearly all AlphaStar bots will reach GM easily. They are going to play against most of our known streamers. Let's try to identify them!
This could be easy, if 20 AlphaStar bots take the 20 top positions in GM...
My life for Aiur. You got a piece of me, baby. IIIIIIiiiiiii.
Kalera
Profile Joined January 2018
United States338 Posts
Last Edited: 2019-07-10 23:03:21
July 10 2019 23:03 GMT
#25
On July 11 2019 07:45 Xamo wrote:
Clearly all AlphaStar bots will reach GM easily. They are going to play against most of our known streamers. Let's try to identify them!
This could be easy, if 20 AlphaStar bots take the 20 top positions in GM...


It's quite possible that AlphaStar agents will be excluded from appearing on ladder rankings entirely.
jy_9876543210
Profile Joined March 2016
265 Posts
July 10 2019 23:32 GMT
#26
Hello just to leave my name in history here....
The skynet has started - Resistance is futile!
Phase 1: F2    Phase 2: A   Phase 3: Profit!
pzlama333
Profile Joined April 2013
United States277 Posts
July 10 2019 23:48 GMT
#27
I would like to know which race has the higher win rate when AlphaStar agents play different races against each other; it might help the balance team.
jy_9876543210
Profile Joined March 2016
265 Posts
July 10 2019 23:51 GMT
#28
On July 11 2019 08:48 pzlama333 wrote:
I would like to know which race may have higher win rate if AlphaStar fight themselves using different race against each others, so it may help the balance team.

The problem remains: the balance at non-human AI APM is totally different from that at human level. With that 2000 peak AI APM, blink stalkers are way too OP for sure.
Phase 1: F2    Phase 2: A   Phase 3: Profit!
Chronopolis
Profile Joined April 2009
Canada1484 Posts
July 11 2019 00:37 GMT
#29
On July 11 2019 08:51 jy_9876543210 wrote:
On July 11 2019 08:48 pzlama333 wrote:
I would like to know which race may have higher win rate if AlphaStar fight themselves using different race against each others, so it may help the balance team.

The problem remains: the balance at the non-human AI APM is totally different from that at human level. With that 2000 peak AI apm, blink stalker are way too OP for sure.


The AI's APM was limited to like 250 or so. Nonetheless, the efficiency with which they can control units on multiple screens gives them a huge advantage with mass blink stalkers. BUT that was the non-camera-restricted version. Not sure how much of an effect the camera restriction has.
lisuiasdf
Profile Joined August 2014
China3 Posts
July 11 2019 01:09 GMT
#30
I guess AlphaStars won't be hard to identify. If they created a new account for each AlphaStar, some players would recognize them, maybe from their playstyle or from them not responding to chat, and post about them in forums so everyone would know. If they are not real accounts, you won't be able to see their game history or other account information, which would be an immediate giveaway.

Maybe one day, if AlphaStar gets better and SC2 dies, Blizzard will fill the ladder with AlphaStars of skill levels ranging from Bronze to GM. They would be able to inflate the active user count and pretend the game is still alive. lol
Loccstana
Profile Blog Joined November 2012
United States833 Posts
July 11 2019 01:34 GMT
#31
Cool, I hope Deepmind will release some vods where AlphaStar defeats Serral.
[url]http://i.imgur.com/lw2yN.jpg[/url]
jy_9876543210
Profile Joined March 2016
265 Posts
Last Edited: 2019-07-11 01:39:15
July 11 2019 01:36 GMT
#32
On July 11 2019 09:37 Chronopolis wrote:
On July 11 2019 08:51 jy_9876543210 wrote:
On July 11 2019 08:48 pzlama333 wrote:
I would like to know which race may have higher win rate if AlphaStar fight themselves using different race against each others, so it may help the balance team.

The problem remains: the balance at the non-human AI APM is totally different from that at human level. With that 2000 peak AI apm, blink stalker are way too OP for sure.


The AI's APM was limited to like 250 or so. Nonetheless, the efficiency at which they can control units on multiple screens gives them a huge advantage with using mass blink stalker. BUT that was with the non-camera restricted version. Not sure how much of an effect the camera restriction has.

No, that's the average APM over the whole match, I suppose? But what matters is the peak APM. You can keep your APM under 100 for the entire match, then in the last minute raise it to 2000 and kill your opponent's entire army with blink stalkers from 3 directions. That's basically what happened in one of the AlphaStar vs MaNa matches.
The DeepMind team argued that humans can also hit such high peak APM, as TLO's APM chart showed. But that peak happens because when he needs to produce, say, a bunch of zerglings, he can just hold down the hotkey and make 50 zerglings within half a second. Most keyboards don't allow this: when you type "abc" there is an input delay, otherwise you would most likely type "aaabbcc" because the polling rate is too high for human reactions. So if you hold down "a", the first "a" appears immediately, and only after a small delay do the rest of the "a"s start to show up. That's bad for SC2 gamers, especially Zerg players, because you often need to make a lot of units ASAP. So there are "accelerated" keyboards that remove the delay, which is perfect for mass-producing units. A lot of Zerg players use them, and that APM peak most likely comes from producing a batch of units rather than from a battle, I believe.
So basically, that AI had inhuman APM.
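The average-vs-peak distinction in the post above can be illustrated with a small sliding-window action counter — a toy sketch where the 5-second window and the action timings are arbitrary assumptions:

```python
from collections import deque

class ApmMeter:
    """Track actions over a sliding window, to show how a modest match-average
    APM can hide a short burst far above it (the 'peak APM' concern)."""
    def __init__(self, window_s=5.0):
        self.window_s = window_s
        self.times = deque()
        self.total = 0
        self.last_t = 0.0

    def record(self, t):
        self.total += 1
        self.last_t = t
        self.times.append(t)
        while self.times[0] < t - self.window_s:  # drop actions outside the window
            self.times.popleft()

    def windowed_apm(self):
        return len(self.times) * 60.0 / self.window_s

    def average_apm(self):
        return self.total * 60.0 / self.last_t if self.last_t else 0.0

m = ApmMeter()
for i in range(560):              # roughly 1 action per second for most of a 10-minute game
    m.record(float(i))
for i in range(100):              # then a 2-second, 100-action burst at the end
    m.record(598.0 + i * 0.02)
print(round(m.average_apm()))     # → 66: the match average looks human
print(round(m.windowed_apm()))    # → 1200: the windowed peak is wildly superhuman
```

This is why a cap on match-average APM alone doesn't address the complaint — the burst during a single engagement is what decides fights.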
Phase 1: F2    Phase 2: A   Phase 3: Profit!
yubo56
Profile Joined May 2014
687 Posts
July 11 2019 02:13 GMT
#33
On July 11 2019 10:36 jy_9876543210 wrote:
On July 11 2019 09:37 Chronopolis wrote:
On July 11 2019 08:51 jy_9876543210 wrote:
On July 11 2019 08:48 pzlama333 wrote:
I would like to know which race may have higher win rate if AlphaStar fight themselves using different race against each others, so it may help the balance team.

The problem remains: the balance at the non-human AI APM is totally different from that at human level. With that 2000 peak AI apm, blink stalker are way too OP for sure.


The AI's APM was limited to like 250 or so. Nonetheless, the efficiency at which they can control units on multiple screens gives them a huge advantage with using mass blink stalker. BUT that was with the non-camera restricted version. Not sure how much of an effect the camera restriction has.

No, that's the average APM of the whole match, I suppose? But what matters is the peak APM. You can keep your APM < 100 for the entire match, and at the last minute you raise your APM to 2000 and kill the entire army of your opponent with blink stalkers from 3 directions. Basically that's what happened in a match of AlphaStar vs Mana.
The DeepMind team argued that human may also have such high peak APM - as TLO's APM chart showed. But the problem is, that APM peak happens because when he need to produce, say, a bunch of zerglings, he can just hold down the hotkey, and he can make 50 zerglings within half a second. Most of keyboards doesn't allow this, because when you type "abc", there is an input delay, otherwise you'll most likely type "aaabbcc" because the pulling rate is too high for human reaction. So if you hold down "a", you'll see that the first "a" appear immediately and after a small delay the rest of "a"s start to show up. But it's bad for SC2 gamers, especially zerg players because you often need to make a lot of units ASAP. So there exists such "accelerated" keyboards that can remove the delay, which is perfect for mass producing units. A lot of zerg players use it, and that APM most likely comes from producing a bunch of units instead of from a battle, I believe.
So basically, that AI had inhuman APM.

They said they changed the restrictions since that match after consulting with progamers; I'm certain this was raised. I wonder how it was addressed.

Also, I'd guess they release not just their top boys but bots distributed across their training ladder. That would benchmark the strength of their ladder and thus their training efficiency, e.g. "1 million games played produces bots at 6k MMR and above 50% of the time".
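Benchmarking bots by ladder results like that would rest on an expected-score model. A minimal Elo-style sketch — MMR is not exactly Elo, so the 400-point scale here is just an assumption:

```python
def expected_win_rate(mmr_a, mmr_b, scale=400.0):
    """Elo-style expected score for player A against player B:
    1 / (1 + 10^((R_b - R_a) / scale))."""
    return 1.0 / (1.0 + 10 ** ((mmr_b - mmr_a) / scale))

# On this model, a bot winning >50% against 6000 MMR opponents is itself
# at 6000 MMR or above.
print(expected_win_rate(6000, 6000))            # → 0.5 at equal rating
print(round(expected_win_rate(6000, 5600), 3))  # → 0.909 with a 400-point edge
```

With enough ladder games per agent, the observed win rates against known-MMR opponents pin down each agent's rating, which is presumably how the leagues would be benchmarked.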
Jung Yoon Jong fighting, even after retirement! Feel better soon.
KingofdaHipHop
Profile Blog Joined October 2013
United States25602 Posts
July 11 2019 02:33 GMT
#34
Something something skynet. Glad that the AI takeover of the world will all have started with them blinking in our mains and stealing our ladder points. Gotta start somewhere
Rain | herO | sOs | Dear | Neeb | ByuN | INnoVation | Dream | ForGG | Maru | ByuL | Golden | Solar | Soulkey | Scarlett!!!
gpanda
Profile Joined December 2017
36 Posts
July 11 2019 03:03 GMT
#35
SC2 requires both micro skill and dynamic macro planning; a game's result is determined by both. IMHO, the AlphaStar research project has always been focused on the macro part. If a game is won by AlphaStar *mostly* because of its micro, that win is meaningless to the goals of the project. However, I think it is difficult to separate and quantify the two winning factors (micro and macro), and to say how much counts as "mostly". So defining fairness and constraining the micro factor is the most important and meaningful work the AlphaStar team should be doing now. Without a clear and convincing definition, it is hard to say what we can expect from this research project.
Maybe, in the end, SC2 will turn out not to be an appropriate game for this AI project.
NinjaNight
Profile Joined January 2018
428 Posts
July 11 2019 03:09 GMT
#36
Sounds like it's very rare to get a game against AlphaStar? You can't just directly challenge it? How do you even know for sure if it's AlphaStar?
jy_9876543210
Profile Joined March 2016
265 Posts
July 11 2019 03:14 GMT
#37
On July 11 2019 11:13 yubo56 wrote:
On July 11 2019 10:36 jy_9876543210 wrote:
On July 11 2019 09:37 Chronopolis wrote:
On July 11 2019 08:51 jy_9876543210 wrote:
On July 11 2019 08:48 pzlama333 wrote:
I would like to know which race may have higher win rate if AlphaStar fight themselves using different race against each others, so it may help the balance team.

The problem remains: the balance at the non-human AI APM is totally different from that at human level. With that 2000 peak AI apm, blink stalker are way too OP for sure.


The AI's APM was limited to like 250 or so. Nonetheless, the efficiency at which they can control units on multiple screens gives them a huge advantage with using mass blink stalker. BUT that was with the non-camera restricted version. Not sure how much of an effect the camera restriction has.

No, that's the average APM of the whole match, I suppose? But what matters is the peak APM. You can keep your APM < 100 for the entire match, and at the last minute you raise your APM to 2000 and kill the entire army of your opponent with blink stalkers from 3 directions. Basically that's what happened in a match of AlphaStar vs Mana.
The DeepMind team argued that human may also have such high peak APM - as TLO's APM chart showed. But the problem is, that APM peak happens because when he need to produce, say, a bunch of zerglings, he can just hold down the hotkey, and he can make 50 zerglings within half a second. Most of keyboards doesn't allow this, because when you type "abc", there is an input delay, otherwise you'll most likely type "aaabbcc" because the pulling rate is too high for human reaction. So if you hold down "a", you'll see that the first "a" appear immediately and after a small delay the rest of "a"s start to show up. But it's bad for SC2 gamers, especially zerg players because you often need to make a lot of units ASAP. So there exists such "accelerated" keyboards that can remove the delay, which is perfect for mass producing units. A lot of zerg players use it, and that APM most likely comes from producing a bunch of units instead of from a battle, I believe.
So basically, that AI had inhuman APM.

They said they changed restrictions since that match after chatting with progamers, certain this was raised. Wonder how it was addressed

Also, I'd guess they release not just their top boys but bots distributed across their training ladder. Would benchmark the strength of their ladder and thus training efficiency, e.g. "1 million games played produces bots 6k MMR and above 50% of the time"

Oh, really? That would make more sense. I wonder what are the new restrictions too.
Phase 1: F2    Phase 2: A   Phase 3: Profit!
AttackZerg
Profile Blog Joined January 2003
United States7454 Posts
July 11 2019 03:27 GMT
#38
On July 11 2019 12:09 NinjaNight wrote:
Sounds like it's very rare to get a game against AlphaStar? You can't just directly challenge it? How do you even know for sure if it's AlphaStar?

They are purposely trying to keep it secret.

When you hit ranked on EU there is a chance you get AlphaStar.

(If you opt in to play it)
Ketroc
Profile Joined May 2010
Canada74 Posts
July 11 2019 04:03 GMT
#39
Do we know any of its ladder account(s) yet?
SC2 Videos: www.youtube.com/ketroc SC2 Stream: www.twitch.tv/ketroc
fastr
Profile Joined February 2011
France901 Posts
Last Edited: 2019-07-11 04:09:32
July 11 2019 04:06 GMT
#40
Assuming the agents (Smiths) quickly climb to the top of GM, I give it a week before we identify most of them. I understand the necessity of anonymity in order to get a controlled test, I'm just not sure it's possible.

When top-ranked players stream and play against barcodes, it's already a common topic in Twitch chat to try and guess who they are playing against. Given AlphaStar's peculiar style of play and lack of in-game chat, it shouldn't be hard to spot them. Hell, just ask your opponent if he's AlphaStar at the beginning of the game and you'll most likely get an answer. Looking at replays and analyzing play style and hotkey usage should remove any remaining doubts.

If I were on the DeepMind team I would name some of the agents Serral, Maru, and Stats for the lulz and ask them not to spill the beans!
Boggyb
Profile Joined January 2017
2855 Posts
July 11 2019 04:23 GMT
#41
For the sake of people who run into AlphaStar on the ladder, I hope they've instructed it to leave the game once its chance of winning drops unreasonably low, rather than sticking around and forcing players to kill literally every building.
necrosexy
Profile Joined March 2011
451 Posts
July 11 2019 05:15 GMT
#42
easy to spot, just do warp prism harass
Zerg.Zilla
Profile Joined February 2012
Hungary5029 Posts
July 11 2019 05:19 GMT
#43
On July 11 2019 03:52 sugarmuffinpuff wrote:
I only play opponents whose mothers I can insult when I lose.

lol
(•_•) ( •_•)>⌐■-■ (⌐■_■) ~Keep calm and inject Larva~
Mountain_Lee
Profile Joined January 2018
87 Posts
July 11 2019 07:57 GMT
#44
On July 11 2019 14:19 Zerg.Zilla wrote:
On July 11 2019 03:52 sugarmuffinpuff wrote:
I only play opponents whose mothers I can insult when I lose.

lol

lol
Harris1st
Profile Blog Joined May 2010
Germany6916 Posts
July 11 2019 07:58 GMT
#45
On July 11 2019 10:34 Loccstana wrote:
Cool, I hope Deepmind will release some vods where AlphaStar defeats Serral.


Never gonna happen. Serral is the T1000
Go Serral! GG EZ for Ence. Flashbang dance FTW
Bomzj
Profile Joined July 2018
Belarus24 Posts
Last Edited: 2019-07-11 07:59:04
July 11 2019 07:58 GMT
#46
Add to the Team games as well!
Acrofales
Profile Joined August 2010
Spain17979 Posts
July 11 2019 08:52 GMT
#47
The real test is whether it rages and talks trash when it loses.
ZenithM
Profile Joined February 2011
France15952 Posts
Last Edited: 2019-07-11 10:58:37
July 11 2019 10:51 GMT
#48
I must say I'm quite jealous of the researchers working on this. In this field, it's quite rare that you get to work with such a combination of computing power (the Google cloud stuff) AND a potentially massive amount of interactions with human users.
It's great publicity for Google's R&D (I know DeepMind is an Alphabet subsidiary, but let's not kid ourselves there).
seemsgood
Profile Joined January 2016
5527 Posts
July 11 2019 12:44 GMT
#49
On July 11 2019 16:58 Harris1st wrote:
On July 11 2019 10:34 Loccstana wrote:
Cool, I hope Deepmind will release some vods where AlphaStar defeats Serral.


Never gonna happen. Serral is the T1000

but da A.I is arnold schwarzenegger
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
Last Edited: 2019-07-11 13:18:07
July 11 2019 12:44 GMT
#50
So they are going to let Google do research on the ladder, but all other AI creators are forbidden from playing on it?

Well, I guess that's the advantage of being google and having the best AI.
Acrofales
Profile Joined August 2010
Spain17979 Posts
July 11 2019 13:24 GMT
#51
On July 11 2019 21:44 travis wrote:
So they are going to let google research on the ladder but all other AI creators are forbidden from playing on the ladder?

Well, I guess that's the advantage of being google and having the best AI.

Nobody at Google or at Actiblizzard claimed they were trying to make a fair playing field for creating SC2 AI, so I don't really know why you're surprised. At the moment, it's great PR for both those companies.

And yes, I'm jealous too.
sudete
Profile Joined December 2012
Singapore3054 Posts
July 11 2019 14:53 GMT
#52
Would it be terribly wrong to cooperate and collate replays of AlphaStar playing against us? I feel like it would be really interesting to see how it is doing atm and how it develops along the way, rather than having them show us only the stuff that they want us to see.

Assuming the bot gets high enough on the ladder, it should be fairly obvious to progamers / hardcore players even if the agents use a phoney name
Year of MaxPax
zatic
Profile Blog Joined September 2007
Zurich15325 Posts
July 11 2019 15:09 GMT
#53
Thanks Blizz! Every time I lose against a barcode I'll claim "this must have been AlphaStar!".
Moderator | I know Teamliquid is known as a massive building
Shuffleblade
Profile Joined February 2012
Sweden1903 Posts
July 11 2019 15:11 GMT
#54
These agents will not reach GM if they have been properly restricted in peak and average APM, are not able to see the whole map at all times, and cannot micro outside of their vision.

Because those factors were not restricted, or not restricted enough, AlphaStar managed to defeat TLO and MaNa, but the reason it won was sheer micro and access to vision no human could have. If they have made AlphaStar fairer and actually want to create an AI that can handle the StarCraft game of strategic decision-making based on limited information, there is no way AlphaStar can already be at GM level.

I do think they can reach Masters, but I think most agents will be around Diamond level. Just pure guesswork from my side, but we will see.
Maru, Bomber, TY, Dear, Classic, DeParture and Rogue!
NinjaNight
Profile Joined January 2018
428 Posts
July 11 2019 15:46 GMT
#55
I'm just trying to figure out if it's worth queueing to play AlphaStar, or will it be too rare to get a match against it?
jalstar
Profile Blog Joined September 2009
United States8198 Posts
July 11 2019 16:00 GMT
#56
It should be possible to detect whether you just played AlphaStar, since the bnet ID of your opponent is extractable from the replay. If people keep a list of which IDs are actually AlphaStar, then these replays can potentially be released. If they hide the bots from the ladder, then the ID will be invalid, which is still a tipoff.

Also, bnet IDs are readable from memory when you're in-game, but this may be against the TOS.
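The crowd-sourced list idea could be as simple as a shared set of suspected IDs checked against each replay's opponent. The toon-ID strings below are made up, and actually extracting an ID from a .SC2Replay file would need a parser such as s2protocol, which isn't shown:

```python
# Hypothetical community-maintained set of IDs believed to belong to AlphaStar.
SUSPECTED_ALPHASTAR_IDS = {
    "2-S2-1-1234567",   # made-up toon ID
    "2-S2-1-7654321",   # made-up toon ID
}

def played_alphastar(opponent_toon_id: str) -> bool:
    """True if the opponent's ID is on the suspected-AlphaStar list."""
    return opponent_toon_id in SUSPECTED_ALPHASTAR_IDS

print(played_alphastar("2-S2-1-1234567"))  # → True
print(played_alphastar("2-S2-1-0000001"))  # → False
```

An invalid or missing ID in the replay, as noted above, would itself be a reason to add the account to the list.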
alexanderzero
Profile Joined June 2008
United States659 Posts
July 11 2019 16:03 GMT
#57
On July 12 2019 00:46 NinjaNight wrote:
I'm just trying to figure out if it's worth que'ing to play AlphaStar or will it be too rare to get a match against it?


If you're not a high masters player already then it's unlikely you will ever get a match. The people who are ranked highly enough are already playing on the ladder for several hours a day.
I am a tournament organizazer.
NinjaNight
Profile Joined January 2018
428 Posts
July 11 2019 16:19 GMT
#58
On July 12 2019 01:03 alexanderzero wrote:
On July 12 2019 00:46 NinjaNight wrote:
I'm just trying to figure out if it's worth que'ing to play AlphaStar or will it be too rare to get a match against it?


If you're not a high masters player already then it's unlikely you will ever get a match. The people who are ranked highly enough are already playing on the ladder for several hours a day.


I'm not sure about that. It's possible (most likely??) that it's only around Diamond level or so, thanks to the further APM restrictions and the camera restriction. I doubt it's at GM yet. Also, a DeepMind guy in the general SC2AI Discord recently said "don't expect too much from us when it reaches ladder", which suggests it doesn't have grandmaster ability.

So the first question is: if AlphaStar is in Diamond and you're in Diamond, are you likely to run into it after, say, 8 hours of laddering, or are there too many other people near that level queueing, so you're still unlikely to face it?

Second question: are the different versions of AlphaStar going to be dispersed among different leagues like Gold, Plat, Diamond, and Masters?

We could start there.
UnLarva
Profile Joined March 2019
458 Posts
July 11 2019 16:54 GMT
#59
I assume they won't run AlphaStar agents on the ladder with computational resources similar to what they used while the project was under development. Are these agents installed/integrated into Blizzard's servers, or do they still run on Google's supercomputers, requiring an extra internet connection between the agents' home server and Blizzard's ladder server? Or are both in the same geographic location?

I also think that the learning curve on the ladder will be several orders of magnitude slower, as games are actually played at humanly possible game speed. A million agent-vs-agent games on TPU supercomputers take a lot less time than a million human-vs-agent games, and don't require any kind of visually displayed interface like the actual SC2 client.
Part-time Serralogist
Acrofales
Profile Joined August 2010
Spain17979 Posts
Last Edited: 2019-07-11 17:17:39
July 11 2019 17:11 GMT
#60
On July 12 2019 01:54 UnLarva wrote:
I assume they won't run AlphaStar agents in ladder with similar computational resources they did when the project was under development. Are these agents installed/integrated to Blizzard's servers or does they still operate at Google's supercomputers requiring extra internet connection between the agents' home server and Blizzard's home/ladder server, or are these both in same geographic location?

I also think that the learning curve on the ladder will be several magnitudes slower as games are actually played with humanly possible game speed. Million games agent vs agent using TPU supercomputers take a lot less time than million games human player vs agent, and doesn't require any kind visually displayed interface like actual SC2.

Depends on how many players there are on the ladder and what Blizzard lets them do. I don't know how many games they could run in parallel to train their networks, but if they dedicated the same resources to ladder games they could play orders of magnitude more in parallel, since each game runs far slower and thus needs far fewer resources to still reach the APM cap they set. I'm going to assume Blizzard doesn't let them run millions of instances of a bot all at once, though, so training should indeed slow down.

That said, we don't even know what they are doing. This might be validation. It might be collecting training data on human-vs-AI play which they can use to train further, or it might just be for PR. Or they are testing some new online learning method that can learn from what is happening in the game (and thus could, in theory, eventually learn to stop F2-ing continuously to chase warp prisms).

E: actually we do know what they're doing, and they aren't training:


Q. How many variants of AlphaStar will play?

A. DeepMind will be benchmarking the performance of a number of experimental versions of AlphaStar to enable DeepMind to gather a broad set of results during the testing period.

Q. Will AlphaStar improve as it plays on the ladder? Will my games be used to help improve its strategy?

A. AlphaStar will not be learning from the games it plays on the ladder, as DeepMind is not using these matches as part of AlphaStar’s training. To date, AlphaStar has been trained from human replays and self-play, not from matches against human players.
oXoCube
Profile Joined October 2010
Canada197 Posts
July 11 2019 17:15 GMT
#61
If it's actually at a high Masters or better level (personally I wouldn't really doubt it), the agents won't stay hidden for long.

Especially so if this version approaches the game as weirdly as the old ones did, and why wouldn't it?
sekalf
Profile Joined June 2010
Sweden522 Posts
July 11 2019 17:16 GMT
#62
Very cool!
Xamo
Profile Joined April 2012
Spain877 Posts
July 11 2019 17:17 GMT
#63
On July 12 2019 01:54 UnLarva wrote:
I assume they won't run AlphaStar agents in ladder with similar computational resources they did when the project was under development. Are these agents installed/integrated to Blizzard's servers or does they still operate at Google's supercomputers requiring extra internet connection between the agents' home server and Blizzard's home/ladder server, or are these both in same geographic location?

I also think that the learning curve on the ladder will be several magnitudes slower as games are actually played with humanly possible game speed. Million games agent vs agent using TPU supercomputers take a lot less time than million games human player vs agent, and doesn't require any kind visually displayed interface like actual SC2.

The Battle.net post says that AlphaStar does not learn from ladder games. It is trained before.
My life for Aiur. You got a piece of me, baby. IIIIIIiiiiiii.
renaissanceMAN
Profile Joined March 2011
United States1840 Posts
July 11 2019 17:28 GMT
#64
Has anyone played it yet?
On August 15 2013 03:43 Waxangel wrote: no amount of money can replace the enjoyment of being mean to people on the internet
Penev
Profile Joined October 2012
28475 Posts
July 11 2019 17:33 GMT
#65
On July 12 2019 02:15 oXoCube wrote:
If it's actually at a high master or better level(personally wouldn't really doubt it) the agents won't stay hidden for long.

Especially so if this version approaches the game as weirdly as the old ones did, and why wouldn't it?

Hope so, I'd like to watch some of that weirdness
I Protoss winner, could it be?
UnLarva
Profile Joined March 2019
458 Posts
July 11 2019 17:40 GMT
#66
Acrofales and Xamo:

I see. The agents aren't going to learn. They are what they are, and once spotted by human players, specific tactics could maybe be developed to abuse their traits.

The computational resources required are minuscule compared to an actively learning neural network. They are just bots developed via evolutionary means.
Part-time Serralogist
Elentos
Profile Blog Joined February 2015
55511 Posts
July 11 2019 17:54 GMT
#67
On July 12 2019 02:15 oXoCube wrote:
Especially so if this version approaches the game as weirdly as the old ones did, and why wouldn't it?

Because it can't micro blink stalkers in every part of the map at the same time perfectly anymore due to more limited APM.
Every 60 seconds in Africa, a minute passes.
Penev
Profile Joined October 2012
28475 Posts
July 11 2019 18:03 GMT
#68
On July 12 2019 02:54 Elentos wrote:
Show nested quote +
On July 12 2019 02:15 oXoCube wrote:
Especially so if this version approaches the game as weirdly as the old ones did, and why wouldn't it?

Because it can't micro blink stalkers in every part of the map at the same time perfectly anymore due to more limited APM.

That wasn't the only weird thing it did; I wonder if it still overbuilds workers. Also, it plays all the races now, so it might do some new weird shit as well.
I Protoss winner, could it be?
oXoCube
Profile Joined October 2010
Canada197 Posts
July 11 2019 18:07 GMT
#69
On July 12 2019 02:54 Elentos wrote:
Show nested quote +
On July 12 2019 02:15 oXoCube wrote:
Especially so if this version approaches the game as weirdly as the old ones did, and why wouldn't it?

Because it can't micro blink stalkers in every part of the map at the same time perfectly anymore due to more limited APM.


One very valid but highly specific criticism of one specific game it played does not undo the premise.

I would also be pretty willing to bet that the new Protoss versions will still have a huge love affair with blink stalkers. Micro will always be the biggest strength of any good AI.
alpenrahm
Profile Blog Joined December 2010
Germany628 Posts
July 11 2019 18:27 GMT
#70
On July 12 2019 01:54 UnLarva wrote:
I assume they won't run AlphaStar agents in ladder with similar computational resources they did when the project was under development. Are these agents installed/integrated to Blizzard's servers or does they still operate at Google's supercomputers requiring extra internet connection between the agents' home server and Blizzard's home/ladder server, or are these both in same geographic location?

I also think that the learning curve on the ladder will be several magnitudes slower as games are actually played with humanly possible game speed. Million games agent vs agent using TPU supercomputers take a lot less time than million games human player vs agent, and doesn't require any kind visually displayed interface like actual SC2.



The actual agent runs on a standard desktop PC. That's the big selling point of neural networks anyway: they require loads of data and compute to train, but once they actually run it's a different story.
FFW_Rude
Profile Blog Joined November 2010
France10201 Posts
Last Edited: 2019-07-11 18:31:17
July 11 2019 18:29 GMT
#71
Now imagine if it also learns manner from the ladder...

Good thing they said it will not learn anything from those matches.

But for sure it will say gl hf
#1 KT Rolster fanboy. KT BEST KT ! Hail to KT playoffs Zergs ! Unofficial french translator for SlayerS_`Boxer` biography "Crazy as me".
Elentos
Profile Blog Joined February 2015
55511 Posts
Last Edited: 2019-07-11 18:39:46
July 11 2019 18:34 GMT
#72
On July 12 2019 03:07 oXoCube wrote:
Show nested quote +
On July 12 2019 02:54 Elentos wrote:
On July 12 2019 02:15 oXoCube wrote:
Especially so if this version approaches the game as weirdly as the old ones did, and why wouldn't it?

Because it can't micro blink stalkers in every part of the map at the same time perfectly anymore due to more limited APM.


One very valid but highly specific criticism of one specific game it played does not undo the premise.

I would also be pretty willing to bet that the new Protoss versions will still have a huge love affair with blink stalkers. Micro will always be the biggest strength of any good AI.

It's primarily supposed to end up as an AI that strategizes, though, not a micro bot. Eventually, with capped APM, that should mean less micro, more macro and positioning.

However, I'm more than curious whether the thing has learnt to leave games at a reasonable timing yet, or if it still waits until you kill all its buildings. And which does it lean towards: Fantasy GG or Idra GG?
Every 60 seconds in Africa, a minute passes.
oXoCube
Profile Joined October 2010
Canada197 Posts
Last Edited: 2019-07-11 18:41:14
July 11 2019 18:40 GMT
#73
On July 12 2019 03:34 Elentos wrote:
Show nested quote +
On July 12 2019 03:07 oXoCube wrote:
On July 12 2019 02:54 Elentos wrote:
On July 12 2019 02:15 oXoCube wrote:
Especially so if this version approaches the game as weirdly as the old ones did, and why wouldn't it?

Because it can't micro blink stalkers in every part of the map at the same time perfectly anymore due to more limited APM.


One very valid but highly specific criticism of one specific game it played does not undo the premise.

I would also be pretty willing to bet that the new Protoss versions will still have a huge love affair with blink stalkers. Micro will always be the biggest strength of any good AI.

It's primarily supposed to end up as an AI that strategizes though and not a micro bot. Eventually with capped APM that should mean less micro, more macro and positioning.


Yes.

It's still going to have top-notch mechanics though, it's a bot. The goal is to get its mechanical strength into a human range, not cripple it.

EDIT: Agree with your edit. I'm sure it occurred to the team that waiting for someone to kill all of its buildings before leaving would be a dead giveaway as to what that person is playing against, though.
skdsk
Profile Joined February 2019
138 Posts
July 11 2019 19:55 GMT
#74
I wonder how it plays Terran, as it's a race that greatly benefits from micro and good macro habits, but it's also very tempo-based: you need to do drops and abuse your units' high damage potential. Will these AIs develop an interesting playstyle or just do insane marine splits (boring bot-tier uninteresting stuff)?
AttackZerg
Profile Blog Joined January 2003
United States7454 Posts
July 11 2019 20:25 GMT
#75
What are the flags that people should be looking for?

Does it use hotkeys? If not, that is a giveaway.

I hope we are able to do what we have always done.... track down and identify the accounts or individual reps.

If we can do it for 20 years of secretive pro gamers, we can surely do it for alphastar.

Happy Hunting my European brothers and sisters.

I hope you talented fawks choke out this skynet wannabe!
jimminy_kriket
Profile Blog Joined February 2007
Canada5501 Posts
Last Edited: 2019-07-11 21:16:47
July 11 2019 21:16 GMT
#76
On July 12 2019 05:25 AttackZerg wrote:
What are the flags that people should be looking for?

Does it use hotkeys? If not, that is a give away.

I hope we are able to do what we have always done.... track down and identify the accounts or individual reps.

If we can do it for 20 years of secretive pro gamers, we can surely do it for alphastar.

Happy Hunting my European brothers and sisters.

I hope you talented fawks choke out this skynet wannabe!

Spikes of incredibly high APM, if the showmatches are anything to go by, alongside god-tier micro. Not sure if it uses hotkeys or not; you might be able to tell by looking back at the showmatches when they show its first-person view.
life of lively to live to life of full life thx to shield battery
NinjaNight
Profile Joined January 2018
428 Posts
July 11 2019 21:28 GMT
#77
On July 12 2019 03:07 oXoCube wrote:
Show nested quote +
On July 12 2019 02:54 Elentos wrote:
On July 12 2019 02:15 oXoCube wrote:
Especially so if this version approaches the game as weirdly as the old ones did, and why wouldn't it?

Because it can't micro blink stalkers in every part of the map at the same time perfectly anymore due to more limited APM.


One very valid but highly specific criticism of one specific game it played does not undo the premise.

I would also be pretty willing to bet that the new Protoss versions will still have a huge love affair with blink stalkers. Micro will always be the biggest strength of any good AI.


That's a general criticism, not a specific one, because it implies that AlphaStar learned to play the way it did because of its unfair mechanical advantage. Now that there is no mechanical advantage anymore, it likely won't be able to get away with playing in such a strange way.
AttackZerg
Profile Blog Joined January 2003
United States7454 Posts
July 11 2019 21:47 GMT
#78
Upon checking the showmatch: at 1:14 in, they show the MaNa vs AlphaStar FPVoDs. It looks like AlphaStar uses hotkeys in a very human fashion, but they don't show the replay view where you can see which numbers it hotkeys to what.

Looks like a dead end.
NinjaNight
Profile Joined January 2018
428 Posts
Last Edited: 2019-07-11 22:06:08
July 11 2019 22:05 GMT
#79
On July 12 2019 06:47 AttackZerg wrote:
Upon checking the showmatch, at 1:14 minutes in they show mana v alphastar fpvods, looks like alphastar uses hotkeys in a very human fashion, but they do not show the replay variation where you can see which numbers it hotkeys what.

Looks like a deadend .


Yeah, I'm sure they removed any obvious clues that it's not human, to help their research and keep it anonymous. Anything as basic as "does it use hotkeys?" is never going to be a giveaway.
Incand
Profile Joined November 2012
143 Posts
Last Edited: 2019-07-11 23:31:57
July 11 2019 23:31 GMT
#80
I noticed several highly rated accounts whose first game was played on the day of the announcement (10th of July).

You can find these easily by looking at EU players and sorting by win rate: in both GM and Masters you will find several players whose first game is on that date, with a very high win rate and rating.

Further evidence is that they seem to start at a very high rating that isn't possible to reach with that few games, meaning they started playing with a high MMR (in the range of 4500-5800).

It is of course possible that a lot of players created accounts on the same date to impersonate AlphaStar, and indeed some of the lower-rated ones could very well be human players starting from placement matches. It is also possible to start at a higher rating by playing unranked first, I assume, so it is hard to say anything definite even about the higher accounts, but it certainly seems likely they could be AlphaStar.

It seems likely these accounts will continue to rise on the ladder, although if there are not enough high-MMR players opting in they may hit a ceiling.
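The filtering heuristic described above could be sketched like this. The account records and the win-rate threshold are invented for illustration (the first name echoes an account linked later in the thread), not actual ladder data.

```python
from datetime import date

# Sketch of the heuristic: flag accounts whose first game falls on the
# announcement day (10 July 2019) and that combine a very high win rate
# with an MMR in the 4500-5800 range mentioned in the post.
# Records and the 70% threshold are made up for illustration.

ANNOUNCEMENT_DAY = date(2019, 7, 10)

def suspicious(acct):
    return (acct["first_game"] == ANNOUNCEMENT_DAY
            and acct["win_rate"] >= 0.70
            and 4500 <= acct["mmr"] <= 5800)

accounts = [
    {"name": "IIIIIIlIIIIl", "first_game": date(2019, 7, 10), "win_rate": 0.85, "mmr": 5200},
    {"name": "RegularLadderer", "first_game": date(2019, 3, 2), "win_rate": 0.55, "mmr": 4800},
]

flagged = [a["name"] for a in accounts if suspicious(a)]
print(flagged)  # ['IIIIIIlIIIIl']
```

As the post notes, this will also catch humans who made fresh accounts on the same date, so it's a shortlist, not proof.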
seemsgood
Profile Joined January 2016
5527 Posts
July 12 2019 00:03 GMT
#81
On July 12 2019 06:47 AttackZerg wrote:
Upon checking the showmatch, at 1:14 minutes in they show mana v alphastar fpvods, looks like alphastar uses hotkeys in a very human fashion, but they do not show the replay variation where you can see which numbers it hotkeys what.

Looks like a deadend .

I don't think they're using the same inhuman settings this time, because otherwise there wouldn't be much to learn from average Master and GM players if the AI keeps winning most of the time.
BronzeKnee
Profile Joined March 2011
United States5217 Posts
July 12 2019 00:23 GMT
#82
Getting a replay versus Alphastar is a literal gold mine.
Bub
Profile Blog Joined June 2006
United States3518 Posts
July 12 2019 02:07 GMT
#83
Anyone know AlphaStar's bnet ID#?
XK ßubonic
NinjaNight
Profile Joined January 2018
428 Posts
Last Edited: 2019-07-12 02:53:19
July 12 2019 02:53 GMT
#84
I don't think it's even playing on ladder yet, because their message says "coming soon to the ladder", and who knows what "soon" even means to Blizzard lol, they're always slow.
AttackZerg
Profile Blog Joined January 2003
United States7454 Posts
July 12 2019 03:29 GMT
#85
EU players - is there an opt-in feature for you all yet?
designer
Profile Joined July 2019
1 Post
Last Edited: 2019-07-12 07:08:43
July 12 2019 07:04 GMT
#86
Yes, you can opt in since at least the evening of 11.07. Or 07.11. Whatever your format is.
Harris1st
Profile Blog Joined May 2010
Germany6916 Posts
July 12 2019 09:51 GMT
#87
On July 11 2019 21:44 seemsgood wrote:
Show nested quote +
On July 11 2019 16:58 Harris1st wrote:
On July 11 2019 10:34 Loccstana wrote:
Cool, I hope Deepmind will release some vods where AlphaStar defeats Serral.


Never gonna happen. Serral is the T1000

but da A.I is arnold schwarzenegger


In all fairness, Arnie (T-800) should have never won that fight, but hollywood plot armor too stronk ^^
Go Serral! GG EZ for Ence. Flashbang dance FTW
alpenrahm
Profile Blog Joined December 2010
Germany628 Posts
July 12 2019 14:40 GMT
#88
so do we have any confirmed t1000 replays?
Khalimaroth
Profile Joined September 2010
France70 Posts
July 13 2019 08:23 GMT
#89
Cool!
Does AlphaStar's bot say "glhf" and "gg" before leaving? Or will we have to kill all the buildings to get a win?
Trop'inzust
Musicus
Profile Joined August 2011
Germany23576 Posts
Last Edited: 2019-07-13 08:33:54
July 13 2019 08:33 GMT
#90
On July 13 2019 17:23 Khalimaroth wrote:
Cool!
Does AlphaStar's bot say "glhf" and "gg" before to leave? Or will we have to kill all the buildings to get a win?

Even the normal Blizzard bot opponents say gg and realise when they've lost. I'm sure you don't have to kill all of the buildings; AlphaStar is always calculating its winning odds at any given moment.

Not sure if it says gl hf at the start.
Maru and Serral are probably top 5.
Elentos
Profile Blog Joined February 2015
55511 Posts
July 13 2019 08:45 GMT
#91
On July 13 2019 17:33 Musicus wrote:
Show nested quote +
On July 13 2019 17:23 Khalimaroth wrote:
Cool!
Does AlphaStar's bot say "glhf" and "gg" before to leave? Or will we have to kill all the buildings to get a win?

Even the normal Blizzard bot opponents say gg and realise when they lost

They don't like to offer you the win until you're literally killing the last 2 buildings either though.
Every 60 seconds in Africa, a minute passes.
Musicus
Profile Joined August 2011
Germany23576 Posts
July 13 2019 08:51 GMT
#92
On July 13 2019 17:45 Elentos wrote:
Show nested quote +
On July 13 2019 17:33 Musicus wrote:
On July 13 2019 17:23 Khalimaroth wrote:
Cool!
Does AlphaStar's bot say "glhf" and "gg" before to leave? Or will we have to kill all the buildings to get a win?

Even the normal Blizzard bot opponents say gg and realise when they lost

They don't like to offer you the win until you're literally killing the last 2 buildings either though.

Still better than Fantasy though.
Maru and Serral are probably top 5.
MarianoSC2
Profile Joined June 2015
Slovakia1855 Posts
July 13 2019 10:37 GMT
#93
On July 13 2019 17:51 Musicus wrote:
Show nested quote +
On July 13 2019 17:45 Elentos wrote:
On July 13 2019 17:33 Musicus wrote:
On July 13 2019 17:23 Khalimaroth wrote:
Cool!
Does AlphaStar's bot say "glhf" and "gg" before to leave? Or will we have to kill all the buildings to get a win?

Even the normal Blizzard bot opponents say gg and realise when they lost

They don't like to offer you the win until you're literally killing the last 2 buildings either though.

Still better than Fantasy though.


lol
Top 11: Rogue, Maru, Inno, Zest, Life, sOs, Stats, Dark, soO, Mvp, Classic/Trap/MC/Rain
FueledUpAndReadyToGo
Profile Blog Joined March 2013
Netherlands30548 Posts
July 13 2019 11:44 GMT
#94
I think the trick is to type 'sigh, lost to Alphastar again, nothing I can do' before every loss. If they mock you the opponent is human.
Neosteel Enthusiast
Malongo
Profile Blog Joined November 2005
Chile3472 Posts
July 13 2019 14:51 GMT
#95
Trash talking in game is going to be hilarious.
Help me! im still improving my English. An eye for an eye makes the whole world blind. M. G.
Argonauta
Profile Joined July 2016
Spain4906 Posts
July 13 2019 15:08 GMT
#96
It will be interesting if they use different versions of the same agent but with different parameter settings that emulate mechanical skill, such as APM, control group usage, cursor speed, camera hotkeys etc., to measure how players' raw mechanical skill affects their rank.
Rogue | Maru | Scarlett | Trap
TL+ Member
Edpayasugo
Profile Joined April 2013
United Kingdom2214 Posts
July 13 2019 15:34 GMT
#97
So does it play Random, or rotate through the races?
FlaSh MMA INnoVation FanTaSy MKP TY Ryung | soO Dark Rogue | HuK PartinG Stork State
jimminy_kriket
Profile Blog Joined February 2007
Canada5501 Posts
July 13 2019 20:29 GMT
#98
We don't know, there are no confirmed sightings on ladder yet.
life of lively to live to life of full life thx to shield battery
AttackZerg
Profile Blog Joined January 2003
United States7454 Posts
July 13 2019 20:47 GMT
#99
I have checked this thread so many times... I think I need to stop with the f5s.

Every time the thread is bumped I get a semi...
Cyro
Profile Blog Joined June 2011
United Kingdom20285 Posts
July 14 2019 04:58 GMT
#100
On July 14 2019 00:34 Edpayasugo wrote:
So does it play random or rotates through the races?


Probably rotate, random matchups aren't as high quality
"oh my god my overclock... I got a single WHEA error on the 23rd hour, 9 minutes" -Belial88
Ronski
Profile Joined February 2011
Finland266 Posts
July 14 2019 17:13 GMT
#101
On July 14 2019 00:34 Edpayasugo wrote:
So does it play random or rotates through the races?


If it's anything like the showmatches in the past, there are probably multiple different AlphaStar "agents" playing the ladder at the same time during their testing. Agent 1 might be playing as Terran, agent 2 as Protoss, and so forth, each with its own strategies. Getting matched with the same agent multiple times in a row won't make it change anything; it will be just like facing the same player on the ladder multiple times in a row, except the agent will show less variety in its style.
I am a tank. I am covered head to toe in solid plate mail. I carry a block of metal the size of a 4 door sedan to hide behind. If you see me running - you should too.
BjoernK
Profile Joined April 2012
194 Posts
July 14 2019 21:06 GMT
#102
No new info here...
I would not be surprised if the AI team were to take care of the chat, and maybe also the gg timing. It would make sense if the total number of games were in the hundreds, which I believe it will be.
NinjaNight
Profile Joined January 2018
428 Posts
July 15 2019 03:41 GMT
#103
Why is this thread so slow, this is huge and really cool.

Also, I don't see why there wouldn't be more than hundreds of games total, because they're apparently going to have many different agents on the ladder, and each of those should play plenty of games. Pros play like 30-40 games in just a single day.
necrosexy
Profile Joined March 2011
451 Posts
July 15 2019 06:57 GMT
#104
it will ascend quickly until it gets waylaid in cheese purgatory
Goolpsy
Profile Joined November 2010
Denmark301 Posts
Last Edited: 2019-07-15 14:08:13
July 15 2019 14:08 GMT
#105
Turns out, it's all a big social experiment to see if GMs start behaving better or worse on the ladder.
Or to pull more people to play...
alexanderzero
Profile Joined June 2008
United States659 Posts
July 16 2019 23:49 GMT
#106
Why is this thread so slow, this is huge and really cool.


Unfortunately we have so little info that there's just not much to talk about right now. I'm dying for the replay pack that they're going to release though. I had so much fun watching all of the replays from the showmatches earlier this year.
I am a tournament organizazer.
WombaT
Profile Blog Joined May 2010
Northern Ireland25130 Posts
July 17 2019 00:29 GMT
#107
They won’t be truly sentient until they can BM and balance whine with the rest of them.

I imagine the Protoss iterations of AlphaStar will behave with the class and decorum typical of Protoss players, not so confident about those Z and T ones though
'You'll always be the cuddly marsupial of my heart, despite the inherent flaws of your ancestry' - Squat
AttackZerg
Profile Blog Joined January 2003
United States7454 Posts
July 17 2019 00:40 GMT
#108
The idea of a zerg with a thousand eyes... sounds terrifying.

I imagine we will see some 1-2-3-punch-you're-dead Zerg tech switches.

In my mind, I see a Dark with better scouting.

My big guess.... pool first in every matchup. Or gas-pool.
jimminy_kriket
Profile Blog Joined February 2007
Canada5501 Posts
July 18 2019 20:54 GMT
#109
How the hell do we not have an ID or replay yet??
life of lively to live to life of full life thx to shield battery
NinjaNight
Profile Joined January 2018
428 Posts
July 18 2019 22:42 GMT
#110
On July 19 2019 05:54 jimminy_kriket wrote:
How the hell do we not have an ID or replay yet??


Maybe it's not actually playing ladder yet and they just sent out the opt-in/opt-out message early. They want to keep it anonymous, so it seems likely they'd throw AlphaStar on there later, when people aren't expecting it. Everyone expects it on the ladder in the first week after seeing that message.
Lexender
Profile Joined September 2013
Mexico2628 Posts
July 18 2019 23:01 GMT
#111
On July 19 2019 05:54 jimminy_kriket wrote:
How the hell do we not have an ID or replay yet??


They want it to be anonymous, which makes a lot of sense; if its opponents knew it's AlphaStar they would start doing weird shit on purpose and it would corrupt its learning process.

The internet has a history of fucking up AIs, after all.
WombaT
Profile Blog Joined May 2010
Northern Ireland25130 Posts
July 19 2019 01:01 GMT
#112
On July 19 2019 05:54 jimminy_kriket wrote:
How the hell do we not have an ID or replay yet??

They’ve so successfully taught it how to BM properly that nobody has yet even suspected they’re playing an AI?
'You'll always be the cuddly marsupial of my heart, despite the inherent flaws of your ancestry' - Squat
Harris1st
Profile Blog Joined May 2010
Germany6916 Posts
July 19 2019 09:31 GMT
#113
On July 19 2019 08:01 Lexender wrote:
Show nested quote +
On July 19 2019 05:54 jimminy_kriket wrote:
How the hell do we not have an ID or replay yet??


They wan't it to be anonymous, wich makes a lot of sense, if its opponent knew its AlphaStar they would start doing weird shit on purpose and it would corrupts its learning process.

Internet has a history of fucking up AIs after all.


It is not learning, just testing
Go Serral! GG EZ for Ence. Flashbang dance FTW
Acrofales
Profile Joined August 2010
Spain17979 Posts
July 19 2019 10:27 GMT
#114
On July 19 2019 18:31 Harris1st wrote:
Show nested quote +
On July 19 2019 08:01 Lexender wrote:
On July 19 2019 05:54 jimminy_kriket wrote:
How the hell do we not have an ID or replay yet??


They wan't it to be anonymous, wich makes a lot of sense, if its opponent knew its AlphaStar they would start doing weird shit on purpose and it would corrupts its learning process.

Internet has a history of fucking up AIs after all.


It is not learning, just testing

It'd screw up tests too, though.
Pangpootata
Profile Blog Joined January 2011
1838 Posts
July 19 2019 12:26 GMT
#115
They should also train a neural network model dedicated to natural language processing so AlphaStar can learn trash talk.
Goolpsy
Profile Joined November 2010
Denmark301 Posts
July 19 2019 14:40 GMT
#116
It would screw up the tests, such as people flying empty warp prisms near the AI's base and the AI getting confused (like we saw).
Many more weird strategies or actions would be certain to pop up.

That's why they have to keep it low-key to get useful information
Harris1st
Profile Blog Joined May 2010
Germany6916 Posts
July 24 2019 15:25 GMT
#117
Soooo, are we any further on the b.net IDs of AlphaStar?
Go Serral! GG EZ for Ence. Flashbang dance FTW
TelecoM
Profile Blog Joined January 2010
United States10671 Posts
July 24 2019 15:27 GMT
#118
Woah wow, what MMR is it playing at?
AKA: TelecoM[WHITE] Protoss fighting
FrashQ
Profile Joined November 2011
53 Posts
July 24 2019 15:28 GMT
#119
Zerg: http://eu.battle.net/sc2/en/profile/8789693/1/IIIIIIlIIIIl/
Protoss: http://sc2replaystats.com/player/2704505
Terran: http://sc2replaystats.com/player/2704467
Antisocialmunky
Profile Blog Joined March 2010
United States5912 Posts
July 24 2019 15:40 GMT
#120
There are already some replays. Someone posted a few days ago about finding a suspicious bot:
https://www.reddit.com/r/starcraft/comments/cgvu6r/i_played_against_alphastardeepmind/
[゚n゚] SSSSssssssSSsss ¯\_(ツ)_/¯
Marine/Raven Guide:http://www.teamliquid.net/forum/viewmessage.php?topic_id=163605
alexanderzero
Profile Joined June 2008
United States659 Posts
July 24 2019 16:12 GMT
#121
Ahhh 11 replays so far!! I can't wait to watch these when I get home from work!
I am a tournament organizazer.
Ronski
Profile Joined February 2011
Finland266 Posts
Last Edited: 2019-07-24 17:28:02
July 24 2019 17:25 GMT
#122
After checking the replays I just feel like its either very disappointing if its actual Alphastar, or that it's just some random NA player laddering on EU.

1 ZVZ, it loses its hatch vs 13/12, then camps on 2 bases building roaches while it sees the opponent taking a 3rd and building 5th/6th gas before the hatch even finishes and never accounts for mutalisks untill it A-moves across the map with a ball of roaches, ignoring all injects from that point forward trying to micro its way to victory. Once it realizes it cant end the game it goes back to injecting but never built any spores / Hydra tech and just loses to mass mutas. Seems like a very "human" way to lose a game.

Edit: After this game I stopped paying too much attention because I was pretty sure its just some random NA player.

2nd ZvZ, it does the "Vibe bronze to GM" Roach speed +1attack build and manages to snipe the opponents 3rd. then it just expands and wins with roach / ravager vs Roach / ravager.

ZvT, loses drones to hellion runby / Liberator harass, then attacks with Roach / ravager / Queen +1 carapace and wins. Builds blind spores at 4mins.

ZvP Just a roach / Ravager ling / Queen all in. Never builds blind spores.

In all the games, it never scouted anything. the camera movement feels very human since its like scrolling the screen by draggin mouse around for small movements and using location hotkeys to jump between hatcheries. Has very poor creep spread, just a single tumor going in 1 or 2 directions and spreading it whenever it remembers to do it.

The bot says GL HF at different times in all the games: in one ZvZ it comes at 22 seconds, in the 2nd it comes as a response to the opponent saying gl hf, in the ZvT 15 seconds and in the ZvP 8 seconds into the game. Could be a human controlling the chat function while the AI is just playing the game.

Just overall it feels like an average human player. I only checked the Zerg replays because I play Zerg myself, but I didn't find anything impressive or anything out of the ordinary that I wouldn't find watching a random streamer.
I am a tank. I am covered head to toe in solid plate mail. I carry a block of metal the size of a 4 door sedan to hide behind. If you see me running - you should too.
NinjaNight
Profile Joined January 2018
428 Posts
July 24 2019 17:41 GMT
#123
On July 25 2019 02:25 Ronski wrote:
After checking the replays I just feel like it's either very disappointing if it's actually AlphaStar, or it's just some random NA player laddering on EU.

1 ZvZ: it loses its hatch vs 13/12, then camps on 2 bases building roaches while it sees the opponent taking a 3rd and building a 5th/6th gas before the hatch even finishes, and it never accounts for mutalisks until it A-moves across the map with a ball of roaches, ignoring all injects from that point forward while trying to micro its way to victory. Once it realizes it can't end the game it goes back to injecting, but it never built any spores or hydra tech and just loses to mass mutas. Seems like a very "human" way to lose a game.

Edit: After this game I stopped paying too much attention because I was pretty sure it's just some random NA player.

2nd ZvZ: it does the "Vibe bronze to GM" roach speed +1 attack build and manages to snipe the opponent's 3rd. Then it just expands and wins with roach/ravager vs roach/ravager.

ZvT: it loses drones to a hellion runby and Liberator harass, then attacks with roach/ravager/queen +1 carapace and wins. Builds blind spores at 4 minutes.

ZvP: just a roach/ravager/ling/queen all-in. Never builds blind spores.

In all the games, it never scouted anything. The camera movement feels very human, since it's like scrolling the screen by dragging the mouse around for small movements and using location hotkeys to jump between hatcheries. It has very poor creep spread: just a single tumor going in 1 or 2 directions, spread whenever it remembers to do it.

The bot says GL HF at different times in all the games: in one ZvZ it comes at 22 seconds, in the 2nd it comes as a response to the opponent saying gl hf, in the ZvT 15 seconds and in the ZvP 8 seconds into the game. Could be a human controlling the chat function while the AI is just playing the game.

Just overall it feels like an average human player. I only checked the Zerg replays because I play Zerg myself, but I didn't find anything impressive or anything out of the ordinary that I wouldn't find watching a random streamer.


It's not supposed to be great yet. The DeepMind guy (Timo) hanging out in the SC2AI Discord said not to expect too much of it right now.
Ronski
Profile Joined February 2011
Finland266 Posts
July 24 2019 17:53 GMT
#124
I'm not disappointed because it's playing poorly, just because everything about it feels like a human player making human mistakes. I would be really surprised if this was the actual AlphaStar.
I am a tank. I am covered head to toe in solid plate mail. I carry a block of metal the size of a 4 door sedan to hide behind. If you see me running - you should too.
Antisocialmunky
Profile Blog Joined March 2010
United States5912 Posts
Last Edited: 2019-07-24 18:02:07
July 24 2019 18:01 GMT
#125
When she gets good enough, maybe google will submit her to GSL group stages so we can have some NA representation :p
[゚n゚] SSSSssssssSSsss ¯\_(ツ)_/¯
Marine/Raven Guide:http://www.teamliquid.net/forum/viewmessage.php?topic_id=163605
Ronski
Profile Joined February 2011
Finland266 Posts
July 24 2019 18:04 GMT
#126
Also, people seem to think it's AlphaStar because it uses "no control groups", but simply setting control groups to (Hidden) in the options menu makes them invisible in the replays as well. So I would say it's definitely some random player playing on an alt barcode with hidden hotkeys.
I am a tank. I am covered head to toe in solid plate mail. I carry a block of metal the size of a 4 door sedan to hide behind. If you see me running - you should too.
-HuShang-
Profile Joined December 2012
Canada393 Posts
July 24 2019 19:17 GMT
#127
It's one hundred percent alpha star @ronski
Professional Starcraft 2 Coach & Caster | Message me for more info or business proposals
Ronski
Profile Joined February 2011
Finland266 Posts
July 24 2019 19:37 GMT
#128
On July 25 2019 04:17 -HuShang- wrote:
It's one hundred percent alpha star @ronski


Watching the games themselves I don't see any reason to believe it's AlphaStar?
I am a tank. I am covered head to toe in solid plate mail. I carry a block of metal the size of a 4 door sedan to hide behind. If you see me running - you should too.
alexanderzero
Profile Joined June 2008
United States659 Posts
Last Edited: 2019-07-24 20:56:21
July 24 2019 20:55 GMT
#129
This is turning out just like last time: AlphaStar plays godlike but human-like StarCraft, and everyone shits on it. Oh well, what are you gonna do?

Watching the games themselves I don't see any reason to believe it's AlphaStar?


The biggest tell is that it doesn't use control groups, and also that the profiles playing all have the exact same number of games played at basically the same time (one profile for each race).
I am a tournament organizer.
Kalera
Profile Joined January 2018
United States338 Posts
Last Edited: 2019-07-25 01:57:24
July 25 2019 01:40 GMT
#130
The Zerg one plays with the least flexibility. You might want to check out the Protoss one.
Antisocialmunky
Profile Blog Joined March 2010
United States5912 Posts
July 25 2019 03:41 GMT
#131
HuShang is casting reps. Seems like it's an idiot savant that doesn't know how to pull workers or build anti-air.

https://www.youtube.com/channel/UC_2FZDFti08MI5WRaZUZSIQ
[゚n゚] SSSSssssssSSsss ¯\_(ツ)_/¯
Marine/Raven Guide:http://www.teamliquid.net/forum/viewmessage.php?topic_id=163605
Ronski
Profile Joined February 2011
Finland266 Posts
July 25 2019 04:44 GMT
#132
On July 25 2019 05:55 alexanderzero wrote:
This is turning out just like last time: AlphaStar plays godlike but human-like StarCraft, and everyone shits on it. Oh well, what are you gonna do?

Watching the games themselves I don't see any reason to believe it's AlphaStar?


The biggest tell is that it doesn't use control groups, and also that the profiles playing all have the exact same number of games played at basically the same time (one profile for each race).


Like I said before, simply choosing (hide control groups) in the StarCraft options is enough to achieve this. Everyone who has this option selected appears to be playing without any control groups.

The only proof that it is AlphaStar is the 3 accounts all playing at the same time with the same number of games; analyzing any in-game material simply shows an average player who over-prioritizes micro and forgets to macro.
I am a tank. I am covered head to toe in solid plate mail. I carry a block of metal the size of a 4 door sedan to hide behind. If you see me running - you should too.
naughtDE
Profile Blog Joined May 2019
158 Posts
Last Edited: 2019-07-25 05:21:03
July 25 2019 05:04 GMT
#133
On July 25 2019 13:44 Ronski wrote:
On July 25 2019 05:55 alexanderzero wrote:
This is turning out just like last time: AlphaStar plays godlike but human-like StarCraft, and everyone shits on it. Oh well, what are you gonna do?

Watching the games themselves I don't see any reason to believe it's AlphaStar?


The biggest tell is that it doesn't use control groups, and also that the profiles playing all have the exact same number of games played at basically the same time (one profile for each race).


Like I said before, simply choosing (hide control groups) in the StarCraft options is enough to achieve this. Everyone who has this option selected appears to be playing without any control groups.

The only proof that it is AlphaStar is the 3 accounts all playing at the same time with the same number of games; analyzing any in-game material simply shows an average player who over-prioritizes micro and forgets to macro.


The Zerg one selects larvae without looking at or selecting hatcheries first, a definite in-game tell. There's also slight offscreen micro here and there, since what it sees appears to be in an unusual aspect ratio (very obvious when dealing with early-game worker shenanigans).

My favourite thing so far was the PvP (random) against Beasty, where Beasty attempts a half-assed cannon rush and AlphaStar went lowground pylon + gate, which means a free win for the cannon-rushing GM (and Beasty immediately says GG upon scouting this, as in he is sure of his victory). What follows is the most insane, clean and calm probe defence I have ever seen in SC2. If a human could develop that level of confidence in their control, lowground walls in PvP might actually open up as a viable strategy.

Also AlphaStar's TvP approach here is entertaining:
It goes marine, tank, speed banshee.
It gets Stim for the marines first, yet not a single medivac is built all game.

"I'll take [LET IT SNOW] for 800" - Sean Connery (Darrell Hammond)
fLyiNgDroNe
Profile Joined September 2005
Belgium4001 Posts
July 25 2019 06:20 GMT
#134
can you give a direct link to that pvp vs Beasty?
Drone is a way of living
Antisocialmunky
Profile Blog Joined March 2010
United States5912 Posts
July 25 2019 07:07 GMT
#135


Almost....
[゚n゚] SSSSssssssSSsss ¯\_(ツ)_/¯
Marine/Raven Guide:http://www.teamliquid.net/forum/viewmessage.php?topic_id=163605
jimminy_kriket
Profile Blog Joined February 2007
Canada5501 Posts
July 25 2019 08:33 GMT
#136
On July 25 2019 15:20 fLyiNgDroNe wrote:
can you give a direct link to that pvp vs Beasty?

https://www.twitch.tv/videos/457238012?t=05h51m35s
life of lively to live to life of full life thx to shield battery
naughtDE
Profile Blog Joined May 2019
158 Posts
Last Edited: 2019-07-25 13:18:51
July 25 2019 11:48 GMT
#137
On July 25 2019 15:20 fLyiNgDroNe wrote:
can you give a direct link to that pvp vs Beasty?

https://www.twitch.tv/videos/456819625?t=01h58m37s

The one Jimmy posted is also enlightening.

Edit:
I also have a theory about why the Protoss version performs best.
If you want to micro marines you have to spend a lot of focus and actions (splitting/stutter-stepping), but placing a forcefield counts as only one action. What balances that out for humans is that you also have to concentrate and find the correct moment to do it; if AlphaStar only has to stay under a threshold of counted actions, then Protoss, without things like injects and creep spread, will have the most actions left over.
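That budget argument can be put in numbers. A toy sketch, with a hypothetical counted-actions cap and made-up per-race mechanic costs (none of these figures come from DeepMind):

```python
# Toy illustration of an action-budget argument (all numbers invented):
# under a fixed cap on counted actions per minute, actions spent on
# mandatory mechanics (injects, creep spread, mules, ...) come out of
# the same budget as micro. A race with cheaper mandatory mechanics
# keeps more of the cap free for things like forcefields.

CAP = 300  # hypothetical counted-actions-per-minute limit

mandatory = {"zerg": 120, "terran": 90, "protoss": 60}  # made-up costs

micro_budget = {race: CAP - cost for race, cost in mandatory.items()}
roomiest = max(micro_budget, key=micro_budget.get)  # -> "protoss"
```

With these invented costs, the Protoss agent ends up with the largest leftover budget, which is the shape of the argument above.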
"I'll take [LET IT SNOW] for 800" - Sean Connery (Darrell Hammond)
dbRic1203
Profile Joined July 2019
Germany2655 Posts
July 25 2019 19:40 GMT
#138
Beasty also explains the difference between APM and EAPM in a recent video and how only AlphaStar can pull that off, so these replays are 100% DeepMind agents.
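For reference, EAPM-style measurements usually discount spam: repeated identical commands fired within a fraction of a second. A minimal sketch under that assumption (the 0.25 s window and the action format are illustrative, not Blizzard's definition):

```python
# Minimal APM vs. EAPM sketch, assuming a replay's actions are given as
# (timestamp_seconds, command) pairs. "Effective" actions filter spam:
# a command identical to the previous one, issued within a short window,
# is not counted.

def apm_and_eapm(actions, game_length_s, spam_window_s=0.25):
    minutes = game_length_s / 60
    effective = 0
    prev_time, prev_cmd = None, None
    for t, cmd in actions:
        if prev_cmd is None or cmd != prev_cmd or t - prev_time > spam_window_s:
            effective += 1
        prev_time, prev_cmd = t, cmd
    return len(actions) / minutes, effective / minutes

# A human spam-clicking shows a large APM/EAPM gap; a bot issuing only
# deliberate commands shows almost none.
actions = [(0.0, "move"), (0.1, "move"), (0.2, "move"), (1.0, "attack")]
apm, eapm = apm_and_eapm(actions, game_length_s=60)
```

Here the three rapid-fire "move" commands count as one effective action, so APM is double EAPM, which is the kind of gap that marks human play.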
MaxPax
Ronski
Profile Joined February 2011
Finland266 Posts
July 25 2019 19:44 GMT
#139
Yeah, I watched Beastyqt's stream analyzing the AlphaStar replays and I have changed my mind: it is the real AlphaStar. It still feels very disappointing watching it play Zerg, doing a blind build without any scouting or real decision making.
I am a tank. I am covered head to toe in solid plate mail. I carry a block of metal the size of a 4 door sedan to hide behind. If you see me running - you should too.
Fecalfeast
Profile Joined January 2010
Canada11355 Posts
July 26 2019 00:42 GMT
#140
On July 25 2019 05:55 alexanderzero wrote:
This is turning out just like last time: AlphaStar plays godlike but human-like StarCraft, and everyone shits on it. Oh well, what are you gonna do?

Watching the games themselves I don't see any reason to believe it's AlphaStar?


The biggest tell is that it doesn't use control groups, and also that the profiles playing all have the exact same number of games played at basically the same time (one profile for each race).

I count one person who did anything close to 'shitting' on AlphaStar, so what do you mean by everyone?

On July 26 2019 04:44 Ronski wrote:
Yeah, I watched Beastyqt's stream analyzing the AlphaStar replays and I have changed my mind: it is the real AlphaStar. It still feels very disappointing watching it play Zerg, doing a blind build without any scouting or real decision making.

What are you disappointed by?

I'm glad that an AI wasn't able to immediately solve my favourite game even if it is playing at quite a high level in many aspects already.
ModeratorINFLATE YOUR POST COUNT; PLAY TL MAFIA
necrosexy
Profile Joined March 2011
451 Posts
July 26 2019 03:03 GMT
#141
did they feed the zerg agent stephano replays?
ROOTFayth
Profile Joined January 2004
Canada3351 Posts
July 26 2019 03:28 GMT
#142
You may want to try to understand how AI works, Ronski. It's not magic; it takes time to solve.
Equalizer
Profile Joined April 2010
Canada115 Posts
July 26 2019 03:43 GMT
#143
Chances are that they fed it a lot of replay games to get it started, which would explain why it may seem to behave like a human. And because of the complexity of the game, even after a massive amount of self-play its style can still be heavily biased by its initialization.

Rather than developing more complex play, it is probably prone to getting stuck fine-tuning a local optimum that it was initialized close to.

I think the people that are disappointed were hoping for it to find more unusual but effective strategies. In that respect AlphaGo probably looks more impressive, but SC2 should be a much harder game to achieve that in.
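The "stuck near its initialization" idea is easy to see even outside neural networks. A toy example (my own, not DeepMind's setup): plain gradient descent on a function with a shallow local minimum near the starting point never reaches the deeper one:

```python
# Gradient descent on f(x) = x^4 - 3x^2 + x, which has a shallow local
# minimum near x ~ 1.13 and a deeper global minimum near x ~ -1.31.
# "Initialized" near the shallow basin, plain gradient descent converges
# there and never finds the better optimum -- the same shape of problem
# as a policy fine-tuning the build it was imitation-initialized with.

def grad(x):            # f'(x) = 4x^3 - 6x + 1
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_local = descend(1.0)    # init near the shallow basin
x_global = descend(-1.0)  # init near the deep basin
```

Both runs converge, but only the run that happened to start in the right basin ends up at the better optimum; self-play fine-tuning alone does not cross the barrier between them.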
The person who says it cannot be done, should not interrupt the person doing it.
Ronski
Profile Joined February 2011
Finland266 Posts
July 26 2019 05:16 GMT
#144
On July 26 2019 09:42 Fecalfeast wrote:
On July 25 2019 05:55 alexanderzero wrote:
This is turning out just like last time: AlphaStar plays godlike but human-like StarCraft, and everyone shits on it. Oh well, what are you gonna do?

Watching the games themselves I don't see any reason to believe it's AlphaStar?


The biggest tell is that it doesn't use control groups, and also that the profiles playing all have the exact same number of games played at basically the same time (one profile for each race).

I count one person who did anything close to 'shitting' on AlphaStar, so what do you mean by everyone?

On July 26 2019 04:44 Ronski wrote:
Yeah, I watched Beastyqt's stream analyzing the AlphaStar replays and I have changed my mind: it is the real AlphaStar. It still feels very disappointing watching it play Zerg, doing a blind build without any scouting or real decision making.

What are you disappointed by?

I'm glad that an AI wasn't able to immediately solve my favourite game even if it is playing at quite a high level in many aspects already.

On July 26 2019 12:28 ROOTFayth wrote:
You may want to try to understand how AI works, Ronski. It's not magic; it takes time to solve.


I'm definitely not an expert on AI learning, but watching the Protoss agent play, it was at least scouting. So expecting the same from the Zerg and Terran AIs wouldn't be too much.

I just want to see the AI "playing the game", in all the aspects of the game that is StarCraft, making decisions and reacting to what the opponent is doing. Instead it just does the thing it knows and has no plan B if things go wrong.

I am a tank. I am covered head to toe in solid plate mail. I carry a block of metal the size of a 4 door sedan to hide behind. If you see me running - you should too.
PresenceSc2
Profile Joined February 2011
Australia4032 Posts
July 26 2019 05:59 GMT
#145
Can people keep posting their replays here please. I love looking at these.
Stephano//HerO//TaeJa//Squirtle//Bomber
deacon.frost
Profile Joined February 2013
Czech Republic12129 Posts
Last Edited: 2019-07-26 08:38:27
July 26 2019 08:37 GMT
#146
On July 26 2019 14:16 Ronski wrote:
On July 26 2019 09:42 Fecalfeast wrote:
On July 25 2019 05:55 alexanderzero wrote:
This is turning out just like last time: AlphaStar plays godlike but human-like StarCraft, and everyone shits on it. Oh well, what are you gonna do?

Watching the games themselves I don't see any reason to believe it's AlphaStar?


The biggest tell is that it doesn't use control groups, and also that the profiles playing all have the exact same number of games played at basically the same time (one profile for each race).

I count one person who did anything close to 'shitting' on AlphaStar, so what do you mean by everyone?

On July 26 2019 04:44 Ronski wrote:
Yeah, I watched Beastyqt's stream analyzing the AlphaStar replays and I have changed my mind: it is the real AlphaStar. It still feels very disappointing watching it play Zerg, doing a blind build without any scouting or real decision making.

What are you disappointed by?

I'm glad that an AI wasn't able to immediately solve my favourite game even if it is playing at quite a high level in many aspects already.

On July 26 2019 12:28 ROOTFayth wrote:
You may want to try to understand how AI works, Ronski. It's not magic; it takes time to solve.

I'm definitely not an expert on AI learning, but watching the Protoss agent play, it was at least scouting. So expecting the same from the Zerg and Terran AIs wouldn't be too much.

I just want to see the AI "playing the game", in all the aspects of the game that is StarCraft, making decisions and reacting to what the opponent is doing. Instead it just does the thing it knows and has no plan B if things go wrong.


Why would you scout as Terran? And in the game I saw, DeepMind was running through the whole map with a reaper, so it's not like it wasn't scouting at all. But generally, why? It appears DeepMind went for mass tank/marine with a wall (sometimes questionable). This holds most of the Protoss builds except the BS builds like proxy tempest. It would be interesting to see what it would do against a proxy tempest, but generally speaking this holds any ground attack as long as you keep a scan for DTs (or build some turrets).

If your plan covers most of the early-game shenanigans anyway, the scout isn't needed per se. It helps to lower the cost of the defense, but generally it's not that much needed.

You must remember that it played maybe millions of games where the most successful AIs were battling each other. In the end it settles on builds/units which solve most of the issues, so the AI then only looks for things that need a different reaction. If everything it knows in the first 5 minutes can be solved with "more roaches, drones, ovies", then it will keep using that build until somebody shows it that, well, not everything is solved with roaches, you know?

The fact that it has a high winrate without scouting just proves the point.

+ Show Spoiler +
I am nowhere near a high-level player, but I don't scout in TvP. Why would I? I build marines and tanks, which cover everything except proxy tempest. And unless I stumble upon the proxy location I am not going to scout what exactly the Protoss is proxying anyway (and in TvZ I scout just to know if it's safe to build a CC on the lowground). Any good player will beat me because I play a very greedy and very, uhm, specific style, but I survive every early-game shenanigan with a worker lead (what happens then is usually that I royally screw up).
I imagine France should be able to take this unless Lilbow is busy practicing for Starcraft III. | KadaverBB is my fairy ban mother.
Muliphein
Profile Joined July 2019
49 Posts
July 27 2019 09:18 GMT
#147
I have only seen a few games, but I understand why people are a bit disappointed. The AI isn't doing stuff that is insightful to us, at least not at face value. But the main point here is the winrate. It doesn't matter if the AI does stupid stuff, like Terran building placement.

This AI is a neural network, so it doesn't reason. It doesn't string together thoughts. It doesn't do deductive reasoning. Yet it has a very high winrate, something that only a while ago people thought was impossible.

But the AI does actually give insights into the game. First of all, it is an AI that crunches numbers in a cold, dispassionate fashion. It doesn't care if it wins ugly. As a human, you want to play out every battle properly and win as decisively as possible. The AI does not. When it sends in its units and sees the battle is won, it will no longer prioritize microing that battle when there are other things that improve winrate more. We saw this a lot in Go, where the AI started to play 'sub-optimally' once it was in a winning position. A neural network playing strangely when the game is already won is perfectly understandable given the nature of the AI: the network simply isn't weighted to win a won game harder and faster, because spending capacity on those scenarios would take it away from nodes that matter more to the outcome of the game.
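That objective can be made concrete. A toy decision rule (illustrative, not AlphaStar's actual value head): an agent maximizing win probability prefers the safer "ugly" move over a flashier one that would win by a larger margin, because margin never enters the objective:

```python
# Toy win-probability objective (all numbers invented): each candidate
# action has an estimated probability of winning and a margin of
# victory. An agent that only maximizes P(win) picks the safe option
# even though the other would win "harder" -- which is why such agents
# happily play sub-optimal-looking moves in won positions.

moves = {
    "clean_up_battle":  {"p_win": 0.95, "margin": 50},
    "ignore_and_macro": {"p_win": 0.99, "margin": 5},
}

best = max(moves, key=lambda m: moves[m]["p_win"])  # margin is ignored
```

The margin field is never read when choosing, which mirrors the Go observation above: winning by half a point and winning by fifty points score identically.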

But this AI definitely points at some things where humans may be completely wrong. Like scouting. People keep saying that scouting is really important, but the AI seems to disagree. This means that in the millions of AI vs AI games, scouting didn't increase the AI's winrate. I don't think this is because the deep neural network is incapable of learning an architecture where seeing an opponent's tech tree completely changes the units being built; we do see the Protoss AI do this.

Apparently the best way to play the game, for the AI, is to use a build that plays well vs most enemy builds and perfect that.

Personally, I think humans are too obsessed with the meta, with mindgames, with scouting, with guessing what their opponent is doing. You really see fashion trends in which builds are popular, and people seem to suggest you are better off playing fashionable builds. But game-theory-wise you should actually play out-of-fashion builds. So this whole system humans made up around builds, coming up with new ones and beating the meta, is completely artificial. It is humans adding a layer on top of the game because their neurology forces them to do so.
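The game-theory point can be sketched with a toy payoff matrix (win probabilities invented for illustration): if the ladder predictably plays the meta build, the best response is whatever beats it, fashionable or not:

```python
# Toy build-order payoff matrix: win probability of the row build
# against the column build (numbers made up). When the opponent's build
# is predictable, the best response is the row that scores highest
# against it -- here the unfashionable one, not the meta build itself.

p_win = {
    ("meta",     "meta"):     0.50,
    ("meta",     "off_meta"): 0.60,
    ("off_meta", "meta"):     0.70,
    ("off_meta", "off_meta"): 0.50,
}

def best_response(opponent_build):
    builds = {row for row, _ in p_win}
    return max(builds, key=lambda b: p_win[(b, opponent_build)])

counter = best_response("meta")  # the out-of-fashion build
```

Once enough players switch to the counter, the best response shifts again, which is exactly the fashion cycle described above.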

An AI will be free of this limitation.

Now I think that an AI will have a huge advantage in not getting tired, playing consistently, not being emotional, having an iron concentration.

I think the main achievement is that they can get neural networks to converge at all, so that they are able to play this game at an extremely high level. That their play is filled with mistakes and questionable behavior only shows how much more improvement is still possible. For example, building placement for Terran is a problem that neural networks just cannot handle; it doesn't generalize. I know it is easier in SC2, but in SC:BW no one can really reason about which walls are good walls. You have to use trial and error. There is no obvious general rule the human brain was able to pick up on. Same with Protoss PvZ building placement: you memorize it for each starting location on each map. The AI will have to do the same thing.

Now that they have a core neural network that works really well, they could try to add layers, either neural networks, hardcoded logic, or other machine learning methods, to guide the network. But I know DeepMind likes to have a single neural net do unguided learning, so I think they will try that instead. This is why they went from their hybrid AlphaGo to the purer AlphaZero.

I also think people underestimate the decision making required in deciding whether a battle can be won, and then winning it the way it saw it was winnable, especially when you consider that the solution had to be convergeable. I think this is the main problem to solve in RTS games: when do you take a battle and when do you avoid it. Making as many units as possible, and deciding which units to build, are almost trivial decisions next to that.

I think that as more progress is made, we will see AI that we would consider 'eerie' the same way we have in Go or Chess. And one of the reasons these AIs don't look that strong is that they need to play with human limitations; the one that beat MaNa and TLO did not. But I agree that right now we are not there yet. People seem to be missing the point that perfect SC2 looks very different from what people have tried to achieve.
Ronski
Profile Joined February 2011
Finland266 Posts
July 27 2019 10:18 GMT
#148
On July 27 2019 18:18 Muliphein wrote:
I have only seen a few games, but I understand why people are a bit disappointed. The AI isn't doing stuff that is insightful to us, at least not at face value. But the main point here is the winrate. It doesn't matter if the AI does stupid stuff, like Terran building placement.

This AI is a neural network, so it doesn't reason. It doesn't string together thoughts. It doesn't do deductive reasoning. Yet it has a very high winrate, something that only a while ago people thought was impossible.

But the AI does actually give insights into the game. First of all, it is an AI that crunches numbers in a cold, dispassionate fashion. It doesn't care if it wins ugly. As a human, you want to play out every battle properly and win as decisively as possible. The AI does not. When it sends in its units and sees the battle is won, it will no longer prioritize microing that battle when there are other things that improve winrate more. We saw this a lot in Go, where the AI started to play 'sub-optimally' once it was in a winning position. A neural network playing strangely when the game is already won is perfectly understandable given the nature of the AI: the network simply isn't weighted to win a won game harder and faster, because spending capacity on those scenarios would take it away from nodes that matter more to the outcome of the game.

But this AI definitely points at some things where humans may be completely wrong. Like scouting. People keep saying that scouting is really important, but the AI seems to disagree. This means that in the millions of AI vs AI games, scouting didn't increase the AI's winrate. I don't think this is because the deep neural network is incapable of learning an architecture where seeing an opponent's tech tree completely changes the units being built; we do see the Protoss AI do this.

Apparently the best way to play the game, for the AI, is to use a build that plays well vs most enemy builds and perfect that.

Personally, I think humans are too obsessed with the meta, with mindgames, with scouting, with guessing what their opponent is doing. You really see fashion trends in which builds are popular, and people seem to suggest you are better off playing fashionable builds. But game-theory-wise you should actually play out-of-fashion builds. So this whole system humans made up around builds, coming up with new ones and beating the meta, is completely artificial. It is humans adding a layer on top of the game because their neurology forces them to do so.

An AI will be free of this limitation.

Now I think that an AI will have a huge advantage in not getting tired, playing consistently, not being emotional, having an iron concentration.

I think the main achievement is that they can get neural networks to converge at all, so that they are able to play this game at an extremely high level. That their play is filled with mistakes and questionable behavior only shows how much more improvement is still possible. For example, building placement for Terran is a problem that neural networks just cannot handle; it doesn't generalize. I know it is easier in SC2, but in SC:BW no one can really reason about which walls are good walls. You have to use trial and error. There is no obvious general rule the human brain was able to pick up on. Same with Protoss PvZ building placement: you memorize it for each starting location on each map. The AI will have to do the same thing.

Now that they have a core neural network that works really well, they could try to add layers, either neural networks, hardcoded logic, or other machine learning methods, to guide the network. But I know DeepMind likes to have a single neural net do unguided learning, so I think they will try that instead. This is why they went from their hybrid AlphaGo to the purer AlphaZero.

I also think people underestimate the decision making required in deciding whether a battle can be won, and then winning it the way it saw it was winnable, especially when you consider that the solution had to be convergeable. I think this is the main problem to solve in RTS games: when do you take a battle and when do you avoid it. Making as many units as possible, and deciding which units to build, are almost trivial decisions next to that.

I think that as more progress is made, we will see AI that we would consider 'eerie' the same way we have in Go or Chess. And one of the reasons these AIs don't look that strong is that they need to play with human limitations; the one that beat MaNa and TLO did not. But I agree that right now we are not there yet. People seem to be missing the point that perfect SC2 looks very different from what people have tried to achieve.


I think it's completely possible that the AI will just hit an iron wall at 6k MMR, where it can no longer win because of its limited scouting. I'm hoping that's the case.
I am a tank. I am covered head to toe in solid plate mail. I carry a block of metal the size of a 4 door sedan to hide behind. If you see me running - you should too.
Kenny_mk1
Profile Joined November 2016
31 Posts
July 27 2019 10:58 GMT
#149
On July 27 2019 18:18 Muliphein wrote:
I have only seen a few games, but I understand why people are a bit disappointed. The AI isn't doing stuff that is insightful to us. At least not on face value. But the main point here is the winrate. It doesn't matter if the AI does stupid stuff, like terran building placement.

This AI is a neural network so that means it doesn't reason. It doesn't string together thoughts. It doesn't do deductive reasoning. Yet it has a very high winrate. Something people only a while ago thought was impossible.

But the AI does actually give insights in the game. First of all, it is an AI that crunches numbers in a cold unpassionate fashion. It doesn't care if it wins ugly. As a human, you want to play out every battle properly and win as decisively as possible. The AI does not. When it sends in it's units and sees the battle is won, it will no longer prioritize microing that battle when there are other things that improve winrate more. We saw this a lot in Go, where the AI started to play 'sub-optimal' when it was in a winning position. A neural network playing strangely when it is already won is perfectly understandable given the nature of the AI. It means that either the nodes aren't weighted properly for winning a won game harder and faster. Using your weights for these scenarios would limit the number of options you have to get proper weights for nodes that are more important to the outcome of the game.

But this AI definitely indicates some things where humans may be completely wrong. Like scouting. People keep saying that scouting is really important. But the AI seems to disagree. This means that in the millions of AI vs Ai games, scouting doesn't increase AI winrate. I think this is not because the deep neural network is incapable of getting an architecture where seeing an opponent's tech tree will completely change the units being build. We do see the Protoss AI do this.

Apparently the best way to play the game, for the AI, is to use a build that plays well vs most enemy builds and perfect that.

Personally, I think humans are too obsessed with what the meta is, with mindgames, with scouting, with guessing what their opponent is doing. You really see fashion trends in which builds are popular, and people seem to suggest you are better off playing fashionable builds. But game-theoretically you should actually play out-of-fashion builds. So this whole system humans made up around builds, coming up with new ones, and beating the meta is completely artificial. It is humans adding a layer on top of the game because their neurology forces them to do so.

An AI will be free of this limitation.

Now, I do think an AI will have a huge advantage in not getting tired, playing consistently, not being emotional, and having iron concentration.

I think the main achievement is that they got neural networks to converge at all, so that they play this game at an extremely high level. That their play is filled with mistakes and questionable behavior only shows how much improvement is still possible. For example, building placement for Terran is a problem that neural networks just cannot handle; it doesn't generalize. I know it is easier in SC2, but in SC BW, no one can really reason about which walls are good walls. You have to use trial and error. There is no obvious general rule the human brain was able to pick up on. Same with Protoss building placement in PvZ: you memorize it for each starting location on each map. The AI will have to do the same thing.

Now that they have a core neural network that works really well, they could try to add layers, either neural networks or hardcoded logic or other machine learning methods, to guide it. But I know DeepMind prefers to have a single neural net do unguided learning, so I think they will try that instead. This is why they went from the hybrid AlphaGo to the purer AlphaZero.

I also think people underestimate the decision making required in judging whether a battle can be won, and then winning it the way the network saw it was winnable, especially when you consider that the solution also had to be something training could converge on. I think this is the main problem to solve in RTS games: when do you take a battle and when do you avoid it? Making as many units as possible and deciding which units to build are almost trivial decisions next to that.

I think that as more progress is made, we will see AI that we consider 'eerie' in the same way we have in Go or Chess. One of the reasons these AIs don't look that strong is that they have to play with human limitations; the one that beat MaNa and TLO did not. I agree that right now we are not there yet. But people seem to be missing the point that perfect SC2 looks very different from what people have tried to achieve.



Humans don't scout, follow the meta, and use builds because of neurology, but because they determined those were the fastest ways to hit a timing attack with certain upgrades, and that having the earliest composition meeting those criteria lets them do well, unless the opponent's build order exploits a weakness in theirs. Some build orders shouldn't have such weaknesses, and are designed to react to cheese (16-marine drop, I guess?). Also, without scouting you might get good stats on ladder, but an offline tourney is another story. Nevertheless, good points on the rest of the post.
MrFreeman
Profile Joined January 2015
207 Posts
July 27 2019 16:49 GMT
#150
A bit disappointing. A lot of ML approaches can find unexpected solutions; instead, these agents just found the best working strategies in the beta and don't even adjust much to what is happening, e.g. building Roaches against Mutas and Voids, building Banshees and Tanks against Ravens, and building Phoenix against Stalkers.
Still, it is very impressive how solid the gameplay is for a bot.
Jan1997
Profile Blog Joined April 2013
Norway671 Posts
July 27 2019 17:27 GMT
#151
Might as well throw in that in order for the bot to be "perfect" it needs to take into account that certain players have certain styles that they stick to. Like, if you queue into player Z and you know player Z is an incredibly cheesy player, you adjust to that. Or you queue into player Y and you know player Y runs a rotation of builds based on long macro games, etc.

This only works in GM though, below that you don't really see the same players over and over again.
Do something today that your future self will be thankful for.
Kalera
Profile Joined January 2018
United States338 Posts
July 27 2019 18:12 GMT
#152
I suspect the AI would turn out very different if it retained metagame knowledge against its opponents. The way it learns right now, there's no disadvantage to using the same strategy all the time.
Muliphein
Profile Joined July 2019
49 Posts
Last Edited: 2019-07-27 19:51:28
July 27 2019 19:41 GMT
#153
On July 27 2019 19:58 Kenny_mk1 wrote:
Humans don't scout, follow the meta, and use builds because of neurology,


Everything humans do is because of their neurology. And no, I am not equivocating now. The point is that the AI doesn't have it. It just crunches millions of huge arrays of data until it finds an array that apparently wins a lot, for whatever reason.

...but because they determined those were the fastest ways to hit a timing attack with certain upgrades, and that having the earliest composition meeting those criteria lets them do well, unless the opponent's build order exploits a weakness in theirs. Some build orders shouldn't have such weaknesses, and are designed to react to cheese (16-marine drop, I guess?). Also, without scouting you might get good stats on ladder, but an offline tourney is another story. Nevertheless, good points on the rest of the post.


But you are also playing against a human. When humans play against humans, they cannot ignore neurology/psychology/mindgames/meta, because their opponent isn't ignoring them, and it is questionable whether a human could even play that way.

No human would take the approach of playing one build against everyone, always, and fine-tuning it. Humans are obsessed with strategy. We saw this when people debated automation in SC2. We see it in the fact that there even is a meta. Having a meta is irrational in itself; fashionable builds are a property of the community, not of the game. DeepMind's work so far may be showing us that all the energy human players put into builds and strategies and mindgames could be a waste of time and effort.

This is perfectly shown in the comment saying that "for an AI to play perfect, it needs to take into consideration the style of the player." Why? The AI itself has no style. Why would it be a good thing for an AI to learn and study the style of a player? This is where humans go wrong all the time, tricking themselves and overthinking things, leading to the wrong decision. Why do you want the AI to copy that? Maybe if you had perfect information about all the games its human opponents ever played. But that is not available, and the computational cost compared to the gain is clearly not worth it. The strength of the AI is exactly that it doesn't engage in mindgames or use generally subpar play because it thinks that will be superior against this specific player. There are a lot of cases where making a mistake can help you win quicker, because your opponent is making worse mistakes. But it is folly to build an AI that makes marginal plays based on a calculation that its opponent won't be able to exploit them. Maybe this is necessary in poker; there is a poker AI now that beats top players. But RTS is not poker, and there is no reason for AI researchers to make their AI play weak moves that might lead to quicker victories against weak players.

You may not like the way these AIs play, but they show very well what is most crucial to winning games.

As for whether this AI approach has hit an iron wall: I think these bots must have converged to their maximum MMR by DeepMind's internal way of measuring it, otherwise they wouldn't put them out there. And now they are trying to see where the strong and weak points of their AIs are, so they know how to change the neural net architecture to get better convergence. But there is no fundamental limitation in AI that says an AI cannot learn building placement, or learn to build Hydras or Mutas against Void Rays, etc.

Yes, a neural network has inherent limitations. It is impossible to solve many different problems perfectly with the same neural net. If you go the neural net route, you accept that there will be gaps in the generalizations the network makes, because of how good the net is in the general case. But there is no reason why you cannot have a hybrid approach.

In theory, you could have 10,000 neural networks, all trained to play the game at a very high level, but all quite different. For every game state, the AI could play out the next 5 seconds of the game (or the entire game, if you had infinite computation) using every neural network against every other neural network, then pick the best-performing network from those simulations and use its actions against the actual human.
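
In toy code, the idea would look something like this. Everything here is a made-up stand-in (three trivial "policies" and a crude resource-race game, nothing from DeepMind's actual system); the selection rule shown picks the policy with the best worst-case rollout result.

```python
# Toy stand-ins for trained policy networks: each maps a game state to an action.
# In the real proposal each would be a separately trained neural net.
def aggressive(state):  return "attack"
def defensive(state):   return "defend"
def economic(state):    return "expand"

POLICIES = [aggressive, defensive, economic]

def simulate(policy, opponent, state, steps=5):
    """Play the next few 'seconds' of a crude toy game and return our margin.
    Attacking trades armies, expanding grows income, defending blunts attacks."""
    us, them = dict(state["us"]), dict(state["them"])
    for _ in range(steps):
        a = policy({"us": us, "them": them})
        b = opponent({"us": them, "them": us})
        if a == "attack":
            them["army"] -= us["army"] * (0.5 if b == "defend" else 1.0)
        if b == "attack":
            us["army"] -= them["army"] * (0.5 if a == "defend" else 1.0)
        if a == "expand": us["income"] += 1
        if b == "expand": them["income"] += 1
        us["army"] += us["income"]
        them["army"] += them["income"]
    return us["army"] - them["army"]

def best_policy(state):
    """Roll out every candidate policy against every candidate opponent,
    then pick the one with the best worst-case outcome."""
    return max(POLICIES,
               key=lambda p: min(simulate(p, o, state) for o in POLICIES))

state = {"us": {"army": 10, "income": 2}, "them": {"army": 10, "income": 2}}
chosen = best_policy(state)
print(chosen.__name__)
```

With real networks the expensive part is the rollouts, which is exactly why the post hedges with "if you have infinite computation".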

It should also be possible to coarse-grain the game and get a quick approximation of its outcome. It is clear that the Terran AI has a problem against air units. It would be easy to very quickly play out the game to the end from the current game state.

In fact, the neural network doesn't even do this. It doesn't explicitly keep track of how many units the enemy has. It also doesn't explicitly decide whether it has to defend or attack. This could in principle be programmed in. DeepMind is probably not interested in that, as they want to develop AI that can learn unassisted, without prior knowledge and without human intervention. But the AlphaGo that beat Lee Sedol had three different neural networks doing different things, not one.




If you are disappointed that the AI plays one build and doesn't scout while winning a lot, you are not disappointed in the AI. It does the only thing it cares about: winning. You are actually disappointed in the nature of SC2, because the AI shows the optimal way to play is not to scout and not to strategize.
AttackZerg
Profile Blog Joined January 2003
United States7454 Posts
July 27 2019 21:57 GMT
#154
I am really enjoying the thoughtful posts here. Thanks for the read.

I will disagree with the poster above me, the last statement.

Since no professional SC2 players are managing the project and the bots are clearly levels below our top human players.... I think calling any of AlphaStar's choices optimal is both premature and wrong.

It is unique but.... it is far from the stage where we can make that claim.

Different topic - It seems to me that the apm limit has had a detrimental effect that is manifest in scouting.

The zerg spreads creep poorly, often injects badly, and has no concept of overlord placement in the early midgame; the strategies imho are the incestuous offspring of the decision to speed-limit.

I see a 350-550 APM range for most top humans. I think at a certain level, if you aren't 300+.... you are unable to properly manage zerg.


AlphaGo and AlphaZero were not required to lower their skillsets to match the playerbase.

I think they have catered too much to us and have thus moved away from their most historic achievements.

First make a bot that can beat Serral, Maru, and Classic. Then consider the ethics and fairness of the approach.

First they smashed Stockfish with very unfair parameters; then months later (maybe a full year) they released a full match with a more level playing field.... even better victories for AlphaZero (almost none as Black) and even some losses.

I am not an expert in AI or anything else, just watching them treat this playerbase very differently, and not shockingly, this different treatment has yielded a less historic and less dominant AI.

No human will ever beat AlphaGo or AlphaZero. No reason to limit the game approach by human limitation. Like I said, add that nonsense later.

God mode full speed zerg please. 8)

(I am thankful for this great project and the people behind it. Cheers)


NinjaNight
Profile Joined January 2018
428 Posts
July 27 2019 22:07 GMT
#155
On July 27 2019 18:18 Muliphein wrote:


But this AI definitely indicates some things where humans may be completely wrong. Like scouting. People keep saying that scouting is really important. But the AI seems to disagree: in the millions of AI vs AI training games, scouting apparently didn't increase the AI's winrate. I don't think this is because the deep neural network is incapable of learning an architecture where seeing an opponent's tech tree completely changes the units being built. We do see the Protoss AI do this.



? You mentioned yourself it doesn't reason or deduce because it's a number-crunching neural network. So naturally it's not going to be able to take advantage of scouting, which requires high-level reasoning to be useful. Of course scouting is not going to increase its winrate.

It's also still far below pro level, and it still has very little intelligence and mostly relies on efficient mechanics. It's not telling us anything yet about how StarCraft should be played.
Xain0n
Profile Joined November 2018
Italy3963 Posts
July 27 2019 23:59 GMT
#156
On July 28 2019 06:57 AttackZerg wrote:

That's because Chess and Go don't have huge mechanical requirements to be played, unlike SC2; we already had a glimpse of a machine capable of beating top players without limitations, AlphaStalker (which indeed DID have limitations).

I don't think it would be worth it to spend time and money on a neural network that beats humans by macroing with perfect timing while microing every single unit on the map with immaculate precision, all at once.
Kenny_mk1
Profile Joined November 2016
31 Posts
July 28 2019 00:30 GMT
#157
Many players do the same build order over and over again, mostly in Brood War, and fine-tune it. Also mostly in Brood War, there was a poker aspect with CC-first, but with some build orders it just changed the timings at which things were done.

The meta in which CC-first emerged was such a meta because everyone played safe, so it was low risk to do CC-first. Launch a reinforcement-learning AI on a ladder where all players open FE, and it should open CC-first, because that's what's best.

Not studying what to expect from a player is just a part of your chance to win that you ignore, and it is also what has led to superb games in GSL. Also, when the AI gets better, the best chance for a human player to defeat AlphaStar will probably be to abuse its weaknesses. If AlphaStar could ingest pro replays to learn, it would know Trap makes good use of Oracles and wouldn't need to scout to build defenses, for example (why not? Doesn't it play thousands of games per hour? But a very, very different approach, I guess).

Flash fine-tuned his 5-rax +1 with a mech transition and won with it for over 2 years (maybe more). Then he fucked up his wrists and fine-tuned 1-1-1 to have a less intensive build. Every game was different, but what was strong was the variation he could produce. TvT in LotV is pretty much the Raven build these days.

Sorry, I was on my phone, so the post isn't organized.
AttackZerg
Profile Blog Joined January 2003
United States7454 Posts
July 28 2019 00:30 GMT
#158
On July 28 2019 08:59 Xain0n wrote:

That's because Chess and Go don't have huge mechanical requirements to be played, unlike SC2; we already had a glimpse of a machine capable of beating top players without limitations, AlphaStalker (which indeed DID have limitations).

I don't think it would be worth it to spend time and money on a neural network that beats humans by macroing with perfect timing while microing every single unit on the map with immaculate precision, all at once.


I think defining and limiting the AI based on human capabilities creates a worse network and is counter to the approach they used in other games.

In chess and go, the mechanics are internal. They didn't limit the depth of thinking to the 7 to 11 short-term objects a human can hold simultaneously.

In those genres, the goal was world dominating AI.

They beat non-top-tier players on a specific map. Very impressive, but not near the accomplishment of beating Stockfish, even in that first unfair match.

They gave us a glimpse of True AI at the start.

In chess, the only game left for AlphaZero is against other machines. It is fine if that becomes true of this sport also.

I love the project but am not convinced or in love with their approach to sc2.

In chess and go, they gave no fucks about the respective communities and cultures - they wanted to rip down and conquer.

It seems they are less focused on total domination in this genre.

I accept I may be wrong in both understanding and/or communicating. Still, I just want to see a god zerg AI....

Been waiting since 99. Still waiting.

Inrau
Profile Joined June 2019
35 Posts
July 28 2019 00:42 GMT
#159
The agents' winrates would plummet if players could see they were facing AlphaStar and had a few weeks of constant play against it.
terribleplayer1
Profile Joined July 2018
95 Posts
Last Edited: 2019-07-28 03:28:19
July 28 2019 03:26 GMT
#160
On July 28 2019 09:42 Inrau wrote:
The agents winrates would plummet if players saw ALPHASTAR and players had a few weeks constantly playing.


Yeah, it's being helped a ton by the barcode name and the insane mechanics. I can see it losing a ton of games to master leaguers if it does the same build time and time again, which is what it seems to be doing.

I think each agent just homes in on one specific playstyle and sticks with it. If players know which agent they're playing, they're going to have insane winrates against it despite its micro/macro advantage, unless an agent can actually learn to play the game rather than just execute one generic strong build: learn when to scout, what to scout, and how to react. Otherwise its only chance is to randomize which agent is playing, and that is still probably not going to be enough against the likes of Serral/Maru.

The APM limitations have got to stay, otherwise it's just not interesting at all; it simply breaks the game.
skdsk
Profile Joined February 2019
138 Posts
Last Edited: 2019-07-28 09:03:03
July 28 2019 09:01 GMT
#161
On July 28 2019 09:30 AttackZerg wrote:

I think defining AI and limiting it based upon human capabilities creates a worse network and is counter to the approach they used in other games.

In chess and go, the mechanics are internal. They didn't limit the depth of thinking to the 7 to 11 short term objects a human can hold simultaneously.

In those genres, the goal was world dominating AI.

They beat non-top-tier players on a specific map. Very impressive but not near the accomplishments of beating Stockfish even in that first unfair match.

They gave us a glimpse of True AI at the start.

In chess the only game for alphazero is other machines. It is fine if that becomes true of this sport also.

I love the project but am not convinced or in love with their approach to sc2.

In chess and go, they gave no fucks about the respective communities and cultures - they wanted to rip down and conquer.

It seems they are less focused on total domination in this genre.

I accept, I may be wrong in both understanding and or communicating. Still just want to see a god zerg AI....

Been waiting since 99. Still waiting.


Because you don't need some super AI to have insane mechanics and beat everyone. If you didn't limit its APM, it could just mass MMM and murder everyone with insane micro and 5000 APM.

What they are trying to do is create an AI which "OUTPLAYS" human opponents with similar mechanics.
Acrofales
Profile Joined August 2010
Spain17979 Posts
Last Edited: 2019-07-28 09:55:47
July 28 2019 09:50 GMT
#162
On July 28 2019 09:30 AttackZerg wrote:

I think defining AI and limiting it based upon human capabilities creates a worse network and is counter to the approach they used in other games.

In chess and go, the mechanics are internal. They didn't limit the depth of thinking to the 7 to 11 short term objects a human can hold simultaneously.

In those genres, the goal was world dominating AI.

They beat non-top-tier players on a specific map. Very impressive but not near the accomplishments of beating Stockfish even in that first unfair match.

They gave us a glimpse of True AI at the start.

In chess the only game for alphazero is other machines. It is fine if that becomes true of this sport also.

I love the project but am not convinced or in love with their approach to sc2.

In chess and go, they gave no fucks about the respective communities and cultures - they wanted to rip down and conquer.

It seems they are less focused on total domination in this genre.

I accept, I may be wrong in both understanding and or communicating. Still just want to see a god zerg AI....

Been waiting since 99. Still waiting.



I disagree about "internal" mechanics for Go or Chess. That is simply part of the "intelligence" needed to play those games well.

Moving your hands 5000 times a minute with unerring precision isn't part of the "intelligence" needed to play StarCraft; it's a limitation of the human body more than the human mind. Thus, limiting the APM makes it more interesting as a challenge in creating "artificial intelligence", rather than just an artificial SC2 champion. In Go and Chess, the benchmark for intelligently playing those games was simply to beat the best human opponents. In SC2, the benchmark for intelligently playing the game is to beat the best human opponents through a similarly restrictive "interface".

You wouldn't say an AI had solved rugby if it was built like a tank and had an internal compartment to hide the ball, so that all it had to do was obtain the ball and then drive, invulnerable, to the back line. It'd be invincible, but not in any interesting way.



Muliphein
Profile Joined July 2019
49 Posts
July 28 2019 10:27 GMT
#163
The ability to play out an endgame whose winner even top masters can only predict is called 'mechanics' in chess. And AIs, whether traditional or deep neural networks, are really good at this.

Calculation is also kind of the analogue of (e)APM.

The ability to micro a battle is a difficult AI problem. Just ask the people programming BW AIs using BWAPI. What they do is actually simulate the outcome of a fight and then decide if their side is winning; if it is, they keep attacking. This is not a very good solution, but it is a really hard AI problem to crack.
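
The simulate-then-decide approach those bots use looks roughly like this. This is a deliberately crude attrition sketch, not real BWAPI code; the unit stats and the engage rule are made-up illustration values.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    hp: float
    dps: float   # damage per second

def simulate_fight(ours, theirs, dt=0.1, max_t=60.0):
    """Crude attrition model: each side's total DPS is focused on the other
    side's first unit. Returns (our survivors, their survivors)."""
    ours = [Unit(u.hp, u.dps) for u in ours]      # work on copies
    theirs = [Unit(u.hp, u.dps) for u in theirs]
    t = 0.0
    while ours and theirs and t < max_t:
        theirs[0].hp -= sum(u.dps for u in ours) * dt
        ours[0].hp -= sum(u.dps for u in theirs) * dt
        ours = [u for u in ours if u.hp > 0]
        theirs = [u for u in theirs if u.hp > 0]
        t += dt
    return len(ours), len(theirs)

def should_engage(ours, theirs):
    """Engage only if the simulation says we wipe them with units to spare."""
    left, enemy_left = simulate_fight(ours, theirs)
    return enemy_left == 0 and left > 0

# Illustrative numbers only, not actual game stats.
marines = [Unit(hp=45, dps=9.8) for _ in range(10)]
zerglings = [Unit(hp=35, dps=10.0) for _ in range(8)]
print(should_engage(marines, zerglings))
```

The weakness the post points at is visible even here: the model says nothing about positioning, retreating mid-fight, or reinforcements, which is exactly the part that makes battle micro a hard AI problem.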

Problem-wise, having an AI build Corruptors when the Protoss has a big air army is not a hard AI problem.

I understand that the way these AIs play is not optimal in the phase space of all possible plays. What I mean is that it is the optimal solution found when converging the matrix of weights. It is obvious that it converged to weights that win a lot of games, and that it converged to these weights and not to others. So in that sense, this is the optimum machine learning finds.

In the phase space of all playstyles, some good playstyles will be islands surrounded by a vast sea of bad playstyles. Machine learning has huge difficulty finding these islands, because while it is out at sea there is no signal suggesting the islands exist, let alone where they are, so a gradient-descent-type algorithm cannot converge on them.
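
A one-dimensional toy makes this concrete. The "winrate landscape" below is entirely invented: a broad hill (an easy, decent playstyle) and a narrow, taller spike far away (the isolated island). Greedy local search, standing in for gradient ascent, finds the hill and never the island.

```python
import math
import random

def winrate(x):
    """Made-up 1-D 'playstyle space': a broad hill at x = 2 and a
    narrow, taller spike (the island) at x = 9."""
    broad = 0.6 * math.exp(-(x - 2.0) ** 2)
    island = 0.9 * math.exp(-100.0 * (x - 9.0) ** 2)
    return broad + island

def hill_climb(x, step=0.05, iters=2000):
    """Greedy local search: accept a random step only if it improves winrate."""
    for _ in range(iters):
        candidate = x + random.choice([-step, step])
        if winrate(candidate) > winrate(x):
            x = candidate
    return x

random.seed(0)
found = hill_climb(x=0.0)
# The search settles on the broad hill near x = 2. Between the hill and the
# island the landscape is essentially flat, so nothing ever pulls it toward
# the better playstyle at x = 9.
print(round(found, 2), round(winrate(found), 2))
```

Population-based training with diverse exploiter agents is one way around this, which is presumably part of why DeepMind trains a whole league rather than a single agent.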

An AI is by definition a tank playing rugby, an F1 car running the 400m sprint, a supercomputer playing chess. The challenge is in building the tank, building the F1 car, building the AI. Given enough time, technology, and resources, a machine beating any human is a foregone conclusion.

If you don't understand this, it is not a matter of agreeing or disagreeing; listen to or read Kasparov on Deep Blue and computer chess.
Muliphein
Profile Joined July 2019
49 Posts
Last Edited: 2019-07-28 10:42:39
July 28 2019 10:42 GMT
#164
On July 28 2019 07:07 NinjaNight wrote:
? You mentioned yourself it doesn't reason or deduce because it's a number crunching neural network. So naturally it's not going to be able to take advantage of scouting which requires high level reasoning to be useful. Of course scouting is not going to increase its winrate.


This is not entirely true. They trained the neural net to first copy top human players. Most of them send out a worker. So when an initial, randomly generated neural net happens to send a worker out onto the map, it more closely resembles the replays it is trying to copy, so this neural net will be selected, and the algorithm will adjust the weights that caused this desired trait even further in the same direction. But when the AI sends out two workers, it is not matching the replays as well as when it sends out one worker. And when the worker reaches the enemy base, it also more accurately matches the replay.
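The imitation idea described above can be sketched as maximum-likelihood behavior cloning on a single decision; the dataset and the 9-to-1 split are made up for illustration:

```python
from collections import Counter

# Toy replay dataset: at the relevant game step, most pro replays send exactly
# one scouting worker ("scout"); a few do not ("no_scout").
replays = ["scout"] * 9 + ["no_scout"] * 1

def fit_imitation_policy(actions: list[str]) -> dict[str, float]:
    """Maximum-likelihood imitation: reproduce the empirical action frequencies."""
    counts = Counter(actions)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

policy = fit_imitation_policy(replays)
# The cloned policy sends one scouting worker ~90% of the time because that is
# what it saw in the replays, not because it "knows" why scouting helps.
```

The point of the post carries through: the behavior is copied, not reasoned about, so nothing forces the agent to *use* the information the scout brings back.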

Now in PvP, when DTs are built, you are going to lose if you have no detection. The AI doesn't know that the Dark Shrine means that there are going to be DTs. But neural nets that build observers upon seeing the Dark Shrine are selected for. The AI doesn't know why, but still it happens and the AI ends up making observers.

So now we have a scouting AI that builds observers when its probe sees the Dark Shrine. But what the AI does not do is check whether its goal was achieved. Given a game state, it crunches the numbers and this leads to a certain output. The AI will not keep scouting until it has determined what the opponent is doing, which is what a human would do.

You can create an AI that tries to narrow down the exact build of the opponent, guesses the number of units its opponent has, and predicts the future tech tree and unit composition. But that requires you to artificially force the AI to do this.

What we have learned now is that machine learning does not favour scouting. If not scouting really meant you would definitely lose, all the AIs would continuously scout, because they have learned this behavior by copying replays. They do not. Yes, this says something about the limitations and nature of the machine learning they are using. But it also says something about the game.


It's also still far below pro level and it still has very little intelligence and mostly relies on efficient mechanics. It's not telling us anything yet about how Starcraft should be played.


Yes it is. It is telling us that strategy and mindgames are not important, and that macro, mechanics, deciding when and where to fight, and micro are the qualities that decide whether you win or lose.
terribleplayer1
Profile Joined July 2018
95 Posts
July 28 2019 11:38 GMT
#165
On July 28 2019 19:42 Muliphein wrote:
On July 28 2019 07:07 NinjaNight wrote:
? You mentioned yourself it doesn't reason or deduce because it's a number crunching neural network. So naturally it's not going to be able to take advantage of scouting which requires high level reasoning to be useful. Of course scouting is not going to increase its winrate.


....

What we have learned now is that machine learning does not favour scouting. If not scouting really meant you would definitely lose, all the AIs would continuously scout, because they have learned this behavior by copying replays. They do not. Yes, this says something about the limitations and nature of the machine learning they are using. But it also says something about the game.


It's also still far below pro level and it still has very little intelligence and mostly relies on efficient mechanics. It's not telling us anything yet about how Starcraft should be played.


Yes it is. It is telling us that strategy and mindgames are not important, and that macro, mechanics, deciding when and where to fight, and micro are the qualities that decide whether you win or lose.



How are you going to mindgame it when you're playing a bo1 on the ladder? People don't know they're playing against a monster that can only macro/micro and not think.


...if players know which agent they're playing, they're going to have insane winrates against it, despite its micro/macro advantage, unless an agent can actually learn to play the game rather than just do a generic strong build: learn when to scout, what to scout, and how to react. Otherwise its only chance is to randomize which agent is playing, and that is still probably not going to be enough to face the likes of Serral/Maru....
AttackZerg
Profile Blog Joined January 2003
United States7454 Posts
July 28 2019 11:38 GMT
#166
On July 28 2019 18:01 skdsk wrote:
On July 28 2019 09:30 AttackZerg wrote:
On July 28 2019 08:59 Xain0n wrote:
On July 28 2019 06:57 AttackZerg wrote:
I am really enjoying the thoughtful posts here. Thanks for the read.

I will disagree with the poster above me, the last statement.

Since no professional sc2 players are managing the project and the bots are clearly levels below our top human players.... I think calling any of alphastar's choices optimal is both premature and wrong.

It is unique but.... it is far from the stage where we can make that claim.

Different topic - It seems to me that the apm limit has had a detrimental effect that is manifest in scouting.

The zerg spreads creep poorly, often injects badly and has no concept of overlord placement in the early midgame; the strategies imho are the incestuous offspring of the decision to speed limit.

I see a 350-550 apm range for most top humans. I think at a certain level, if you aren't 300+.... you are unable to properly manage zerg.


Alphago and alphazero were not required to lower their skillsets to match the playerbase.

I think they have catered too much to us and have thus moved away from their most historic achievements.

First make a bot that can beat serral, maru and classic. Then consider the ethics and fairness of the approach.

First they smashed Stockfish with very unfair parameters; then months later (maybe a full year) they released a full match with a more level playing field.... even better victories for alphazero (almost none for Black) and even some losses.

I am not an expert in AI or anything else; I'm just watching them treat this playerbase very differently, and not shockingly, this different treatment has yielded a less historic and dominant AI.

No human will ever beat alphago or alphazero. No reason to limit the game approach by human limitation. Like I said, add that nonsense later.

God mode full speed zerg please. 8)

(I am thankful for this great project and the people behind it. Cheers)




That's because Chess and Go don't have huge mechanical requirements to be played, unlike Sc2; we already had a glimpse of a machine capable of beating top players without limitations, AlphaStalker (which indeed DID have limitations).

I don't think it would be worth spending time and money on a neural network that would beat humans by macroing with perfect timing while microing every single unit on the map with immaculate precision, all at once.


I think defining AI and limiting it based upon human capabilities creates a worse network and is counter to the approach they used in other games.

In chess and go, the mechanics are internal. They didn't limit the depth of thinking to the 7±2 short-term items a human can hold in mind simultaneously.

In those genres, the goal was world dominating AI.

They beat non-top-tier players on a specific map. Very impressive, but not near the accomplishment of beating Stockfish, even in that first unfair match.

They gave us a glimpse of True AI at the start.

In chess the only game for alphazero is other machines. It is fine if that becomes true of this sport also.

I love the project but am not convinced or in love with their approach to sc2.

In chess and go, they gave no fucks about the respective communities and cultures - they wanted to rip down and conquer.

It seems they are less focused on total domination in this genre.

I accept, I may be wrong in both understanding and or communicating. Still just want to see a god zerg AI....

Been waiting since 99. Still waiting.


Because you don't need some super AI to have insane mechanics and beat everyone; if you didn't limit its apm, it could just mass mmm and murder everyone with insane micro and 5000 apm.

What they are trying to do is create an AI which "OUTPLAYS" human opponents with similar mechanics.


Maybe you are right and without limiting the approach, you get a billion bots that have no sense of logic or game play and they never progress strategically.

As I said in an earlier post, I am not an expert in this field (or any) and maybe I'm dreaming of seeing something that is not possible or coming in the near future.

So far micro has been the only impressive thing I have seen... Is our game just so complicated that a billion games and a neural net still can't get past micro and the rock-paper-scissors of build orders? Is there any game yet where AS directly and intentionally hard counters a build or unit comp?
Cyro
Profile Blog Joined June 2011
United Kingdom20285 Posts
July 28 2019 11:48 GMT
#167
Is there any game yet where AS directly and intentionally hard counters a build or unit comp?


This mindset asserts that the "human hard counters" are better than just building a bunch of stalkers and killing the person trying to do X strategy, which is not necessarily true.
"oh my god my overclock... I got a single WHEA error on the 23rd hour, 9 minutes" -Belial88
Ronski
Profile Joined February 2011
Finland266 Posts
Last Edited: 2019-07-28 12:11:41
July 28 2019 12:10 GMT
#168
youtu.be

A game between Reynor (offracing as T) and AlphaStar Zerg.

Pretty funny game.
I am a tank. I am covered head to toe in solid plate mail. I carry a block of metal the size of a 4 door sedan to hide behind. If you see me running - you should too.
ROOTFayth
Profile Joined January 2004
Canada3351 Posts
July 28 2019 12:46 GMT
#169
so it started off by copying top human pros? it didn't start from scratch? it would be stronger if it didn't copy human pros I think but I assume the hardware needed for that doesn't exist yet
ROOTFayth
Profile Joined January 2004
Canada3351 Posts
July 28 2019 12:47 GMT
#170
On July 28 2019 20:38 AttackZerg wrote:
....

So far micro has been the only impressive thing I have seen... Is our game just so complicated that a billion games and a neural net still can't get past micro and the rock-paper-scissors of build orders? Is there any game yet where AS directly and intentionally hard counters a build or unit comp?

I'm pretty sure in a game such as starcraft 2 the AI is going to use the build that has the highest winrate at the highest frequency, but you can't do anything about the rock-paper-scissors of build orders; there is some variance in this game and you have to embrace it. It makes the game fun, actually.
Goolpsy
Profile Joined November 2010
Denmark301 Posts
July 28 2019 13:28 GMT
#171
The purpose of the AI research is not to "beat humans" or "accomplish task X". We've done that.

It's essentially to be able to make a self-learning AI that can "solve problems".
Why self-learning? --> Because there are problems we humans don't even understand (or have experience in yet), and we'd hope the AI would be able to solve them.
(I am not talking about SKYNET here.)

The problem is always, that you need something to measure your product against. How good is it? When should we stop? How much power is required to train it?
Chess and Go were good challenges because the games themselves are simple, with perfect information, AND humans are amazing at them. At the same time, the games have enough possible variations that they are not solvable by a brute-force approach.

Imagine using AIs for self-driving cars. Driving is easy. But what if a moron drives too close to you? Or the car in front drives in the middle of the road? Or on your side of the road? What if a deer runs in front of your car? What if you get hit by a bird?
Imagine you get hit by a bird and the AI goes: "uhuh, weird sensor reading, unexpected error, abort abort.." and drives off the road.
Or you program it to ignore such things, and it hits a person: "weird sensor reading.. oh well, nevermind."

Back to Starcraft AI: humans are really good at starcraft. Obviously macro and micro help a lot, but our main strength is being able to solve problems (or attempt to solve problems).
Starcraft is a complex game with imperfect information and many many many problems to continually solve.

This is why it is interesting to test the AIs against humans in this area. We are sufficiently good at the game to be worth competing against (for problem-solving skills). Here we have a measure of "how good did we actually become".
Winning with 1500eAPM stalker micro is not solving problems.
Figuring out what to do against an opponent who stalker rushes you and then goes "mass void rays" IS.

I think much of the "disappointment" many are feeling is not that the AI is unbeatable.. but that the agents are so EASILY "abusable".
It doesn't understand what AIR is, or where it is (bile drops).
It doesn't understand that turrets can have upgraded range.
It doesn't know what a widow mine is, even when it's visible.

As for worker scouting: it is not necessarily important (if you are doing aggressive strategies, you are getting information all the time, and you can infer A LOT from it).
But not scouting at all, doing a blind build, and not adapting to what it eventually sees is not problem solving :/

It is "funny" however, because we humans think scouting is the EASIEST way to solve the problem of "what to build" and "when to build",
so it's amazing that the AI is still so 'dumb' and 'unrefined' and still doesn't use this "easy" way of overcoming that obstacle.
Goolpsy
Profile Joined November 2010
Denmark301 Posts
July 28 2019 13:35 GMT
#172
On a different note;
the AI is playing 9 different match ups. (+random?)

And they likely have several 'Agents' for each matchup (let's imagine they are testing 5 different ones).

That's 45 games just to play one game in each matchup for each agent.

They might even cycle Agents on the same account. So if you played a PvP against the AI, the next PvP might actually be a different Agent/Net.

We don't know much of anything :D
Slydie
Profile Joined August 2013
1913 Posts
July 28 2019 13:37 GMT
#173
On July 28 2019 21:46 ROOTFayth wrote:
so it started off by copying top human pros? it didn't start from scratch? it would be stronger if it didn't copy human pros I think but I assume the hardware needed for that doesn't exist yet


I think they tried that but failed miserably. The untrained AI would do stuff like getting lost on the map, never to return to its own base, and sending workers randomly around the map. The "rules" of Starcraft are actually very complicated!

The AI was very disappointing at dealing with counters, like continuing to build tanks vs mass carriers, but it did come up with some interesting strategies (banshees instead of medivacs) and some nice harassment.
Buff the siegetank
Acrofales
Profile Joined August 2010
Spain17979 Posts
July 28 2019 13:41 GMT
#174
On July 28 2019 19:27 Muliphein wrote:
....

This is post-hoc reasoning, though. Of course an F1 car "sprinting" and an AI playing chess are analogous, but only because we have built an AI that is good at chess. We've known for hundreds, if not thousands, of years that we can build machines that do certain tasks better than humans; e.g., watermills and windmills do a much better job at grinding wheat than humans do. In the 1940s, that machine superiority in many domains started to encroach on the one area where humans were undoubtedly superior: intelligence.

In the 1940s, we built the first machines that could compute faster than humans can (I guess the real credit here might even go to Babbage, but it's not really until the 1940s that these machines indisputably beat humans, and not until the 1960s that they were generally programmable for a large array of computation tasks). Soon after, it was expected that machines would very soon be "more intelligent" than humans. That prediction failed multiple times, as building intelligence was a harder task than we thought. We can build race cars that easily "outsprint" humans, and a tank that plays rugby also seems like a simple engineering task. But until very recently, Go seemed unsolvable, let alone games with uncertainty and incomplete information. Breakthroughs in AI research have put this within reach now, and the interesting part is obviously not in beating a human at doing lots of clicks very fast. The challenge is in dealing at least as well as the human with uncertain and incomplete information, without relying on an ability to click faster and more precisely.

At least, that is the challenge AlphaStar is interested in. No doubt perfect micro is a different challenge with its own interest.
Akio
Profile Blog Joined January 2019
Finland1838 Posts
July 28 2019 13:51 GMT
#175
This is super interesting. Having seen some replays of how it plays on the higher levels, it seems to have improved a bunch in terms of not being just a micro bot like last time
Mine gas, build tanks.
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
July 28 2019 13:52 GMT
#176
On July 28 2019 22:37 Slydie wrote:
....


It had to be pre-trained because the action space is far too large for pure reinforcement learning (with our current abilities). It could go a million games without learning a reasonable response to an event because there are too many responses to try.
Muliphein
Profile Joined July 2019
49 Posts
Last Edited: 2019-07-28 18:42:48
July 28 2019 18:19 GMT
#177
On July 28 2019 18:50 Acrofales wrote:
I disagree about "internal" mechanics for Go or Chess. That is simply part of the "intelligence" needed for playing those games well.

Moving your hands 5000 times a minute with unerring precision isn't a part of the "intelligence" needed for playing starcraft, it's a limitation of the human body, moreso than the human mind. Thus limiting the apm makes it more interesting as a challenge in creating "artificial intelligence", rather than just an artificial sc2 champion. In Go and Chess, the benchmark for intelligently playing those games was simply to beat the best human opponents. In SC2 the benchmark for intelligently playing the game is to beat the best human opponents with a similarly restrictive "interface".

You wouldn't say an AI had solved rugby if it was built like a tank and had an internal compartment to hide the ball, so that all it had to do was obtain the ball and then ride, invulnerable, to the back line. It'd be invincible, but not in any interesting way.


We are now trying to make a machine that is intelligent. In a philosophical sense, that is no different from making a machine that runs fast on wheels or that generates a lot of force. APM isn't limited by the human body. It is limited by the human mind. People cannot think fast enough and cannot think in parallel at all. Research shows that humans basically do not multitask.

Making a machine that is able to come up with 2000 actions a minute IS exactly like building a car with 2000 horsepower. Humans only have about 0.1 horsepower. So the machines win there by a way bigger margin. That this is not the type of intelligence where humans traditionally beat machines is beside the point.

The AI has exactly the same units as the player has. So saying the AI is playing rugby with tanks rather than human players is a false analogy. The analogy works (any analogy works up to a point), but it shows exactly why what AlphaStar is doing is fair. Not why it is unfair.


Soon after, it was expected that machines would very soon be "more intelligent" than humans. That prediction failed multiple times,


I don't think this is an accurate account of the consensus, if there was any, at that time. Decades ago, it was actually a minority that correctly recognized that the brain is a machine like any other, and that in principle a machine could be built that does the same thing as a brain, only better. Respectable scientists for a long time placed the brain outside of any biological context. General principles of biology were not applied to it. Only with the rise of cognitive science did this change.

But you are right that for the last few decades it was just an issue of actually building such a machine, because it proved to be quite challenging. Yes, it is true in some sense that just raw calculation wouldn't be enough. But it is very easy to calculate the size of the phase space of Go and then see that raw calculation was never going to solve it. And we have known for a long time that humans use the pattern-recognition properties of a neural network to play these games so well.

In fact, the opposite is true, as people thought chess and go would be 'safe' from computers for a decade or two longer than they actually were.

...as building intelligence was a harder task than we thought. We can build race cars that easily "outsprint" humans, and a tank that plays rugby also seems like a simple engineering task.


This is beside the point, but I beg to differ. Doing complex tasks is quite challenging for robots. It would be extremely challenging to build a robot that a top human rugby player could control using some VR interface (like in Avatar) that would allow for a similar level of play as the actual rugby player playing himself. We are decades away from that. But you were actually trying to make another point. So be careful with your language.



But until very recently, Go seemed unsolvable, let alone games with uncertainty and incomplete information. Breakthroughs in AI research put this into reach now, and the interesting part is obviously not in beating a human at doing lots of clicks very fast. The challenge is in dealing at least as well as the human with uncertain and incomplete information without relying on an ability to click faster and more precisely.


So which one is it? Did we take way longer to solve these games? Or did we do it earlier than expected?


At least, that is the challenge AlphaStar is interested in. No doubt perfect micro is a different challenge with its own interest.


Perfect micro is an AI challenge, not a 'how fast can I issue commands through an embedded systems interface' challenge. That it is not the AI challenge most people are interested in, for the simple reason that it teaches human players nothing new about the game, is beside the point.

It may be the case that in SC2, unlike in chess and go, an AI can play way way above the best humans without doing anything that humans hadn't realized or discovered themselves.

This all comes back to one important point. RTS games are games of execution and small-scale decision making (tactics). They are not games of strategy. And their complexity is quite basic. There aren't layers upon layers that reshape how the game is played as you ascend the skill curve. Yes, the move space is huge and sparse, but in essence it is a straightforward game. Build an army stronger than your opponent, then force a fight and win the game. That's the entire game in a nutshell.


On July 28 2019 22:37 Slydie wrote:
....



This is not quite correct. Yes, they had to train the AI to copy human replays first. But once they trained the AI to win rather than copy, the phase space was still equally large. The issue is not that it is large. It is large, and a large phase space makes things more difficult, but that isn't the main property to be worried about. It is the landscape of the phase space that matters. If you are placed randomly in the phase space and it has a definite curvature around where you are, you at least know in which direction to move. But if the landscape is completely flat and looks identical in all directions, you have no idea where to move and you will just wander around randomly.

If you have two random neural nets 'playing' against each other, they will be issuing random commands. The very few that happen to build more workers or a pylon are closer to proper play. But with a win condition as the objective, they perform just as well as a neural network that does absolutely nothing. So the training won't be able to converge with that objective. But if the objective is to copy replays, then there is a much more gentle and gradual progression in how closely an AI matches a replay. So this is a superior scoring function.
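The scoring-function point can be made concrete with a toy comparison: a pure win/loss objective scores every weak early network identically, giving selection nothing to rank, while a replay-matching objective still distinguishes better from worse. Here `policy_quality` is a made-up scalar stand-in for "how human-like the play is", not anything from the actual system.

```python
import random

def sparse_win_score(policy_quality: float) -> float:
    # Win/loss objective: rewards only a policy that is already nearly perfect.
    return 1.0 if policy_quality > 0.99 else 0.0

def imitation_score(policy_quality: float) -> float:
    # Replay-matching objective: improves smoothly with every small step toward
    # human-like play, so even random early candidates can be ranked.
    return policy_quality

random.seed(0)
# 100 random early candidates, all far from perfect play.
candidates = [random.random() * 0.5 for _ in range(100)]

# Under the win condition, every early candidate scores identically (0.0)...
win_scores = {sparse_win_score(q) for q in candidates}
# ...while the imitation objective still picks out the best of the bunch.
imit_best = max(candidates, key=imitation_score)
```

With the sparse objective the selection signal is a constant, which is exactly why training "won't be able to converge" on it alone; the imitation objective supplies a usable gradient from the very first generation.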

Personally, I think they should first have tried to train their neural networks to copy a simple script that just builds workers, supply, and marines/zerglings/zealots and moves them to the enemy's starting location, not top-level players. But maybe the phase space was too big even for that. Or they may have thought the final result would be the same and that using top-player replays was just the faster method.
Slydie
Profile Joined August 2013
1913 Posts
July 28 2019 18:33 GMT
#178
On July 29 2019 03:19 Muliphein wrote:
On July 28 2019 18:50 Acrofales wrote:
I disagree about "internal" mechanics for Go or Chess. That is simply part of the "intelligence" needed for playing those games well.

Moving your hands 5000 times a minute with unerring precision isn't part of the "intelligence" needed for playing Starcraft; it's a limitation of the human body, more so than the human mind. Thus limiting the APM makes it more interesting as a challenge in creating "artificial intelligence", rather than just an artificial SC2 champion. In Go and Chess, the benchmark for intelligently playing those games was simply to beat the best human opponents. In SC2, the benchmark for intelligently playing the game is to beat the best human opponents with a similarly restrictive "interface".

You wouldn't say an AI had solved rugby if it was built like a tank and had an internal compartment to hide the ball, so that all it had to do was obtain the ball and then ride invulnerable to the back line. It would be invincible, but not in any interesting way.


We are now trying to make a machine that is intelligent. In a philosophical sense, that is no different from making a machine that runs fast on wheels or that generates a lot of force. APM isn't limited by the human body; it is limited by the human mind. People cannot think fast enough and cannot think in parallel at all. Research shows that humans basically do not multitask.

Making a machine that is able to come up with 2000 actions a minute IS exactly like building a car with 2000 horsepower. Humans only produce about 0.1 horsepower, so machines win there by a far bigger margin. That this is not the type of intelligence where humans traditionally beat machines is beside the point.

The AI has exactly the same units as the player has. So saying the AI is playing rugby with tanks against human players is a false analogy. The analogy works up to a point, as any analogy does, but it shows exactly why what AlphaStar is doing is fair, not why it is unfair.


I get both sides, but in an RTS, I don't think an AI changing views every millisecond and microing with 40k APM would be interesting on the SC2 ladder. Sure, they could do it, but it would test very different skills than the more limited version, which competes with humans on a more reasonable mechanical level and is forced to scout, react and position itself rather than just learning a strong push and microing superhumanly.

If you really want the unlimited bot, I am sure you can request it, but I think it is a given that it would beat any human fairly easily.
Buff the siegetank
Muliphein
Profile Joined July 2019
49 Posts
Last Edited: 2019-07-28 18:51:09
July 28 2019 18:50 GMT
#179
On July 29 2019 03:33 Slydie wrote:
I get both sides, but in an RTS, I don't think an AI changing views every millisecond and microing with 40k APM would be interesting on the SC2 ladder.


I get that it wouldn't be intellectually satisfying to many in the player base. But it would solve a currently unsolved AI problem using a generalized method. There are many real-world AI tasks that would benefit from this. You wouldn't limit an AI doing air traffic control.


Sure, they could do it, but it would test very different skills than the more limited version which competes with humans on a more reasonable mechanical level and is forced to scout, react and position itself rather than just learning a strong push and microing superhumanly.


People say that, but these new AIs are severely limited and still don't play the 'scout and counter' game some people wanted the AI to play.


If you really want the unlimited bot, I am sure you can request it, but I think it is a given it would beat any human fairly easily.


I can request anything I want. But I think you meant to say that Deepmind will give SC2 players what they want. Which is a bit amazing to me. Are they handing out specific Go AIs to fans of Go? Deepmind is very careful of what the community of the game they are trying to beat thinks of them. They are focused on public opinion. But once they think they have learned from SC2 what they needed, they will move on and that will be it.
Cyro
Profile Blog Joined June 2011
United Kingdom20285 Posts
July 28 2019 19:12 GMT
#180
Deepmind is very careful of what the community of the game they are trying to beat thinks of them. They are focused on public opinion. But once they think they have learned from SC2 what they needed, they will move on and that will be it.


I heard that parts of the chess community were pretty upset about them trashing chess engines and then leaving again without exploring what it could mean for the game.
"oh my god my overclock... I got a single WHEA error on the 23rd hour, 9 minutes" -Belial88
necrosexy
Profile Joined March 2011
451 Posts
Last Edited: 2019-07-28 20:07:31
July 28 2019 20:07 GMT
#181
On July 28 2019 22:28 Goolpsy wrote:
The purpose of the AI research is not to "beat humans" or "accomplish task X". We've done that.

It's essentially to be able to make a self-learning AI that can "solve problems".
Why self-learning? Because there are problems we humans don't even understand (or have experience in yet), and we'd hope the AI would be able to solve them.
(I am not talking about SKYNET here.)

The problem is always that you need something to measure your product against. How good is it? When should we stop? How much power is required to train it?
Chess and Go were good challenges because the games themselves are simple with perfect information, AND humans are amazing at them. At the same time, the games have enough possible variations that they are not solvable by a brute-force approach.

Imagine using AIs for self-driving cars. Driving is easy. But what if a moron drives too close to you? Or the car in front drives in the middle of the road, or on your side of the road? What if a deer runs in front of your car? What if you get hit by a bird?
Imagine you get hit by a bird and the AI goes: "uhuh, weird sensor reading, unexpected error, abort abort..." and drives off the road.
Or you program it to ignore such readings, and it hits a person: "weird sensor reading... oh well, never mind."

Back to Starcraft AI: humans are really good at Starcraft. Obviously macro and micro help a lot, but our main strength is being able to solve problems (or attempt to solve them).
Starcraft is a complex game with imperfect information and many many many problems to continually solve.

This is why it is interesting to test AIs against humans in this area. We are sufficiently good at the game to be worth competing against (for problem-solving skills). Here we have a measure of "how good did we actually become".
Winning with 1500 eAPM stalker micro is not solving problems.
Figuring out what to do against an opponent who stalker-rushes you and then goes mass void rays IS.

I think much of the "disappointment" many are feeling is not that the AI is unbeatable, but that it is so EASILY abusable.
It doesn't understand what AIR is, or where it is (bile drops).
It doesn't understand that turrets can have upgraded range.
It doesn't know what a Widow Mine is, even when it's visible.

As for worker scouting: it is not necessarily important (if you are doing aggressive strategies, you are getting information all the time and can infer A LOT from it).
But not scouting at all, doing a blind build, and not adapting to what it eventually sees is not problem solving :/

It is "funny", however, because we humans think scouting is the EASIEST way to solve the problem of "what to build" and "when to build",
so it's amazing that the AI is still so 'dumb' and 'unrefined' that it doesn't use this "easy" way of overcoming that obstacle.

It hasn't "beaten humans".
Muliphein
Profile Joined July 2019
49 Posts
Last Edited: 2019-07-28 21:05:16
July 28 2019 20:20 GMT
#182
On July 29 2019 04:12 Cyro wrote:
Deepmind is very careful of what the community of the game they are trying to beat thinks of them. They are focused on public opinion. But once they think they have learned from SC2 what they needed, they will move on and that will be it.


I heard that parts of the chess community were pretty upset about them trashing chess engines and then leaving again without exploring what it could mean for the game.


That's what I am referring to. I'm not sure what happened with Go, because I am not that tuned in with that community. But in chess, if Deepmind started working with a select few chess players, those players would gain a huge advantage. Engine analysis is crucial to your play. So chess is actually being influenced (damaged) by chess engines/AI. The same issue will never happen in RTS, because RTS isn't a game where an engine/AI will come up with novel creative ideas or different ways of looking at things considered inferior/refuted.

So I would prepare for this in the RTS community.
NinjaNight
Profile Joined January 2018
428 Posts
July 28 2019 20:25 GMT
#183
On July 29 2019 05:20 Muliphein wrote:
On July 29 2019 04:12 Cyro wrote:
Deepmind is very careful of what the community of the game they are trying to beat thinks of them. They are focused on public opinion. But once they think they have learned from SC2 what they needed, they will move on and that will be it.


I heard that parts of the chess community were pretty upset about them trashing chess engines and then leaving again without exploring what it could mean for the game.


The same issue will never happen in RTS, because RTS isn't a game where an engine/AI will come up with novel creative ideas or different ways of looking at things considered inferior/refuted.



What? How do you come up with this claim?
Muliphein
Profile Joined July 2019
49 Posts
Last Edited: 2019-07-28 21:04:40
July 28 2019 20:59 GMT
#184
RTS games are games of execution. In chess, there are positions that are objectively winning but where the win is really hard, if not impossible, to find (whether for a human or an engine/AI). In SC2, this doesn't happen. It is straightforward to count economic input and to count army strength (assuming both sides perform optimally in a battle).
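
As a sketch of what "counting economic input and army strength" means, here is a minimal resource-value count. The unit costs are real SC2 values, but the gas weighting and the comparison itself are an assumed heuristic for illustration, not anything AlphaStar computes:

```python
# Mineral/gas costs for a few SC2 units (real in-game values).
COSTS = {
    "marine": (50, 0),
    "marauder": (100, 25),
    "stalker": (125, 50),
    "roach": (75, 25),
}

def army_value(army, gas_weight=1.5):
    """Resources invested in an army; gas weighted higher than minerals
    (the 1.5 weight is an assumption, not a game constant)."""
    return sum(COSTS[u][0] + gas_weight * COSTS[u][1] for u in army)

attacker = ["stalker"] * 10
defender = ["marine"] * 14 + ["marauder"] * 4

# 10 * (125 + 1.5*50) = 2000.0 vs 14*50 + 4*(100 + 1.5*25) = 1250.0
print(army_value(attacker))
print(army_value(defender))
```

Under this kind of one-dimensional count, the stronger army is just the one with more weighted resources behind it, which is the "straightforward" evaluation the post describes.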

There are situations that are bifurcations/double-edged, like a base-trade scenario, where it can remain unclear what the right call is for a long time, until things have completely unfolded. But in general, in SC2, things are one-dimensional. In SC:BW, things are a bit different and more complicated because play is more positional. People have understood this for a long time, which is why we had the debate about automation when SC2 was announced (and we all know which side was vindicated). SC2 is a game with less strategy and lower demands on execution, and this was by design.

And the second reason is the very strong AI we have right now. It beats top players, and it does so in a boring, straightforward manner. How soundly it beats them, and how well humans can exploit general AI tendencies (rather than finding a blind spot in a specific AI and exploiting that), is an open question.

So this AI research seems to support the views we in the community already had about the nature of RTS games and of SC2 itself.
AttackZerg
Profile Blog Joined January 2003
United States7454 Posts
July 28 2019 21:09 GMT
#185
On July 29 2019 04:12 Cyro wrote:
Deepmind is very careful of what the community of the game they are trying to beat thinks of them. They are focused on public opinion. But once they think they have learned from SC2 what they needed, they will move on and that will be it.


I heard that parts of the chess community were pretty upset about them trashing chess engines and then leaving again without exploring what it could mean for the game.

Not just that. They withheld games and originally shared only a few wins, and they put Stockfish on 1 move a minute, which severely handicaps its ability to calculate deeply enough. And they played the equivalent of a supercomputer versus a good desktop. The games were beautiful. The setup... completely unscientific.

They later corrected this with a 1k-game match on comparable hardware. Stockfish did better (5 or 9 wins) but still got stomped.

Just remembered: they did this ladder approach on the Chinese Go server before throat-punching the South Korean world champion.

Unlike chess, Go did not have a computer overlord until AlphaGo.

Maybe rustling so many feathers in other communities has caused them to listen more. Who knows.

For anyone from the project reading: if anything I say seems critical, it is because I am a big fan of the project and enthusiastic about the work you do.

Exciting times.
Inrau
Profile Joined June 2019
35 Posts
July 28 2019 23:22 GMT
#186
On July 29 2019 03:19 Muliphein wrote:
The AI has exactly the same units as the player has. So saying the AI is playing rugby with tanks against human players is a false analogy. The analogy works up to a point, as any analogy does, but it shows exactly why what AlphaStar is doing is fair, not why it is unfair.


AlphaStar does not have to box-select units to move them. The AI does not have any mouse trail, so to speak. All human players paint the map with their cursors.
[image loading]

The limitations are nice: locking the actions to a camera, lowering the APM. But AlphaStar can still do things at 120 APM that would take a human 600 APM.

The AI might as well be playing with three keyboards and mice. Not to mention clicking a one-pixel target in the top right corner to select a building or a unit it needs.
Muliphein
Profile Joined July 2019
49 Posts
July 28 2019 23:47 GMT
#187
On July 29 2019 08:22 Inrau wrote:
On July 29 2019 03:19 Muliphein wrote:
The AI has exactly the same units as the player has. So saying the AI is playing rugby with tanks against human players is a false analogy. The analogy works up to a point, as any analogy does, but it shows exactly why what AlphaStar is doing is fair, not why it is unfair.


AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors.
[image loading]

The limitations are nice, locking the actions to a camera, lowering the APM. But Alphastar can still do things at 120APM that would take a human 600 APM.

The AI might as well be playing with three keyboards and mice. Not to mention the clicking one pixel in the top right corner to select a building or a unit it needs.


Yes, the AI is playing the same game, but without inherently human limitations. I have no idea what point you are trying to argue. You think it is unfair for an AI to solve an AI problem if human limitations aren't hard-coded in? Do you also think a chess AI needs to be forced to take a piss break, because humans will inevitably have to do that as well under standard time controls? Where do you draw the line? Why don't you support the view that for an AI to solve an AI problem, it needs to do so by modeling a human brain solving the problem?

All this comes from the delusion that SC2 is richer and more intellectually pleasing than it actually is. People cannot accept that the ideal play is to build mass roaches/stalkers combined with perfect macro, perfect micro, and perfect decisions about when to engage. So the AI needs to be limited to play more like a human, and then the AI will either lose to humans or finally come up with genius, elegant strategies.

Yet all the facts we have yell the opposite at us. So please stop bringing up 'fairness', because there cannot be such a thing. As long as the AI doesn't get units with more hp, free resources, or the ability to see through the FoW, it is not playing the wrong game. And when it seems stupid because it doesn't truly understand what is going on in the game, yet it is beating all the best human players (and yes, we are not quite there yet), maybe then you will accept that 'understanding the game' doesn't really matter for winning. (And let me note that AlphaZero, in chess or Go, doesn't really understand the game either. It just happens to take the correct action. It cannot explain to you why it does what it does, and it takes quite a bit of effort from Deepmind's engineers to figure that out.)
Inrau
Profile Joined June 2019
35 Posts
July 28 2019 23:59 GMT
#188
On July 29 2019 08:47 Muliphein wrote:
On July 29 2019 08:22 Inrau wrote:
On July 29 2019 03:19 Muliphein wrote:
The AI has exactly the same units as the player has. So saying the AI is playing rugby with tanks against human players is a false analogy. The analogy works up to a point, as any analogy does, but it shows exactly why what AlphaStar is doing is fair, not why it is unfair.


AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors.
[image loading]

The limitations are nice, locking the actions to a camera, lowering the APM. But Alphastar can still do things at 120APM that would take a human 600 APM.

The AI might as well be playing with three keyboards and mice. Not to mention the clicking one pixel in the top right corner to select a building or a unit it needs.


Yes, the AI is playing the same game, but without inherently human limitations. I have no idea what point you are trying to argue. You think it is unfair for an AI to solve an AI problem if human limitations aren't hard-coded in? Do you also think a chess AI needs to be forced to take a piss break, because humans will inevitably have to do that as well under standard time controls? Where do you draw the line? Why don't you support the view that for an AI to solve an AI problem, it needs to do so by modeling a human brain solving the problem?

All this comes from the delusion that SC2 is richer and more intellectually pleasing than it actually is. People cannot accept that the ideal play is to build mass roaches/stalkers combined with perfect macro, perfect micro, and perfect decisions about when to engage. So the AI needs to be limited to play more like a human, and then the AI will either lose to humans or finally come up with genius, elegant strategies.

Yet all the facts we have yell the opposite at us. So please stop bringing up 'fairness', because there cannot be such a thing. As long as the AI doesn't get units with more hp, free resources, or the ability to see through the FoW, it is not playing the wrong game. And when it seems stupid because it doesn't truly understand what is going on in the game, yet it is beating all the best human players (and yes, we are not quite there yet), maybe then you will accept that 'understanding the game' doesn't really matter for winning. (And let me note that AlphaZero, in chess or Go, doesn't really understand the game either. It just happens to take the correct action. It cannot explain to you why it does what it does, and it takes quite a bit of effort from Deepmind's engineers to figure that out.)

Because having a mouse trail is part of playing Starcraft. What I am saying is that AlphaStar is essentially playing with 3 keyboards and mice. If we were playing an Xbox RTS where you didn't micro and only moved the camera around, commanding squads with preset controller actions, I would buy your argument.

And your smartass comment about massing stalkers forgets that over time players adapt, and defended the 4-gate by squeezing out an immortal, or whatever the meta changes to. If the game were so simple, AlphaStar would have already found the exact build and rolled over everyone. But because the game is so complex and massive, they have to potty-train the AI to act like a human, because without that it can't function.
cha0
Profile Joined March 2010
Canada504 Posts
July 29 2019 00:21 GMT
#189
You sound like the type of person who would think it is fair to plug a keyboard and mouse into your Xbox and play an FPS against others using a standard controller. It is not that people can't accept that ideal play is perfect-micro rushes; it's that that type of strategy and play really isn't interesting. It's something humans can never emulate, and it doesn't show that the AI is really learning anything strategically. You could program a bot without deep learning to just rush with perfect micro; no fancy models or algorithms required.

On July 29 2019 08:47 Muliphein wrote:
On July 29 2019 08:22 Inrau wrote:
On July 29 2019 03:19 Muliphein wrote:
The AI has exactly the same units as the player has. So saying the AI is playing rugby with tanks against human players is a false analogy. The analogy works up to a point, as any analogy does, but it shows exactly why what AlphaStar is doing is fair, not why it is unfair.


AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors.
[image loading]

The limitations are nice, locking the actions to a camera, lowering the APM. But Alphastar can still do things at 120APM that would take a human 600 APM.

The AI might as well be playing with three keyboards and mice. Not to mention the clicking one pixel in the top right corner to select a building or a unit it needs.


Yes, the AI is playing the same game, but without inherently human limitations. I have no idea what point you are trying to argue. You think it is unfair for an AI to solve an AI problem if human limitations aren't hard-coded in? Do you also think a chess AI needs to be forced to take a piss break, because humans will inevitably have to do that as well under standard time controls? Where do you draw the line? Why don't you support the view that for an AI to solve an AI problem, it needs to do so by modeling a human brain solving the problem?

All this comes from the delusion that SC2 is richer and more intellectually pleasing than it actually is. People cannot accept that the ideal play is to build mass roaches/stalkers combined with perfect macro, perfect micro, and perfect decisions about when to engage. So the AI needs to be limited to play more like a human, and then the AI will either lose to humans or finally come up with genius, elegant strategies.

Yet all the facts we have yell the opposite at us. So please stop bringing up 'fairness', because there cannot be such a thing. As long as the AI doesn't get units with more hp, free resources, or the ability to see through the FoW, it is not playing the wrong game. And when it seems stupid because it doesn't truly understand what is going on in the game, yet it is beating all the best human players (and yes, we are not quite there yet), maybe then you will accept that 'understanding the game' doesn't really matter for winning. (And let me note that AlphaZero, in chess or Go, doesn't really understand the game either. It just happens to take the correct action. It cannot explain to you why it does what it does, and it takes quite a bit of effort from Deepmind's engineers to figure that out.)

Muliphein
Profile Joined July 2019
49 Posts
July 29 2019 00:24 GMT
#190
So AlphaStar is not truly playing SC2 because it isn't using a (virtual?) keyboard? If you want to hold that position, fine. But I think it would be a waste of time to debate it.

So the AI is doing something that resembles playing SC2 and in doing so it is solving an open AI problem.

You think that AlphaStar is losing games because, while it is fighting out battles perfectly, it is using the wrong unit composition? That's not at all what I see. I see it play strategically straightforward games, and I see that while it often wins, it is still making mistakes in micro and in engaging battles. But most of the time its micro, macro, and decisions about when to fight are superior to its human opponents', so it mostly wins anyway. And in the games it loses, the human player is able to find a weakness or blind spot and exploit it, leaving the AI to repeat the same mistake over and over again.

Yes, SC2 is a game with a huge game-state and input space, and that causes problems for machine learning. That is why it is meaningful that Deepmind is able to find a way to beat strong humans (and why it doesn't matter that the AI looks stupid or exploitable as long as it is winning). But this complexity you speak of (it is not actual 'complexity'; it is complicated in having a huge phase space, whereas complexity is when a small change can completely upturn an outcome, and that is rarely the case in RTS) and 'outthinking the human player using superior strategy humans were unable to conceive' are completely disconnected.

The actual issue is whether the style of play it has right now can be streamlined to beat the top players, or whether neural networks are fundamentally incapable of outplaying humans because of a technical limitation (for example, treating the game essentially as a Markov chain, ignoring the game history).
Muliphein
Profile Joined July 2019
49 Posts
July 29 2019 00:35 GMT
#191
On July 29 2019 09:21 cha0 wrote:
You sound like the type of person who would think it is fair to plug in a keyboard and mouse to your xbox and play fps against others using standard controller. It is not that people can't accept that ideal play is perfect micro rushes, it's that that type of strategy and play really isn't interesting. It's something humans can never emulate, and doesn't show that the AI is really strategically learning anything. You could program a bot without deeplearning to just rush and have perfect micro, no fancy models or algorithms required.



How can you say something like this after I said that there cannot possibly be such a thing as fairness in humans vs. AI?

But you do admit that you think the way an AI plays SC2 isn't really interesting to you. Why do people have this strange idea? There is a reason why people generally avoid using chess engines while commentating chess games. What the engine sees is usually completely irrelevant to what is going to happen on the board, exactly because the AI plays in a way humans cannot emulate. And the engine's move suggestions also tell you nothing about the strategic themes in the game.

So your argumentum ad absurdum is exactly the state of AI in chess.

Then you end your post with an utterly false statement. Yes, in principle you could. But no one has, because it is extremely difficult. You act as if AlphaStar does something any AI was always capable of, while claiming it will teach us new things about the game. Did you even read my posts? This is exactly the misunderstanding I argued against before you replied.
Xain0n
Profile Joined November 2018
Italy3963 Posts
Last Edited: 2019-07-29 00:49:14
July 29 2019 00:41 GMT
#192
On July 29 2019 08:47 Muliphein wrote:
On July 29 2019 08:22 Inrau wrote:
On July 29 2019 03:19 Muliphein wrote:
The AI has exactly the same units as the player has. So saying the AI is playing rugby with tanks against human players is a false analogy. The analogy works up to a point, as any analogy does, but it shows exactly why what AlphaStar is doing is fair, not why it is unfair.


AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors.
[image loading]

The limitations are nice, locking the actions to a camera, lowering the APM. But Alphastar can still do things at 120APM that would take a human 600 APM.

The AI might as well be playing with three keyboards and mice. Not to mention the clicking one pixel in the top right corner to select a building or a unit it needs.


Yes, the AI is playing the same game, but without inherently human limitations. I have no idea what point you are trying to argue. You think it is unfair for an AI to solve an AI problem if human limitations aren't hard-coded in? Do you also think a chess AI needs to be forced to take a piss break, because humans will inevitably have to do that as well under standard time controls? Where do you draw the line? Why don't you support the view that for an AI to solve an AI problem, it needs to do so by modeling a human brain solving the problem?

All this comes from the delusion that SC2 is richer and more intellectually pleasing than it actually is. People cannot accept that the ideal play is to build mass roaches/stalkers combined with perfect macro, perfect micro, and perfect decisions about when to engage. So the AI needs to be limited to play more like a human, and then the AI will either lose to humans or finally come up with genius, elegant strategies.

Yet all the facts we have yell the opposite at us. So please stop bringing up 'fairness', because there cannot be such a thing. As long as the AI doesn't get units with more hp, free resources, or the ability to see through the FoW, it is not playing the wrong game. And when it seems stupid because it doesn't truly understand what is going on in the game, yet it is beating all the best human players (and yes, we are not quite there yet), maybe then you will accept that 'understanding the game' doesn't really matter for winning. (And let me note that AlphaZero, in chess or Go, doesn't really understand the game either. It just happens to take the correct action. It cannot explain to you why it does what it does, and it takes quite a bit of effort from Deepmind's engineers to figure that out.)


If this is Deepmind's goal with Starcraft 2, they are wasting time and money. If they believed, as you seem to, that beating every player with inhuman macro and micro would be the right way of playing SC2, I don't know why they would use a neural network for the task.

In Go or Chess, whether or not it understands the game, AlphaZero takes correct actions that would require a human mind to think and decide, and that makes it extremely interesting; an unlimited AlphaStar abusing its infinitely superior mechanics would be pointless, as it would just execute actions impossible for humans to replicate or even analyze.

Forcing AlphaStar to play like a human as much as possible is meant to stress its capability of winning games via "decision making" or "strategy" (it doesn't matter that it doesn't perceive it as such; we would be able to regard the outcome as if it were), which is indeed the ambitious and interesting part of the project.

After reading your last answer, I get that you are interested in knowing whether neural networks can reach, by themselves, the very point where their mechanics become impossible for humans to match? Is that so?
Antisocialmunky
Profile Blog Joined March 2010
United States5912 Posts
July 29 2019 01:26 GMT
#193
I love AlphaDepot micro where it blocks its own units out. I wonder if it thinks that depots are a good way of making a jail for the enemy army or something.
[゚n゚] SSSSssssssSSsss ¯\_(ツ)_/¯
Marine/Raven Guide:http://www.teamliquid.net/forum/viewmessage.php?topic_id=163605
Inrau
Profile Joined June 2019
35 Posts
Last Edited: 2019-07-29 02:33:22
July 29 2019 02:32 GMT
#194
On July 29 2019 09:24 Muliphein wrote:
You think that Alphastar is losing games because while it is fighting out battles perfectly, it is using the wrong unit composition?

That is correct. It has no idea what to do besides learning the builds from other players and microing "perfectly." Alphastar would get wrecked if players played against it over and over and over like some sort of INSANE AI challenge. I see nothing special at this point.
EDIT: Even with the advantages Alphastar has APM/vision-wise.
Muliphein
Profile Joined July 2019
49 Posts
Last Edited: 2019-07-29 02:56:29
July 29 2019 02:53 GMT
#195
But clearly it is making a lot of mistakes in the micro and battle engage department.

And you saying that 'it has no idea' when it is a neural net and 'learns builds from other players' when it is trained by playing against itself, makes any further debate useless.


On July 29 2019 09:41 Xain0n wrote:
Show nested quote +
On July 29 2019 08:47 Muliphein wrote:
On July 29 2019 08:22 Inrau wrote:
On July 29 2019 03:19 Muliphein wrote:
The AI has exactly the same units as the player. So saying the AI is playing rugby with tanks rather than human players is a false analogy. Any analogy works up to a point, but this one shows exactly why what Alphastar is doing is fair, not why it is unfair.


AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors.
[image loading]

The limitations are nice, locking the actions to a camera, lowering the APM. But Alphastar can still do things at 120APM that would take a human 600 APM.

The AI might as well be playing with three keyboards and mice. Not to mention the clicking one pixel in the top right corner to select a building or a unit it needs.


Yes, the AI is playing the same game, but without inherently human limitations. I have no idea what the point is that you are trying to argue. You think it is unfair for an AI to solve an AI problem if human limitations aren't hard-coded in? Do you also think a chess AI needs to be forced to take a piss break because humans will inevitably have to do this as well under standard time control? Where do you draw the line? Why don't you support the view that for any AI to beat an AI problem, it needs to solve the problem by modeling a human brain solving the problem?

All this comes from the delusion that people believe SC2 is richer and more intellectually pleasing than it actually is. People cannot accept that the ideal play is to build mass roaches/stalkers combined with perfect macro, micro, and deciding when to engage. So the AI needs to be limited to play more like a human, and then the AI will either lose to humans, or finally come up with genius, elegant strategies.

Yet, all the facts we have yell the opposite at us. So please stop bringing up 'fairness', because there can be no such thing. As long as the AI doesn't get units with more HP or free resources, or the ability to see through the FoW, it is playing the same game. And when it seems stupid because it doesn't truly understand what is going on in the game, but it is beating all the best human players (and yes, we are not quite there yet at all), maybe then you guys will accept that 'understanding the game' doesn't really matter for winning. (And let me note that Alpha Zero, in chess or Go, doesn't really understand the game either. It just happens to take the correct action. It cannot explain to you why it does what it does, and it requires quite a bit of effort from Deepmind's engineers to figure that out.)


If this is Deepmind's goal with Starcraft 2, they are wasting time and money. If they believed, as you seem to, that beating every player with inhuman macro and micro would be the right way of playing Sc2, I don't know why they would use a neural network for the task.


So because this disappointed your intellectual curiosity, over something that likely isn't even there to begin with, Deepmind is wasting their time and money? In fact they set up an RTS game, until now played only by a bunch of scripts, as a math problem that gets solved by their neural net architecture and training methods, which generalize very well to similar real-world problems. Yeah, that makes sense. In my field of biophysics, Deepmind has a neural network that does better protein structure prediction than any of the existing algorithms. And that specific competition has been running since 1994. Deepmind entered it last year for the first time and immediately won.

Do you know how much money is invested each year in drug development that involves protein folding or protein-protein interactions? You have absolutely no idea what you are talking about.


In Go or Chess, understanding or not the game, Alpha Zero takes the correct action that would require a human mind to think and make a decision and that makes it extremely interesting; an unlimited Alphastar abusing its infinitely superior mechanics would be pointless as it would just execute actions impossible for humans to replicate and even analyze.


And in SC2, Alphastar makes micro decisions superior to all humans and beats most humans, even before they have finalized their version to challenge the top player. And in Chess/Go, Alphazero sees patterns impossible for a human to see.


Forcing Alphastar to play like a human as much as possible is meant to stress out its capability of winning the games via "decision making" or "strategy"(it doesn't matter if it doesn't perceive as such, we would be able to regard the outcome as it were), which is indeed the ambitious and interesting part of the project.


SC2 isn't a game of strategy. It is a game of decision making and execution. Deepmind is only making their AI 'play like a human' to not offend the SC2 community too much. Alphafold doesn't fold proteins 'like a human' either. It solves the problem. And in SC2, that problem is winning the game, not 'coming up with strategies that please Xain0n'. And this is achieved through superior micro, superior macro, superior multitasking, and superior battle-engage decisions, not through hard-countering the enemy's build or trying to trick your opponent into hard-countering something you aren't actually doing.


After reading your last answer, I get that you are interested in knowing if neural networks can reach by themselves the very point where their mechanics become impossible for humans to hold? Is that so?


No. All I care about is seeing how well they are able to develop the strongest-playing AI possible, not an AI that can pass a Turing test through SC2 play. And in the meantime, I get annoyed by people who, for selfish emotional reasons, decide to deliberately misunderstand SC2 (I assume you aren't truly ignorant) and are too lazy to learn the basics of ML and deep neural networks, while still believing their misunderstandings about the nature of Alphastar are worthwhile for others to read.
loft
Profile Joined July 2009
United States344 Posts
July 29 2019 06:20 GMT
#196
On July 29 2019 11:53 Muliphein wrote:


SC2 isn't a game of strategy. It is a game of decision making and execution.


lol, hey, just wanted to chime in here. Muliphein seems to be making the machine learning case for the power behind Deepmind. While the developments being made in ML are incredible, I think you're missing the counter-point.

The people who want mouse trails, Muliphein, seem to be interested in the fairness of Alphastar. Why would they be concerned about this, you might ask? Well, it's probably because the Deepmind team keeps running marketing material stating how the AI is able to beat human players, and how it is running on the ladder vs humans (and they will brag about how well it performed). If you are a player of a game that Deepmind has "beaten", then you probably want it to be on an even playing field. Machine learning aside, it seems disingenuous to advertise Deepmind as this triumphant algorithm when it's essentially brute-forcing wins with inhuman tactics.
terribleplayer1
Profile Joined July 2018
95 Posts
July 29 2019 06:30 GMT
#197
Well, even with how inhuman it is, it's going to lose a lot more once people realize it's an opponent that doesn't scout.
Muliphein
Profile Joined July 2019
49 Posts
Last Edited: 2019-07-29 06:49:10
July 29 2019 06:43 GMT
#198
On July 29 2019 15:20 loft wrote:
Show nested quote +
On July 29 2019 11:53 Muliphein wrote:


SC2 isn't a game of strategy. It is a game of decision making and execution.


lol, hey, just wanted to chime in here. Muliphein seems to be making the machine learning case for the power behind Deepmind. While the developments being made in ML are incredible, I think you're missing the counter-point.

The people who want mouse trails, Muliphein, seem to be interested in the fairness of Alphastar.


There can be no such thing as 'fairness' in a match between a human and a machine. They are different entities. Either you don't have the match because it would be unfair, or you have it and shut up about fairness.


Why would they be concerned about this, you might ask? Well, it's probably because the Deepmind team keeps running marketing material stating how the AI is able to beat human players, and how it is running on the ladder vs humans (and they will brag about how well it performed). If you are a player of a game that Deepmind has "beaten", then you probably want it to be on an even playing field. Machine learning aside, it seems disingenuous to advertise Deepmind as this triumphant algorithm when it's essentially brute-forcing wins with inhuman tactics.


This doesn't really matter, because eventually the AI will be able to beat top players. As of today, Deepmind hasn't yet staged the big match to show their AI beats mankind at SC2. Obviously, it is still a work in progress. Can you please just wait for that? If the AI loses that match and Deepmind still claims their AI won, then you can complain. Or if, a year from today, we haven't heard anything more about Alphastar.

But my suspicion is that even if Deepmind comes out with a stronger version, challenges the top SC2 player, and beats that player convincingly, there will still be people here claiming "Yeah, but if you let a bunch of top players play against Alphastar over and over, eventually they will find a way to wreck it every game." (and they may very well be correct) "... so Alphastar doesn't really understand the game, doesn't come up with strategies, just brute-forces the game, and isn't really intelligent."

And then Deepmind will move on and people in SC2 can grasp on to their delusions and move on as well.
-Archangel-
Profile Joined May 2010
Croatia7457 Posts
Last Edited: 2019-07-29 08:30:45
July 29 2019 08:30 GMT
#199
On July 29 2019 15:43 Muliphein wrote:
Show nested quote +
On July 29 2019 15:20 loft wrote:
On July 29 2019 11:53 Muliphein wrote:


SC2 isn't a game of strategy. It is a game of decision making and execution.


lol, hey, just wanted to chime in here. Muliphein seems to be making the machine learning case for the power behind Deepmind. While the developments being made in ML are incredible, I think you're missing the counter-point.

The people who want mouse trails, Muliphein, seem to be interested in the fairness of Alphastar.


There can be no such thing as 'fairness' in a match between a human and a machine. They are different entities. Either you don't have the match because it would be unfair, or you have it and shut up about fairness.

Show nested quote +

Why would they be concerned about this, you might ask? Well, it's probably because the Deepmind team keeps running marketing material stating how the AI is able to beat human players, and how it is running on the ladder vs humans (and they will brag about how well it performed). If you are a player of a game that Deepmind has "beaten", then you probably want it to be on an even playing field. Machine learning aside, it seems disingenuous to advertise Deepmind as this triumphant algorithm when it's essentially brute-forcing wins with inhuman tactics.


This doesn't really matter, because eventually the AI will be able to beat top players. As of today, Deepmind hasn't yet staged the big match to show their AI beats mankind at SC2. Obviously, it is still a work in progress. Can you please just wait for that? If the AI loses that match and Deepmind still claims their AI won, then you can complain. Or if, a year from today, we haven't heard anything more about Alphastar.

But my suspicion is that even if Deepmind comes out with a stronger version, challenges the top SC2 player, and beats that player convincingly, there will still be people here claiming "Yeah, but if you let a bunch of top players play against Alphastar over and over, eventually they will find a way to wreck it every game." (and they may very well be correct) "... so Alphastar doesn't really understand the game, doesn't come up with strategies, just brute-forces the game, and isn't really intelligent."

And then Deepmind will move on and people in SC2 can grasp on to their delusions and move on as well.

Wasn't the point of this project to get AI that can solve problems? Having inhuman micro is not solving problems.

It is like sending you to fight Superman. Superman will learn nothing from beating you 1,000,000 times, while all you might eventually do is somehow find kryptonite and beat him, without it ever being a fair fight.
Poopi
Profile Blog Joined November 2010
France12795 Posts
July 29 2019 08:57 GMT
#200
On July 29 2019 11:53 Muliphein wrote:
But clearly it is making a lot of mistakes in the micro and battle engage department.

And you saying that 'it has no idea' when it is a neural net and 'learns builds from other players' when it is trained by playing against itself, makes any further debate useless.


Show nested quote +
On July 29 2019 09:41 Xain0n wrote:
On July 29 2019 08:47 Muliphein wrote:
On July 29 2019 08:22 Inrau wrote:
On July 29 2019 03:19 Muliphein wrote:
The AI has exactly the same units as the player. So saying the AI is playing rugby with tanks rather than human players is a false analogy. Any analogy works up to a point, but this one shows exactly why what Alphastar is doing is fair, not why it is unfair.


AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors.
[image loading]

The limitations are nice, locking the actions to a camera, lowering the APM. But Alphastar can still do things at 120APM that would take a human 600 APM.

The AI might as well be playing with three keyboards and mice. Not to mention the clicking one pixel in the top right corner to select a building or a unit it needs.


Yes, the AI is playing the same game, but without inherently human limitations. I have no idea what the point is that you are trying to argue. You think it is unfair for an AI to solve an AI problem if human limitations aren't hard-coded in? Do you also think a chess AI needs to be forced to take a piss break because humans will inevitably have to do this as well under standard time control? Where do you draw the line? Why don't you support the view that for any AI to beat an AI problem, it needs to solve the problem by modeling a human brain solving the problem?

All this comes from the delusion that people believe SC2 is richer and more intellectually pleasing than it actually is. People cannot accept that the ideal play is to build mass roaches/stalkers combined with perfect macro, micro, and deciding when to engage. So the AI needs to be limited to play more like a human, and then the AI will either lose to humans, or finally come up with genius, elegant strategies.

Yet, all the facts we have yell the opposite at us. So please stop bringing up 'fairness', because there can be no such thing. As long as the AI doesn't get units with more HP or free resources, or the ability to see through the FoW, it is playing the same game. And when it seems stupid because it doesn't truly understand what is going on in the game, but it is beating all the best human players (and yes, we are not quite there yet at all), maybe then you guys will accept that 'understanding the game' doesn't really matter for winning. (And let me note that Alpha Zero, in chess or Go, doesn't really understand the game either. It just happens to take the correct action. It cannot explain to you why it does what it does, and it requires quite a bit of effort from Deepmind's engineers to figure that out.)


If this is Deepmind's goal with Starcraft 2, they are wasting time and money. If they believed, as you seem to, that beating every player with inhuman macro and micro would be the right way of playing Sc2, I don't know why they would use a neural network for the task.


So because this disappointed your intellectual curiosity, over something that likely isn't even there to begin with, Deepmind is wasting their time and money? In fact they set up an RTS game, until now played only by a bunch of scripts, as a math problem that gets solved by their neural net architecture and training methods, which generalize very well to similar real-world problems. Yeah, that makes sense. In my field of biophysics, Deepmind has a neural network that does better protein structure prediction than any of the existing algorithms. And that specific competition has been running since 1994. Deepmind entered it last year for the first time and immediately won.

Do you know how much money is invested each year in drug development that involves protein folding or protein-protein interactions? You have absolutely no idea what you are talking about.

Show nested quote +

In Go or Chess, whether it understands the game or not, Alpha Zero takes the correct action, one that would require a human mind to think and decide, and that makes it extremely interesting; an unlimited Alphastar abusing its infinitely superior mechanics would be pointless, as it would just execute actions impossible for humans to replicate or even analyze.


And in SC2, Alphastar makes micro decisions superior to all humans and beats most humans, even before they have finalized their version to challenge the top player. And in Chess/Go, Alphazero sees patterns impossible for a human to see.

Show nested quote +

Forcing Alphastar to play like a human as much as possible is meant to test its capability of winning games via "decision making" or "strategy" (it doesn't matter that it doesn't perceive them as such; we would be able to regard the outcome as if it did), which is indeed the ambitious and interesting part of the project.


SC2 isn't a game of strategy. It is a game of decision making and execution. Deepmind is only making their AI 'play like a human' to not offend the SC2 community too much. Alphafold doesn't fold proteins 'like a human' either. It solves the problem. And in SC2, that problem is winning the game, not 'coming up with strategies that please Xain0n'. And this is achieved through superior micro, superior macro, superior multitasking, and superior battle-engage decisions, not through hard-countering the enemy's build or trying to trick your opponent into hard-countering something you aren't actually doing.

Show nested quote +

After reading your last answer, I take it you are interested in knowing whether neural networks can, by themselves, reach the point where their mechanics become impossible for humans to keep up with. Is that so?


No. All I care about is seeing how well they are able to develop the strongest-playing AI possible, not an AI that can pass a Turing test through SC2 play. And in the meantime, I get annoyed by people who, for selfish emotional reasons, decide to deliberately misunderstand SC2 (I assume you aren't truly ignorant) and are too lazy to learn the basics of ML and deep neural networks, while still believing their misunderstandings about the nature of Alphastar are worthwhile for others to read.

Why are there so many low-post-count accounts acting superior while spilling semi-BS about how AI works in these DeepMind threads? It was the same in the other thread.

I'm pretty sure (idk if it's that way for these ladder agents tho) that AlphaStar used imitation learning at the beginning, so it did use human replays, not only self-play. That was also their guess for why it spams clicks: it picked that habit up from humans.
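For anyone curious, that two-phase recipe (supervised imitation on human replays, then reinforcement through self-play against a league of past versions) can be sketched in a few lines. Everything below is a toy illustration; the class and function names are made up and bear no relation to DeepMind's actual code.

```python
import random


class Policy:
    """Toy stand-in for a neural-net policy (a single scalar weight)."""

    def __init__(self):
        self.weights = 0.0

    def update_supervised(self, state, human_action):
        # Nudge the policy toward the action the human took in the replay.
        self.weights += 0.01 * (human_action - self.weights)

    def update_reinforce(self, reward):
        # Nudge the policy up or down depending on the game result.
        self.weights += 0.01 * reward


def train(replays, self_play_games):
    policy = Policy()

    # Phase 1: imitation learning from human replays.
    for state, human_action in replays:
        policy.update_supervised(state, human_action)

    # Phase 2: self-play against a league of earlier versions of itself.
    league = [policy]
    for _ in range(self_play_games):
        opponent = random.choice(league)
        reward = 1 if random.random() < 0.5 else -1  # fake game outcome
        policy.update_reinforce(reward)
    return policy
```

The point of phase 1 is exactly what's described above: the agent starts out copying human habits (including click spam) before self-play ever shapes it.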
WriterMaru
Xain0n
Profile Joined November 2018
Italy3963 Posts
July 29 2019 10:10 GMT
#201
On July 29 2019 11:53 Muliphein wrote:
But clearly it is making a lot of mistakes in the micro and battle-engage department.

And you saying that 'it has no idea' when it is a neural net and 'learns builds from other players' when it is trained by playing against itself, makes any further debate useless.


Show nested quote +
On July 29 2019 09:41 Xain0n wrote:
On July 29 2019 08:47 Muliphein wrote:
On July 29 2019 08:22 Inrau wrote:
On July 29 2019 03:19 Muliphein wrote:
The AI has exactly the same units as the player. So saying the AI is playing rugby with tanks rather than human players is a false analogy. Any analogy works up to a point, but this one shows exactly why what Alphastar is doing is fair, not why it is unfair.


AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors.
[image loading]

The limitations are nice, locking the actions to a camera, lowering the APM. But Alphastar can still do things at 120APM that would take a human 600 APM.

The AI might as well be playing with three keyboards and mice. Not to mention the clicking one pixel in the top right corner to select a building or a unit it needs.


Yes, the AI is playing the same game, but without inherently human limitations. I have no idea what the point is that you are trying to argue. You think it is unfair for an AI to solve an AI problem if human limitations aren't hard-coded in? Do you also think a chess AI needs to be forced to take a piss break because humans will inevitably have to do this as well under standard time control? Where do you draw the line? Why don't you support the view that for any AI to beat an AI problem, it needs to solve the problem by modeling a human brain solving the problem?

All this comes from the delusion that people believe SC2 is richer and more intellectually pleasing than it actually is. People cannot accept that the ideal play is to build mass roaches/stalkers combined with perfect macro, micro, and deciding when to engage. So the AI needs to be limited to play more like a human, and then the AI will either lose to humans, or finally come up with genius, elegant strategies.

Yet, all the facts we have yell the opposite at us. So please stop bringing up 'fairness', because there can be no such thing. As long as the AI doesn't get units with more HP or free resources, or the ability to see through the FoW, it is playing the same game. And when it seems stupid because it doesn't truly understand what is going on in the game, but it is beating all the best human players (and yes, we are not quite there yet at all), maybe then you guys will accept that 'understanding the game' doesn't really matter for winning. (And let me note that Alpha Zero, in chess or Go, doesn't really understand the game either. It just happens to take the correct action. It cannot explain to you why it does what it does, and it requires quite a bit of effort from Deepmind's engineers to figure that out.)


If this is Deepmind's goal with Starcraft 2, they are wasting time and money. If they believed, as you seem to, that beating every player with inhuman macro and micro would be the right way of playing Sc2, I don't know why they would use a neural network for the task.


So because this disappointed your intellectual curiosity, over something that likely isn't even there to begin with, Deepmind is wasting their time and money? In fact they set up an RTS game, until now played only by a bunch of scripts, as a math problem that gets solved by their neural net architecture and training methods, which generalize very well to similar real-world problems. Yeah, that makes sense. In my field of biophysics, Deepmind has a neural network that does better protein structure prediction than any of the existing algorithms. And that specific competition has been running since 1994. Deepmind entered it last year for the first time and immediately won.

Do you know how much money is invested each year in drug development that involves protein folding or protein-protein interactions? You have absolutely no idea what you are talking about.

Show nested quote +

In Go or Chess, whether it understands the game or not, Alpha Zero takes the correct action, one that would require a human mind to think and decide, and that makes it extremely interesting; an unlimited Alphastar abusing its infinitely superior mechanics would be pointless, as it would just execute actions impossible for humans to replicate or even analyze.


And in SC2, Alphastar makes micro decisions superior to all humans and beats most humans, even before they have finalized their version to challenge the top player. And in Chess/Go, Alphazero sees patterns impossible for a human to see.

Show nested quote +

Forcing Alphastar to play like a human as much as possible is meant to test its capability of winning games via "decision making" or "strategy" (it doesn't matter that it doesn't perceive them as such; we would be able to regard the outcome as if it did), which is indeed the ambitious and interesting part of the project.


SC2 isn't a game of strategy. It is a game of decision making and execution. Deepmind is only making their AI 'play like a human' to not offend the SC2 community too much. Alphafold doesn't fold proteins 'like a human' either. It solves the problem. And in SC2, that problem is winning the game, not 'coming up with strategies that please Xain0n'. And this is achieved through superior micro, superior macro, superior multitasking, and superior battle-engage decisions, not through hard-countering the enemy's build or trying to trick your opponent into hard-countering something you aren't actually doing.

Show nested quote +

After reading your last answer, I take it you are interested in knowing whether neural networks can, by themselves, reach the point where their mechanics become impossible for humans to keep up with. Is that so?


No. All I care about is seeing how well they are able to develop the strongest-playing AI possible, not an AI that can pass a Turing test through SC2 play. And in the meantime, I get annoyed by people who, for selfish emotional reasons, decide to deliberately misunderstand SC2 (I assume you aren't truly ignorant) and are too lazy to learn the basics of ML and deep neural networks, while still believing their misunderstandings about the nature of Alphastar are worthwhile for others to read.


Let's start from the conclusion, then. If Deepmind's goal was yours, why would they apply limitations at all?
Why would they ever step back from the iteration that beat Mana with inhuman map awareness and stalker micro?
Maybe they don't just want to create the strongest possible AI playing sc2? They are doing that "not to offend the sc2 community"? Why would we ever get offended? Machines have been mechanically outperforming men for a long time already.
I didn't call Deepmind to complain about how they should please my intellectual curiosity; they themselves are choosing to make Alphastar resemble a human more with every single step.

You are right, I don't know what Alphafold is doing or how much money is invested in that project; I just don't see why you would choose a game as complex as Sc2 if your goal were just to make a neural network perform a task much faster and much more precisely than humans, with no "decision making" involved.
AlphaGo sees patterns the human mind can't, but we can try to learn from it by studying its moves; if Alphastar uses 40k APM, we can witness such prowess and learn nothing.

So you get annoyed at our lack of understanding regarding Deepmind and Alphastar? Do I have to remind you that Team Liquid is a forum focused on RTS games?
Go somewhere else if you want to discuss the intricacies of neural networks with people understanding them as much as you do.

When it comes to sc2 itself, how can you affirm that sc2 is not a game of strategy? Have you, Muliphein, solved the game? It seems pure conceit to me.
Sc2 surely is a game of strategy when two mechanically limited humans play it, while it probably is as you say when an AI faces a human; but how can you know what the game looks like when two unbound agents are playing it?
TitanEX1
Profile Joined June 2019
14 Posts
July 29 2019 11:53 GMT
#202
Currently Casted Live. Our announcement:

CobaltBlu
Profile Blog Joined August 2009
United States919 Posts
July 29 2019 13:32 GMT
#203
I would like to see them release it on the ladder for a longer period of time with no barcode. I want to see how fragile it is vs novel strategies.
deacon.frost
Profile Joined February 2013
Czech Republic12129 Posts
Last Edited: 2019-07-29 14:23:33
July 29 2019 14:15 GMT
#204
On July 29 2019 22:32 CobaltBlu wrote:
I would like to see them release it on the ladder for a longer period of time with no barcode. I want to see how fragile it is vs novel strategies.

I think they want to test the AI's interaction against humans, not people's interaction against an AI (the latter would result in abusive strategies that wouldn't be played against humans).

If anyone answers YES to "would I play differently had I known I was playing an AI?", then the barcode is valid. Considering some reactions in this thread...

Edit>
At the same time, I wouldn't mind seeing how abusive people would get against verified agents, so they may want to run both setups, as either would be an interesting experiment.
I imagine France should be able to take this unless Lilbow is busy practicing for Starcraft III. | KadaverBB is my fairy ban mother.
ShoCkeyy
Profile Blog Joined July 2008
7815 Posts
Last Edited: 2019-07-29 14:24:49
July 29 2019 14:24 GMT
#205
On July 29 2019 23:15 deacon.frost wrote:
Show nested quote +
On July 29 2019 22:32 CobaltBlu wrote:
I would like to see them release it on the ladder for a longer period of time with no barcode. I want to see how fragile it is vs novel strategies.

I think they want to test the AI's interaction against humans, not people's interaction against an AI (the latter would result in abusive strategies that wouldn't be played against humans).

If anyone answers YES to "would I play differently had I known I was playing an AI?", then the barcode is valid. Considering some reactions in this thread...

Edit>
At the same time, I wouldn't mind seeing how abusive people would get against verified agents, so they may want to run both setups, as either would be an interesting experiment.


Your edit was my initial post, thanks for that. I was going to say, it'll be cool to see both variations.
Life?
Haukinger
Profile Joined June 2012
Germany131 Posts
July 29 2019 14:37 GMT
#206
On July 29 2019 08:22 Inrau wrote:
AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors.
[image loading]

The limitations are nice, locking the actions to a camera, lowering the APM. But Alphastar can still do things at 120APM that would take a human 600 APM.


These are limitations of the game client, not of the game. The game is just the rules, e.g. when a marine is issued an attack order on a target within range, it will instantly do x damage; when a stalker is issued a blink order, it will instantly blink.

Remove the "instantly" from the rules, i.e. introduce a universal cooldown and lag, and AI and human are on equal ground. Not to mention you'd also remove exploits like stutter-stepping or so-called "warp prism micro".
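A toy version of such a throttled interface might look like the sketch below. The `game.issue(unit_id, order)` interface, the tick numbers, and all names here are assumptions for illustration, not anything from the actual SC2 client or API.

```python
import collections


class ThrottledInterface:
    """Enforce a universal per-unit cooldown and a fixed input lag on
    every order, regardless of who (human or AI) issues it.

    `game` is assumed to be any object with an issue(unit_id, order)
    method; the default tick counts are arbitrary illustration values.
    """

    def __init__(self, game, cooldown_ticks=8, lag_ticks=4):
        self.game = game
        self.cooldown = cooldown_ticks
        self.lag = lag_ticks
        self.ready_at = collections.defaultdict(int)  # unit -> next tick it may act
        self.pending = []  # (deliver_tick, unit_id, order)
        self.tick = 0

    def issue(self, unit_id, order):
        """Queue an order; reject it if the unit is still on cooldown."""
        if self.tick < self.ready_at[unit_id]:
            return False  # universal cooldown has not elapsed yet
        self.ready_at[unit_id] = self.tick + self.lag + self.cooldown
        self.pending.append((self.tick + self.lag, unit_id, order))
        return True

    def step(self):
        """Advance one game tick, delivering orders whose lag expired."""
        self.tick += 1
        due = [p for p in self.pending if p[0] <= self.tick]
        self.pending = [p for p in self.pending if p[0] > self.tick]
        for _, unit_id, order in due:
            self.game.issue(unit_id, order)
```

Because the throttle sits between the player (human or agent) and the rules, stutter-step and blink spam hit the same cooldown wall no matter how many orders per second the player can physically emit.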
Acrofales
Profile Joined August 2010
Spain17979 Posts
July 29 2019 15:33 GMT
#207
On July 29 2019 23:37 Haukinger wrote:
Show nested quote +
On July 29 2019 08:22 Inrau wrote:
AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors.
[image loading]

The limitations are nice, locking the actions to a camera, lowering the APM. But Alphastar can still do things at 120APM that would take a human 600 APM.


These are limitations of the game client, not of the game itself. The game is just the rules, e.g. when issuing an attack order to a marine against a target within range, it will instantly do x damage. Or when issuing a blink order to a stalker, it will instantly blink.

Remove the "instantly" from the rules, i.e. introduce a universal cooldown and lag, and AI and human are on equal ground. Not to mention you'd also remove exploits like stutter-stepping or so-called "warp prism micro".

It isn't really a limitation of the game client at all. It's an issue with the human ability to perform a maximum number of actions per minute. The game client's ability to process actions per minute isn't the bottleneck there. It's a human control issue. It is simply easier to select the whole army and then drag the tanks elsewhere than to select each part of the army (or even each unit individually) and give them different commands. Because it is so much easier to do, that makes it *more* optimal for a human to do the theoretically less optimal army micro (because the tanks spend a few milliseconds moving in the wrong direction). Meanwhile, the AI doesn't have this issue, so it directs each part of the army immediately to its position. This ties in a bit to my earlier response to Muliphein, so I will continue that conversation here as well.

On July 29 2019 03:19 Muliphein wrote:
On July 28 2019 18:50 Acrofales wrote:
I disagree about "internal" mechanics for Go or Chess. That is simply part of the "intelligence" needed for playing those games well.

Moving your hands 5000 times a minute with unerring precision isn't a part of the "intelligence" needed for playing starcraft, it's a limitation of the human body, moreso than the human mind. Thus limiting the apm makes it more interesting as a challenge in creating "artificial intelligence", rather than just an artificial sc2 champion. In Go and Chess, the benchmark for intelligently playing those games was simply to beat the best human opponents. In SC2 the benchmark for intelligently playing the game is to beat the best human opponents with a similarly restrictive "interface".

You wouldn't say an AI had solved rugby if it was built like a tank and had an internal compartment to hide the ball, so all it has to do was obtain the ball a then ride invulnerable to the back line. It'd be invincible, but not in any interesting way.


We are now trying to make a machine that is intelligent. In a philosophical sense, that is no different from making a machine that runs fast on wheels or that generates a lot of force. APM isn't limited by the human body. It is limited by the human mind. People cannot think fast enough and cannot think in parallel at all. Research shows that humans basically do not multitask.

Making a machine that is able to come up with 2000 actions a minute IS exactly like building a car with 2000 horsepower. Humans only have about 0.1 horsepower. So the machines win there with a way bigger margin. That this is not the type of intelligence where humans traditionally beat out machines is beside the point.

The AI has exactly the same units as the player has. So saying the AI is playing rugby with tanks rather than against human players is a false analogy. Any analogy works only up to a point, but this one shows exactly why what AlphaStar is doing is fair, not why it is unfair.


Sure, the mind *might* be the bottleneck in hand-eye coordination, but I doubt it. I suspect that eAPM would be a lot higher if we had a perfect brain-starcraft interface. It only takes watching a few games by progamers to know that hand-eye coordination is a large part of the mechanics needed to play SC2, and a misclick (not a misthought, just a mistake in clicking on the wrong pixel) can cost you the game. However, as an AI researcher myself, I am quite confident when I say that making a perfect micro bot is not the part that the AlphaStar researchers are interested in. You don't throw tons of supercomputing resources to make a perfect micro bot. They aren't interested in "winning" at starcraft per se. It's just that winning at starcraft is a good benchmark for how good they are at solving a specific type of problem. They are interested in the problems of planning and adapting a strategy hampered by "real world" limitations.




Soon after, it was expected that machines would very soon be "more intelligent" than humans. That prediction failed multiple times,


I don't think this is an accurate account of the consensus, if there was any, at that time. Decades ago, it was actually a minority that correctly recognized that the brain is a machine like any other, and that in principle a machine could be built that does the same thing as a brain, only better. Respectable scientists for a long time placed the brain outside of any biological context. General principles of biology were not applied to it. Only with the rise of cognitive science did this change.

AI has gone through a number of "winters". The first of these was in the late 60s and 70s when it was clear that machines were not soon going to be "more intelligent" than humans despite early breakthroughs such as winning at backgammon or robots being able to correctly recognize simple objects and colors.

And you don't need to bring Cartesian duality in here, but if you do, there have been philosophers since the early 20th century who have questioned that duality, and the more we have learned about the brain, biology and particularly *computation*, the stronger the criticisms became. In particular, early AI researchers in the 60s didn't give two hoots about such arguments, and the Turing test as an evaluation tool for AI should make that clear. Note that the mind-brain duality argument is still not completely settled, although imho anybody arguing in favour of dualism is not understanding the concept of emergence.

The second AI winter was in the 90s and 00s, when it was clear that neural networks and expert systems *also* had serious limitations, and despite early successes in visual object recognition and automated logical reasoning, there were still obvious gaps in what AIs could do. Deep learning has made AI, once again, reemerge from a winter. A cautious man would be hesitant to declare the problem will now be solved. In particular, things like abstract moral decision making and introspection are things that we don't really know how to do right now, and while deep learning looks a lot like a miracle, it is the same old neural networks we used in the 80s, but with more computing power and better optimization algorithms. Of course, I could also be describing a human brain...


But you are right that for the last few decades it was just an issue of actually building a machine, because it proved to be quite challenging. Yes, it is true in some sense that raw calculation alone wouldn't be enough. But it is very easy to calculate the size of Go's phase space and then see that raw calculation was never going to solve it. And we have known for a long time that humans use the pattern recognition properties of a neural network to play these games so well.

In fact, the opposite is true: people thought chess and Go would be 'safe' from computers for a decade or two longer than they actually were.

...as building intelligence was a harder task than we thought. We can build race cars that easily "outsprint" humans, and a tank that plays rugby also seems like a simple engineering task.


This is beside the point, but I beg to differ. Doing complex tasks is quite challenging for robots. It would be extremely challenging to build a robot that a human top rugby player could control using some VR interface (like in Avatar) that would allow for a similar level of play as the actual rugby player playing himself. We are decades off from that. But you were actually trying to make another point. So be careful with your language.

Sure, I don't really know how hard it is to build a robot that could play rugby. I'd argue that all you need is a remote control car with enough armored plating and horsepower, and a "ball catching, and holding mechanism". But it's beside the point. If you don't like the rugby tank, just stick to the racecar for "running" a sprint. It is an uninteresting problem. It becomes interesting when we add restrictions such as "the 100m dash must be run on 2 legs", because bipedal robotic running is something we still haven't solved adequately (although we are getting better at it).



But until very recently, Go seemed unsolvable, let alone games with uncertainty and incomplete information. Breakthroughs in AI research put this into reach now, and the interesting part is obviously not in beating a human at doing lots of clicks very fast. The challenge is in dealing at least as well as the human with uncertain and incomplete information without relying on an ability to click faster and more precisely.


So which one is it? Did we take way longer to solve these games? Or did we do it earlier than expected?

Both? You know I was talking about 60 years of history, with periods of unbridled optimism and AI winters of doom and gloom?



At least, that is the challenge AlphaStar is interested in. No doubt perfect micro is a different challenge with its own interest.


Perfect micro is an AI challenge, not a 'how fast can I issue commands through an embedded systems interface' challenge. That it is not the AI challenge most people are interested in, for the simple reason that it teaches human players nothing new about the game, is beside the point.

It may be the case that in SC2, unlike in chess and go, an AI can play way way above the best humans without doing anything that humans hadn't realized or discovered themselves.

This all comes back to one important point. RTS games are games of execution and small-scale decision making (tactics). They are not games of strategy. And their complexity is quite basic. There aren't layers upon layers that reshape how the game is played as you ascend the skill curve. Yes, the move space is huge and sparse, but in essence it is a straightforward game. Build an army stronger than your opponent's, then force a fight and win the game. That's the entire game in a nutshell.


See above, I disagree. Mechanics are part of it, and the "least interesting" part from an AI perspective, but SC2 is definitely a game of strategy if you add limitations to the mechanics. The "build an army stronger than your opponent's and go and kill him with it" is a rather simplistic way of looking at it. I have no doubt that a completely perfectly executed blink stalker warp prism immortal rush "solves" the game if you allow 10000 APM (or so). And then you can definitely say that strategy is irrelevant, as the only thing to figure out is optimal movements on a map, which is a bit of a trivial problem. But if you limit the possible actions, you find that overall strategies become important, and it is not at all obvious what army is the strongest army and what is the best way to get there without just dying first. E.g. 3rd CC before Rax is sometimes possible, but straight up build order countered by plenty of early game aggression builds. But being a little bit more greedy than your opponent is generally a good strategy to get an advantage in the long term, and timing attacks exist to punish opponents exactly at moments when you expect them to be greedy and your aggression can punish them. 10k APM blink stalker micro would indeed thwart all these puny attacks, but it is irrelevant to SC as we understand the game, where strategy plays a real role. And it is exactly that part of the game that AlphaStar is designed to "solve", just as AlphaGo "solved" Go (a game where hand-eye coordination is mostly irrelevant).
alexanderzero
Profile Joined June 2008
United States659 Posts
Last Edited: 2019-07-29 16:20:10
July 29 2019 16:17 GMT
#208
Regarding AlphaStar's apparent lack of strategy, I really do question whether it's a problem with the scale/computing power of the neural network, or a design flaw. People say that AlphaStar doesn't have the ability to react to things, but that's not exactly true. The decisions that it makes during battles are direct responses to the things done by the opponent, like flying its phoenixes around and picking off units that venture too far from the group, and then engaging fully once it has a large enough army advantage.

I know that people make this distinction between tactics and strategy, but this is an artificial boundary that exists in the minds of humans. There is nothing fundamental about the theory of the game that justifies this division. The fact that it is able to think tactically is evidence that there are aspects of the game that it does understand and has the capability to reason about. Presumably, if its capacity to reason were increased to include more variables, it would start considering things like scouting and tech switches more often. That, and more training time to allow it to do more experiments and map out more of the game.
I am a tournament organizer.
skdsk
Profile Joined February 2019
138 Posts
July 29 2019 16:51 GMT
#209
http://vod.afreecatv.com/PLAYER/STATION/46401370 vod of the alphastar cast event...
Muliphein
Profile Joined July 2019
49 Posts
Last Edited: 2019-07-29 19:47:26
July 29 2019 19:35 GMT
#210
On July 29 2019 17:30 -Archangel- wrote:
Wasn't the point of this project to get AI that can solve problems? Having inhuman micro is not solving problems.


You have got to be fucking kidding!


It is like sending you to fight Superman. Superman will learn nothing from beating you 1,000,000 times, while all you might eventually do is somehow find kryptonite and beat him without it ever being a fair fight.


It is not about learning about SC2. It is about learning how to set up deep learning problems. And stop talking about fairness.
And the thing you hope AI will tell you about SC2 is very likely not there. People keep talking about the AI discovering new builds that humans can copy to become better. It is not going to happen because it is not relevant to high level AI play. An AI does not have the weakness that it wants to be 'clever'. And a deep learning AI will just relentlessly play the way it thinks is optimal.

There is no question that it is possible to find a hole in a deep learning AI. The AI is only as good as its training. Take the simple case of an 'is it a cat or a dog' image recognition AI. If you provide an image of either a cat or a dog in a very unusual pose, the AI might fail terribly, even though to us humans it is clearly a cat or a dog. With any deep learning AI you can find input data where the AI will get it horribly wrong. But the point is that this is a tiny subset of the real input data, while for the vast majority of the input it does very well (and either outperforms humans overall or is more cost-efficient economically even if humans are better). This is why, when you watch the AlphaGo documentary, they were afraid of AlphaGo going 'delusional'.
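The out-of-distribution failure mode described above can be sketched in a few lines of numpy. This is a toy illustration, not anything from AlphaStar: a 1-D stand-in for "images", with made-up cluster positions, trained with a tiny hand-rolled logistic regression. The model does fine on inputs like its training data, yet still produces a confident answer for an extreme input it has never seen anything like.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: "cats" cluster near x=1, "dogs" near x=3 -- a narrow slice
# of the input space, standing in for typical cat/dog photos.
X = np.concatenate([rng.normal(1.0, 0.3, 200), rng.normal(3.0, 0.3, 200)])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 0 = cat, 1 = dog

# Fit a tiny logistic regression by gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w -= 0.1 * np.mean((p - y) * X)
    b -= 0.1 * np.mean(p - y)

def predict(x):
    return "dog" if w * x + b > 0 else "cat"

print(predict(1.0))    # typical in-distribution input: classified correctly
print(predict(3.0))    # likewise
print(predict(-50.0))  # an extreme, never-seen input (the "unusual pose"):
                       # the model still returns an answer, with the sigmoid
                       # saturated at near-total confidence
```

The point of the sketch: nothing in the trained model flags the weird input as weird; it just extrapolates, which is the "delusional" behavior the post describes.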

A deep learning AI will not engage in mindgames and it will not cut corners or take risks on build orders in interesting ways. Either it is fundamentally incapable of doing so, because it cares only about winning and not about being clever, because it isn't concluding anything or doing reasoning or deduction, because it has been trained playing other AIs, because it is a generalized algorithm that does the same thing for a given game state, and because it isn't emotional or insecure. Or it won't because it is fundamentally suboptimal to play that way. And this makes sense, because players like Flash also don't try to play a 'strategic' game. The AI just presents its best play, and if that is not good enough it will stubbornly lose without adapting. Humans have insecurities and feel the need to outsmart their opponent. They want to do something to get an edge. They fear their opponent tricking them. They fear that playing straight up they will lose. They feel that in this match they need to do something that will guarantee them the win. A human will not be satisfied with a 51% win chance. They will try to come up with something to do better. The AI doesn't care. Hence, the AI has no need for marginal plays that may result in huge rewards. It will simply not explore that part of the phase space, even if there are pockets there that are really good, because overall it is a losing part of the phase space. The AI will converge on a smooth and consistent part of the phase space where it is easy to move into better versions of itself as the network is being trained.
Sadistx
Profile Blog Joined February 2009
Zimbabwe5568 Posts
July 31 2019 06:10 GMT
#211
If there's anything I learned from deep AI projects (including the Texas Hold'em NL 6-max poker AI released recently), it's that AI optimizes for unexploitability, which in the context of SC2 means the least risky strategies. I believe the term used is 'regret minimization'. It seems logical.

That it achieves a win rate of above 50% while doing this is just a side effect of what it optimizes for.

I'm honestly not particularly educated in this field, though, so correct me if what I typed is nonsense!
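The "regret minimization" idea can be made concrete with a toy sketch. This is not the poker bots' algorithm (those use counterfactual regret minimization over a game tree); it is the simpler regret-matching rule of Hart and Mas-Colell on rock-paper-scissors: each round a player mixes actions in proportion to accumulated positive regret, and in a two-player zero-sum game the *time-averaged* strategy converges to the unexploitable Nash mix, here (1/3, 1/3, 1/3).

```python
import numpy as np

# Regret matching on rock-paper-scissors.
# PAYOFF[a, b] = payoff to the player choosing action a against action b.
PAYOFF = np.array([[ 0, -1,  1],   # rock     vs r, p, s
                   [ 1,  0, -1],   # paper
                   [-1,  1,  0]])  # scissors

def strategy(regret):
    """Mix actions in proportion to positive cumulative regret."""
    pos = np.maximum(regret, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.full(3, 1 / 3)

regret = [np.array([1.0, 0.0, 0.0]), np.zeros(3)]  # asymmetric start
strat_sum = [np.zeros(3), np.zeros(3)]

for _ in range(20000):
    s = [strategy(regret[0]), strategy(regret[1])]
    for p in range(2):
        action_payoffs = PAYOFF @ s[1 - p]  # each action vs opponent's mix
        achieved = s[p] @ action_payoffs    # payoff of the current mix
        # Regret = what each action would have earned minus what we earned.
        regret[p] += action_payoffs - achieved
        strat_sum[p] += s[p]

avg = strat_sum[0] / strat_sum[0].sum()
print(np.round(avg, 3))  # approaches the Nash mix [0.333, 0.333, 0.333]
```

The instantaneous strategy cycles endlessly; it is only the average that settles on the least-exploitable play, which matches the post's point that the win rate is a side effect of what is being optimized.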
Acrofales
Profile Joined August 2010
Spain17979 Posts
July 31 2019 08:13 GMT
#212
On July 31 2019 15:10 Sadistx wrote:
If there's anything I learned from deep AI projects (including the Texas Hold'em NL 6-max poker AI released recently), it's that AI optimizes for unexploitability, which in the context of SC2 means the least risky strategies. I believe the term used is 'regret minimization'. It seems logical.

That it achieves a win rate of above 50% while doing this is just a side effect of what it optimizes for.

I'm honestly not particularly educated in this field, though, so correct me if what I typed is nonsense!

Actually, it maximizes its reward function. You can definitely do regret minimization by building that into the reward function (or the optimization algorithm), but there's no reason to assume that was applied. In a game with almost rock-paper-scissors-like strategies, and the bots trained by adversarial games, I'm not even sure what to look for to distinguish a bot with regret minimization from one without.
Equalizer
Profile Joined April 2010
Canada115 Posts
July 31 2019 16:36 GMT
#213
At least according to Deepmind's blog post (https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/) they trained using a mixture over agent strategies using the game theory concept of the Nash equilibrium.

The basic point is that even though it may play a strategy that has a hard counter, it should randomly mix in other strategies that do well against this counter some of the time. I suppose this makes the most sense for openings, but after that perhaps not so much.

What is odd is that the games identified as almost certainly being against AlphaStar seem to show very little randomness, so they may have just chosen the agent with the highest win rate for real-world testing.
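The Nash-mixture idea can be illustrated on a made-up example. The payoff numbers below are hypothetical, not DeepMind's league data: three agents whose strategies beat each other cyclically (A edges out B, B beats C, C beats A). For a symmetric zero-sum game with a fully mixed equilibrium, the Nash mixture x makes every pure response score the game value 0, i.e. M x = 0, so it can be read off the null space of the antisymmetric payoff matrix (in general one would solve a small linear program instead).

```python
import numpy as np

# Hypothetical payoff matrix among three league agents:
# M[i, j] = expected score of agent i's strategy against agent j's.
M = np.array([[ 0.0,  0.2, -0.3],
              [-0.2,  0.0,  0.4],
              [ 0.3, -0.4,  0.0]])

# The null vector of M (the right-singular vector for the zero singular
# value) gives the equilibrium mixture, up to sign and normalization.
_, _, vt = np.linalg.svd(M)
x = np.abs(vt[-1])
x /= x.sum()
print(np.round(x, 3))  # mixture over agents A, B, C
```

Sampling the agent for each game from this mixture is exactly the randomness the post expects; picking the single highest-win-rate agent instead would look deterministic, as observed.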
The person who says it cannot be done, should not interrupt the person doing it.
DimmuKlok
Profile Joined June 2010
United States225 Posts
July 31 2019 16:54 GMT
#214
How does AlphaStar deal with cloaked units? Cloaked units are technically visible but rely on the human element to not be detected.
Acrofales
Profile Joined August 2010
Spain17979 Posts
August 01 2019 09:49 GMT
#215
On August 01 2019 01:54 DimmuKlok wrote:
How does AlphaStar deal with cloaked units? Cloaked units are technically visible but rely on the human element to not be detected.

That depends on the API, but as far as I know it is deterministic, so if the AI is looking at the right part of the map, it will "see" the cloaked units. Whether it reacts is then up to AlphaStar. That, in turn, depends heavily on whether this situation occurred often enough, with enough salience, during training to learn a counter.

If you recall the showmatches, it reacted instantly and decisively when DTs appeared, but that was with full map vision. With only a "screen" sized area visible at any time, it may not have trained enough with that. Or maybe it did and reacts well?