AlphaStar released: Deepmind Research on Ladder - Page 3
Zerg.Zilla
Hungary 5029 Posts
On July 11 2019 03:52 sugarmuffinpuff wrote: I only play opponents whose mothers I can insult when I lose.
lol
Mountain_Lee
87 Posts
lol
Harris1st
Germany 6691 Posts
On July 11 2019 10:34 Loccstana wrote: Cool, I hope Deepmind will release some vods where AlphaStar defeats Serral.
Never gonna happen. Serral is the T1000
ZenithM
France 15952 Posts
It's great publicity for Google's R&D (I know Deepmind is an Alphabet subsidiary, but let's not kid ourselves there).
seemsgood
5527 Posts
but da A.I is arnold schwarzenegger
Deleted User 3420
24492 Posts
So they are going to let google research on the ladder but all other AI creators are forbidden from playing on the ladder? Well, I guess that's the advantage of being google and having the best AI.
Acrofales
Spain 17832 Posts
On July 11 2019 21:44 travis wrote: So they are going to let google research on the ladder but all other AI creators are forbidden from playing on the ladder? Well, I guess that's the advantage of being google and having the best AI.
Nobody at Google or at Actiblizzard claimed they were trying to make a fair playing field for creating SC2 AI, so I don't really know why you're surprised. At the moment, it's great PR for both those companies. And yes, I'm jealous too.
sudete
Singapore 3053 Posts
Assuming the bot gets high enough on the ladder, it should be fairly obvious to progamers / hardcore players even if the agents use a phoney name | ||
Shuffleblade
Sweden 1903 Posts
Because the above factors were not restricted, or not restricted enough, AlphaStar managed to defeat TLO and MaNa, but the reason it won was sheer micro and the fact that it had access to vision no human could have. If they have made AlphaStar more fair and actually want to try to create an AI that can handle the starcraft game of strategic decision-making based on limited information, there is no way AlphaStar can already be GM level. I do think they can reach masters, but I think most agents will be around diamond level. Just pure guesswork from my side, but we will see.
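To make the APM-restriction point concrete, here's a minimal sketch of how a sliding-window action cap can be enforced; the cap and window length below are illustrative numbers, not DeepMind's actual limits.

```python
from collections import deque

class ApmLimiter:
    """Allow at most `max_actions` actions in any sliding `window_s`-second window.
    The default cap and window are illustrative, not DeepMind's actual limits."""

    def __init__(self, max_actions=22, window_s=5.0):
        self.max_actions = max_actions
        self.window_s = window_s
        self.times = deque()  # timestamps (seconds) of recently allowed actions

    def try_act(self, now):
        """Return True if an action at game time `now` fits in the budget."""
        # Forget actions that have slid out of the window.
        while self.times and now - self.times[0] >= self.window_s:
            self.times.popleft()
        if len(self.times) < self.max_actions:
            self.times.append(now)
            return True
        return False  # over budget: the agent has to idle (no-op) this step
```

Whatever budget you pick, burst micro beyond the window cap simply gets rejected, which is the kind of constraint that limits how much "sheer micro" an agent can rely on.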
NinjaNight
428 Posts
I'm just trying to figure out if it's worth queuing to play AlphaStar or will it be too rare to get a match against it?
jalstar
United States 8198 Posts
Also, bnet IDs are readable from memory when you're in-game, but this may be against TOS. | ||
alexanderzero
United States 659 Posts
On July 12 2019 00:46 NinjaNight wrote: I'm just trying to figure out if it's worth queuing to play AlphaStar or will it be too rare to get a match against it?
If you're not a high masters player already then it's unlikely you will ever get a match. The people who are ranked highly enough are already playing on the ladder for several hours a day.
NinjaNight
428 Posts
On July 12 2019 01:03 alexanderzero wrote: If you're not a high masters player already then it's unlikely you will ever get a match. The people who are ranked highly enough are already playing on the ladder for several hours a day.
I'm not sure about that. It's possible (most likely??) that it's only around diamond level or something now, thanks to the further APM and camera restrictions. I doubt it's going to be GM yet. Also, a DeepMind guy in the general SC2AI Discord recently said "don't expect too much from us when it reaches ladder", which suggests it doesn't have grandmaster ability.
So the first question is: if AlphaStar is in diamond and you're in diamond, are you likely to run into it after, say, 8 hours of laddering, or are there too many other people near that level queuing, so you're still unlikely to face it? The second question is: are the different versions of AlphaStar going to be dispersed among different leagues like gold, plat, diamond, and masters? We could start there.
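Purely to frame that first question, here's a back-of-the-envelope sketch; every number in it (agents online, players in the bracket, games per session) is a made-up assumption, since DeepMind hasn't published any of this.

```python
# Rough estimate of how often a diamond player might face an AlphaStar agent.
# Every number here is a guess for illustration; none of it comes from DeepMind.

agents_online = 10          # assumed AlphaStar agents queuing at any given time
humans_in_bracket = 5000    # assumed human players near the same MMR who are online
games_played = 20           # ladder games in one session (e.g. ~8 hours at ~25 min/game)

# If matchmaking pairs you uniformly with anyone near your MMR,
# the chance any single game is against an agent is roughly:
p_per_game = agents_online / (agents_online + humans_in_bracket)

# Chance of at least one AlphaStar game across the session:
p_at_least_once = 1 - (1 - p_per_game) ** games_played

print(f"Per-game chance: {p_per_game:.3%}")
print(f"Chance over {games_played} games: {p_at_least_once:.2%}")
```

With those made-up numbers the chance of meeting an agent in a 20-game session comes out under 4%, so the answer really hinges on how many agents are live and in which bracket.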
UnLarva
458 Posts
I assume they won't run AlphaStar agents on the ladder with computational resources similar to what they used while the project was under development. Are these agents installed/integrated into Blizzard's servers, or do they still operate on Google's supercomputers, requiring an extra internet connection between the agents' home server and Blizzard's ladder server, or are both in the same geographic location?
I also think that the learning curve on the ladder will be several orders of magnitude slower, as games are actually played at humanly possible game speed. A million agent-vs-agent games using TPU supercomputers take a lot less time than a million human-vs-agent games, and don't require any kind of visually displayed interface like actual SC2.
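To put rough numbers on that speed difference, here's a small sketch; the game length, speed-up factor, and parallelism figures are assumptions made up for the arithmetic, not anything DeepMind has stated.

```python
# Illustrative comparison of self-play throughput vs real-time ladder play.
# Every figure here is an assumption made up for the arithmetic.

avg_game_minutes = 15            # assumed average game length in real time
speedup = 20                     # assumed speed-up when the game runs headless, faster than real time
parallel_selfplay_games = 1000   # assumed games run in parallel during training
parallel_ladder_games = 10       # assumed concurrent ladder games Blizzard might allow
target_games = 1_000_000

def wall_clock_days(games, minutes_per_game, parallel, speed_factor=1):
    """Wall-clock days to finish `games` games at the given parallelism and speed."""
    total_minutes = games * minutes_per_game / (parallel * speed_factor)
    return total_minutes / (60 * 24)

print(f"Self-play: {wall_clock_days(target_games, avg_game_minutes, parallel_selfplay_games, speedup):.1f} days")
print(f"Ladder:    {wall_clock_days(target_games, avg_game_minutes, parallel_ladder_games):.1f} days")
```

Under those assumptions the same million games go from well under a day of self-play to roughly three years of real-time ladder play, which is the "several orders of magnitude slower" point.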
Acrofales
Spain 17832 Posts
On July 12 2019 01:54 UnLarva wrote: I assume they won't run AlphaStar agents on the ladder with computational resources similar to what they used while the project was under development. Are these agents installed/integrated into Blizzard's servers, or do they still operate on Google's supercomputers, requiring an extra internet connection between the agents' home server and Blizzard's ladder server, or are both in the same geographic location? I also think that the learning curve on the ladder will be several orders of magnitude slower, as games are actually played at humanly possible game speed. A million agent-vs-agent games using TPU supercomputers take a lot less time than a million human-vs-agent games, and don't require any kind of visually displayed interface like actual SC2.
Depends on how many players there are on the ladder and what Blizzard lets them do. I don't know how many games they could run in parallel to train their networks, but if they dedicate the same resources to ladder games they can play orders of magnitude more in parallel, since each game runs far slower and thus needs far fewer resources to still reach the maximum APM they set. I'm going to assume Blizzard doesn't let them run millions of instantiations of a bot all at once, though, so training should indeed slow down.
That said, we don't even know what they are doing. This might be validation. It might just be collecting training data on human vs AI play which they can use to train further, or it might just be for PR. Or they are testing some new online learning method that can learn from what is happening in the game (and thus could, in theory, eventually learn to avoid F2ing continuously to chase warp prisms).
E: actually we do know what they're doing, and they aren't training:
Q. How many variants of AlphaStar will play?
A. DeepMind will be benchmarking the performance of a number of experimental versions of AlphaStar to enable DeepMind to gather a broad set of results during the testing period.
Q. Will AlphaStar improve as it plays on the ladder? Will my games be used to help improve its strategy?
A. AlphaStar will not be learning from the games it plays on the ladder, as DeepMind is not using these matches as part of AlphaStar's training. To date, AlphaStar has been trained from human replays and self-play, not from matches against human players.
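On the interface question in the quote above: the public counterpart of what AlphaStar plays through is DeepMind's open-source pysc2 environment. Below is a minimal local sketch of a built-in random agent stepping a game; it assumes the step_mul and realtime options of recent pysc2 releases and only illustrates where game pacing is controlled, not DeepMind's actual ladder integration, which isn't public. Running it needs a local StarCraft II install and the standard maps.

```python
# Minimal pysc2 loop: a built-in random agent stepping a local SC2 game.
# Local illustration only; requires a StarCraft II install and the standard maps.
from absl import app
from pysc2.agents import random_agent
from pysc2.env import sc2_env
from pysc2.lib import features

def main(unused_argv):
    agent = random_agent.RandomAgent()
    with sc2_env.SC2Env(
        map_name="Simple64",                      # arbitrary choice of map
        players=[sc2_env.Agent(sc2_env.Race.zerg),
                 sc2_env.Bot(sc2_env.Race.terran, sc2_env.Difficulty.easy)],
        agent_interface_format=features.AgentInterfaceFormat(
            feature_dimensions=features.Dimensions(screen=84, minimap=64)),
        step_mul=8,       # agent acts every 8 game steps; bigger = coarser, faster games
        realtime=False,   # False lets the sim run as fast as it can; True locks it to human speed
    ) as env:
        agent.setup(env.observation_spec()[0], env.action_spec()[0])
        timesteps = env.reset()
        agent.reset()
        while not timesteps[0].last():
            actions = [agent.step(timesteps[0])]
            timesteps = env.step(actions)

if __name__ == "__main__":
    app.run(main)
```

The relevant bit for the speed discussion is the last two constructor arguments: self-play can crank step_mul and ignore real time, while a ladder game against a human is necessarily locked to real-time pacing.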