AlphaStar released: Deepmind Research on Ladder - Page 11

Forum Index > SC2 General
214 Comments
Xain0n
Profile Joined November 2018
Italy3963 Posts
July 29 2019 10:10 GMT
#201
On July 29 2019 11:53 Muliphein wrote:
But clearly it is making a lot of mistakes in the micro and battle engage department.

And you saying that 'it has no idea' when it is a neural net and 'learns builds from other players' when it is trained by playing against itself, makes any further debate useless.


On July 29 2019 09:41 Xain0n wrote:
On July 29 2019 08:47 Muliphein wrote:
On July 29 2019 08:22 Inrau wrote:
On July 29 2019 03:19 Muliphein wrote:
The AI has exactly the same units as the player has. So saying the AI is playing rugby with tanks rather than human players is a false analogy. The analogy works, any analogy works up to a point, but it shows exactly why what AlphaGo is doing is fair. Not why it is unfair.


AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors.
[image loading]

The limitations are nice, locking the actions to a camera, lowering the APM. But Alphastar can still do things at 120APM that would take a human 600 APM.

The AI might as well be playing with three keyboards and mice. Not to mention the clicking one pixel in the top right corner to select a building or a unit it needs.


Yes, the AI is playing the same game, but without inherently human limitations. I have no idea what the point is that you are trying to argue. You think it is unfair for an AI to solve an AI problem if human limitations aren't hard-coded in? Do you also think a chess AI needs to be forced to take a piss break because humans will inevitably have to do this as well under standard time control? Where do you draw the line? Why don't you support the view that for any AI to beat an AI problem, it needs to solve the problem by modeling a human brain solving the problem?

All this comes from the delusion that people believe SC2 is richer and more intellectually pleasing than it actually is. People cannot accept that the ideal play is to build mass roaches/stalkers, combined with perfect macro, perfect micro, and perfect decisions about when to engage. So the AI needs to be limited to play more like a human, and then the AI will either lose to humans, or finally come up with genius elegant strategies.

Yet, all the facts we have yell at us the opposite. So please stop bringing up 'fairness', because there cannot be such a thing. As long as the AI doesn't get units with more hp, or free resources, or the ability to see through the FoW, it is not playing the wrong game. And when it seems stupid because it doesn't truly understand what is going on in the game, but it is beating all the best human players (and yes, we are not quite there yet at all), maybe then you guys will accept that 'understanding the game' doesn't really matter for winning. (And let me note that AlphaZero (chess or Go) also doesn't really understand the game; it just happens to take the correct action. It cannot explain to you why it does what it does, and it takes quite a bit of effort from Deepmind's engineers to figure that out.)


If this is Deepmind's goal with Starcraft 2, they are wasting time and money. If they believed, as you seem to, that beating every player with inhuman macro and micro would be the right way of playing Sc2, I don't know why they would use a neural network for the task.


So because this disappointed your intellectual curiosity, for something that likely isn't even there to begin with, Deepmind is wasting their time and money, when in fact they set up an RTS game, up to now played only by a bunch of scripts, as a math problem that gets solved by their neural net architecture and training methods, which generalize very well to similar real-world problems. Yeah, that makes sense. In my field of biophysics, Deepmind has a neural network that does better structure prediction of protein folding than any of the existing algorithms. And that specific competition has been running since 1994. Deepmind entered it last year for the first time and immediately won.

Do you know how much money is invested in drug development that involves protein folding or protein protein interactions each year? You have absolutely no idea what you are talking about.


In Go or Chess, understanding or not the game, Alpha Zero takes the correct action that would require a human mind to think and make a decision and that makes it extremely interesting; an unlimited Alphastar abusing its infinitely superior mechanics would be pointless as it would just execute actions impossible for humans to replicate and even analyze.


And in SC2, Alphastar makes micro decisions superior to all humans and beats most humans, even before they finalized their version to challenge the top player. And in Chess/Go Alphazero sees patterns impossible to see by a human.


Forcing Alphastar to play like a human as much as possible is meant to test its capability of winning games via "decision making" or "strategy" (it doesn't matter if it doesn't perceive it as such; we would be able to regard the outcome as if it were), which is indeed the ambitious and interesting part of the project.


SC2 isn't a game of strategy. It is a game of decision making and execution. Deepmind is only making their AI 'play like a human' to not offend the SC2 community too much. Alphafold also doesn't fold proteins 'like a human'. It solves the problem. And in SC2, that problem is winning the game. Not 'coming up with strategies that please Xain0n'. And this is achieved through superior micro, superior macro, superior multitasking, and superior battle engage decisions. Not through hard countering the enemy's build or trying to trick your opponent into hard countering something you aren't actually doing.


After reading your last answer, I get that you are interested in knowing if neural networks can reach by themselves the very point where their mechanics become impossible for humans to hold? Is that so?


No. All I care about is to see how well they are able to develop the strongest playing AI possible. Not an AI that can pass a Turing test through SC2 play. And in the meantime, I get annoyed by people who for emotional, selfish reasons decide to deliberately misunderstand SC2 (I assume you aren't truly ignorant) and are too lazy to learn the basics of ML and deep neural networks, while still believing their misunderstandings about the nature of AlphaStar are worthwhile for others to read.


Let's start from the conclusion, then. If Deepmind's goal was yours, why would they apply limitations at all?
Why would they ever step back from the iteration that beat Mana with inhuman map awareness and stalker micro?
Maybe they don't just want to create the strongest possible AI playing sc2? They are doing that "not to offend the sc2 community"? Why would we ever get offended? Machines have been mechanically outperforming men for a long time already.
I didn't call Deepmind complaining on how they should please my intellectual curiosity, they are choosing themselves to force Alphastar to resemble a human more with every single step.

You are right, I don't know what Alphafold is doing or how much money is invested in that project; I just don't see why you would choose a game as complex as Sc2 if your goal were just to make a neural network perform a task much faster and much more precisely than humans, with no "decision making" involved.
AlphaGo sees patterns the human mind can't, but we can try to learn from it by studying its moves; if Alphastar uses 40k apm, we can witness such prowess but learn nothing.

So you get annoyed at our lack of understanding regarding Deepmind and Alphastar? Do I have to remind you Team Liquid is a forum focused on RTS games?
Go somewhere else if you want to discuss the intricacies of neural networks with people understanding them as much as you do.

When we come to sc2 itself, how can you claim sc2 is not a game of strategy? Have you, Muliphein, solved the game? It seems pure conceit to me.
Sc2 surely is a game of strategy when two mechanically limited humans play it, while it probably is as you say when an AI faces a human; but how can you know what the game looks like when two unbound agents are playing it?
TitanEX1
Profile Joined June 2019
14 Posts
July 29 2019 11:53 GMT
#202
Currently being cast live. Our announcement:

CobaltBlu
Profile Blog Joined August 2009
United States919 Posts
July 29 2019 13:32 GMT
#203
I would like to see them release it on the ladder for a longer period of time with no barcode. I want to see how fragile it is vs novel strategies.
deacon.frost
Profile Joined February 2013
Czech Republic12129 Posts
Last Edited: 2019-07-29 14:23:33
July 29 2019 14:15 GMT
#204
On July 29 2019 22:32 CobaltBlu wrote:
I would like to see them release it on ladder for longer period of time with no barcode. I want to see how fragile it is vs novel strategies.

I think they want to test the AI interaction against humans, not people interaction against AI (the latter would result in abusive strategies that wouldn't be played against humans)

If anyone answers "would I play differently had I known I was playing an AI" with YES, then the barcode is valid. Considering some reactions in this thread...

Edit>
At the same time I wouldn't mind seeing how abusive people would get against verified agents, so they may want to go with both types, as this would be an interesting experiment either way.
I imagine France should be able to take this unless Lilbow is busy practicing for Starcraft III. | KadaverBB is my fairy ban mother.
ShoCkeyy
Profile Blog Joined July 2008
7815 Posts
Last Edited: 2019-07-29 14:24:49
July 29 2019 14:24 GMT
#205
On July 29 2019 23:15 deacon.frost wrote:
On July 29 2019 22:32 CobaltBlu wrote:
I would like to see them release it on ladder for longer period of time with no barcode. I want to see how fragile it is vs novel strategies.

I think they want to test the AI interaction against humans, not people interaction against AI (the latter would result in abusive strategies that wouldn't be played against humans)

If anyone answers "would I play differently had I known I was playing an AI" with YES, then the barcode is valid. Considering some reactions in this thread...

Edit>
At the same time I wouldn't mind seeing how abusive people would get against verified agents, so they may want to go with both types, as this would be an interesting experiment either way.


Your edit was my initial post, thanks for that. I was going to say, it'll be cool to see both variations.
Life?
Haukinger
Profile Joined June 2012
Germany131 Posts
July 29 2019 14:37 GMT
#206
On July 29 2019 08:22 Inrau wrote:
AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors.
[image loading]

The limitations are nice, locking the actions to a camera, lowering the APM. But Alphastar can still do things at 120APM that would take a human 600 APM.


These are limitations of the game client, not of the game. The game is just the rules, e.g. when issuing an attack order to a marine with a target within range, it will instantly do x damage. Or when issuing a blink order to a stalker, it will instantly blink.

Remove the "instantly" from the rules, i.e. introduce universal cooldown and lag, and AI and human are on equal ground. Not to mention you'd also remove exploits like stutter-stepping or so-called "warp prism micro".
Acrofales
Profile Joined August 2010
Spain18215 Posts
July 29 2019 15:33 GMT
#207
On July 29 2019 23:37 Haukinger wrote:
On July 29 2019 08:22 Inrau wrote:
AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors.
[image loading]

The limitations are nice, locking the actions to a camera, lowering the APM. But Alphastar can still do things at 120APM that would take a human 600 APM.


These are limitations of the game client, not from the game. The game is just the rules, e.g. when issuing an attack order to a marine to a target within range, it will instantly do x damage. Or when issuing a blink order to a stalker, it will instantly blink.

Remove the "instantly" from the rules, i.e. introduce universal cooldown and lag, and AI and human are on equal grounds. Not to mention you'd also remove exploits like stutterstepping or so called "warpprism micro".

It isn't really a limitation of the game client at all. It's an issue with human ability to perform a maximum number of actions per minute. The game client's ability to process actions per minute isn't the bottleneck there. It's a human control issue. It is simply easier to select the whole army and then drag the tanks elsewhere than to select each part of the army (or even each unit individually) and give them different commands. Because it is so much easier to do, that makes it *more* optimal for a human to do the theoretically less optimal army micro (because the tanks spend a few milliseconds moving in the wrong direction). Meanwhile, the AI doesn't have this issue, so it directs each part of the army immediately to its position. This ties in a bit to my earlier response to Muliphein, so I will continue that conversation here as well.

On July 29 2019 03:19 Muliphein wrote:
On July 28 2019 18:50 Acrofales wrote:
I disagree about "internal" mechanics for Go or Chess. That is simply part of the "intelligence" needed for playing those games well.

Moving your hands 5000 times a minute with unerring precision isn't a part of the "intelligence" needed for playing starcraft, it's a limitation of the human body, moreso than the human mind. Thus limiting the apm makes it more interesting as a challenge in creating "artificial intelligence", rather than just an artificial sc2 champion. In Go and Chess, the benchmark for intelligently playing those games was simply to beat the best human opponents. In SC2 the benchmark for intelligently playing the game is to beat the best human opponents with a similarly restrictive "interface".

You wouldn't say an AI had solved rugby if it was built like a tank and had an internal compartment to hide the ball, so all it has to do was obtain the ball a then ride invulnerable to the back line. It'd be invincible, but not in any interesting way.


We are now trying to make a machine that is intelligent. In a philosophical sense, that is no different from making a machine that runs fast on wheels or that generates a lot of force. APM isn't limited by the human body. It is limited by the human mind. People cannot think fast enough and cannot think in parallel at all. Research shows that humans basically do not multitask.

Making a machine that is able to come up with 2000 actions a minute IS exactly like building a car with 2000 horsepower. Humans only have about 0.1 horsepower. So the machines win there with a way bigger margin. That this is not the type of intelligence where humans traditionally beat out machines is besides the point.

The AI has exactly the same units as the player has. So saying the AI is playing rugby with tanks rather than human players is a false analogy. The analogy works, any analogy works up to a point, but it shows exactly why what AlphaGo is doing is fair. Not why it is unfair.


Sure, the mind *might* be the bottleneck in hand-eye coordination, but I doubt it. I suspect that eAPM would be a lot higher if we had a perfect brain-starcraft interface. It only takes watching a few games by progamers to know that hand-eye coordination is a large part of the mechanics needed to play SC2, and a misclick (not a misthought, just a mistake in clicking on the wrong pixel) can cost you the game. However, as an AI researcher myself, I am quite confident when I say that making a perfect micro bot is not the part that the AlphaStar researchers are interested in. You don't throw tons of supercomputing resources to make a perfect micro bot. They aren't interested in "winning" at starcraft per se. It's just that winning at starcraft is a good benchmark for how good they are at solving a specific type of problem. They are interested in the problems of planning and adapting a strategy hampered by "real world" limitations.




Soon after, it was expected that machines would very soon be "more intelligent" than humans. That prediction failed multiple times,


I don't think this is an accurate account of the consensus, if there was any, at that time. Decades ago, it was actually a minority that correctly recognized that the brain is a machine like any other, and that in principle a machine could be built that does the same thing as a brain, only better. Respectable scientists for a long time placed the brain outside of any biological context. General principles of biology were not applied to it. Only with the rise of cognitive science did this change.

AI has gone through a number of "winters". The first of these was in the late 60s and 70s when it was clear that machines were not soon going to be "more intelligent" than humans despite early breakthroughs such as winning at backgammon or robots being able to correctly recognize simple objects and colors.

And you don't need to bring Cartesian duality in here, but if you do, there have been philosophers since the early 20th century who have questioned that duality, and the more we have learned about the brain, biology and particularly *computation*, the stronger the criticisms became. In particular, early AI researchers in the 60s didn't give two hoots about such arguments, and the Turing test as an evaluation tool for AI should make that clear. Note that the mind-brain duality argument is still not completely settled, although imho anybody arguing in favour of dualism is not understanding the concept of emergence.

The second AI winter was in the 90s and 00s, when it was clear that neural networks and expert systems *also* had serious limitations, and despite early successes in visual object recognition and automated logical reasoning, there were still obvious gaps in what AIs could do. Deep learning has made AI, once again, reemerge from a winter. A cautious man would be hesitant to declare the problem will now be solved. In particular, things like abstract moral decision making and introspection are things that we don't really know how to do right now, and while deep learning looks a lot like a miracle, it is the same old neural networks we used in the 80s, but with more computing power and better optimization algorithms. Of course, I could also be describing a human brain...
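As an aside on that last point, the "80s-style" network is small enough to write out in full. The sketch below is a generic illustration (layer size, learning rate, and epoch count are arbitrary choices, nothing from DeepMind): one hidden layer of sigmoid units trained by plain backpropagation on XOR, the classic task that a single-layer perceptron cannot learn.

```python
import math
import random

random.seed(42)

# XOR truth table: inputs and target labels.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
HIDDEN = 4

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Randomly initialized weights: 2 -> HIDDEN -> 1 network.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA) / len(DATA)

initial_loss = loss()
lr = 0.5
for _ in range(10000):
    for x, t in DATA:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)               # output delta (chain rule)
        for j in range(HIDDEN):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # hidden delta
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

final_loss = loss()
print(round(initial_loss, 3), round(final_loss, 3))
```

The same mechanism, scaled up by orders of magnitude in parameters, data, and compute, is essentially what "deep learning" refers to.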


But you are right that for the last decades it was just an issue of actually building a machine, because it proved to be quite challenging. Yes, it is true in some sense that just raw calculation wouldn't be enough. But it is very easy to calculate the phase space of Go and to then see that raw calculation was never going to solve that. And we have known for a long time that humans use pattern recognition properties of a neural network to play these games so well.

In fact, the opposite is true as people thought chess and go would be 'safe' from computers for a decade or two more than they actually were.

...as building intelligence was a harder task than we thought. We can build race cars that easily "outsprint" humans, and a tank that plays rugby also seems like a simple engineering task.


This is beside the point, but I beg to differ. Doing complex tasks is quite challenging for robots. It would be extremely challenging to build a robot that a human top rugby player could control using some VR interface (like in Avatar) that would allow for a similar level of play as the actual rugby player playing himself. We are decades off from that. But you were actually trying to make another point. So be careful with your language.

Sure, I don't really know how hard it is to build a robot that could play rugby. I'd argue that all you need is a remote control car with enough armored plating and horsepower, and a "ball catching, and holding mechanism". But it's beside the point. If you don't like the rugby tank, just stick to the racecar for "running" a sprint. It is an uninteresting problem. It becomes interesting when we add restrictions such as "the 100m dash must be run on 2 legs", because bipedal robotic running is something we still haven't solved adequately (although we are getting better at it).



But until very recently, Go seemed unsolvable, let alone games with uncertainty and incomplete information. Breakthroughs in AI research put this into reach now, and the interesting part is obviously not in beating a human at doing lots of clicks very fast. The challenge is in dealing at least as well as the human with uncertain and incomplete information without relying on an ability to click faster and more precisely.


So which one is it? Did we take way longer to solve these games? Or did we do it earlier than expected?

Both? You know I was talking about 60 years of history, with periods of unbridled optimism and AI winters of doom and gloom?



At least, that is the challenge AlphaStar is interested in. No doubt perfect micro is a different challenge with its own interest.


Perfect micro is an AI challenge. Not a 'how fast can I issue commands through an embedded systems interface' challenge. That it is not the AI challenge most people are interested in, for the simple reason that it teaches human players nothing new about the game, is beside the point.

It may be the case that in SC2, unlike in chess and go, an AI can play way way above the best humans without doing anything that humans hadn't realized or discovered themselves.

This all comes back to one important point. RTS games are games of execution and small-scale decision making (tactics). They are not games of strategy. And their complexity is quite basic. There aren't layers upon layers that reshape how the game is played as you ascend the skill curve. Yes, the move space is huge and sparse, but in essence it is a straightforward game. Build an army stronger than your opponent's, then force a fight and win the game. That's the entire game in a nutshell.


See above, I disagree. Mechanics are part of it, and the "least interesting" part from an AI perspective, but SC2 is definitely a game of strategy if you add limitations to the mechanics. The "build an army stronger than your opponent's and go and kill him with it" view is a rather simplistic way of looking at it. I have no doubt that a completely perfectly executed blink stalker warp prism immortal rush "solves" the game if you allow 10000 APM (or so). And then you can definitely say that strategy is irrelevant, as the only thing left to figure out is optimal movement on the map, which is a bit of a trivial problem.

But if you limit the possible actions, you find that overall strategies become important, and it is not at all obvious what army is the strongest army and what is the best way to get there without just dying first. E.g. 3rd CC before Rax is sometimes possible, but is straight-up build-order countered by plenty of early-game aggression builds. Being a little bit more greedy than your opponent is generally a good strategy to get an advantage in the long term, and timing attacks exist to punish opponents exactly at moments when you expect them to be greedy and your aggression can punish them.

10k APM blink stalker micro would indeed thwart all these puny attacks, but it is irrelevant to SC as we understand the game, where strategy plays a real role. And it is exactly that part of the game that AlphaStar is designed to "solve", just as AlphaGo "solved" Go (a game where hand-eye coordination is mostly irrelevant).
alexanderzero
Profile Joined June 2008
United States659 Posts
Last Edited: 2019-07-29 16:20:10
July 29 2019 16:17 GMT
#208
Regarding AlphaStar's apparent lack of strategy, I really do question whether it's a problem with the scale/computing power of the neural network, or a design flaw. People say that AlphaStar doesn't have the ability to react to things, but that's not exactly true. The decisions that it makes during battles are direct responses to the things done by the opponent, like flying its phoenixes around and picking off units that venture too far from the group, and then engaging fully once it has a large enough army advantage.

I know that people make this distinction between tactics and strategy, but this is an artificial boundary that exists in the minds of humans. There is nothing fundamental about the theory of the game that justifies this division. The fact that it is able to think tactically is evidence that there are aspects of the game that it does understand and has the capability to reason about. Presumably, if its capacity to reason were increased to include more variables, it would start considering things like scouting and tech switches more often. That, and more training time to allow it to do more experiments and map out more of the game.
I am a tournament organizazer.
skdsk
Profile Joined February 2019
138 Posts
July 29 2019 16:51 GMT
#209
http://vod.afreecatv.com/PLAYER/STATION/46401370 vod of the alphastar cast event...
Muliphein
Profile Joined July 2019
49 Posts
Last Edited: 2019-07-29 19:47:26
July 29 2019 19:35 GMT
#210
On July 29 2019 17:30 -Archangel- wrote:
Wasn't the point of this project to get AI that can solve problems? Having inhuman micro is not solving problems.


You have got to be fucking kidding!


It is like sending you to fight Superman. Superman will learn nothing beating your 1 000 000 times while all you might eventually do is somehow find kryptonite and beat him without it ever being a fair fight.


It is not about learning about SC2. It is about learning how to set up deep learning problems. And stop talking about fairness.
And the thing you hope AI will tell you about SC2 is very likely not there. People keep talking about the AI discovering new builds that humans can copy to become better. It is not going to happen because it is not relevant to high level AI play. An AI does not have the weakness that it wants to be 'clever'. And a deep learning AI will just relentlessly play the way it thinks is optimal.

There is not even a discussion that it is possible to find a hole in a deep learning AI. The AI is only as good as its training. Take the simple case of an 'is it a cat or a dog' image recognition AI. If you provide an image of either a cat or a dog in a very unusual pose, the AI might fail terribly, even though to us humans it is clearly a cat or a dog. With any deep learning AI you can find input data where the AI will get it horribly wrong. But the point is that this is a tiny subset of the real input data where it fails, while for the vast majority of the input it does very well (and either outperforms humans overall or is more cost-efficient economy-wise even if humans are better). This is why, when you watch the AlphaGo documentary, they were afraid of AlphaGo going 'delusional'.
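The cat-or-dog failure mode can be shown in miniature. The sketch below uses an invented toy dataset ("cats" near x=1, "dogs" near x=5) and a plain 1-D logistic classifier: it behaves sensibly on inputs like its training data, but on an input far outside that range it does not become uncertain; it extrapolates with near-total confidence.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Invented training set: "cats" cluster near x=1 (label 0),
# "dogs" cluster near x=5 (label 1).
data = [(0.8, 0), (1.0, 0), (1.2, 0), (4.8, 1), (5.0, 1), (5.2, 1)]

# Fit logistic regression by per-sample gradient descent.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    for x, t in data:
        p = sigmoid(w * x + b)
        w -= lr * (p - t) * x
        b -= lr * (p - t)

# Sensible on in-distribution inputs...
print(sigmoid(w * 1.0 + b))    # low probability: looks like a cat
print(sigmoid(w * 5.0 + b))    # high probability: looks like a dog

# ...but far outside the training data the model is not uncertain;
# it reports near-total confidence on an input it has never seen.
print(sigmoid(w * 100.0 + b))
```

A real image network has millions of dimensions instead of one, but the mechanism is the same: the model only carves up the regions its training data covers, and extrapolates confidently everywhere else.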

A deep learning AI will not engage in mindgames, and it will not cut corners and take risks on BOs in interesting ways. It either is fundamentally incapable of doing so, because it cares only about winning and not about being clever, because it isn't concluding anything or doing reasoning or deduction, because it has been trained playing other AIs, because it is a generalized algorithm that does the same thing for a specific game state, and because it isn't emotional or insecure. Or it won't because it is fundamentally suboptimal to play that way. And this makes sense, because players like Flash also don't try to play a 'strategic' game. The AI just presents its best play, and if that is not good enough it will stubbornly lose without adapting.

Humans have insecurities and feel the need to outsmart their opponent. They want to do something to get an edge. They fear their opponent tricking them. They fear that playing straight up they will lose. They feel that in this match they need to do something that will guarantee them the win. A human will not be satisfied with a 51% win chance. It will try to come up with something to do better. The AI doesn't care.

Hence, the AI has no need for doing marginal plays that may result in huge rewards. It will simply not explore that part of the phase space, even if there are pockets there that are really good, because overall this is a losing part of the phase space. The AI will converge in a smooth and consistent part of the phase space where it is easy to move into better versions of itself, as the network is being trained.
Sadistx
Profile Blog Joined February 2009
Zimbabwe5568 Posts
July 31 2019 06:10 GMT
#211
If there's anything I've learned from deep AI projects (including the no-limit Texas Hold'em 6-max poker AI released recently), it's that the AI optimizes for unexploitability, which in the context of SC2 means the least risky strategies. I believe the term used is 'regret minimization'. It seems logical.

That it achieves a win rate of above 50% while doing this is just a side effect of what it optimizes for.

I'm honestly not particularly educated in this field, though, so correct me if what I typed is nonsense!
Acrofales
Profile Joined August 2010
Spain18215 Posts
July 31 2019 08:13 GMT
#212
On July 31 2019 15:10 Sadistx wrote:
If there's anything I've learned from deep AI projects (including the no-limit Texas Hold'em 6-max poker AI released recently), it's that the AI optimizes for unexploitability, which in the context of SC2 means the least risky strategies. I believe the term used is 'regret minimization'. It seems logical.

That it achieves a win rate of above 50% while doing this is just a side effect of what it optimizes for.

I'm honestly not particularly educated in this field, though, so correct me if what I typed is nonsense!

Actually, it maximizes its reward function. You can definitely do regret minimization by building it into the reward function (or the optimization algorithm), but there's no reason to assume that was done here. In a game with almost rock-paper-scissors-like strategies, and with the bots trained by adversarial games, I'm not even sure what to look for to distinguish a bot with regret minimization from one without.
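For reference, 'regret minimization' does have a simple concrete core. Below is a minimal regret-matching sketch in self-play on rock-paper-scissors (a stand-in example, not DeepMind's or the poker bots' actual algorithm): the time-averaged strategy converges toward the unexploitable Nash mix, which is exactly the "least risky" behavior described above.

```python
import numpy as np

# Row player's payoff for (our action, opponent action); order: R, P, S.
payoff = np.array([[ 0., -1.,  1.],
                   [ 1.,  0., -1.],
                   [-1.,  1.,  0.]])

def strategy(regret):
    # play each action in proportion to its positive cumulative regret
    pos = np.maximum(regret, 0.0)
    total = pos.sum()
    return pos / total if total > 0 else np.full(3, 1/3)

regret = np.array([1.0, 0.0, 0.0])   # start biased toward Rock
strategy_sum = np.zeros(3)
for _ in range(200_000):
    s = strategy(regret)
    strategy_sum += s
    # regret of each action vs our expected payoff against our own mix
    action_values = payoff @ s
    regret += action_values - s @ action_values

avg = strategy_sum / strategy_sum.sum()
print(avg.round(3))  # ≈ [0.333 0.333 0.333], the unexploitable Nash mix
```

The per-round strategy cycles around, but the average accumulates toward the equilibrium; this averaging step is the same trick counterfactual regret minimization uses in poker.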
Equalizer
Profile Joined April 2010
Canada115 Posts
July 31 2019 16:36 GMT
#213
At least according to DeepMind's blog post (https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/), they trained using a mixture over agent strategies based on the game-theoretic concept of a Nash equilibrium.

The basic point is that even though it may play a strategy that has a hard counter, it should randomly choose other strategies that do well against that counter some of the time. I suppose this makes the most sense for openings, but after that perhaps not so much.

What is odd is that in the games identified as almost certainly being against AlphaStar, there seems to be very little randomness, so they may have just chosen the agent with the highest win rate for real-world testing.
The person who says it cannot be done, should not interrupt the person doing it.
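The value of mixing can be made concrete with a toy matchup table (hypothetical numbers, not AlphaStar's actual strategy pool): every pure opening has a hard counter, yet the Nash mixture breaks even against an opponent who knows the mix and best-responds to it.

```python
import numpy as np

# Rows: our opening; columns: opponent opening; entries: our expected
# score in a toy rock-paper-scissors-like matchup table.
matchups = np.array([[ 0., -1.,  1.],
                     [ 1.,  0., -1.],
                     [-1.,  1.,  0.]])

nash_mix = np.full(3, 1/3)  # the Nash mixture for this symmetric table

# A counter-picking opponent beats any single opening outright...
worst_case_pure = matchups.min(axis=1)        # [-1., -1., -1.]
# ...but against the mix, even a perfect best response only breaks even.
worst_case_mix = (nash_mix @ matchups).min()  # 0.0

print(worst_case_pure, worst_case_mix)
```

By the same logic, deploying a single high-win-rate agent instead of sampling from the mixture gives up this worst-case guarantee, which fits the apparent lack of randomness in the live games.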
DimmuKlok
Profile Joined June 2010
United States225 Posts
July 31 2019 16:54 GMT
#214
How does AlphaStar deal with cloaked units? Cloaked units are technically visible, but they rely on the human element failing to notice them.
Acrofales
Profile Joined August 2010
Spain18215 Posts
August 01 2019 09:49 GMT
#215
On August 01 2019 01:54 DimmuKlok wrote:
How does AlphaStar deal with cloaked units? Cloaked units are technically visible, but they rely on the human element failing to notice them.

That depends on the API, but as far as I know it is deterministic: if the AI is looking at the right part of the map, it will "see" the cloaked units. Whether it reacts to them is then up to AlphaStar, which in turn depends heavily on whether this situation occurred often enough, with enough salience, to train a counter.

If you recall the showmatches, it reacted instantly and decisively when DTs appeared, but that was with full map vision. With only a "screen"-sized area visible at any time, it may not have trained on that enough. Or maybe it did and reacts well?