AlphaStar released: Deepmind Research on Ladder - Page 10

Forum Index > SC2 General
necrosexy
Profile Joined March 2011
451 Posts
Last Edited: 2019-07-28 20:07:31
July 28 2019 20:07 GMT
#181
On July 28 2019 22:28 Goolpsy wrote:
The purpose of the AI research is not to "beat humans" or "accomplish task X". We've done that.

It's essentially to be able to make a self-learning AI that can "solve problems".
Why self-learning? --> Because there are problems we humans don't even understand (or have experience in yet), and we'd hope the AI would be able to solve them.
(I am not talking about SKYNET here).

The problem is always, that you need something to measure your product against. How good is it? When should we stop? How much power is required to train it?
Chess and Go were good challenges because the games themselves are simple with perfect information, AND humans are amazing at them. At the same time, the games have enough possible variations that they are not solvable by a brute-force approach.

Imagine using AIs for self-driving cars. Driving is easy. But what if a moron drives too close to you? Or the car in front drives in the middle of the road, or on your side of the road? What if a deer runs in front of your car? What if you get hit by a bird?
Imagine you get hit by a bird and the AI goes: "uhuh, weird sensor reading, unexpected error, abort abort..." and drives off the road.
Or you program it to ignore such readings, and it hits a person: "weird sensor reading... oh well, never mind."

Back to StarCraft AI. Humans are really good at StarCraft. Obviously macro and micro help a lot, but our main strength is being able to solve problems (or attempt to solve them).
Starcraft is a complex game with imperfect information and many many many problems to continually solve.

This is why it is interesting to test AIs against humans in this area. We are sufficiently good at the game to be worth competing against (for problem-solving skills). Here we have a measure of "how good did we actually become".
Winning with 1500eAPM stalker micro is not solving problems.
Figuring out what to do against an opponent who stalker rushes you and then goes "mass void rays" IS.

I think much of the "disappointment" many are feeling is not that the AI is unbeatable, but that it is so EASILY "abusable".
It doesn't understand what AIR is, or where it is (bile drops).
It doesn't understand that turrets can have upgraded range.
It doesn't know what a widow mine is, even when it's visible.

As for worker scouting: it is not necessarily important (if you are doing aggressive strategies, you are getting information all the time and you can infer A LOT from it).
But not scouting at all, doing a blind build, and not adapting to what it eventually sees is not problem solving :/

It is "funny", however, because we humans think scouting is the EASIEST way to solve the problem of "what to build" and "when to build",
so it's amazing that the AI is still so 'dumb' and 'unrefined' that it doesn't use this "easy" way of overcoming that obstacle.

it hasn't "beat humans"
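The brute-force point in the quoted post can be made concrete with back-of-the-envelope arithmetic. The branching factors and game lengths below are rough, commonly quoted ballpark figures, not exact values:

```python
def tree_depth_exponent(branching: int, plies: int) -> int:
    """Order of magnitude (power of ten) of branching ** plies leaf positions."""
    return len(str(branching ** plies)) - 1

# Ballpark figures: ~35 legal moves over ~80 plies for chess,
# ~250 legal moves over ~150 moves for Go.
chess = tree_depth_exponent(35, 80)    # roughly 10^123 leaf positions
go = tree_depth_exponent(250, 150)     # roughly 10^359 leaf positions

# Both dwarf any conceivable exhaustive search, which is why these games
# needed heuristics and learned evaluation rather than brute-force lookahead.
print(chess, go)
```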
Muliphein
Profile Joined July 2019
49 Posts
Last Edited: 2019-07-28 21:05:16
July 28 2019 20:20 GMT
#182
On July 29 2019 04:12 Cyro wrote:
Deepmind is very careful of what the community of the game they are trying to beat thinks of them. They are focused on public opinion. But once they think they have learned from SC2 what they needed, they will move on and that will be it.


I heard that parts of the chess community were pretty upset about them trashing chess engines and then leaving again without exploring what it could mean for the game.


That's what I am referring to. I'm not sure what happened with Go, because I am not that tuned in to the community there. But in chess, if Deepmind started working with a select few chess players, those players would gain a huge advantage. Engine analysis is crucial to your play, so chess is actually being influenced (damaged) by chess engines/AI. The same issue will never happen in RTS, because RTS isn't a game where an engine/AI will come up with novel creative ideas or different ways of looking at things considered inferior/refuted.

So I would prepare for this in the RTS community.
NinjaNight
Profile Joined January 2018
428 Posts
July 28 2019 20:25 GMT
#183
On July 29 2019 05:20 Muliphein wrote:

The same issue will never happen in RTS because RTS isn't a game where a chess engine/AI will come up with novel creative ideas or different ways at looking at things considered inferior/refuted.



What? How do you come up with this claim?
Muliphein
Profile Joined July 2019
49 Posts
Last Edited: 2019-07-28 21:04:40
July 28 2019 20:59 GMT
#184
RTS games are games of execution. In chess, there are positions that are objectively winning but where the win is really hard, if not impossible, to find (whether by a human or an engine/AI). In SC2, this doesn't happen. It is straightforward to count economic input and to count army strength (assuming the armies perform optimally in a battle).

There are situations that are bifurcations/double-edged, like a base-trade scenario. There it can remain unclear what the right call is for a long time, until the situation has completely unfolded. But in general, in SC2, things are one-dimensional. In BW, things are a bit different and more complicated because the game is more positional. People have understood this for a long time, which is why we had the debate about automation when SC2 was announced (and we all know which side was vindicated). SC2 is a game with less strategy and lower demands on execution, and this was by design.

And the second reason is the very strong AI we have right now. It beats top players, and it does so in a boring, straightforward manner. How soundly it beats them, and how well humans can exploit general AI tendencies (rather than finding a blind spot in one specific AI and exploiting that), is an open question.

So this AI research seems to support these views we in the community already had about the nature of RTS games and the nature of SC2 itself.
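The "count economic input and army strength" evaluation described above can be sketched as a toy model. The unit costs are mineral-plus-gas prices as commonly listed, but treat them as illustrative placeholders rather than balance-accurate figures:

```python
# Toy "one-dimensional" evaluation: value each army by resources invested,
# assuming both sides fight optimally. Costs are illustrative placeholders.
UNIT_VALUE = {"roach": 75 + 25, "stalker": 125 + 50, "marine": 50}

def army_value(army: dict) -> int:
    """Total minerals + gas invested in an army."""
    return sum(UNIT_VALUE[unit] * count for unit, count in army.items())

zerg = army_value({"roach": 20})       # 20 * 100 = 2000 resources
protoss = army_value({"stalker": 10})  # 10 * 175 = 1750 resources

# Under this flat accounting the bigger investment is simply the favorite;
# the post's claim is that SC2 rarely deviates far from this kind of math.
assert zerg > protoss
```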
AttackZerg
Profile Blog Joined January 2003
United States7454 Posts
July 28 2019 21:09 GMT
#185
On July 29 2019 04:12 Cyro wrote:
Deepmind is very careful of what the community of the game they are trying to beat thinks of them. They are focused on public opinion. But once they think they have learned from SC2 what they needed, they will move on and that will be it.


I heard that parts of the chess community were pretty upset about them trashing chess engines and then leaving again without exploring what it could mean for the game.

Not just that. They withheld games, originally shared only a few wins, and they put Stockfish on one move a minute, which severely handicaps it from making deep enough calculations. And they played the equivalent of a supercomputer versus a good desktop. The games were beautiful. The setup... completely unscientific.

They later corrected this with a 1,000-game match on comparable hardware. Stockfish did better (5 or 9 wins) but still got stomped.

Just remembered: they did this ladder approach on the Chinese Go server before throat-punching the South Korean world champion.

Unlike chess, Go did not have a computer overlord until AlphaGo.

Maybe rustling so many feathers in other communities has caused them to listen more. Who knows.

For anyone from the project reading: anything I say that seems critical is because I am a big fan of the project and enthusiastic about the work you do.

Exciting times.
Inrau
Profile Joined June 2019
35 Posts
July 28 2019 23:22 GMT
#186
On July 29 2019 03:19 Muliphein wrote:
The AI has exactly the same units as the player has. So saying the AI is playing rugby with tanks against human players is a false analogy. Any analogy works only up to a point, but this one shows exactly why what AlphaGo is doing is fair, not why it is unfair.


AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors.

The limitations are nice: locking the actions to a camera, lowering the APM. But AlphaStar can still do things at 120 APM that would take a human 600 APM.

The AI might as well be playing with three keyboards and mice. Not to mention clicking a one-pixel spot in the top-right corner to select a building or a unit it needs.
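The APM argument above comes down to how actions are budgeted over time. A minimal sketch of a sliding-window action limiter (the cap and window are illustrative numbers, not DeepMind's actual interface constraints) shows why an average cap still permits inhuman bursts:

```python
from collections import deque

class ApmLimiter:
    """Allow at most `cap` actions in any trailing `window` seconds.

    A burst-friendly cap like this is why an *average* of 120 APM can still
    contain short spikes no human could match: the average hides the bursts.
    """
    def __init__(self, cap: int, window: float):
        self.cap = cap
        self.window = window
        self.times = deque()  # timestamps of recently allowed actions

    def allow(self, now: float) -> bool:
        # Drop timestamps that have fallen out of the trailing window.
        while self.times and now - self.times[0] >= self.window:
            self.times.popleft()
        if len(self.times) < self.cap:
            self.times.append(now)
            return True
        return False

limiter = ApmLimiter(cap=10, window=5.0)  # 10 actions / 5 s = 120 APM average
burst = sum(limiter.allow(t * 0.01) for t in range(20))  # 20 attempts in 0.2 s
assert burst == 10  # the whole 5-second budget can be spent almost instantly
```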
Muliphein
Profile Joined July 2019
49 Posts
July 28 2019 23:47 GMT
#187
On July 29 2019 08:22 Inrau wrote:

AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors.

The limitations are nice: locking the actions to a camera, lowering the APM. But AlphaStar can still do things at 120 APM that would take a human 600 APM.

The AI might as well be playing with three keyboards and mice. Not to mention clicking a one-pixel spot in the top-right corner to select a building or a unit it needs.


Yes, the AI is playing the same game, but without inherently human limitations. I have no idea what the point is that you are trying to argue. You think it is unfair for an AI to solve an AI problem if human limitations aren't hard-coded in? Do you also think a chess AI needs to be forced to take a piss break, because humans will inevitably have to under standard time controls? Where do you draw the line? Why don't you support the view that for an AI to beat an AI problem, it needs to solve the problem by modeling a human brain solving the problem?

All this comes from the delusion that people believe SC2 is richer and more intellectually pleasing than it actually is. People cannot accept that the ideal play is to build mass roaches/stalkers combined with perfect macro, perfect micro, and deciding when to engage. So the AI needs to be limited to play more like a human, and then the AI will either lose to humans or finally come up with genius, elegant strategies.

Yet all the facts we have yell the opposite at us. So please stop bringing up 'fairness', because there cannot be such a thing. As long as the AI doesn't get units with more HP, or free resources, or the ability to see through the FoW, it is not playing the wrong game. And when it seems stupid because it doesn't truly understand what is going on in the game, but it is beating all the best human players (and yes, we are not quite there yet), maybe then you guys will accept that 'understanding the game' doesn't really matter for winning. (And let me note that AlphaZero, in chess or Go, also doesn't really understand the game. It just happens to take the correct action. It cannot explain to you why it does what it does, and it takes quite a bit of effort from Deepmind's engineers to figure that out.)
Inrau
Profile Joined June 2019
35 Posts
July 28 2019 23:59 GMT
#188
On July 29 2019 08:47 Muliphein wrote:
Yes, the AI is playing the same game, but without inherently human limitations. [...]

Because having a mouse trail is part of playing StarCraft. What I am saying is that AlphaStar is essentially playing with three keyboards and mice. If we were playing an Xbox RTS where you didn't micro and only moved the camera around, commanding squads with preset controller actions, I would buy your argument.

And your smartass comment about massing stalkers forgets that over time players adapt, and defended the 4-gate by squeezing out an immortal, or whatever the meta changed to. If the game were so simple, AlphaStar would have already found the exact build and rolled over everyone. But because the game is so complex and massive, they have to potty-train the AI to act like a human; without that, it can't function.
cha0
Profile Joined March 2010
Canada504 Posts
July 29 2019 00:21 GMT
#189
You sound like the type of person who would think it is fair to plug a keyboard and mouse into your Xbox and play an FPS against others using a standard controller. It is not that people can't accept that ideal play is perfect-micro rushes; it's that that type of strategy and play really isn't interesting. It's something humans can never emulate, and it doesn't show that the AI is really learning anything strategically. You could program a bot without deep learning to just rush and have perfect micro; no fancy models or algorithms required.

On July 29 2019 08:47 Muliphein wrote:
Yes, the AI is playing the same game, but without inherently human limitations. [...]

Muliphein
Profile Joined July 2019
49 Posts
July 29 2019 00:24 GMT
#190
So AlphaStar is not truly playing SC2 because it isn't using a (virtual?) keyboard? If you want to hold that position, fine. But I think it would be a waste of time to debate it.

So the AI is doing something that resembles playing SC2 and in doing so it is solving an open AI problem.

You think AlphaStar is losing games because, while it fights out battles perfectly, it uses the wrong unit composition? That's not at all what I see. I see it play strategically straightforward games, and I see that it still makes mistakes in micro and in engaging battles. But most of the time its micro, macro, and decisions about when to fight are superior to those of its human opponents, so it mostly wins anyway. And in the games it loses, the human player is able to find a weakness or blind spot and exploit it, leaving the AI to repeat the same mistake over and over again.

Yes, SC2 is a game with a huge game state and input space, and that causes problems for machine learning. Which is why it is meaningful that Deepmind is able to find a way to beat strong humans (and why it doesn't matter that the AI looks stupid or exploitable as long as it is winning). But this complexity you speak of (it is not actually 'complexity'; it is complicated, in having a huge phase space. Complexity is when a small change can completely upturn an outcome, and that is rarely the case in RTS) and 'outthinking the human player using superior strategy humans were unable to conceive' are completely disconnected.

The actual issue is whether the style of play it has right now can be streamlined to beat the top players, or whether neural networks are fundamentally incapable of outplaying humans because of a technical limitation (for example, treating the game essentially as a Markov chain, ignoring the game history).
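The "Markov chain" worry above is easy to state in code: a memoryless policy maps only the current observation to an action, so two situations with identical current observations must get the identical response, however different their histories. A toy illustration (hypothetical observation and action names, not AlphaStar's actual architecture):

```python
def memoryless_policy(observation: str) -> str:
    # Sees only the current frame; cannot condition on anything earlier.
    return {"enemy_air_seen": "build_turrets"}.get(observation, "build_army")

def history_policy(history: list) -> str:
    # Conditions on the whole game so far, e.g. "they faked air twice already".
    if history.count("enemy_air_seen") >= 2:
        return "build_army"      # stop over-reacting to repeated fakes
    if history and history[-1] == "enemy_air_seen":
        return "build_turrets"
    return "build_army"

# Same current observation, different histories:
h1 = ["quiet", "enemy_air_seen"]
h2 = ["enemy_air_seen", "quiet", "enemy_air_seen"]
assert memoryless_policy(h1[-1]) == memoryless_policy(h2[-1])  # forced identical
assert history_policy(h1) != history_policy(h2)                # free to differ
```

A human exploiting a "blind spot" over repeated games is exactly a history the memoryless agent cannot see.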
Muliphein
Profile Joined July 2019
49 Posts
July 29 2019 00:35 GMT
#191
On July 29 2019 09:21 cha0 wrote:
You sound like the type of person who would think it is fair to plug in a keyboard and mouse to your xbox and play fps against others using standard controller. [...]



How can you say something like this after I said that there cannot possibly be such a thing as fairness in humans vs. AI?

But you do admit that the way an AI plays SC2 isn't really interesting to you. Why do people have this strange idea? There is a reason why, in general, people avoid using chess engines while commentating chess games. What the engine sees is usually completely irrelevant to what is going to happen on the board, exactly because the AI plays in a way humans cannot emulate. And the AI's move suggestions also tell you nothing about the strategic themes in the game.

So your argumentum ad absurdum is exactly the state of AI in chess.

Then you end your post with an utterly false statement. Yes, in principle you could. But no one has, because it is extremely difficult. You act as if AlphaStar does something AI was always already capable of, while claiming it will teach us new things about the game. Did you even read my posts? This is exactly the misunderstanding I argued against before you replied.
Xain0n
Profile Joined November 2018
Italy3963 Posts
Last Edited: 2019-07-29 00:49:14
July 29 2019 00:41 GMT
#192
On July 29 2019 08:47 Muliphein wrote:
Yes, the AI is playing the same game, but without inherently human limitations. [...]


If this is Deepmind's goal with StarCraft 2, they are wasting time and money. If they believed, as you seem to, that beating every player with inhuman macro and micro is the right way of playing SC2, I don't know why they would use a neural network for the task.

In Go or chess, whether or not it understands the game, AlphaZero takes the correct action, one that would require a human mind to think and make a decision, and that makes it extremely interesting; an unlimited AlphaStar abusing its infinitely superior mechanics would be pointless, as it would just execute actions impossible for humans to replicate or even analyze.

Forcing AlphaStar to play as much like a human as possible is meant to stress its capability of winning games via "decision making" or "strategy" (it doesn't matter if the AI doesn't perceive it as such; we would be able to regard the outcome as if it did), which is indeed the ambitious and interesting part of the project.

After reading your last answer, I take it you are interested in knowing whether neural networks can, by themselves, reach the very point where their mechanics become impossible for humans to withstand? Is that so?
Antisocialmunky
Profile Blog Joined March 2010
United States5912 Posts
July 29 2019 01:26 GMT
#193
I love AlphaDepot micro where it blocks its own units out. I wonder if it thinks that depots are a good way of making a jail for the enemy army or something.
[゚n゚] SSSSssssssSSsss ¯\_(ツ)_/¯
Marine/Raven Guide:http://www.teamliquid.net/forum/viewmessage.php?topic_id=163605
Inrau
Profile Joined June 2019
35 Posts
Last Edited: 2019-07-29 02:33:22
July 29 2019 02:32 GMT
#194
On July 29 2019 09:24 Muliphein wrote:
You think AlphaStar is losing games because, while it fights out battles perfectly, it uses the wrong unit composition?

That is correct. It has no idea what to do beyond learning builds from other players and microing "perfectly." AlphaStar would get wrecked if players played against it over and over and over, like some sort of INSANE AI challenge. I see nothing special at this point.
EDIT: Even with the advantages AlphaStar has APM- and vision-wise.
Muliphein
Profile Joined July 2019
49 Posts
Last Edited: 2019-07-29 02:56:29
July 29 2019 02:53 GMT
#195
But clearly it is making a lot of mistakes in the micro and battle-engagement department.

And your saying that 'it has no idea' when it is a neural net, and that it 'learns builds from other players' when it is trained by playing against itself, makes any further debate useless.


On July 29 2019 09:41 Xain0n wrote:
Show nested quote +
On July 29 2019 08:47 Muliphein wrote:
On July 29 2019 08:22 Inrau wrote:
On July 29 2019 03:19 Muliphein wrote:
The AI has exactly the same units as the player has. So saying the AI is playing rugby with tanks rather than human players is a false analogy. The analogy works, any analogy works up to a point, but it shows exactly why what AlphaGo is doing is fair. Not why it is unfair.


AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors.
[image loading]

The limitations are nice, locking the actions to a camera, lowering the APM. But Alphastar can still do things at 120APM that would take a human 600 APM.

The AI might as well be playing with three keyboards and mice. Not to mention the clicking one pixel in the top right corner to select a building or a unit it needs.


Yes, the AI is playing the same game, but without inherently human limitations. I have no idea what the point is that you are trying to argue. You think it is unfair for an AI to solve an AI problem if human limitations aren't hard coded in? Do you also think a chess AI needs to be forced to take a piss break because humans will I evitably have to do this as well under standard time control? Where draw the line. Why don't you support the view that for any AI to beat a AI problem, it needs to solve the problem by modeling a human brain solving the problem?

All this comes from the delusion that SC2 is richer and more intellectually pleasing than it actually is. People cannot accept that the ideal play is to build mass roaches/stalkers, combined with perfect macro, perfect micro, and perfect decisions about when to engage. So the AI needs to be limited to play more like a human, and then the AI will either lose to humans or finally come up with genius, elegant strategies.

Yet all the facts we have yell the opposite at us. So please stop bringing up 'fairness', because there can be no such thing. As long as the AI doesn't get units with more HP, or free resources, or the ability to see through the FoW, it is playing the same game. And when it seems stupid because it doesn't truly understand what is going on in the game, yet is beating all the best human players (and yes, we are not quite there yet), maybe then you guys will accept that 'understanding the game' doesn't really matter for winning. (And let me note that AlphaZero, for chess or Go, doesn't really understand the game either. It just happens to take the correct action. It cannot explain to you why it does what it does, and it takes quite a bit of engineering effort from Deepmind to figure that out.)


If this is Deepmind's goal with Starcraft 2, they are wasting time and money. If they believed, as you seem to, that beating every player with inhuman macro and micro would be the right way of playing SC2, I don't know why they would use a neural network for the task.


So because this disappointed your intellectual curiosity, over something that likely isn't even there to begin with, Deepmind is wasting their time and money? When in fact they set up an RTS game, until now played only by a bunch of scripts, as a math problem that gets solved by their neural net architecture and training methods, which generalize very well to similar real-world problems. Yeah, that makes sense. In my field of biophysics, Deepmind has a neural network that does better structure prediction of protein folding than any of the existing algorithms. And that specific competition has been running since 1994. Deepmind entered it last year for the first time and immediately won.

Do you know how much money is invested each year in drug development that involves protein folding or protein-protein interactions? You have absolutely no idea what you are talking about.


In Go or chess, whether or not it understands the game, AlphaZero takes the correct action where a human mind would have to think and make a decision, and that makes it extremely interesting; an unlimited Alphastar abusing its infinitely superior mechanics would be pointless, as it would just execute actions impossible for humans to replicate or even analyze.


And in SC2, Alphastar makes micro decisions superior to all humans and beats most humans, even before they have finalized the version meant to challenge the top players. And in chess/Go, AlphaZero sees patterns impossible for a human to see.


Forcing Alphastar to play like a human as much as possible is meant to test its capability of winning games via "decision making" or "strategy" (it doesn't matter that it doesn't perceive it as such; we would be able to regard the outcome as if it were), which is indeed the ambitious and interesting part of the project.


SC2 isn't a game of strategy. It is a game of decision making and execution. Deepmind is only making their AI 'play like a human' so as not to offend the SC2 community too much. Alphafold also doesn't fold proteins 'like a human'; it solves the problem. And in SC2, that problem is winning the game, not 'coming up with strategies that please Xain0n'. And winning is achieved through superior micro, superior macro, superior multitasking, and superior battle-engagement decisions, not through hard-countering the enemy's build or trying to trick your opponent into hard-countering something you aren't actually doing.


After reading your last answer, I gather that you are interested in knowing whether neural networks can, by themselves, reach the very point where their mechanics become impossible for humans to match? Is that so?


No. All I care about is seeing how strong a playing AI they are able to develop, not an AI that can pass a Turing test through SC2 play. And in the meantime, I get annoyed by people who, for emotional, selfish reasons, decide to deliberately misunderstand SC2 (I assume you aren't truly ignorant) and are too lazy to learn the basics of ML and deep neural networks, while still believing their misunderstandings about the nature of Alphastar are worthwhile for others to read.
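The APM argument running through this exchange is easy to make concrete. DeepMind capped the ladder agents with a rate limit over a sliding window of actions; the exact numbers below are illustrative assumptions, not the published limits. A toy version of such a limiter shows why a cap on average APM still allows locally superhuman bursts:

```python
from collections import deque

class ActionLimiter:
    """Toy sliding-window action cap.

    The numbers are illustrative assumptions, not DeepMind's
    published AlphaStar limits.
    """

    def __init__(self, max_actions=22, window_s=5.0):
        self.max_actions = max_actions
        self.window_s = window_s
        self.times = deque()  # timestamps of recently allowed actions

    def try_act(self, now):
        # Evict actions that have fallen out of the sliding window.
        while self.times and now - self.times[0] >= self.window_s:
            self.times.popleft()
        if len(self.times) < self.max_actions:
            self.times.append(now)
            return True
        return False

limiter = ActionLimiter()
# Fire 30 actions in ~0.6 seconds: the first 22 are all allowed.
burst = sum(limiter.try_act(i * 0.02) for i in range(30))
print(burst)  # 22
```

With these toy numbers the cap works out to 264 average APM (22 actions per 5 s), yet the agent may legally spend the whole budget in half a second, an instantaneous rate no human can match. That gap is what the "120 APM that would take a human 600 APM" complaint points at.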
loft
Profile Joined July 2009
United States344 Posts
July 29 2019 06:20 GMT
#196
On July 29 2019 11:53 Muliphein wrote:


SC2 isn't a game of strategy. It is a game of decision making and execution.


lol, hey, just wanted to chime in here. Muliphein seems to be making the machine-learning point about the power behind Deepmind. While the developments being made in ML are incredible, I think you're missing the counter-point.

The people who want mouse trails, Muliphein, seem to be interested in the fairness of Alphastar. Why would they be concerned about this, you might ask? Well, it's probably because the Deepmind team keeps running marketing material stating how the AI is able to beat human players and how it is running on the ladder vs humans (and they will brag about how well it performed). If you are a player of a game that Deepmind has "beaten", then you probably want it to be on an even playing field. Machine learning aside, it seems disingenuous to advertise Deepmind as this triumphant algorithm when it's essentially brute-forcing wins with inhuman tactics.
terribleplayer1
Profile Joined July 2018
95 Posts
July 29 2019 06:30 GMT
#197
Well, even with how inhuman it is, it's going to lose a lot more once people realize it's an opponent that doesn't scout.
Muliphein
Profile Joined July 2019
49 Posts
Last Edited: 2019-07-29 06:49:10
July 29 2019 06:43 GMT
#198
On July 29 2019 15:20 loft wrote:


lol, hey, just wanted to chime in here. Muliphein seems to be making the machine-learning point about the power behind Deepmind. While the developments being made in ML are incredible, I think you're missing the counter-point.

The people who want mouse trails, Muliphein, seem to be interested in the fairness of Alphastar.


There can be no such thing as 'fairness' in a match between a human and a machine. They are different entities. Either you don't have a match because it would be unfair, or you have one and shut up about fairness.


Why would they be concerned about this, you might ask? Well, it's probably because the Deepmind team keeps running marketing material stating how the AI is able to beat human players and how it is running on the ladder vs humans (and they will brag about how well it performed). If you are a player of a game that Deepmind has "beaten", then you probably want it to be on an even playing field. Machine learning aside, it seems disingenuous to advertise Deepmind as this triumphant algorithm when it's essentially brute-forcing wins with inhuman tactics.


This doesn't really matter, because eventually the AI will be able to beat top players. As of today, Deepmind hasn't yet played the big showmatch to demonstrate that their AI beats mankind at SC2. Obviously, it is still a work in progress. Can you please just wait for that? If the AI loses that match and Deepmind still claims their AI won, then you can complain. Or if, a year from today, we haven't heard anything more about Alphastar.

But my suspicion is that even if Deepmind comes out with a stronger version, challenges the top SC2 player, and beats that player convincingly, there will still be people here claiming "Yeah, but if you let a bunch of top players play against Alphastar over and over, eventually they will find a way to wreck it every game" (and they may very well be correct) "... so Alphastar doesn't really understand the game, doesn't come up with strategies, just brute-forces the game, and isn't really intelligent."

And then Deepmind will move on, and people in SC2 can cling to their delusions and move on as well.
-Archangel-
Profile Joined May 2010
Croatia7457 Posts
Last Edited: 2019-07-29 08:30:45
July 29 2019 08:30 GMT
#199
On July 29 2019 15:43 Muliphein wrote:

There can be no such thing as 'fairness' in a match between a human and a machine. They are different entities. Either you don't have a match because it would be unfair, or you have one and shut up about fairness.



This doesn't really matter, because eventually the AI will be able to beat top players. As of today, Deepmind hasn't yet played the big showmatch to demonstrate that their AI beats mankind at SC2. Obviously, it is still a work in progress. Can you please just wait for that? If the AI loses that match and Deepmind still claims their AI won, then you can complain. Or if, a year from today, we haven't heard anything more about Alphastar.

But my suspicion is that even if Deepmind comes out with a stronger version, challenges the top SC2 player, and beats that player convincingly, there will still be people here claiming "Yeah, but if you let a bunch of top players play against Alphastar over and over, eventually they will find a way to wreck it every game" (and they may very well be correct) "... so Alphastar doesn't really understand the game, doesn't come up with strategies, just brute-forces the game, and isn't really intelligent."

And then Deepmind will move on, and people in SC2 can cling to their delusions and move on as well.

Wasn't the point of this project to get an AI that can solve problems? Having inhuman micro is not solving problems.

It is like sending you to fight Superman. Superman will learn nothing from beating you 1,000,000 times, while all you might eventually do is somehow find kryptonite and beat him, without it ever being a fair fight.
Poopi
Profile Blog Joined November 2010
France12793 Posts
July 29 2019 08:57 GMT
#200
On July 29 2019 11:53 Muliphein wrote:
But clearly it is making a lot of mistakes in the micro and battle-engagement department.

And your saying that 'it has no idea' when it is a neural net, and that it 'learns builds from other players' when it is trained by playing against itself, makes any further debate useless.



So because this disappointed your intellectual curiosity, over something that likely isn't even there to begin with, Deepmind is wasting their time and money? When in fact they set up an RTS game, until now played only by a bunch of scripts, as a math problem that gets solved by their neural net architecture and training methods, which generalize very well to similar real-world problems. Yeah, that makes sense. In my field of biophysics, Deepmind has a neural network that does better structure prediction of protein folding than any of the existing algorithms. And that specific competition has been running since 1994. Deepmind entered it last year for the first time and immediately won.

Do you know how much money is invested each year in drug development that involves protein folding or protein-protein interactions? You have absolutely no idea what you are talking about.


And in SC2, Alphastar makes micro decisions superior to all humans and beats most humans, even before they have finalized the version meant to challenge the top players. And in chess/Go, AlphaZero sees patterns impossible for a human to see.


SC2 isn't a game of strategy. It is a game of decision making and execution. Deepmind is only making their AI 'play like a human' so as not to offend the SC2 community too much. Alphafold also doesn't fold proteins 'like a human'; it solves the problem. And in SC2, that problem is winning the game, not 'coming up with strategies that please Xain0n'. And winning is achieved through superior micro, superior macro, superior multitasking, and superior battle-engagement decisions, not through hard-countering the enemy's build or trying to trick your opponent into hard-countering something you aren't actually doing.


No. All I care about is seeing how strong a playing AI they are able to develop, not an AI that can pass a Turing test through SC2 play. And in the meantime, I get annoyed by people who, for emotional, selfish reasons, decide to deliberately misunderstand SC2 (I assume you aren't truly ignorant) and are too lazy to learn the basics of ML and deep neural networks, while still believing their misunderstandings about the nature of Alphastar are worthwhile for others to read.

Why are there so many low-post-count accounts acting superior while spilling semi-BS about how AI works in these Deepmind threads? It was the same in the other thread.

I'm pretty sure (idk if it's that way for these ladder agents tho) that Alphastar used imitation learning at the beginning, so it did indeed use human replays, not only self-play. That was also why people guessed it spammed clicks: it picked that habit up from human play.
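That two-stage recipe, imitation first and self-play after, can be sketched in miniature. Everything below is a hypothetical toy, not AlphaStar's actual architecture: a weight table stands in for the neural net, and a one-line payoff stands in for an SC2 match.

```python
import random

random.seed(0)

# Stage 1: imitation -- initialize the policy from (fake) human replays,
# so action weights start as human action frequencies.
human_replays = [("early", "expand"), ("early", "expand"), ("early", "rush")]
policy = {}
for state, action in human_replays:
    policy.setdefault(state, {}).setdefault(action, 0)
    policy[state][action] += 1

def act(state):
    """Sample an action in proportion to its current weight."""
    actions = policy[state]
    r = random.uniform(0, sum(actions.values()))
    for a, w in actions.items():
        r -= w
        if r <= 0:
            return a

# Stage 2: self-play -- two copies of the policy play each other and the
# winner's action gains weight (made-up payoff: rush beats expand).
def reward(mine, theirs):
    return 1 if (mine, theirs) == ("rush", "expand") else 0

for _ in range(200):
    a, b = act("early"), act("early")
    policy["early"][a] += reward(a, b)
    policy["early"][b] += reward(b, a)

print(policy["early"])  # "rush" ends up far ahead of the imitated "expand"
```

The point of the sketch is the one Poopi makes: the starting behavior (including habits like click spam) comes from the human data, while self-play then drifts toward whatever the game's payoff actually rewards, regardless of what humans preferred.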
Original banner artwork: Jim Warren
The contents of this webpage are copyright © 2025 TLnet. All Rights Reserved.