AlphaStar AI goes 10-1 against human pros in demonstration…

Polypoetes
Profile Joined January 2019
20 Posts
February 01 2019 17:32 GMT
#321
On February 01 2019 23:46 Dangermousecatdog wrote:
On February 01 2019 01:58 Polypoetes wrote:
On February 01 2019 01:24 Dangermousecatdog wrote:
Polypoetes, you make an awful lot of assumptions that don't quite bear out. Pro players generally just want to win, regardless of whether they win decisively. You get the same ladder points and tournament money no matter how much you think you have won or lost a game by.


But this completely ignores what it means to be human and how humans actually learn. I am making assumptions? Your claim is literally that humans are able to objectively take their human experiences, objectively form their goal, and rewire their brains so that it happens more often. That is not how humans learn.

Humans learn by reinforcement as well. But what is the reinforcement signal? You clicking on the screen, trying to kill the enemy army, and it either working or failing? Or you looking at the ladder points after the game?
What are you even saying? You quote me but don't actually say anything that interacts with what I am saying. Your assumptions are still false assumptions. And then you write some nonsense. Do you even play SC2?


Are you that dense? Humans don't make a conscious effort to learn. Playing an RTS is mostly instinct. Of course a player is trying to win. The question is if and how a player knows which actions make her win. Ladder points are not the reinforcement signal for human learning. The human experience is how they learn. This is why you can and do learn from false reinforcement. For example, beginning players turtling up: they think they are playing better because the game lasts longer.

You call them 'assumptions', but this is exactly in line with everything I have heard modern experts on human learning say. It also makes sense within my scientific world view. Yet your view is that humans will themselves to learn in an objective way because of ladder points and tournament money. Absurd!

Everything I said exactly addresses your absurdly false claims. But this must be your 'tactic'.

You ask me if I play SC2. Of course I don't. I think it is a bad and boring game, which is borne out by AlphaStar: it shows that the optimal way to play, whether as an AI or as a human, is to 'circumvent' interesting gameplay and rely on mechanics and the superior micro of one or two units.

But why is it relevant? We aren't talking about SC2. We are talking about how humans learn and how AIs learn. I would like to ask you 'Do you even code?' or 'Do you even have a general understanding of the cognitive sciences?'. But it seems clear to me that you have problems thinking, comprehending the English language, expressing yourself in the English language, or all three.



User was temp banned for this post.
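The "false reinforcement" point above has a direct analogue in reinforcement learning: a learner that optimises a proxy signal (such as game length) instead of the true objective (win/loss) will prefer exactly the kind of turtling described. A minimal toy sketch, with all win rates and game lengths invented purely for illustration:

import random

def simulate_game(policy: str) -> tuple[bool, int]:
    """Crude toy model: return (won, game_length_in_seconds) for one game."""
    if policy == "turtle":
        return random.random() < 0.35, random.randint(1200, 1800)  # long games, low win rate
    return random.random() < 0.55, random.randint(400, 900)        # short games, higher win rate

def evaluate(policy: str, games: int = 10_000) -> tuple[float, float]:
    results = [simulate_game(policy) for _ in range(games)]
    win_rate = sum(1 for won, _ in results if won) / games     # the true objective
    avg_length = sum(length for _, length in results) / games  # a misleading proxy signal
    return win_rate, avg_length

for policy in ("turtle", "aggressive"):
    win_rate, avg_length = evaluate(policy)
    print(f"{policy:10s}  win rate {win_rate:.2f}   average game length {avg_length:.0f}s")

# Ranking policies by the proxy (game length) puts "turtle" first; ranking by
# the true objective (win rate) puts "aggressive" first. A player who judges
# their own play by how long they survive is optimising the wrong signal.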
Nebuchad
Profile Blog Joined December 2012
Switzerland12217 Posts
February 01 2019 17:38 GMT
#322
On February 02 2019 01:16 Acrofales wrote:
On February 01 2019 19:24 Nebuchad wrote:
I have two main issues with this whole thing.

1) It's pretty clear that control plays a bigger role than decision making in those wins. Continuing to make mass stalkers against 8 immortals is not good decision making. Having your whole army go back to the back of your base 4 times to deal with a warp prism while you could just cross the map and go fucking kill him instead is not good decision making. I think it looks especially bad because it's bad decision making in a way that is somewhat obvious, like, very few humans would make those bad decisions. We are used to "bad decision making" that is much subtler than that.

2) I don't like the PR strategy of DeepMind. It seems like they have to hype the fuck out of every accomplishment they get, and it makes the whole thing seem really artificial to me. I don't have the exact quotes in mind anymore, but what they said about this StarCraft experience felt overreaching when I read it; what they said about the poker experience was even worse, but the poker experience was somewhat more convincing than the StarCraft one (it had issues as well).

edit: my mistake, just realized Libratus wasn't made by the same guys. But the same principle applies to both.

I don't think you can claim that making stalkers is bad decision making at all. On paper, immortals hard counter stalkers. And under human control they do too. But if you have AlphaStar's micro capabilities, then suddenly they don't anymore. I think you're mixing up cause and effect a bit here. AlphaStar learned to make stalkers in most situations *because* it also learned to micro them incredibly well. That seems like a legitimate strategy. It's like when MKP showed that if you split your marines they didn't just get blasted into goo by a couple of banelings, and that if you did it well, banelings no longer countered marines very well at all. Sure, he micro'd marines FAR better than his contemporaries, but was his choice to then just make lots of marines a bad choice? Clearly not.

As for (2): they are a commercial enterprise. Of course they're going to hype their accomplishments. What did you expect? That said, if you actually watched the video, the guys there are quite honest about their achievements and their aspirations. I don't think they believe they have "solved SC2". Or poker, for that matter, although I suspect poker is pretty close to being solved in all its various forms, whereas SC2 will take a bit longer. Still, AlphaStar is quite a remarkable achievement, even with its flaws, and they are justifiably proud of it.


There comes a point where it wouldn't have worked, though. I don't know how many immortals are required, whether it's 10 or 14, but at some point AlphaStar would still have lost. MaNa perceived that the point was 8 because he was used to playing against human stalkers, and so he was on the map with 8 immortals thinking he was safe when he wasn't. At some point he would have been safe.

I didn't put 2) in there because I find it particularly surprising, but because it makes me ask myself more questions about the whole enterprise than I would if they made their commentary more fair and analytical.
No will to live, no wish to die
Polypoetes
Profile Joined January 2019
20 Posts
February 01 2019 17:42 GMT
#323
Well, apparently the internal agents favoured stalkers naturally. Maybe those that were given the artificial incentive to build immortals were beating those stalker-heavy agents. We don't know for sure. But apparently there is a downside to making a lot of immortals when most other agents are heavy on stalkers. Maybe they would mostly lose against any other agent not making stalkers.

If stalkers counter everything but immortals, and immortals get countered by everything but stalkers, then it probably is still best to make stalkers. If you don't like this, take it up with Blizzard, not with Deepmind.
Poopi
Profile Blog Joined November 2010
France12886 Posts
February 01 2019 17:45 GMT
#324
So I guess this is Rodya's third account?

I find it annoying that we could not see more games with the camera interface once MaNa finally won. They could have cut the off-race TLO games and played more live matches.
WriterMaru
Haukinger
Profile Joined June 2012
Germany131 Posts
February 01 2019 17:45 GMT
#325
On February 02 2019 02:42 Polypoetes wrote:
If stalkers counter everything but immortals, and immortals get countered by everything but stalkers, then it probably is still best to make stalkers. If you don't like this, take it up with Blizzard, not with Deepmind.


This. That's exactly the point. As long as the game allows for insane APM, one cannot blame an AI for using it.
Nebuchad
Profile Blog Joined December 2012
Switzerland12217 Posts
Last Edited: 2019-02-01 17:56:08
February 01 2019 17:55 GMT
#326
I find it difficult to make many charitable assumptions on how much the machine calculated when I look at that warp prism defense.
No will to live, no wish to die
Polypoetes
Profile Joined January 2019
20 Posts
Last Edited: 2019-02-01 18:57:11
February 01 2019 18:49 GMT
#327
On February 02 2019 02:55 Nebuchad wrote:
I find it difficult to make many charitable assumptions on how much the machine calculated when I look at that warp prism defense.


A neural network does the same amount of calculation with random weights as it does with the weights it found through '200 years of gameplay'. That is where you already go wrong.

Secondly, that AI was different from the AIs that went 5-0 against MaNa. So don't judge those AIs by the different AI in the last game.

Third, we saw AlphaGo become 'delusional' after it had played very strongly. These kinds of blind spots and failures are natural for neural networks, because no amount of training can ever prepare a NN completely for every test input. If you want an AI that succeeds 99.99% of the time, then don't use a NN. Yet despite its delusions, AlphaGo was stronger than Lee Sedol. So once the NN goes wrong and is losing, you cannot judge its strengths by what it is doing then. Korean commentators were literally laughing AlphaGo off the stage. If you watch the AlphaGo documentary, which you probably have if you are debating here, you already know what I mean.
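For what it's worth, the first point is easy to demonstrate: a forward pass through a fixed network architecture performs the same operations regardless of what the weights contain, so the raw amount of computation says nothing about how well trained the policy is. A minimal sketch in plain NumPy, with an arbitrary fully connected architecture (not AlphaStar's) and made-up layer sizes:

import time
import numpy as np

def forward(x, weights):
    """Fully connected network: one matmul plus ReLU per layer."""
    h = x
    for w in weights:
        h = np.maximum(h @ w, 0.0)
    return h

rng = np.random.default_rng(0)
sizes = [512, 1024, 1024, 256]               # hypothetical architecture
x = rng.standard_normal((64, sizes[0]))      # a batch of 64 inputs

random_weights = [rng.standard_normal((a, b)) for a, b in zip(sizes, sizes[1:])]
trained_weights = [0.01 * w for w in random_weights]   # stand-in for "trained" values

for label, weights in (("random", random_weights), ("trained", trained_weights)):
    start = time.perf_counter()
    forward(x, weights)
    print(f"{label:8s} weights: {time.perf_counter() - start:.4f} s per forward pass")

# Both runs execute identical multiply-adds; only the numbers inside the
# weight matrices differ.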
pvsnp
Profile Joined January 2017
7676 Posts
Last Edited: 2019-02-01 20:13:18
February 01 2019 19:47 GMT
#328
On February 01 2019 18:12 maybenexttime wrote:
@pvsnp

Circumvent means to go around.


Really? I had no idea. Thanks for the pretentious dictionary copypaste.


AlphaStar makes this aspect of the game relatively irrelevant. The same way having a team of aimbotters in CS:GO makes tactics irrelevant. The fact that the aimbotters dominate any fight when they choose to engage doesn't make them tactically superior.


And.....? AlphaStar's goal is not to be a tactically superior Starcraft player. Just using the word "superior" betrays your lack of understanding. Superior to a completely arbitrary performance benchmark? Is MaNa the only progamer out there? Change the benchmark and it's inferior, or superior, or whatever. Whether it happens to be superior or not is purely incidental to the real goals of making decisions with incomplete information. Which, as you are either overlooking or ignoring, applies to all aspects of the game, not just micro.


The games MaNa lost were rigged in many ways when it comes to the engagements. First of all, there was a vast gap in terms of mechanics between MaNa and AlphaStar - both in terms of battle awareness (not being limited to one screen in case of the AI) and superhuman APM peaks. Secondly, MaNa's experience worked against him. He admitted that he misjudged many engagements due to not being used to playing opponents with such mechanics. Before each battle MaNa overestimated his chances whereas AlphaStar underestimated its chances.


And more of the inane blathering about the same talking points. Did you even bother to read my post? AlphaStar doesn't really care about rigging. Deepmind doesn't really care. Google doesn't really care.

Because "rigging" implies that there is a "proper" (human) way for the AI to play Starcraft, which there isn't, as far as Deepmind is concerned. Because the true goal of AlphaStar is not to play Starcraft like a human. It's to further understand AI decision-making with incomplete information. And if the optimal decision is to use superhuman micro, then that's a useful conclusion. Does it trivialize the problem if the decision is always to use superhuman micro in battle? Perhaps, to some degree. But to claim that the entire game as an incomplete information environment, from production to scouting to harass and finally to battle, is trivialized merely because superhuman micro is within the action set, is idiocy.

Any attention AlphaStar pays towards Starcraft as a game, the fans, the fairness, the showmatches, is more or less entirely incidental. Or PR driven. This is a technical project with technical goals, and Starcraft is simply the vehicle of choice. If Starcraft had zero progamers and zero support, the only things AlphaStar would lose are a useful (but nonessential) performance benchmark and a PR opportunity.

Bluntly put, everything you're so busy preaching about doesn't matter. Go play Starcraft and leave AI to the professionals.
Denominator of the Universe
TL+ Member
maybenexttime
Profile Blog Joined November 2006
Poland5607 Posts
Last Edited: 2019-02-01 20:38:06
February 01 2019 20:34 GMT
#329
On February 02 2019 04:47 pvsnp wrote:
Really? I had no idea. Thanks for the pretentious dictionary copypaste.


I thought you were implying that I somehow said that superhuman mechanics allow AlphaStar to pierce through the fog of war. I guess it's you making that claim.

And.....? AlphaStar's goal is not to be a tactically superior Starcraft player. Whether it happens to be or not is purely incidental to the real goals of making decisions with incomplete information. Which, as you are either overlooking or ignoring, applies to all aspects of the game, not just micro.

And more of the inane blathering about the same talking points. Did you even bother to read my post? AlphaStar doesn't really care about rigging. Deepmind doesn't really care. Google doesn't really care.

Because "rigging" implies that there is a "proper" (human) way for the AI to play Starcraft, which there isn't, as far as Deepmind is concerned. Because the true goal of AlphaStar is not to play Starcraft like a human. It's to further understand AI decision-making with incomplete information. And if the optimal decision is to use superhuman micro, then that's a useful conclusion.

Any attention they pay towards Starcraft as a game, the fans, the fairness, the showmatches, is more or less entirely PR. This is a technical project with technical goals and Starcraft is simply the vehicle of choice.


That was an analogy meant to show how an AI can completely ignore the decision making aspect of a game that normally has one. Superhuman Stalker micro does not put the AI's decision making to the test, let alone decision making in an incomplete information environment (I'm pretty sure that AlphaStar's Stalker micro would look the same if they were to make SC2 a game of complete information). While there is decision making involved in Blink Stalker micro, it's the inhuman speed and precision of the execution that makes all the difference, not the decisions made. It doesn't matter which Stalker you choose to blink and where if you can blink five of them faster than a human would blink just one...

Anyway, not wasting my time any further with you.
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
Last Edited: 2019-02-01 22:34:27
February 01 2019 21:21 GMT
#330
@pvsnp,
That's really silly. AlphaStar is to a large extent a collaboration between Blizzard and Deepmind. Do you think Blizzard has no stake in SC2? Deepmind can't just use the custom-made SC2 interface, the set of replays Blizzard took from the ladder, etc. and do a "hit-and-run" without antagonizing Blizzard. Furthermore, the video game industry makes 150 billion dollars a year in revenue, so creating agents that can play video games has potential utility and therefore economic value. As an example, without APM limits and human-esque play you can't use these agents to replace the in-game AI.

And why do you think that Deepmind targeted Go and chess? Or Atari games? Or SC2? Maybe because it was founded by a former chess prodigy who is obsessed with board games, and maybe because it largely attracts researchers who are gaming enthusiasts. Deepmind is not just some nebulous Google research center plotting world domination; they are also a prestige project and they have some degree of autonomy. You can't be completely cynical about them.

The fact of the matter is, everyone in this thread who was taking the tone of "suck it up, Deepmind considers SC2 beneath itself, it will vulture-like scavenge what it can from it and then move on, their true goal is skynet/world domination"... they are likely to be proven wrong, as the co-leads of AlphaStar have already conceded that they would look at the APM limits and probably adjust them.

Of course you shouldn’t trust them, but they just aren’t beyond sentimental and moral considerations.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
pvsnp
Profile Joined January 2017
7676 Posts
Last Edited: 2019-02-01 23:00:00
February 01 2019 22:29 GMT
#331
On February 02 2019 05:34 maybenexttime wrote:
On February 02 2019 04:47 pvsnp wrote:
Really? I had no idea. Thanks for the pretentious dictionary copypaste.


I thought you were implying that I somehow said that superhuman mechanics allow AlphaStar to pierce through the fog of war. I guess it's you making that claim.


I was making the claim that you're missing the point, and everything I've heard since only reinforces that.


And.....? AlphaStar's goal is not to be a tactically superior Starcraft player. Whether it happens to be or not is purely incidental to the real goals of making decisions with incomplete information. Which, as you are either overlooking or ignoring, applies to all aspects of the game, not just micro.

And more of the inane blathering about the same talking points. Did you even bother to read my post? AlphaStar doesn't really care about rigging. Deepmind doesn't really care. Google doesn't really care.

Because "rigging" implies that there is a "proper" (human) way for the AI to play Starcraft, which there isn't, as far as Deepmind is concerned. Because the true goal of AlphaStar is not to play Starcraft like a human. It's to further understand AI decision-making with incomplete information. And if the optimal decision is to use superhuman micro, then that's a useful conclusion.

Any attention they pay towards Starcraft as a game, the fans, the fairness, the showmatches, is more or less entirely PR. This is a technical project with technical goals and Starcraft is simply the vehicle of choice.


That was an analogy meant to show how an AI can completely ignore the decision making aspect of a game that normally has one. Superhuman Stalker micro does not put the AI's decision making to the test, let alone decision making in an incomplete information environment (I'm pretty sure that AlphaStar's Stalker micro would look the same if they were to make SC2 a game of complete information). While there is decision making involved in Blink Stalker micro, it's the inhuman speed and precision of the execution that makes all the difference, not the decisions made. It doesn't matter which Stalker you choose to blink and where if you can blink five of them faster than a human would blink just one...


A terrible analogy that is either ignorant or disingenuous, given what you said earlier about circumventing the point. If the point is decision making, then deciding to use superhuman micro works just fine. Why would anything about human blink skill factor in? What bearing does that have on optimal decision making from limited information?

Decisions on where and how to blink with superhuman micro don't matter in the context of winning the game. They matter in the context of choosing optimally given limited knowledge about the current game state. Guess which one Deepmind cares about?


Anyway, not wasting my time any further with you.


Good to hear, correcting you was getting tiresome.

On February 02 2019 06:21 Grumbels wrote:
@pvsnp,
That's really silly. AlphaStar is to a large extent a collaboration between Blizzard and Deepmind. Do you think Blizzard has no stake in SC2? Deepmind can't just use the custom-made SC2 interface, the set of replays Blizzard took from the ladder, etc. and do a "hit-and-run" without antagonizing Blizzard. Furthermore, the video game industry makes 150 billion dollars a year in revenue, so creating agents that can play video games has potential utility and therefore economic value. As an example, without APM limits and human-esque play you can't use these agents to replace the in-game AI.

And why do you think that Deepmind targeted Go and chess? Or Atari games? Or SC2? Maybe because it was founded by a former chess prodigy who is obsessed with board games, and maybe because it largely attracts researchers who are gaming enthusiasts. Deepmind is not just some nebulous Google research center plotting world domination; they are also a prestige project and they have some degree of autonomy. You can't be completely cynical about them.

The fact of the matter is, everyone in this thread who was taking the tone of "suck it up, Deepmind considers SC2 beneath itself, it will vulture-like scavenge what it can from it and then move on, their true goal is skynet/world domination"... they are likely to be proven wrong, as the co-leads of AlphaStar have already conceded that they would look at the APM limits and probably adjust them.

Of course you shouldn’t trust them, but they just aren’t beyond sentimental and moral considerations.


I can see how you got to your conclusion that I have a very low and/or cynical opinion of Deepmind/Google. Especially since that's not exactly an uncommon opinion. But you've got it totally backwards. I love Deepmind and Google. I think AlphaStar is both technically interesting and very entertaining.

What annoys me is seeing all the prejudice surrounding AlphaStar, the misconceptions on how ML works, and the general ignorance about anything technical. Of course there are legitimate criticisms to be levelled at the way Deepmind has approached Starcraft with AlphaStar. But it's annoying, to say the least, when laymen pretend at expertise.

Google and Deepmind are doing Starcraft a favor by bringing so much attention. And yet so many people react by immediately attacking them and their work, in many cases without the slightest understanding of the technical aspects involved.
Denominator of the Universe
TL+ Member
Greenei
Profile Joined November 2011
Germany1754 Posts
February 02 2019 00:32 GMT
#332
This video was quite disappointing. The AI has numerous mechanical advantages over the human players. Under these circumstances we can not learn anything about strategy or the hidden beauty that is left in SC2.

1. APM was up to 1000 in the blink Stalker battles as far as I have seen.
2. Click precision should be lowered to match humans.
3. Perception should be lowered so it can't detect invisible units immediately.
4. It shouldn't be able to perceive the whole map and act in more than one screen.
IMBA IMBA IMBA IMBA IMBA IMBA
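For a concrete idea of what restrictions like the ones listed above could look like on the agent side, here is a minimal sketch (all class names, limits, and noise values are hypothetical and not DeepMind's actual interface): a sliding-window APM cap plus Gaussian click noise.

import random
from collections import deque

class ConstrainedInterface:
    """Wraps an agent's action stream with human-like limits."""

    def __init__(self, max_apm: int = 300, click_noise_px: float = 5.0):
        self.max_apm = max_apm                # actions allowed per rolling 60 s window
        self.click_noise_px = click_noise_px  # std. dev. of simulated mouse imprecision
        self.recent = deque()                 # timestamps of recently issued actions

    def try_issue(self, action, game_time_s: float, target_xy=None):
        # Drop timestamps that have fallen out of the 60-second window.
        while self.recent and game_time_s - self.recent[0] > 60.0:
            self.recent.popleft()
        if len(self.recent) >= self.max_apm:
            return None                       # over budget: the action is simply not executed
        self.recent.append(game_time_s)
        if target_xy is not None:
            # Perturb the click target to approximate human mouse error.
            x, y = target_xy
            target_xy = (x + random.gauss(0.0, self.click_noise_px),
                         y + random.gauss(0.0, self.click_noise_px))
        return action, target_xy

Point 4 (the camera restriction) would sit on the observation side rather than the action side, for example by only exposing units inside a movable screen-sized rectangle.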
Athinira
Profile Joined August 2011
Denmark33 Posts
Last Edited: 2019-02-02 15:55:10
February 02 2019 15:53 GMT
#333
On February 02 2019 09:32 Greenei wrote:
This video was quite disappointing. The AI has numerous mechanical advantages over the human players. Under these circumstances we can not learn anything about strategy or the hidden beauty that is left in SC2.

1. APM was up to 1000 in the blink Stalker battles as far as I have seen.
2. Click precision should be lowered to match humans.
3. Perception should be lowered so it can't detect invisible units immediately.
4. It shouldn't be able to perceive the whole map and act in more than one screen.

It's an AI research project, not a 100% fairness project. The AI is not intended to compete in professional leagues. It's intended to further the research in the field of AI - and as someone else said, StarCraft is merely the vessel of choice.

Complaining that the AI has superior micro is the equivalent of complaining that chess computers can calculate millions of positions per second (Deep Blue reached a peak of around 120 million positions per second when it beat Kasparov). Turning that argument around, you could also complain that the human can more easily outsmart the AI, and that it isn't fair to the AI.

Both are stupid arguments. Humans and computers aren't the same, and they're not intended to compete - which is why they generally don't. In chess, humans compete in human-only tournaments, and AIs compete in AI-only tournaments. And when humans and AIs do compete against each other from time to time, it's either (1) for fun, (2) for show, or (3) for learning. It's NOT for competition.
"Science Vessel much? Yeah, i think so!" - Tasteless, 2008
Poopi
Profile Blog Joined November 2010
France12886 Posts
February 02 2019 16:00 GMT
#334
It’s actually a pretty big problem if AIs win due to outright superior mechanics because it makes the game a far easier problem to solve.
If you wanna push the field of reinforcement learning using a strategy game that relies on mechanics as well as strategy, you need a relatively fair fight to do so.
WriterMaru
Dangermousecatdog
Profile Joined December 2010
United Kingdom7084 Posts
February 02 2019 16:40 GMT
#335
On February 02 2019 02:32 Polypoetes wrote:
On February 01 2019 23:46 Dangermousecatdog wrote:
On February 01 2019 01:58 Polypoetes wrote:
On February 01 2019 01:24 Dangermousecatdog wrote:
Polypoetes, you make an awful lot of assumptions that don't quite bear out. Pro players generally just want to win, regardless of whether they win decisively. You get the same ladder points and tournament money no matter how much you think you have won or lost a game by.


But this completely ignores what it means to be human and how humans actually learn. I am making assumptions? Your claim is literally that humans are able to objectively take their human experiences, objectively form their goal, and rewire their brains so that it happens more often. That is not how humans learn.

Humans learn by reinforcement as well. But what is the reinforcement signal? You clicking on the screen, trying to kill the enemy army, and it either working or failing? Or you looking at the ladder points after the game?
What are you even saying? You quote me but don't actually say anything that interacts with what I am saying. Your assumptions are still false assumptions. And then you write some nonsense. Do you even play SC2?


Are you that dense? Humans don't make a conscious effort to learn. Playing an RTS is mostly instinct. Of course a player is trying to win. The question is if and how a player knows which actions make her win. Ladder points are not the reinforcement signal for human learning. The human experience is how they learn. This is why you can and do learn from false reinforcement. For example, beginning players turtling up: they think they are playing better because the game lasts longer.

You call them 'assumptions', but this is exactly in line with everything I have heard modern experts on human learning say. It also makes sense within my scientific world view. Yet your view is that humans will themselves to learn in an objective way because of ladder points and tournament money. Absurd!

Everything I said exactly addresses your absurdly false claims. But this must be your 'tactic'.

You ask me if I play SC2. Of course I don't. I think it is a bad and boring game, which is borne out by AlphaStar: it shows that the optimal way to play, whether as an AI or as a human, is to 'circumvent' interesting gameplay and rely on mechanics and the superior micro of one or two units.

But why is it relevant? We aren't talking about SC2. We are talking about how humans learn and how AIs learn. I would like to ask you 'Do you even code?' or 'Do you even have a general understanding of the cognitive sciences?'. But it seems clear to me that you have problems thinking, comprehending the English language, expressing yourself in the English language, or all three.



User was temp banned for this post.

So, no, you don't play SC2. I thought so. And from the sounds of it, no other game or sport either. It's fairly obvious that you are talking out of your arse.
Ej_
Profile Blog Joined January 2013
47656 Posts
February 02 2019 16:52 GMT
#336
On February 03 2019 01:40 Dangermousecatdog wrote:
On February 02 2019 02:32 Polypoetes wrote:
On February 01 2019 23:46 Dangermousecatdog wrote:
On February 01 2019 01:58 Polypoetes wrote:
On February 01 2019 01:24 Dangermousecatdog wrote:
Polypoetes, you make an awful lot of assumptions that don't quite bear out. Pro players generally just want to win, regardless of whether they win decisively. You get the same ladder points and tournament money no matter how much you think you have won or lost a game by.


But this completely ignores what it means to be human and how humans actually learn. I am making assumptions? Your claim is literally that humans are able to objectively take their human experiences, objectively form their goal, and rewire their brains so that it happens more often. That is not how humans learn.

Humans learn by reinforcement as well. But what is the reinforcement signal? You clicking on the screen, trying to kill the enemy army, and it either working or failing? Or you looking at the ladder points after the game?
What are you even saying? You quote me but don't actually say anything that interacts with what I am saying. Your assumptions are still false assumptions. And then you write some nonsense. Do you even play SC2?


Are you that dense? Humans don't make a conscious effort to learn. Playing an RTS is mostly instinct. Of course a player is trying to win. The question is if and how a player knows which actions make her win. Ladder points are not the reinforcement signal for human learning. The human experience is how they learn. This is why you can and do learn from false reinforcement. For example, beginning players turtling up: they think they are playing better because the game lasts longer.

You call them 'assumptions', but this is exactly in line with everything I have heard modern experts on human learning say. It also makes sense within my scientific world view. Yet your view is that humans will themselves to learn in an objective way because of ladder points and tournament money. Absurd!

Everything I said exactly addresses your absurdly false claims. But this must be your 'tactic'.

You ask me if I play SC2. Of course I don't. I think it is a bad and boring game, which is borne out by AlphaStar: it shows that the optimal way to play, whether as an AI or as a human, is to 'circumvent' interesting gameplay and rely on mechanics and the superior micro of one or two units.

But why is it relevant? We aren't talking about SC2. We are talking about how humans learn and how AIs learn. I would like to ask you 'Do you even code?' or 'Do you even have a general understanding of the cognitive sciences?'. But it seems clear to me that you have problems thinking, comprehending the English language, expressing yourself in the English language, or all three.



User was temp banned for this post.

So, no, you don't play SC2. I thought so. And from the sounds of it, no other game or sport either. It's fairly obvious that you are talking out of your arse.

I think that would be you here.
"Technically the dictionary has zero authority on the meaning or words" - Rodya
Nebuchad
Profile Blog Joined December 2012
Switzerland12217 Posts
February 02 2019 16:54 GMT
#337
On February 03 2019 00:53 Athinira wrote:
On February 02 2019 09:32 Greenei wrote:
This video was quite disappointing. The AI has numerous mechanical advantages over the human players. Under these circumstances we can not learn anything about strategy or the hidden beauty that is left in SC2.

1. APM was up to 1000 in the blink Stalker battles as far as I have seen.
2. Click precision should be lowered to match humans.
3. Perception should be lowered so it can't detect invisible units immediately.
4. It shouldn't be able to perceive the whole map and act in more than one screen.

Complaining that the AI has superior micro is the equivalent of complaining that chess computers can calculate millions of positions per second


It obviously isn't. There are no mechanics in chess; what we're testing is the decision making. That is also what we should expect to be tested in StarCraft. It's not impressive to learn that a program acts faster than a hand; anyone could have told you that.
No will to live, no wish to die
perturbaitor
Profile Joined April 2015
2 Posts
February 02 2019 17:56 GMT
#338
inside this topic
two kinds of blindness exist
state and direction
Greenei
Profile Joined November 2011
Germany1754 Posts
February 02 2019 18:41 GMT
#339
On February 03 2019 00:53 Athinira wrote:
On February 02 2019 09:32 Greenei wrote:
This video was quite disappointing. The AI has numerous mechanical advantages over the human players. Under these circumstances we can not learn anything about strategy or the hidden beauty that is left in SC2.

1. APM was up to 1000 in the blink Stalker battles as far as I have seen.
2. Click precision should be lowered to match humans.
3. Perception should be lowered so it can't detect invisible units immediately.
4. It shouldn't be able to perceive the whole map and act in more than one screen.

It's an AI research project, not a 100% fairness project. The AI is not intended to compete in professional leagues. It's intended to further the research in the field of AI - and as someone else said, StarCraft is merely the vessel of choice.

Complaining that the AI has superior micro is the equivalent of complaining that chess computers can calculate millions of positions per second (Deep Blue reached a peak of around 120 million positions per second when it beat Kasparov). Turning that argument around, you could also complain that the human can more easily outsmart the AI, and that it isn't fair to the AI.

Both are stupid arguments. Humans and computers aren't the same, and they're not intended to compete - which is why they generally don't. In chess, humans compete in human-only tournaments, and AIs compete in AI-only tournaments. And when humans and AIs do compete against each other from time to time, it's either (1) for fun, (2) for show, or (3) for learning. It's NOT for competition.


Why do you think they limited the APM of the agent at all, then? In the interview they said that they would be happy if new strategies emerged from the AI and humans could learn something from them for their own game. That goal is inconsistent with the AI having very high APM. My criticism is that they did not go far enough to ensure fair play.

Furthermore, I said that I was disappointed in it because we can't learn anything about strategy. I didn't say anything about fair competition. Microbots just aren't that fun to look at.
IMBA IMBA IMBA IMBA IMBA IMBA
Fecalfeast
Profile Joined January 2010
Canada11355 Posts
February 02 2019 18:43 GMT
#340
On February 03 2019 02:56 perturbaitor wrote:
inside this topic
two kinds of blindness exist
state and direction

cool haiku but what does it mean?

1 post just to be snarky in this thread; are you Polypoetes?
Moderator | INFLATE YOUR POST COUNT; PLAY TL MAFIA