AlphaStar AI goes 10-1 against human pros in demonstration…

Forum Index > SC2 General
Polypoetes
Profile Joined January 2019
20 Posts
February 01 2019 17:32 GMT
#321
On February 01 2019 23:46 Dangermousecatdog wrote:
On February 01 2019 01:58 Polypoetes wrote:
On February 01 2019 01:24 Dangermousecatdog wrote:
Polypoetes, you make an awful lot of assumptions that don't quite bear out. Pro players generally just want to win, regardless of whether they win decisively. You get the same ladder points and tournament money no matter how much you think you have won or lost a game by.


But this completely ignores what it means to be human and how humans actually learn. I am making assumptions? Your claim is literally that humans are able to objectively take their human experiences, objectively form their goal, and rewire their brains so it happens more. That is not how humans learn.

Humans learn by reinforcement learning as well. But what is the reinforcement? You clicking on the screen, trying to kill the enemy army and it either working or failing? Or you looking at the ladder points after the game?
What are you even saying? You quote me but don't actually say anything that interacts with what I am saying. Your assumptions are still false assumptions. And then you write some nonsense. Do you even play SC2?


Are you that dense? Humans don't make a conscious effort to learn. Playing RTS is mostly instinct. Of course a player is trying to win. The question is if and how a player knows what actions make her win. Ladder points are not the reinforcement for human learning. The human experience is how they learn. This is why you can and do learn by false reinforcement. For example, beginning players turtling up. They think they are playing better because the duration of the game is longer.

You call them 'assumptions', but this is exactly in line with everything I have heard modern experts on human learning say. It also makes sense in my scientific world view. Yet your view is that humans will themselves to learn in an objective way because of ladder points and tournament money. Absurd!

Everything I said exactly addresses your absurdly false claims. But this must be your 'tactic'.

You ask me if I play SC2. Of course I don't. I think it is a bad and boring game, which is borne out by AlphaStar. It shows that the perfect way to play, either as an AI or as a human, is to 'circumvent' interesting gameplay and rely on mechanics and superior micro of one or two units.

But why is it relevant? We aren't talking about SC2. We are talking about how humans learn and how AIs learn. I would like to state to you 'Do you even code?' or 'Do you even have a general understanding of the cognitive sciences?'. But it seems clear to me that you have problems thinking, comprehending the English language, expressing yourself in the English language, or all three.



User was temp banned for this post.
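The reinforcement-signal question argued above can be made concrete with a toy bandit learner. The numbers here are entirely hypothetical and have nothing to do with AlphaStar's actual training; the point is only that the same learning rule produces different behavior depending on what gets reinforced: reward wins and the learner prefers the winning strategy, reward the feeling of surviving a long game and it learns to turtle.

```python
import random

random.seed(0)

# Toy model: two "strategies" a beginner might drift toward.
# 'attack' actually wins more; 'turtle' merely makes games longer.
WIN_PROB = {"attack": 0.7, "turtle": 0.3}
GAME_LENGTH = {"attack": 10.0, "turtle": 30.0}

def train(reward_fn, episodes=5000, eps=0.1, lr=0.02):
    """Epsilon-greedy bandit learning over the two strategies."""
    value = {"attack": 0.0, "turtle": 0.0}
    for _ in range(episodes):
        if random.random() < eps:
            action = random.choice(["attack", "turtle"])
        else:
            action = max(value, key=value.get)
        won = random.random() < WIN_PROB[action]
        value[action] += lr * (reward_fn(action, won) - value[action])
    return max(value, key=value.get)

# Objective signal: reinforce wins and nothing else.
sparse = train(lambda a, won: 1.0 if won else 0.0)
# "False" signal: reinforce surviving a long game.
shaped = train(lambda a, won: GAME_LENGTH[a] / 30.0)

print(sparse, shaped)
```

With the win/loss signal the learner settles on the attacking strategy; with the game-length signal it settles on turtling, mirroring the "false reinforcement" example in the post.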
Nebuchad
Profile Blog Joined December 2012
Switzerland12391 Posts
February 01 2019 17:38 GMT
#322
On February 02 2019 01:16 Acrofales wrote:
On February 01 2019 19:24 Nebuchad wrote:
I have two main issues with this whole thing.

1) It's pretty clear that control plays a bigger role than decision making in those wins. Continuing to make mass stalkers against 8 immortals is not good decision making. Having your whole army go back to the back of your base 4 times to deal with a warp prism while you could just cross the map and go fucking kill him instead is not good decision making. I think it looks especially bad because it's bad decision making in a way that is somewhat obvious, like, very few humans would make those bad decisions. We are used to "bad decision making" that is much subtler than that.

2) I don't like the PR strategy of DeepMind. It seems like they have to hype the fuck out of the accomplishments that they get, and it makes the whole thing seem really artificial to me. I don't have the exact quotes in mind any more but what they said about this starcraft experience felt overreaching when I read it; what they said about the poker experience was even worse, but the poker experience was somewhat more convincing than the starcraft one (it had issues as well).

edit: my mistake, just realized Libratus wasn't made by the same guys. But the same principle applies to both.

I don't think you can claim that making stalkers is bad decisionmaking at all. On paper, immortals hard counter stalkers. And in human control they do too. But if you have Alphastar micro capabilities, then suddenly they don't anymore. I think you're mixing cause and effect a bit here. Alphastar learned to make stalkers in most situations *because* it also learned to micro them incredibly well. That seems like a legitimate strategy. It's like when MKP showed that if you split your marines they didn't just get blasted into goo by a couple of banelings, and if you did it well, then suddenly banelings no longer countered marines very well at all. Sure, he microd marines FAR better than his contemporaries, but was his choice to then just make lots of marines a bad choice? Clearly not.

As for (2). They are a commercial enterprise. Of course they're going to hype their accomplishments. What did you think? That said, if you actually watched the video, the guys there are quite honest about their achievements and their aspirations. I don't think they believe they have "solved SC2". Or poker, for that matter, although I suspect poker is pretty close to being solved in all its various forms, whereas SC2 will take a bit longer. Still, Alphastar is quite a remarkable achievement, even with its flaws, and they are justifiably proud of it.


There comes a point where it wouldn't have worked though. I don't know how many immortals are required, if it's 10 or 14, but at some point Alphastar would have still lost. Mana perceived that the point was 8 because he was used to playing human stalkers, and so he was on the map with 8 immortals thinking he was safe when he wasn't. At some point he would have been safe.

I didn't put 2) in there because I find it particularly surprising, but because it makes me ask myself more questions about the whole enterprise than I would if they made their commentary more fair and analytical.
No will to live, no wish to die
Polypoetes
Profile Joined January 2019
20 Posts
February 01 2019 17:42 GMT
#323
Well, apparently the internal agents favoured stalkers naturally. Maybe those that were given the artificial incentive to build immortals were beating those stalker-heavy agents. We don't know for sure. But apparently there is a downside to making a lot of immortals when most other agents are heavy on stalkers. Maybe they would mostly lose against any other agent not making stalkers.

If stalkers counter everything but immortals, and immortals get countered by everything but stalkers, then it probably is still best to make stalkers. If you don't like this, take it up with Blizzard, not with Deepmind.
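The counter-structure described here can be checked with a toy payoff matrix (the win rates are purely illustrative, not measured from the game): even though immortals beat stalkers head-to-head, stalkers can still have the best expected win rate against a league population in which immortal-heavy agents are rare.

```python
# Hypothetical win probabilities: row strategy vs column strategy.
# 'stalkers' beats everything except 'immortals'; 'immortals'
# loses to everything except 'stalkers'.
strategies = ["stalkers", "immortals", "other"]
win = {
    ("stalkers", "stalkers"): 0.5, ("stalkers", "immortals"): 0.2,
    ("stalkers", "other"): 0.8,
    ("immortals", "stalkers"): 0.8, ("immortals", "immortals"): 0.5,
    ("immortals", "other"): 0.2,
    ("other", "stalkers"): 0.2, ("other", "immortals"): 0.8,
    ("other", "other"): 0.5,
}

def expected_winrate(mine, population):
    """Expected win rate of one strategy against a mixed population."""
    return sum(share * win[(mine, opp)] for opp, share in population.items())

# A league where immortal-heavy agents are rare:
population = {"stalkers": 0.5, "immortals": 0.1, "other": 0.4}
for s in strategies:
    # stalkers come out highest despite losing to immortals head-to-head
    print(s, round(expected_winrate(s, population), 2))
```

So "best response to the league" and "best response to any single opponent" can genuinely differ, which is consistent with stalker-heavy agents dominating the internal league.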
Poopi
Profile Blog Joined November 2010
France12909 Posts
February 01 2019 17:45 GMT
#324
So I guess this is Rodya's third account?

I find it annoying that we could not see more games with the camera interface once MaNa finally won. They could have cut the off-race TLO games and played more live matches.
WriterMaru
Haukinger
Profile Joined June 2012
Germany131 Posts
February 01 2019 17:45 GMT
#325
On February 02 2019 02:42 Polypoetes wrote:
If stalkers counter everything but immortals, and immortals get countered by everything but stalkers, then it probably is still best to make stalkers. If you don't like this, take it up with Blizzard, not with Deepmind.


This. That's exactly the point. As long as the game allows for insane apm, one cannot blame an AI for using it.
Nebuchad
Profile Blog Joined December 2012
Switzerland12391 Posts
Last Edited: 2019-02-01 17:56:08
February 01 2019 17:55 GMT
#326
I find it difficult to make many charitable assumptions on how much the machine calculated when I look at that warp prism defense.
No will to live, no wish to die
Polypoetes
Profile Joined January 2019
20 Posts
Last Edited: 2019-02-01 18:57:11
February 01 2019 18:49 GMT
#327
On February 02 2019 02:55 Nebuchad wrote:
I find it difficult to make many charitable assumptions on how much the machine calculated when I look at that warp prism defense.


A neural network does the same amount of calculations with random weights as it does with the weights it found through '200 years of gameplay'. So there you already go wrong.

Secondly, that AI was different from the AI that went 0-5 against Mana. So don't judge those AIs by the different AI in the last game.

Third, we saw AlphaGo become 'delusional' after it played very strongly. These kinds of blind spots and failures are natural for neural networks, because no amount of training can ever prepare a NN completely for every test input. If you want an AI that succeeds 99.99% of the time, then don't use a NN. Yet despite its delusions, AlphaGo was stronger than Lee Sedol. So once the NN goes wrong and is losing, you cannot judge its strengths by what it is doing then. Korean commentators were literally laughing AlphaGo off stage. If you watch the AlphaGo documentary, which you probably have if you are debating here, you already know what I mean.
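The first point above is easy to verify: a network's forward pass performs the same operations whatever values the weights hold, so a trained and an untrained network cost exactly the same to run. A minimal sketch with a plain NumPy MLP (the layer shapes are illustrative and unrelated to AlphaStar's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights):
    """Plain MLP forward pass. The operations performed are fixed by
    the layer shapes, not by the weight values."""
    ops = 0
    for W in weights:
        ops += x.shape[0] * W.shape[0] * W.shape[1]  # multiply-adds this layer
        x = np.maximum(x @ W, 0.0)                   # ReLU activation
    return x, ops

shapes = [(32, 64), (64, 64), (64, 8)]
random_w = [rng.standard_normal(s) for s in shapes]
trained_w = [np.ones(s) * 0.01 for s in shapes]  # stand-in for trained weights

x = rng.standard_normal((1, 32))
_, ops_random = forward(x, random_w)
_, ops_trained = forward(x, trained_w)
print(ops_random == ops_trained)  # True: same compute either way
```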
pvsnp
Profile Joined January 2017
7676 Posts
Last Edited: 2019-02-01 20:13:18
February 01 2019 19:47 GMT
#328
On February 01 2019 18:12 maybenexttime wrote:
@pvsnp

Circumvent means to go around.


Really? I had no idea. Thanks for the pretentious dictionary copypaste.


AlphaStar makes this aspect of the game relatively irrelevant. The same way having a team of aimboters in CS:GO makes tactics irrelevant. The fact that the aimboters dominate any fight when they choose to engage doesn't make them tactically superior.


And.....? AlphaStar's goal is not to be a tactically superior Starcraft player. Just using the word "superior" betrays your lack of understanding. Superior to a completely arbitrary performance benchmark? Is Mana the only progamer out there? Change the benchmark and it's inferior, or superior, or whatever. Whether it happens to be superior or not is purely incidental to the real goal of making decisions with incomplete information. Which, as you are either overlooking or ignoring, applies to all aspects of the game, not just micro.


The games MaNa lost were rigged in many ways when it comes to the engagements. First of all, there was a vast gap in terms of mechanics between MaNa and AlphaStar - both in terms of battle awareness (not being limited to one screen in case of the AI) and superhuman APM peaks. Secondly, MaNa's experience worked against him. He admitted that he misjudged many engagements due to not being used to playing opponents with such mechanics. Before each battle MaNa overestimated his chances whereas AlphaStar underestimated its chances.


And more of the inane blathering about the same talking points. Did you even bother to read my post? AlphaStar doesn't really care about rigging. Deepmind doesn't really care. Google doesn't really care.

Because "rigging" implies that there is a "proper" (human) way for the AI to play Starcraft, which there isn't, as far as Deepmind is concerned. Because the true goal of AlphaStar is not to play Starcraft like a human. It's to further understand AI decision-making with incomplete information. And if the optimal decision is to use superhuman micro, then that's a useful conclusion. Does it trivialize the problem if the decision is always to use superhuman micro in battle? Perhaps, to some degree. But to claim that the entire game as an incomplete information environment, from production to scouting to harass and finally to battle, is trivialized merely because superhuman micro is within the action set, is idiocy.

Any attention AlphaStar pays towards Starcraft as a game, the fans, the fairness, the showmatches, is more or less entirely incidental. Or PR driven. This is a technical project with technical goals, and Starcraft is simply the vehicle of choice. If Starcraft had zero progamers and zero support, the only things AlphaStar would lose are a useful (but nonessential) performance benchmark and a PR opportunity.

Bluntly put, everything you're so busy preaching about doesn't matter. Go play Starcraft and leave AI to the professionals.
Denominator of the Universe
TL+ Member
maybenexttime
Profile Blog Joined November 2006
Poland5755 Posts
Last Edited: 2019-02-01 20:38:06
February 01 2019 20:34 GMT
#329
On February 02 2019 04:47 pvsnp wrote:Really? I had no idea. Thanks for the pretentious dictionary copypaste.


I thought you were implying that I somehow said that superhuman mechanics allow AlphaStar to pierce through the fog of war. I guess it's you making that claim.

And.....? AlphaStar's goal is not to be a tactically superior Starcraft player. Whether it happens to be or not is purely incidental to the real goals of making decisions with incomplete information. Which, as you are either overlooking or ignoring, applies to all aspects of the game, not just micro.

And more of the inane blathering about the same talking points. Did you even bother to read my post? AlphaStar doesn't really care about rigging. Deepmind doesn't really care. Google doesn't really care.

Because "rigging" implies that there is a "proper" (human) way for the AI to play Starcraft, which there isn't, as far as Deepmind is concerned. Because the true goal of AlphaStar is not to play Starcraft like a human. It's to further understand AI decision-making with incomplete information. And if the optimal decision is to use superhuman micro, then that's a useful conclusion.

Any attention they pay towards Starcraft as a game, the fans, the fairness, the showmatches, is more or less entirely PR. This is a technical project with technical goals and Starcraft is simply the vehicle of choice.


That was an analogy meant to show how an AI can completely ignore the decision making aspect of a game that normally has one. Superhuman Stalker micro does not put the AI's decision making to the test, let alone decision making in an incomplete information environment (I'm pretty sure that AlphaStar's Stalker micro would look the same if they were to make SC2 a game of complete information). While there is decision making involved in Blink Stalker micro, it's the inhuman speed and precision of the execution that makes all the difference, not the decisions made. It doesn't matter which Stalker you choose to blink and where if you can blink five of them faster than a human would blink just one...

Anyway, not wasting my time any further with you.
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
Last Edited: 2019-02-01 22:34:27
February 01 2019 21:21 GMT
#330
@pvsnp,
That’s really silly. AlphaStar is to a large extent a collaboration between Blizzard and Deepmind. Do you think Blizzard has no stake in SC2? Deepmind can’t just use the custom-made SC2 interface, the set of replays Blizzard took from the ladder, etc. and do a “hit-and-run” without antagonizing Blizzard. Furthermore, the video game industry makes 150 billion dollars a year in revenue, so creating agents that can play video games has potential utility and therefore economic value. As an example, without APM limits and human-esque play you can’t use these agents to replace the in-game AI.

And why do you think that Deepmind targeted Go and chess? Or Atari games? Or SC2? Maybe because it was founded by a former chess prodigy who is obsessed with board games, and maybe because it largely attracts researchers who are gaming enthusiasts. Deepmind is not just some nebulous google research center plotting world domination, they are also a prestige project and they have some degree of autonomy. You can’t be completely cynical about them.

Fact of the matter is, everyone in this thread who was taking this tone of “suck it up, Deepmind considers SC2 beneath itself, it will vulture-like scavenge what it can from it and then move on, their true goal is skynet/world domination”... they are likely to be proven wrong as the co-leads of AlphaStar already conceded they would look at the APM limits and probably adjust them.

Of course you shouldn’t trust them, but they just aren’t beyond sentimental and moral considerations.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
pvsnp
Profile Joined January 2017
7676 Posts
Last Edited: 2019-02-01 23:00:00
February 01 2019 22:29 GMT
#331
On February 02 2019 05:34 maybenexttime wrote:
On February 02 2019 04:47 pvsnp wrote:Really? I had no idea. Thanks for the pretentious dictionary copypaste.


I thought you were implying that I somehow said that superhuman mechanics allow AlphaStar to pierce through the fog of war. I guess it's you making that claim.


I was making the claim that you're missing the point, and everything I've heard since only reinforces that.


And.....? AlphaStar's goal is not to be a tactically superior Starcraft player. Whether it happens to be or not is purely incidental to the real goals of making decisions with incomplete information. Which, as you are either overlooking or ignoring, applies to all aspects of the game, not just micro.

And more of the inane blathering about the same talking points. Did you even bother to read my post? AlphaStar doesn't really care about rigging. Deepmind doesn't really care. Google doesn't really care.

Because "rigging" implies that there is a "proper" (human) way for the AI to play Starcraft, which there isn't, as far as Deepmind is concerned. Because the true goal of AlphaStar is not to play Starcraft like a human. It's to further understand AI decision-making with incomplete information. And if the optimal decision is to use superhuman micro, then that's a useful conclusion.

Any attention they pay towards Starcraft as a game, the fans, the fairness, the showmatches, is more or less entirely PR. This is a technical project with technical goals and Starcraft is simply the vehicle of choice.


That was an analogy meant to show how an AI can completely ignore the decision making aspect of a game that normally has one. Superhuman Stalker micro does not put the AI's decision making to the test, let alone decision making in an incomplete information environment (I'm pretty sure that AlphaStar's Stalker micro would look the same if they were to make SC2 a game of complete information). While there is decision making involved in Blink Stalker micro, it's the inhuman speed and precision of the execution that makes all the difference, not the decisions made. It doesn't matter which Stalker you choose to blink and where if you can blink five of them faster than a human would blink just one...


A terrible analogy that is either ignorant or disingenuous, given what you said earlier about circumventing the point. If the point is decisionmaking then deciding to use superhuman micro works just fine. Why would anything about human blink skill factor in? What bearing does that have on optimal decisionmaking from limited information?

Decisions on where and how to blink with superhuman micro don't matter in the context of winning the game. They matter in the context of choosing optimally given limited knowledge about current game state. Guess which one Deepmind cares about?


Anyway, not wasting my time any further with you.


Good to hear, correcting you was getting tiresome.

On February 02 2019 06:21 Grumbels wrote:
@pvsnp,
That’s really silly. AlphaStar is to a large extent a collaboration between Blizzard and Deepmind. Do you think Blizzard has no stake in SC2? Deepmind can’t just use the custom-made SC2 interface, the set of replays Blizzard took from the ladder, etc. and do a “hit-and-run” without antagonizing Blizzard. Furthermore, the video game industry makes 150 billion dollars a year in revenue, so creating agents that can play video games has potential utility and therefore economic value. As an example, without APM limits and human-esque play you can’t use these agents to replace the in-game AI.

And why do you think that Deepmind targeted Go and chess? Or Atari games? Or SC2? Maybe because it was founded by a former chess prodigy who is obsessed with board games, and maybe because it largely attracts researchers who are gaming enthusiasts. Deepmind is not just some nebulous google research center plotting world domination, they are also a prestige project and they have some degree of autonomy. You can’t be completely cynical about them.

Fact of the matter is, everyone in this thread who was taking this tone of “suck it up, Deepmind considers SC2 beneath itself, it will vulture-like scavenge what it can from it and then move on, their true goal is skynet/world domination”... they are likely to be proven wrong as the co-leads of AlphaStar already conceded they would look at the APM limits and probably adjust them.

Of course you shouldn’t trust them, but they just aren’t beyond sentimental and moral considerations.


I can see how you got to your conclusion that I have a very low and/or cynical opinion of Deepmind/Google. Especially since that's not exactly an uncommon opinion. But you've got it totally backwards. I love Deepmind and Google. I think AlphaStar is both technically interesting and very entertaining.

What annoys me is seeing all the prejudice surrounding AlphaStar, the misconceptions on how ML works, and the general ignorance about anything technical. Of course there are legitimate criticisms to be levelled at the way Deepmind has approached Starcraft with AlphaStar. But it's annoying, to say the least, when laymen pretend at expertise.

Google and Deepmind are doing Starcraft a favor by bringing so much attention. And yet so many people react by immediately attacking them and their work, in many cases without the slightest understanding of the technical aspects involved.
Denominator of the Universe
TL+ Member
Greenei
Profile Joined November 2011
Germany1754 Posts
February 02 2019 00:32 GMT
#332
This video was quite disappointing. The AI has numerous mechanical advantages over the human players. Under these circumstances we can not learn anything about strategy or the hidden beauty that is left in SC2.

1. APM was up to 1000 in the blink Stalker battles as far as I have seen.
2. Click precision should be lowered to match humans.
3. Perception should be lowered so it can't detect invisible units immediately.
4. It shouldn't be able to perceive the whole map and act in more than one screen.
IMBA IMBA IMBA IMBA IMBA IMBA
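Constraints like the APM cap Greenei lists are straightforward to enforce in software. A sketch of a rolling-window action-rate limiter wrapped around a hypothetical agent interface (all names and numbers here are made up for illustration; this is not DeepMind's actual interface):

```python
import collections

class ApmLimitedAgent:
    """Wrap a (hypothetical) agent so it may act at most `max_apm`
    times per rolling 60-second window; excess actions become no-ops."""

    def __init__(self, agent, max_apm=300):
        self.agent = agent
        self.max_apm = max_apm
        self.action_times = collections.deque()

    def act(self, observation, now):
        # Drop timestamps that have left the rolling window.
        while self.action_times and now - self.action_times[0] >= 60.0:
            self.action_times.popleft()
        if len(self.action_times) >= self.max_apm:
            return "noop"  # over budget: forced no-op
        action = self.agent.act(observation, now)
        if action != "noop":
            self.action_times.append(now)
        return action

# Hypothetical spammy agent that always wants to act.
class Spammer:
    def act(self, observation, now):
        return "move"

agent = ApmLimitedAgent(Spammer(), max_apm=300)
# Simulate one minute at ~100 decision points per second.
issued = sum(agent.act(None, t / 100.0) != "noop" for t in range(6000))
print(issued)  # 300
```

A rolling window like this still permits short bursts inside the minute, which is exactly the loophole the demonstration's per-window APM cap was criticized for; burst limits would need a second, shorter window.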
Athinira
Profile Joined August 2011
Denmark33 Posts
Last Edited: 2019-02-02 15:55:10
February 02 2019 15:53 GMT
#333
On February 02 2019 09:32 Greenei wrote:
This video was quite disappointing. The AI has numerous mechanical advantages over the human players. Under these circumstances we can not learn anything about strategy or the hidden beauty that is left in SC2.

1. APM was up to 1000 in the blink Stalker battles as far as I have seen.
2. Click precision should be lowered to match humans.
3. Perception should be lowered so it can't detect invisible units immediately.
4. It shouldn't be able to perceive the whole map and act in more than one screen.

It's an AI research project, not a 100% fairness project. The AI is not intended to compete in professional leagues. It's intended to further the research in the field of AI - and as someone else said, StarCraft is merely the vessel of choice.

Complaining that the AI has superior micro is the equivalent of complaining that chess computers can calculate millions of positions per second (Deep Blue reached a peak of around 120 million positions per second when it beat Kasparov). Turning that argument around, you could also complain that the human can more easily outsmart the AI, and that isn't fair to the AI.

Both are stupid arguments. Humans and computers aren't the same, and they're not intended to compete - which is why they generally don't. In chess, humans compete in human-only tournaments, and AIs compete in AI-only tournaments. And when humans and AIs do compete against each other from time to time, it's either (1) for fun, (2) for show or (3) for learning. It's NOT for competition.
"Science Vessel much? Yeah, i think so!" - Tasteless, 2008
Poopi
Profile Blog Joined November 2010
France12909 Posts
February 02 2019 16:00 GMT
#334
It’s actually a pretty big problem if AIs win due to outright superior mechanics because it makes the game a far easier problem to solve.
If you wanna push the field of reinforcement learning using a strategy game that relies on mechanics as well as strategy, you need a relatively fair fight to do so.
WriterMaru
Dangermousecatdog
Profile Joined December 2010
United Kingdom7084 Posts
February 02 2019 16:40 GMT
#335
On February 02 2019 02:32 Polypoetes wrote:
On February 01 2019 23:46 Dangermousecatdog wrote:
On February 01 2019 01:58 Polypoetes wrote:
On February 01 2019 01:24 Dangermousecatdog wrote:
Polypoetes, you make an awful lot of assumptions that don't quite bear out. Pro players generally just want to win, regardless of whether they win decisively. You get the same ladder points and tournament money no matter how much you think you have won or lost a game by.


But this completely ignores what it means to be human and how humans actually learn. I am making assumptions? Your claim is literally that humans are able to objectively take their human experiences, objectively form their goal, and rewire their brains so it happens more. That is not how humans learn.

Humans learn by reinforcement learning as well. But what is the reinforcement? You clicking on the screen, trying to kill the enemy army and it either working or failing? Or you looking at the ladder points after the game?
What are you even saying? You quote me but don't actually say anything that interacts with what I am saying. Your assumptions are still false assumptions. And then you write some nonsense. Do you even play SC2?


Are you that dense? Humans don't make a conscious effort to learn. Playing RTS is mostly instinct. Of course a player is trying to win. The question is if and how a player knows what actions make her win. Ladder points are not the reinforcement for human learning. The human experience is how they learn. This is why you can and do learn by false reinforcement. For example, beginning players turtling up. They think they are playing better because the duration of the game is longer.

You call them 'assumptions', but this is exactly in line with everything I have heard modern experts on human learning say. It also makes sense in my scientific world view. Yet your view is that humans will themselves to learn in an objective way because of ladder points and tournament money. Absurd!

Everything I said exactly addresses your absurdly false claims. But this must be your 'tactic'.

You ask me if I play SC2. Of course I don't. I think it is a bad and boring game, which is borne out by AlphaStar. It shows that the perfect way to play, either as an AI or as a human, is to 'circumvent' interesting gameplay and rely on mechanics and superior micro of one or two units.

But why is it relevant? We aren't talking about SC2. We are talking about how humans learn and how AIs learn. I would like to state to you 'Do you even code?' or 'Do you even have a general understanding of the cognitive sciences?'. But it seems clear to me that you have problems thinking, comprehending the English language, expressing yourself in the English language, or all three.



User was temp banned for this post.

So, no you don't play SC2. I thought so. And from the sounds of it no other game or sport either. It's fairly obvious that you are talking out of your arse.
Ej_
Profile Blog Joined January 2013
47656 Posts
February 02 2019 16:52 GMT
#336
On February 03 2019 01:40 Dangermousecatdog wrote:
On February 02 2019 02:32 Polypoetes wrote:
On February 01 2019 23:46 Dangermousecatdog wrote:
On February 01 2019 01:58 Polypoetes wrote:
On February 01 2019 01:24 Dangermousecatdog wrote:
Polypoetes, you make an awful lot of assumptions that don't quite bear out. Pro players generally just want to win, regardless of whether they win decisively. You get the same ladder points and tournament money no matter how much you think you have won or lost a game by.


But this completely ignores what it means to be human and how humans actually learn. I am making assumptions? Your claim is literally that humans are able to objectively take their human experiences, objectively form their goal, and rewire their brains so it happens more. That is not how humans learn.

Humans learn by reinforcement learning as well. But what is the reinforcement? You clicking on the screen, trying to kill the enemy army and it either working or failing? Or you looking at the ladder points after the game?
What are you even saying? You quote me but don't actually say anything that engages with what I am saying. Your assumptions are still false assumptions. And then you write some nonsense. Do you even play SC2?


Are you that dense? Humans don't make a conscious effort to learn. Playing RTS is mostly instinct. Of course a player is trying to win. The question is if and how a player knows which actions make her win. Ladder points are not the reinforcement for human learning. The human experience is how they learn. This is why you can and do learn by false reinforcement. For example, beginning players turtling up: they think they are playing better because the game lasts longer.

You call them 'assumptions', but this is exactly in line with everything I have heard modern experts on human learning say. It also makes sense in my overall scientific world view. Yet your view is that humans will themselves to learn in an objective way because of ladder points and tournament money. Absurd!

Everything I said exactly addresses your absurdly false claims. But this must be your 'tactic'.

You ask me if I play SC2. Of course I don't. I think it is a bad and boring game, which is borne out by AlphaGo. It shows that the perfect way to play, whether as an AI or as a human, is to 'circumvent' interesting gameplay and rely on mechanics and superior micro of one or two units.

But why is it relevant? We aren't talking about SC2. We are talking about how humans learn and how AIs learn. I would like to ask you 'Do you even code?' or 'Do you even have a general understanding of the cognitive sciences?'. But it seems clear to me that you have problems thinking, comprehending the English language, expressing yourself in the English language, or all three.



User was temp banned for this post.

So, no you don't play SC2. I thought so. And from the sounds of it no other game or sport either. It's fairly obvious that you are talking out of your arse.

I think that would be you here.
"Technically the dictionary has zero authority on the meaning or words" - Rodya
Nebuchad
Profile Blog Joined December 2012
Switzerland12391 Posts
February 02 2019 16:54 GMT
#337
On February 03 2019 00:53 Athinira wrote:
On February 02 2019 09:32 Greenei wrote:
This video was quite disappointing. The AI has numerous mechanical advantages over the human players. Under these circumstances we cannot learn anything about strategy or the hidden beauty that is left in SC2.

1. APM was up to 1000 in the blink Stalker battles as far as I have seen.
2. Click precision should be lowered to match humans.
3. Perception should be lowered so it can't detect invisible units immediately.
4. It shouldn't be able to perceive the whole map and act in more than one screen.

Complaining that the AI has superior micro is the equivalent of complaining that chess computers can calculate millions of positions per second


It obviously isn't. There are no mechanics in chess, what we're testing is the decision making. This is also what we should be expecting to be tested in Starcraft. It's not impressive to know that a program functions quicker than a hand, anyone could have told you that.
No will to live, no wish to die
perturbaitor
Profile Joined April 2015
2 Posts
February 02 2019 17:56 GMT
#338
inside this topic
two kinds of blindness exist
state and direction
Greenei
Profile Joined November 2011
Germany1754 Posts
February 02 2019 18:41 GMT
#339
On February 03 2019 00:53 Athinira wrote:
On February 02 2019 09:32 Greenei wrote:
This video was quite disappointing. The AI has numerous mechanical advantages over the human players. Under these circumstances we cannot learn anything about strategy or the hidden beauty that is left in SC2.

1. APM was up to 1000 in the blink Stalker battles as far as I have seen.
2. Click precision should be lowered to match humans.
3. Perception should be lowered so it can't detect invisible units immediately.
4. It shouldn't be able to perceive the whole map and act in more than one screen.

It's an AI research project, not a 100% fairness project. The AI is not intended to compete in professional leagues. It's intended to further the research in the field of AI - and as someone else said, StarCraft is merely the vessel of choice.

Complaining that the AI has superior micro is the equivalent of complaining that chess computers can calculate millions of positions per second (Deep Blue reached a peak of around 120 million positions per second when it beat Kasparov). Turning that argument around, you could also complain that the human can more easily outsmart the AI, and that this isn't fair to the AI.

Both are stupid arguments. Humans and computers aren't the same, and they're not intended to compete - which is why they generally don't. In chess, humans compete in human-only tournaments, and AIs compete in AI-only tournaments. And when humans and AIs do compete against each other from time to time, it's either (1) for fun, (2) for show or (3) for learning. It's NOT for competition.


Why do you think they limited the APM of the engine at all, then? In the interview they said that they are happy if new strategies emerge from the AI and humans can learn something from them for their own game. That goal is inconsistent with the AI having such a large APM advantage. My criticism is that they did not go far enough to ensure fair play.

Furthermore, I said that I was disappointed because we can't learn anything about strategy. I didn't say anything about fair competition. Microbots just aren't that fun to look at.
IMBA IMBA IMBA IMBA IMBA IMBA
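The APM limit discussed above is, mechanically, just a rate cap on actions. One common way to implement such a cap (this is an illustrative sketch, not DeepMind's actual mechanism, and the numbers are made up: 3 actions per 1-second window here, whereas something like 22 actions per 5 seconds would correspond to roughly 264 APM) is a sliding-window limiter:

```python
from collections import deque

class APMLimiter:
    """Hypothetical sliding-window action limiter: allow at most
    `max_actions` within any `window` seconds."""
    def __init__(self, max_actions, window):
        self.max_actions = max_actions
        self.window = window
        self.times = deque()  # timestamps of recently allowed actions

    def try_act(self, now):
        # Drop timestamps that have slid outside the window.
        while self.times and now - self.times[0] >= self.window:
            self.times.popleft()
        if len(self.times) < self.max_actions:
            self.times.append(now)
            return True
        return False

limiter = APMLimiter(max_actions=3, window=1.0)
print([limiter.try_act(t) for t in [0.0, 0.1, 0.2, 0.3, 1.05]])
# -> [True, True, True, False, True]: the fourth action is blocked,
#    the fifth succeeds once the window has slid past t=0.0
```

Note that a window cap like this still permits short bursts up to the limit, which is why a blanket "average APM" figure can understate how fast an agent acts during a key fight.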
Fecalfeast
Profile Joined January 2010
Canada11355 Posts
February 02 2019 18:43 GMT
#340
On February 03 2019 02:56 perturbaitor wrote:
inside this topic
two kinds of blindness exist
state and direction

cool haiku but what does it mean?

One post, and it's to be snarky in this thread? Are you polypoetes?
Moderator | INFLATE YOUR POST COUNT; PLAY TL MAFIA
Original banner artwork: Jim Warren
The contents of this webpage are copyright © 2026 TLnet. All Rights Reserved.