AlphaStar AI goes 10-1 against human pros in demonstration…

Forum Index > SC2 General
Polypoetes
Profile Joined January 2019
20 Posts
February 01 2019 17:32 GMT
#321
On February 01 2019 23:46 Dangermousecatdog wrote:
On February 01 2019 01:58 Polypoetes wrote:
On February 01 2019 01:24 Dangermousecatdog wrote:
Polypoetes, you make an awful lot of assumptions that don't quite bear out. Pro players generally care about winning, not about how decisively they win. You get the same ladder points and tournament money no matter how much you think you have won or lost a game by.


But this completely ignores what it means to be human and how humans actually learn. I am making assumptions? Your claim is literally that humans are able to objectively take their human experiences, objectively form their goal, and rewire their brains so it happens more. That is not how humans learn.

Humans learn by reinforcement as well. But what is the reinforcement? You clicking on the screen, trying to kill the enemy army and it either working or failing? Or you looking at the ladder points after the game?
What are you even saying? You quote me but don't actually say anything that engages with what I am saying. Your assumptions are still false assumptions. And then you write some nonsense. Do you even play SC2?


Are you that dense? Humans don't make a conscious effort to learn. Playing RTS is mostly instinct. Of course a player is trying to win. The question is if and how a player knows which actions make her win. Ladder points are not the reinforcement for human learning. The human experience is how they learn. This is why you can and do learn from false reinforcement. For example, beginning players turtling up: they think they are playing better because the game lasts longer.

You call them 'assumptions', but this is exactly in line with everything I have heard modern experts on human learning say. It also fits my scientific world view. Yet your view is that humans will themselves to learn in an objective way because of ladder points and tournament money. Absurd!

Everything I said exactly addresses your absurdly false claims. But this must be your 'tactic'.

You ask me if I play SC2. Of course I don't. I think it is a bad and boring game, which is borne out by AlphaStar: it shows that the perfect way to play, whether as an AI or as a human, is to 'circumvent' interesting gameplay and rely on mechanics and superior micro of one or two units.

But why is it relevant? We aren't talking about SC2. We are talking about how humans learn and how AIs learn. I would like to ask you 'Do you even code?' or 'Do you even have a general understanding of the cognitive sciences?'. But it seems clear to me that you have problems thinking, comprehending the English language, expressing yourself in the English language, or all three.



User was temp banned for this post.
Nebuchad
Profile Blog Joined December 2012
Switzerland, 12153 Posts
February 01 2019 17:38 GMT
#322
On February 02 2019 01:16 Acrofales wrote:
On February 01 2019 19:24 Nebuchad wrote:
I have two main issues with this whole thing.

1) It's pretty clear that control plays a bigger role than decision making in those wins. Continuing to make mass stalkers against 8 immortals is not good decision making. Having your whole army go back to the back of your base 4 times to deal with a warp prism while you could just cross the map and go fucking kill him instead is not good decision making. I think it looks especially bad because it's bad decision making in a way that is somewhat obvious, like, very few humans would make those bad decisions. We are used to "bad decision making" that is much subtler than that.

2) I don't like the PR strategy of DeepMind. It seems like they have to hype the fuck out of the accomplishments that they get, and it makes the whole thing seem really artificial to me. I don't have the exact quotes in mind any more but what they said about this starcraft experience felt overreaching when I read it; what they said about the poker experience was even worse, but the poker experience was somewhat more convincing than the starcraft one (it had issues as well).

edit: my mistake, just realized Libratus wasn't made by the same guys. But the same principle applies to both.

I don't think you can claim that making stalkers is bad decision making at all. On paper, immortals hard counter stalkers. And in human control they do too. But if you have AlphaStar's micro capabilities, then suddenly they don't anymore. I think you're mixing cause and effect a bit here. AlphaStar learned to make stalkers in most situations *because* it also learned to micro them incredibly well. That seems like a legitimate strategy. It's like when MKP showed that if you split your marines they didn't just get blasted into goo by a couple of banelings, and if you did it well, then suddenly banelings no longer countered marines very well at all. Sure, he micro'd marines FAR better than his contemporaries, but was his choice to then just make lots of marines a bad choice? Clearly not.

As for (2). They are a commercial enterprise. Of course they're going to hype their accomplishments. What did you think? That said, if you actually watched the video, the guys there are quite honest about their achievements and their aspirations. I don't think they believe they have "solved SC2". Or poker, for that matter, although I suspect poker is pretty close to being solved in all its various forms, whereas SC2 will take a bit longer. Still, Alphastar is quite a remarkable achievement, even with its flaws, and they are justifiably proud of it.


There comes a point where it wouldn't have worked though. I don't know how many immortals are required, whether it's 10 or 14, but at some point AlphaStar would still have lost. MaNa perceived that the point was 8 because he was used to playing against human stalkers, so he was on the map with 8 immortals thinking he was safe when he wasn't. At some point he would have been safe.

I didn't put 2) in there because I find it particularly surprising, but because it makes me ask myself more questions about the whole enterprise than I would if they made their commentary more fair and analytical.
No will to live, no wish to die
Polypoetes
Profile Joined January 2019
20 Posts
February 01 2019 17:42 GMT
#323
Well, apparently the internal agents favoured stalkers naturally. Maybe the agents that were given an artificial incentive to build immortals were beating those stalker-heavy agents. We don't know for sure. But apparently there is a downside to making a lot of immortals when most other agents are heavy on stalkers. Maybe they would mostly lose against any other agent not making stalkers.

If stalkers counter everything but immortals, and immortals get countered by everything but stalkers, then it probably is still best to make stalkers. If you don't like this, take it up with Blizzard, not with Deepmind.
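A rough sketch of how an "artificial incentive" of this kind can be expressed in a reinforcement-learning setup is shown below. This is a generic illustration under assumed numbers and names (the bonus weight and the shaped_reward helper are invented here), not DeepMind's published AlphaStar training code.

```python
# Illustrative sketch only: generic reward shaping for a league of agents,
# loosely modelled on the "artificial incentive to build immortals" described
# above. The weights and structure are assumptions, not DeepMind's actual code.

def shaped_reward(won: bool, immortals_built: int, bonus_per_immortal: float = 0.05) -> float:
    """Terminal reward for one game: win/loss plus a small shaping bonus."""
    base = 1.0 if won else -1.0                      # the real objective: winning
    shaping = bonus_per_immortal * immortals_built   # auxiliary incentive
    return base + shaping

# An agent trained with bonus_per_immortal > 0 is nudged toward immortal-heavy
# play even at some cost to raw win rate; in a league it acts as an "exploiter"
# that punishes stalker-heavy strategies which neglect immortals.
print(shaped_reward(won=False, immortals_built=8))  # -0.6
print(shaped_reward(won=True, immortals_built=0))   #  1.0
```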
Poopi
Profile Blog Joined November 2010
France, 12789 Posts
February 01 2019 17:45 GMT
#324
So I guess this is Rodya's third account?

I find it annoying that we could not see more games with the camera interface once MaNa finally won. They could have cut the off-race TLO games and played more live matches.
WriterMaru
Haukinger
Profile Joined June 2012
Germany, 131 Posts
February 01 2019 17:45 GMT
#325
On February 02 2019 02:42 Polypoetes wrote:
If stalkers counter everything but immortals, and immortals get countered by everything but stalkers, then it probably is still best to make stalkers. If you don't like this, take it up with Blizzard, not with Deepmind.


This. That's exactly the point. As long as the game allows for insane APM, one cannot blame an AI for using it.
Nebuchad
Profile Blog Joined December 2012
Switzerland, 12153 Posts
Last Edited: 2019-02-01 17:56:08
February 01 2019 17:55 GMT
#326
I find it difficult to make many charitable assumptions on how much the machine calculated when I look at that warp prism defense.
No will to live, no wish to die
Polypoetes
Profile Joined January 2019
20 Posts
Last Edited: 2019-02-01 18:57:11
February 01 2019 18:49 GMT
#327
On February 02 2019 02:55 Nebuchad wrote:
I find it difficult to make many charitable assumptions on how much the machine calculated when I look at that warp prism defense.


A neural network does the same amount of computation with random weights as it does with the weights it found through '200 years of gameplay'. So that is already where you go wrong.

Secondly, the agent in the live game was different from the agents that went 5-0 against MaNa, so don't judge those agents by the one different agent in the last game.

Third, we saw AlphaGo become 'delusional' after it had played very strongly. These kinds of blind spots and failures are natural for neural networks, because no amount of training can ever prepare a NN completely for every possible input. If you want an AI that succeeds 99.99% of the time, then don't use a NN. Yet despite its delusions, AlphaGo was stronger than Lee Sedol. So once the NN goes wrong and is losing, you cannot judge its strength by what it is doing then. Korean commentators were literally laughing AlphaGo off the stage. If you have watched the AlphaGo documentary, which you probably have if you are debating here, you already know what I mean.
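The point about a network doing the same amount of computation with random weights as with trained ones can be made concrete with a tiny forward pass. A minimal numpy sketch; the layer sizes are arbitrary and chosen only for illustration:

```python
# Minimal sketch: the cost of a forward pass depends only on the layer shapes,
# not on whether the weights are random or trained. Layer sizes are arbitrary.
import numpy as np

def forward(x, weights):
    ops = 0
    for W in weights:
        ops += x.size * W.shape[1] * 2      # multiply-adds for x @ W
        x = np.maximum(x @ W, 0.0)          # ReLU layer
    return x, ops

rng = np.random.default_rng(0)
shapes = [(128, 256), (256, 256), (256, 10)]
random_w  = [rng.normal(size=s) for s in shapes]
trained_w = [rng.normal(size=s) * 0.01 for s in shapes]  # stand-in for "200 years of training"

x = rng.normal(size=(1, 128))
_, ops_random  = forward(x, random_w)
_, ops_trained = forward(x, trained_w)
print(ops_random == ops_trained)  # True: same operation count either way
```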
pvsnp
Profile Joined January 2017
7676 Posts
Last Edited: 2019-02-01 20:13:18
February 01 2019 19:47 GMT
#328
On February 01 2019 18:12 maybenexttime wrote:
@pvsnp

Circumvent means to go around.


Really? I had no idea. Thanks for the pretentious dictionary copypaste.


AlphaStar makes this aspect of the game relatively irrelevant. The same way having a team of aimbotters in CS:GO makes tactics irrelevant. The fact that the aimbotters dominate any fight when they choose to engage doesn't make them tactically superior.


And.....? AlphaStar's goal is not to be a tactically superior Starcraft player. Just using the word "superior" betrays your lack of understanding. Superior to a completely arbitrary performance benchmark? Is MaNa the only progamer out there? Change the benchmark and it's inferior, or superior, or whatever. Whether it happens to be superior or not is purely incidental to the real goals of making decisions with incomplete information. Which, as you are either overlooking or ignoring, applies to all aspects of the game, not just micro.


The games MaNa lost were rigged in many ways when it comes to the engagements. First of all, there was a vast gap in terms of mechanics between MaNa and AlphaStar - both in terms of battle awareness (not being limited to one screen in case of the AI) and superhuman APM peaks. Secondly, MaNa's experience worked against him. He admitted that he misjudged many engagements due to not being used to playing opponents with such mechanics. Before each battle MaNa overestimated his chances whereas AlphaStar underestimated its chances.


And more of the inane blathering about the same talking points. Did you even bother to read my post? AlphaStar doesn't really care about rigging. Deepmind doesn't really care. Google doesn't really care.

Because "rigging" implies that there is a "proper" (human) way for the AI to play Starcraft, which there isn't, as far as Deepmind is concerned. Because the true goal of AlphaStar is not to play Starcraft like a human. It's to further understand AI decision-making with incomplete information. And if the optimal decision is to use superhuman micro, then that's a useful conclusion. Does it trivialize the problem if the decision is always to use superhuman micro in battle? Perhaps, to some degree. But to claim that the entire game as an incomplete information environment, from production to scouting to harass and finally to battle, is trivialized merely because superhuman micro is within the action set, is idiocy.

Any attention AlphaStar pays towards Starcraft as a game, the fans, the fairness, the showmatches, is more or less entirely incidental. Or PR driven. This is a technical project with technical goals, and Starcraft is simply the vehicle of choice. If Starcraft had zero progamers and zero support, the only things AlphaStar would lose are a useful (but nonessential) performance benchmark and a PR opportunity.

Bluntly put, everything you're so busy preaching about doesn't matter. Go play Starcraft and leave AI to the professionals.
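For what "making decisions with incomplete information" means in miniature, here is a toy sketch: pick the action with the best expected value under a belief over hidden opponent states. The states, probabilities, and payoff numbers below are invented for illustration and are not taken from AlphaStar.

```python
# Rough sketch of deciding under incomplete information: choose the action with
# the best expected value given a belief over hidden opponent states. All
# numbers here are made up for the example.

belief = {"immortal_heavy": 0.3, "stalker_heavy": 0.5, "phoenix_opener": 0.2}

# payoff[action][opponent_state]: guessed win probabilities
payoff = {
    "mass_stalker":  {"immortal_heavy": 0.35, "stalker_heavy": 0.60, "phoenix_opener": 0.55},
    "add_immortals": {"immortal_heavy": 0.55, "stalker_heavy": 0.45, "phoenix_opener": 0.40},
}

def expected_value(action: str) -> float:
    return sum(p * payoff[action][state] for state, p in belief.items())

best = max(payoff, key=expected_value)
print(best, round(expected_value(best), 3))  # mass_stalker 0.515
```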
Denominator of the Universe
TL+ Member
maybenexttime
Profile Blog Joined November 2006
Poland, 5536 Posts
Last Edited: 2019-02-01 20:38:06
February 01 2019 20:34 GMT
#329
On February 02 2019 04:47 pvsnp wrote:
Really? I had no idea. Thanks for the pretentious dictionary copypaste.


I thought you were implying that I somehow said that superhuman mechanics allow AlphaStar to pierce through the fog of war. I guess it's you making that claim.

And.....? AlphaStar's goal is not to be a tactically superior Starcraft player. Whether it happens to be or not is purely incidental to the real goals of making decisions with incomplete information. Which, as you are either overlooking or ignoring, applies to all aspects of the game, not just micro.

And more of the inane blathering about the same talking points. Did you even bother to read my post? AlphaStar doesn't really care about rigging. Deepmind doesn't really care. Google doesn't really care.

Because "rigging" implies that there is a "proper" (human) way for the AI to play Starcraft, which there isn't, as far as Deepmind is concerned. Because the true goal of AlphaStar is not to play Starcraft like a human. It's to further understand AI decision-making with incomplete information. And if the optimal decision is to use superhuman micro, then that's a useful conclusion.

Any attention they pay towards Starcraft as a game, the fans, the fairness, the showmatches, is more or less entirely PR. This is a technical project with technical goals and Starcraft is simply the vehicle of choice.


That was an analogy meant to show how an AI can completely ignore the decision-making aspect of a game that normally has one. Superhuman Stalker micro does not put the AI's decision making to the test, let alone decision making in an incomplete-information environment (I'm pretty sure that AlphaStar's Stalker micro would look the same if they were to make SC2 a game of complete information). While there is decision making involved in Blink Stalker micro, it's the inhuman speed and precision of the execution that makes all the difference, not the decisions made. It doesn't matter which Stalker you choose to blink and where if you can blink five of them faster than a human would blink just one...

Anyway, not wasting my time any further with you.
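To illustrate the execution-versus-decision distinction above: a scripted heuristic like the one below captures the whole "decision" of Blink micro in a few lines; what an uncapped agent adds is the ability to issue such orders for every endangered unit on every game step. This is an invented example (the unit fields, thresholds, and the blink_orders helper are assumptions), not AlphaStar's learned policy.

```python
# Invented example, not AlphaStar's learned policy: the *decision* of which
# Stalker to blink back is trivial (lowest shields among those under fire);
# the difference between human and machine is how many such orders can be
# issued per second. Fields and thresholds are made up for illustration.

from dataclasses import dataclass

@dataclass
class Stalker:
    tag: int
    shields: float       # current shields, 0.0 .. 80.0
    under_fire: bool

def blink_orders(stalkers, shield_threshold=15.0, orders_per_step=2):
    """Return up to `orders_per_step` blink-back orders for one game step.
    A human might manage a couple of these per second; an uncapped bot can
    cover every endangered Stalker every step."""
    endangered = [s for s in stalkers if s.under_fire and s.shields < shield_threshold]
    endangered.sort(key=lambda s: s.shields)              # most urgent first
    return [("blink_back", s.tag) for s in endangered[:orders_per_step]]

army = [Stalker(1, 5.0, True), Stalker(2, 40.0, True), Stalker(3, 9.0, True)]
print(blink_orders(army))  # [('blink_back', 1), ('blink_back', 3)]
```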
Grumbels
Profile Blog Joined May 2009
Netherlands, 7031 Posts
Last Edited: 2019-02-01 22:34:27
February 01 2019 21:21 GMT
#330
@pvsnp,
That’s really silly. AlphaStar is to a large extent a collaboration between Blizzard and Deepmind. Do you think Blizzard has no stake in SC2? Deepmind can’t just use the custom-made SC2 interface, the set of replays Blizzard took from the ladder, etc., and do a “hit-and-run” without antagonizing Blizzard. Furthermore, the video game industry makes 150 billion dollars a year in revenue, so creating agents that can play video games has potential utility and therefore economic value. As an example, without APM limits and human-esque play you can’t use these agents to replace the in-game AI.

And why do you think that Deepmind targeted Go and chess? Or Atari games? Or SC2? Maybe because it was founded by a former chess prodigy who is obsessed with board games, and maybe because it largely attracts researchers who are gaming enthusiasts. Deepmind is not just some nebulous google research center plotting world domination, they are also a prestige project and they have some degree of autonomy. You can’t be completely cynical about them.

Fact of the matter is, everyone in this thread who was taking this tone of “suck it up, Deepmind considers SC2 beneath itself, it will vulture-like scavenge what it can from it and then move on, their true goal is skynet/world domination”... they are likely to be proven wrong as the co-leads of AlphaStar already conceded they would look at the APM limits and probably adjust them.

Of course you shouldn’t trust them, but they just aren’t beyond sentimental and moral considerations.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
pvsnp
Profile Joined January 2017
7676 Posts
Last Edited: 2019-02-01 23:00:00
February 01 2019 22:29 GMT
#331
On February 02 2019 05:34 maybenexttime wrote:
On February 02 2019 04:47 pvsnp wrote:
Really? I had no idea. Thanks for the pretentious dictionary copypaste.


I thought you were implying that I somehow said that superhuman mechanics allow AlphaStar to pierce through the fog of war. I guess it's you making that claim.


I was making the claim that you're missing the point, and everything I've heard since only reinforces that.


And.....? AlphaStar's goal is not to be a tactically superior Starcraft player. Whether it happens to be or not is purely incidental to the real goals of making decisions with incomplete information. Which, as you are either overlooking or ignoring, applies to all aspects of the game, not just micro.

And more of the inane blathering about the same talking points. Did you even bother to read my post? AlphaStar doesn't really care about rigging. Deepmind doesn't really care. Google doesn't really care.

Because "rigging" implies that there is a "proper" (human) way for the AI to play Starcraft, which there isn't, as far as Deepmind is concerned. Because the true goal of AlphaStar is not to play Starcraft like a human. It's to further understand AI decision-making with incomplete information. And if the optimal decision is to use superhuman micro, then that's a useful conclusion.

Any attention they pay towards Starcraft as a game, the fans, the fairness, the showmatches, is more or less entirely PR. This is a technical project with technical goals and Starcraft is simply the vehicle of choice.


That was an analogy meant to show how an AI can completely ignore the decision-making aspect of a game that normally has one. Superhuman Stalker micro does not put the AI's decision making to the test, let alone decision making in an incomplete-information environment (I'm pretty sure that AlphaStar's Stalker micro would look the same if they were to make SC2 a game of complete information). While there is decision making involved in Blink Stalker micro, it's the inhuman speed and precision of the execution that makes all the difference, not the decisions made. It doesn't matter which Stalker you choose to blink and where if you can blink five of them faster than a human would blink just one...


A terrible analogy that is either ignorant or disingenuous, given what you said earlier about circumventing the point. If the point is decision making, then deciding to use superhuman micro works just fine. Why would anything about human blink skill factor in? What bearing does that have on optimal decision making from limited information?

Decisions on where and how to blink with superhuman micro don't matter in the context of winning the game. They matter in the context of choosing optimally given limited knowledge about current game state. Guess which one Deepmind cares about?


Anyway, not wasting my time any further with you.


Good to hear, correcting you was getting tiresome.

On February 02 2019 06:21 Grumbels wrote:
@pvsnp,
That’s really silly. AlphaStar is to a large extent a collaboration between Blizzard and Deepmind. Do you think Blizzard has no stake in SC2? Deepmind can’t just use the custom-made SC2 interface, the set of replays Blizzard took from the ladder, etc., and do a “hit-and-run” without antagonizing Blizzard. Furthermore, the video game industry makes 150 billion dollars a year in revenue, so creating agents that can play video games has potential utility and therefore economic value. As an example, without APM limits and human-esque play you can’t use these agents to replace the in-game AI.

And why do you think that Deepmind targeted Go and chess? Or Atari games? Or SC2? Maybe because it was founded by a former chess prodigy who is obsessed with board games, and maybe because it largely attracts researchers who are gaming enthusiasts. Deepmind is not just some nebulous google research center plotting world domination, they are also a prestige project and they have some degree of autonomy. You can’t be completely cynical about them.

Fact of the matter is, everyone in this thread who was taking this tone of “suck it up, Deepmind considers SC2 beneath itself, it will vulture-like scavenge what it can from it and then move on, their true goal is skynet/world domination”... they are likely to be proven wrong as the co-leads of AlphaStar already conceded they would look at the APM limits and probably adjust them.

Of course you shouldn’t trust them, but they just aren’t beyond sentimental and moral considerations.


I can see how you got to your conclusion that I have a very low and/or cynical opinion of Deepmind/Google. Especially since that's not exactly an uncommon opinion. But you've got it totally backwards. I love Deepmind and Google. I think AlphaStar is both technically interesting and very entertaining.

What annoys me is seeing all the prejudice surrounding AlphaStar, the misconceptions on how ML works, and the general ignorance about anything technical. Of course there are legitimate criticisms to be levelled at the way Deepmind has approached Starcraft with AlphaStar. But it's annoying, to say the least, when laymen pretend at expertise.

Google and Deepmind are doing Starcraft a favor by bringing so much attention. And yet so many people react by immediately attacking them and their work, in many cases without the slightest understanding of the technical aspects involved.
Denominator of the Universe
TL+ Member
Greenei
Profile Joined November 2011
Germany, 1754 Posts
February 02 2019 00:32 GMT
#332
This video was quite disappointing. The AI has numerous mechanical advantages over the human players. Under these circumstances we cannot learn anything about strategy or the hidden beauty that is left in SC2.

1. APM was up to 1000 in the blink Stalker battles as far as I have seen.
2. Click precision should be lowered to match humans.
3. Perception should be lowered so it can't detect invisible units immediately.
4. It shouldn't be able to perceive the whole map and act in more than one screen.
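For reference, one way a hard APM cap like the one asked for in point 1 could be enforced on an agent's action stream is a sliding-window limiter, sketched below. The cap value and window length are assumptions for illustration, not DeepMind's published limits; note that a per-minute cap like this still allows short bursts far above the average, which is exactly the peak-APM complaint.

```python
# Generic sketch of enforcing an APM cap on an agent's action stream: a
# sliding one-minute window that rejects actions once the cap is hit. The cap
# and window length are assumptions, not DeepMind's published limits.

from collections import deque

class ApmLimiter:
    def __init__(self, max_apm: int = 300, window_s: float = 60.0):
        self.max_apm = max_apm
        self.window_s = window_s
        self.timestamps = deque()

    def allow(self, now_s: float) -> bool:
        # forget actions that fell out of the window
        while self.timestamps and now_s - self.timestamps[0] > self.window_s:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_apm:
            self.timestamps.append(now_s)
            return True
        return False   # action rejected: the agent must wait or re-decide

limiter = ApmLimiter(max_apm=300)
# 1000 attempted actions within one second: only the first 300 get through,
# i.e. a per-minute cap still permits a very high burst in that second.
allowed = sum(limiter.allow(t / 1000.0) for t in range(1000))
print(allowed)  # 300
```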
IMBA IMBA IMBA IMBA IMBA IMBA
Athinira
Profile Joined August 2011
Denmark, 33 Posts
Last Edited: 2019-02-02 15:55:10
February 02 2019 15:53 GMT
#333
On February 02 2019 09:32 Greenei wrote:
This video was quite disappointing. The AI has numerous mechanical advantages over the human players. Under these circumstances we cannot learn anything about strategy or the hidden beauty that is left in SC2.

1. APM was up to 1000 in the blink Stalker battles as far as I have seen.
2. Click precision should be lowered to match humans.
3. Perception should be lowered so it can't detect invisible units immediately.
4. It shouldn't be able to perceive the whole map and act in more than one screen.

It's an AI research project, not a 100% fairness project. The AI is not intended to compete in professional leagues. It's intended to further the research in the field of AI - and as someone else said, StarCraft is merely the vessel of choice.

Complaining that the AI has superior micro is the equivalent of complaining that chess computers can calculate millions of positions per second (Deep Blue reached a peak of around 120 million positions per second when it beat Kasparov). Turning that argument around, you could also complain that the human can more easily outsmart the AI, and that this isn't fair to the AI.

Both are stupid arguments. Humans and computers aren't the same, and they're not intended to compete - which is why they generally don't. In chess, humans compete in human-only tournaments, and AIs compete in AI-only tournaments. And when humans and AIs do compete against each other from time to time, it's either (1) for fun, (2) for show, or (3) for learning. It's NOT for competition.
"Science Vessel much? Yeah, i think so!" - Tasteless, 2008
Poopi
Profile Blog Joined November 2010
France, 12789 Posts
February 02 2019 16:00 GMT
#334
It’s actually a pretty big problem if AIs win due to outright superior mechanics because it makes the game a far easier problem to solve.
If you wanna push the field of reinforcement learning using a strategy game that relies on mechanics as well as strategy, you need a relatively fair fight to do so.
WriterMaru
Dangermousecatdog
Profile Joined December 2010
United Kingdom, 7084 Posts
February 02 2019 16:40 GMT
#335
On February 02 2019 02:32 Polypoetes wrote:
On February 01 2019 23:46 Dangermousecatdog wrote:
On February 01 2019 01:58 Polypoetes wrote:
On February 01 2019 01:24 Dangermousecatdog wrote:
Polypoetes, you make an awful lot of assumptions that don't quite bear out. Pro players generally care about winning, not about how decisively they win. You get the same ladder points and tournament money no matter how much you think you have won or lost a game by.


But this completely ignores what it means to be human and how humans actually learn. I am making assumptions? Your claim is literally that humans are able to objectively take their human experiences, objectively form their goal, and rewire their brains so it happens more. That is not how humans learn.

Humans learn by reinforcement as well. But what is the reinforcement? You clicking on the screen, trying to kill the enemy army and it either working or failing? Or you looking at the ladder points after the game?
What are you even saying? You quote me but don't actually say anything that engages with what I am saying. Your assumptions are still false assumptions. And then you write some nonsense. Do you even play SC2?


Are you that dense? Humans don't make a conscious effort to learn. Playing RTS is mostly instinct. Of course a player is trying to win. The question is if and how a player knows which actions make her win. Ladder points are not the reinforcement for human learning. The human experience is how they learn. This is why you can and do learn from false reinforcement. For example, beginning players turtling up: they think they are playing better because the game lasts longer.

You call them 'assumptions', but this is exactly in line with everything I have heard modern experts on human learning say. It also fits my scientific world view. Yet your view is that humans will themselves to learn in an objective way because of ladder points and tournament money. Absurd!

Everything I said exactly addresses your absurdly false claims. But this must be your 'tactic'.

You ask me if I play SC2. Of course I don't. I think it is a bad and boring game, which is borne out by AlphaStar: it shows that the perfect way to play, whether as an AI or as a human, is to 'circumvent' interesting gameplay and rely on mechanics and superior micro of one or two units.

But why is it relevant? We aren't talking about SC2. We are talking about how humans learn and how AIs learn. I would like to ask you 'Do you even code?' or 'Do you even have a general understanding of the cognitive sciences?'. But it seems clear to me that you have problems thinking, comprehending the English language, expressing yourself in the English language, or all three.



User was temp banned for this post.

So, no you don't play SC2. I thought so. And from the sounds of it no other game or sport either. It's fairly obvious that you are talking out of your arse.
Ej_
Profile Blog Joined January 2013
47656 Posts
February 02 2019 16:52 GMT
#336
On February 03 2019 01:40 Dangermousecatdog wrote:
On February 02 2019 02:32 Polypoetes wrote:
On February 01 2019 23:46 Dangermousecatdog wrote:
On February 01 2019 01:58 Polypoetes wrote:
On February 01 2019 01:24 Dangermousecatdog wrote:
Polypoetes, you make an awful lot of assumptions that don't quite bear out. Pro players generally care about winning, not about how decisively they win. You get the same ladder points and tournament money no matter how much you think you have won or lost a game by.


But this completely ignores what it means to be human and how humans actually learn. I am making assumptions? Your claim is literally that humans are able to objectively take their human experiences, objectively form their goal, and rewire their brains so it happens more. That is not how humans learn.

Humans learn by reinforcement as well. But what is the reinforcement? You clicking on the screen, trying to kill the enemy army and it either working or failing? Or you looking at the ladder points after the game?
What are you even saying? You quote me but don't actually say anything that engages with what I am saying. Your assumptions are still false assumptions. And then you write some nonsense. Do you even play SC2?


Are you that dense? Humans don't make a conscious effort to learn. Playing RTS is mostly instinct. Of course a player is trying to win. The question is if and how a player knows which actions make her win. Ladder points are not the reinforcement for human learning. The human experience is how they learn. This is why you can and do learn from false reinforcement. For example, beginning players turtling up: they think they are playing better because the game lasts longer.

You call them 'assumptions', but this is exactly in line with everything I have heard modern experts on human learning say. It also fits my scientific world view. Yet your view is that humans will themselves to learn in an objective way because of ladder points and tournament money. Absurd!

Everything I said exactly addresses your absurdly false claims. But this must be your 'tactic'.

You ask me if I play SC2. Of course I don't. I think it is a bad and boring game, which is borne out by AlphaStar: it shows that the perfect way to play, whether as an AI or as a human, is to 'circumvent' interesting gameplay and rely on mechanics and superior micro of one or two units.

But why is it relevant? We aren't talking about SC2. We are talking about how humans learn and how AIs learn. I would like to ask you 'Do you even code?' or 'Do you even have a general understanding of the cognitive sciences?'. But it seems clear to me that you have problems thinking, comprehending the English language, expressing yourself in the English language, or all three.



User was temp banned for this post.

So, no you don't play SC2. I thought so. And from the sounds of it no other game or sport either. It's fairly obvious that you are talking out of your arse.

I think that would be you here.
"Technically the dictionary has zero authority on the meaning or words" - Rodya
Nebuchad
Profile Blog Joined December 2012
Switzerland, 12153 Posts
February 02 2019 16:54 GMT
#337
On February 03 2019 00:53 Athinira wrote:
On February 02 2019 09:32 Greenei wrote:
This video was quite disappointing. The AI has numerous mechanical advantages over the human players. Under these circumstances we cannot learn anything about strategy or the hidden beauty that is left in SC2.

1. APM was up to 1000 in the blink Stalker battles as far as I have seen.
2. Click precision should be lowered to match humans.
3. Perception should be lowered so it can't detect invisible units immediately.
4. It shouldn't be able to perceive the whole map and act in more than one screen.

Complaining that the AI has superior micro is the equivalent of complaining that chess computers can calculate millions of positions per second


It obviously isn't. There are no mechanics in chess; what we're testing is the decision making. That is also what we should expect to be tested in Starcraft. It's not impressive to learn that a program is quicker than a hand; anyone could have told you that.
No will to live, no wish to die
perturbaitor
Profile Joined April 2015
2 Posts
February 02 2019 17:56 GMT
#338
inside this topic
two kinds of blindness exist
state and direction
Greenei
Profile Joined November 2011
Germany, 1754 Posts
February 02 2019 18:41 GMT
#339
On February 03 2019 00:53 Athinira wrote:
On February 02 2019 09:32 Greenei wrote:
This video was quite disappointing. The AI has numerous mechanical advantages over the human players. Under these circumstances we cannot learn anything about strategy or the hidden beauty that is left in SC2.

1. APM was up to 1000 in the blink Stalker battles as far as I have seen.
2. Click precision should be lowered to match humans.
3. Perception should be lowered so it can't detect invisible units immediately.
4. It shouldn't be able to perceive the whole map and act in more than one screen.

It's an AI research project, not a 100% fairness project. The AI is not intended to compete in professional leagues. It's intended to further the research in the field of AI - and as someone else said, StarCraft is merely the vessel of choice.

Complaining that the AI has superior micro is the equivalent of complaining that chess computers can calculate millions of positions per second (Deep Blue reached a peak of around 120 million positions per second when it beat Kasparov). Turning that argument around, you could also complain that the human can more easily outsmart the AI, and that this isn't fair to the AI.

Both are stupid arguments. Humans and computers aren't the same, and they're not intended to compete - which is why they generally don't. In chess, humans compete in human-only tournaments, and AIs compete in AI-only tournaments. And when humans and AIs do compete against each other from time to time, it's either (1) for fun, (2) for show, or (3) for learning. It's NOT for competition.


Why do you think they limited the agent's APM at all, then? In the interview they said they would be happy if new strategies emerged from the AI and humans could learn something from them for their own game. That goal is inconsistent with the AI having very high APM. My criticism is that they did not go far enough to ensure fair play.

Furthermore, I said that I was disappointed in it because we can't learn anything about strategy. I didn't say anything about fair competition. Microbots just aren't that fun to look at.
IMBA IMBA IMBA IMBA IMBA IMBA
Fecalfeast
Profile Joined January 2010
Canada, 11355 Posts
February 02 2019 18:43 GMT
#340
On February 03 2019 02:56 perturbaitor wrote:
inside this topic
two kinds of blindness exist
state and direction

cool haiku but what does it mean?

One post just to be snarky in this thread; are you Polypoetes?
Moderator | INFLATE YOUR POST COUNT; PLAY TL MAFIA