|
On February 01 2019 23:46 Dangermousecatdog wrote:On February 01 2019 01:58 Polypoetes wrote:On February 01 2019 01:24 Dangermousecatdog wrote: Polypoetes, you make an awful lot of assumptions that don't quite bear out. Pro players generally care more about winning than about winning decisively. You get the same ladder points and tournament money no matter how much you think you have won or lost a game by.
But this completely ignores what it means to be human and how humans actually learn. I am making assumptions? Your claim is literally that humans are able to objectively take their human experiences, objectively form their goal, and rewire their brains so it happens more. That is not how humans learn. Humans learn by reinforcement learning as well. But what is the reinforcement? You clicking on the screen, trying to kill the enemy army and it either working or failing? Or you looking at the ladder points after the game? What are you even saying? You quote me but don't actually say anything that engages with what I am saying. Your assumptions are still false assumptions. And then you write some nonsense. Do you even play SC2?
Are you that dense? Humans don't make a conscious effort to learn. Playing RTS is mostly instincts. Of course a player is trying to win. The question is whether and how a player knows what actions make her win. Ladder points are not the reinforcement for human learning. The human experience is how they learn. This is why you can and do learn by false reinforcement. For example, beginning players turtling up. They think they are playing better because the duration of the game is longer.
You call them 'assumptions', but this is exactly in line with everything I have heard modern experts on human learning say. It also makes sense in my complete scientific world view. Yet your view is that humans will themselves to learn in an objective way because of ladder points and tournament money. Absurd!
Everything I said exactly addresses your absurdly false claims. But this must be your 'tactic'.
You ask me if I play SC2. Of course I don't. I think it is a bad and boring game, which is borne out by AlphaStar. It shows that the perfect way to play, either as an AI or as a human, is to 'circumvent' interesting game play and rely on mechanics and superior micro of one or two units.
But why is it relevant? We aren't talking about SC2. We are talking about how humans learn and how AIs learn. I would like to state to you 'Do you even code?' or 'Do you even have a general understanding of the cognitive sciences?'. But it seems clear to me that you have problems thinking, comprehending the English language, expressing yourself in the English language, or all three.
User was temp banned for this post.
|
On February 02 2019 01:16 Acrofales wrote:On February 01 2019 19:24 Nebuchad wrote: I have two main issues with this whole thing.
1) It's pretty clear that control plays a bigger role than decision making in those wins. Continuing to make mass stalkers against 8 immortals is not good decision making. Having your whole army go back to the back of your base 4 times to deal with a warp prism while you could just cross the map and go fucking kill him instead is not good decision making. I think it looks especially bad because it's bad decision making in a way that is somewhat obvious, like, very few humans would make those bad decisions. We are used to "bad decision making" that is much subtler than that.
2) I don't like the PR strategy of DeepMind. It seems like they have to hype the fuck out of the accomplishments that they get, and it makes the whole thing seem really artificial to me. I don't have the exact quotes in mind any more but what they said about this starcraft experience felt overreaching when I read it; what they said about the poker experience was even worse, but the poker experience was somewhat more convincing than the starcraft one (it had issues as well).
edit: my mistake, just realized Libratus wasn't made by the same guys. But the same principle applies to both.
I don't think you can claim that making stalkers is bad decision-making at all. On paper, immortals hard counter stalkers. And in human control they do too. But if you have Alphastar micro capabilities, then suddenly they don't anymore. I think you're mixing cause and effect a bit here. Alphastar learned to make stalkers in most situations *because* it also learned to micro them incredibly well. That seems like a legitimate strategy. It's like when MKP showed that if you split your marines they didn't just get blasted into goo by a couple of banelings, and if you did it well, then suddenly banelings no longer countered marines very well at all. Sure, he micro'd marines FAR better than his contemporaries, but was his choice to then just make lots of marines a bad choice? Clearly not. As for (2). They are a commercial enterprise. Of course they're going to hype their accomplishments. What did you think? That said, if you actually watched the video, the guys there are quite honest about their achievements and their aspirations. I don't think they believe they have "solved SC2". Or poker, for that matter, although I suspect poker is pretty close to being solved in all its various forms, whereas SC2 will take a bit longer. Still, Alphastar is quite a remarkable achievement, even with its flaws, and they are justifiably proud of it.
There comes a point where it wouldn't have worked though. I don't know how many immortals are required, if it's 10 or 14, but at some point Alphastar would have still lost. Mana perceived that the point was 8 because he was used to playing human stalkers, and so he was on the map with 8 immortals thinking he was safe when he wasn't. At some point he would have been safe.
I didn't put 2) in there because I find it particularly surprising, but because it makes me ask myself more questions about the whole enterprise than I would if they made their commentary more fair and analytical.
|
Well, apparently the internal agents favoured stalkers naturally. Maybe those that were given the artificial incentive to build immortals were beating those stalker-heavy agents. We don't know for sure. But apparently there is a downside to making a lot of immortals when most other agents are heavy on stalkers. Maybe they would mostly lose against any other agent not making stalkers.
If stalkers counter everything but immortals, and immortals get countered by everything but stalkers, then it probably is still best to make stalkers. If you don't like this, take it up with Blizzard, not with Deepmind.
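If the league meta really looks like that, the stalker choice falls out of simple expected value. A toy sketch of the argument (all win rates and the opponent distribution are invented for illustration; they have nothing to do with actual SC2 balance):

```python
# Toy win-rate matrix: rows = my composition, cols = opponent's.
# All numbers are made up purely to illustrate the argument.
win = {
    ("stalker",  "stalker"):  0.5,
    ("stalker",  "immortal"): 0.2,  # immortals hard-counter stalkers
    ("stalker",  "other"):    0.7,  # stalkers beat most other comps
    ("immortal", "stalker"):  0.8,
    ("immortal", "immortal"): 0.5,
    ("immortal", "other"):    0.2,  # immortals lose to non-stalker comps
    ("other",    "stalker"):  0.3,
    ("other",    "immortal"): 0.7,
    ("other",    "other"):    0.5,
}

def best_vs_field(field):
    """Pick the composition with the highest average win rate
    against an assumed distribution over opponent compositions."""
    comps = ["stalker", "immortal", "other"]
    def avg(me):
        return sum(p * win[(me, opp)] for opp, p in field.items())
    return max(comps, key=avg)

# If most agents in the league rarely build immortals,
# stalkers come out on top despite being hard-countered by them.
field = {"stalker": 0.5, "immortal": 0.1, "other": 0.4}
assert best_vs_field(field) == "stalker"
```

The point of the sketch: "best unit" is only defined relative to the population of opponents, which is exactly how a self-play league evaluates its agents.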
|
So I guess this is Rodya's third account?
I find it annoying that we could not see more games with the camera interface once MaNa finally won. They could have cut the off race TLO games and played more live matches.
|
On February 02 2019 02:42 Polypoetes wrote: If stalkers counter everything but immortals, and immortals get countered by everything but stalkers, then it probably is still best to make stalkers. If you don't like this, take it up with Blizzard, not with Deepmind.
This. That's exactly the point. As long as the game allows for insane apm, one cannot blame an AI for using it.
|
I find it difficult to make many charitable assumptions on how much the machine calculated when I look at that warp prism defense.
|
On February 02 2019 02:55 Nebuchad wrote: I find it difficult to make many charitable assumptions on how much the machine calculated when I look at that warp prism defense.
A neural network does the same amount of calculations with random weights as it does with the weights it found through '200 years of gameplay'. So there you already go wrong.
Secondly, that AI was different from the AI that went 0-5 against Mana. So don't judge those AIs by the different AI in the last game.
Third, we saw AlphaGo become 'delusional' after it played very strongly. These kinds of blind spots and failures are natural for neural networks, because no amount of training can ever prepare a NN completely for any test input. If you want an AI that succeeds 99.99% of the time, then don't use a NN. Yet despite its delusions, AlphaGo was stronger than Lee Sedol. So once the NN goes wrong and is losing, you cannot judge its strengths on what it is doing then. Korean commentators were literally laughing AlphaGo off stage. If you have watched the AlphaGo documentary, which you probably have if you are debating here, you already know what I mean.
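The first point is easy to demonstrate: a network's forward pass does a fixed amount of arithmetic determined by its architecture, not by the values of its weights. A toy sketch (layer sizes are arbitrary and have nothing to do with AlphaStar's actual network):

```python
import random

def matvec(w, x):
    # One matrix-vector multiply: the cost depends only on the sizes
    # of w and x, never on the values stored in them.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def forward(x, w1, w2):
    # Tiny two-layer network with ReLU; identical arithmetic
    # whatever the weights happen to be.
    h = [max(0.0, v) for v in matvec(w1, x)]
    return matvec(w2, h)

random.seed(0)
x = [random.gauss(0, 1) for _ in range(8)]
random_w1 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(16)]
random_w2 = [[random.gauss(0, 1) for _ in range(16)] for _ in range(4)]
# Stand-ins for "trained" weights: same shapes, different values.
trained_w1 = [[w * 0.01 for w in row] for row in random_w1]
trained_w2 = [[w * 0.01 for w in row] for row in random_w2]

# Both forward passes execute exactly the same number of multiply-adds.
y_random = forward(x, random_w1, random_w2)
y_trained = forward(x, trained_w1, trained_w2)
assert len(y_random) == len(y_trained) == 4
```

Training changes what the outputs mean, not how much computation producing them takes.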
|
On February 01 2019 18:12 maybenexttime wrote: @pvsnp
Circumvent means to go around.
Really? I had no idea. Thanks for the pretentious dictionary copypaste.
AlphaStar makes this aspect of the game relatively irrelevant. The same way having a team of aimboters in CS:GO makes tactics irrelevant. The fact that the aimboters dominate any fight when they choose to engage doesn't make them tactically superior.
And.....? AlphaStar's goal is not to be a tactically superior Starcraft player. Just using the word "superior" betrays your lack of understanding. Superior to a completely arbitrary performance benchmark? Is Mana the only progamer out there? Change the benchmark and it's inferior, or superior, or whatever. Whether it happens to be superior or not is purely incidental to the real goals of making decisions with incomplete information. Which, as you are either overlooking or ignoring, applies to all aspects of the game, not just micro.
The games MaNa lost were rigged in many ways when it comes to the engagements. First of all, there was a vast gap in terms of mechanics between MaNa and AlphaStar - both in terms of battle awareness (not being limited to one screen in case of the AI) and superhuman APM peaks. Secondly, MaNa's experience worked against him. He admitted that he misjudged many engagements due to not being used to playing opponents with such mechanics. Before each battle MaNa overestimated his chances whereas AlphaStar underestimated its chances.
And more of the inane blathering about the same talking points. Did you even bother to read my post? AlphaStar doesn't really care about rigging. Deepmind doesn't really care. Google doesn't really care.
Because "rigging" implies that there is a "proper" (human) way for the AI to play Starcraft, which there isn't, as far as Deepmind is concerned. Because the true goal of AlphaStar is not to play Starcraft like a human. It's to further understand AI decision-making with incomplete information. And if the optimal decision is to use superhuman micro, then that's a useful conclusion. Does it trivialize the problem if the decision is always to use superhuman micro in battle? Perhaps, to some degree. But to claim that the entire game as an incomplete information environment, from production to scouting to harass and finally to battle, is trivialized merely because superhuman micro is within the action set, is idiocy.
Any attention AlphaStar pays towards Starcraft as a game, the fans, the fairness, the showmatches, is more or less entirely incidental. Or PR driven. This is a technical project with technical goals, and Starcraft is simply the vehicle of choice. If Starcraft had zero progamers and zero support, the only things AlphaStar would lose are a useful (but nonessential) performance benchmark and a PR opportunity.
Bluntly put, everything you're so busy preaching about doesn't matter. Go play Starcraft and leave AI to the professionals.
|
On February 02 2019 04:47 pvsnp wrote:Really? I had no idea. Thanks for the pretentious dictionary copypaste.
I thought you were implying that I somehow said that superhuman mechanics allow AlphaStar to pierce through the fog of war. I guess it's you making that claim.
And.....? AlphaStar's goal is not to be a tactically superior Starcraft player. Whether it happens to be or not is purely incidental to the real goals of making decisions with incomplete information. Which, as you are either overlooking or ignoring, applies to all aspects of the game, not just micro.
And more of the inane blathering about the same talking points. Did you even bother to read my post? AlphaStar doesn't really care about rigging. Deepmind doesn't really care. Google doesn't really care.
Because "rigging" implies that there is a "proper" (human) way for the AI to play Starcraft, which there isn't, as far as Deepmind is concerned. Because the true goal of AlphaStar is not to play Starcraft like a human. It's to further understand AI decision-making with incomplete information. And if the optimal decision is to use superhuman micro, then that's a useful conclusion.
Any attention they pay towards Starcraft as a game, the fans, the fairness, the showmatches, is more or less entirely PR. This is a technical project with technical goals and Starcraft is simply the vehicle of choice.
That was an analogy meant to show how an AI can completely ignore the decision making aspect of a game that normally has one. Superhuman Stalker micro does not put the AI's decision making to the test, let alone decision making in an incomplete information environment (I'm pretty sure that AlphaStar's Stalker micro would look the same if they were to make SC2 a game of complete information). While there is decision making involved in Blink Stalker micro, it's the inhuman speed and precision of the execution that makes all the difference, not the decisions made. It doesn't matter which Stalker you choose to blink and where if you can blink five of them faster than a human would blink just one...
Anyway, not wasting my time any further with you.
|
@pvsnp, That’s really silly. AlphaStar is to a large extent a collaboration between Blizzard and Deepmind. Do you think Blizzard has no stake in SC2? Deepmind can’t just use the custom-made SC2 interface, the set of replays Blizzard took from the ladder etc. and do a “hit-and-run” without antagonizing Blizzard. Furthermore, the video game industry makes 150 billion dollars a year in revenue, so creating agents that can play video games has potential utility and therefore economic value. As an example, without APM limits and human-esque play you can’t use these agents for replacing the in-game AI.
And why do you think that Deepmind targeted Go and chess? Or Atari games? Or SC2? Maybe because it was founded by a former chess prodigy who is obsessed with board games, and maybe because it largely attracts researchers who are gaming enthusiasts. Deepmind is not just some nebulous google research center plotting world domination, they are also a prestige project and they have some degree of autonomy. You can’t be completely cynical about them.
Fact of the matter is, everyone in this thread who was taking this tone of “suck it up, Deepmind considers SC2 beneath itself, it will vulture-like scavenge what it can from it and then move on, their true goal is skynet/world domination”... they are likely to be proven wrong as the co-leads of AlphaStar already conceded they would look at the APM limits and probably adjust them.
Of course you shouldn’t trust them, but they just aren’t beyond sentimental and moral considerations.
|
On February 02 2019 05:34 maybenexttime wrote:On February 02 2019 04:47 pvsnp wrote:Really? I had no idea. Thanks for the pretentious dictionary copypaste. I thought you were implying that I somehow said that superhuman mechanics allow AlphaStar to pierce through the fog of war. I guess it's you making that claim.
I was making the claim that you're missing the point, and everything I've heard since only reinforces that.
And.....? AlphaStar's goal is not to be a tactically superior Starcraft player. Whether it happens to be or not is purely incidental to the real goals of making decisions with incomplete information. Which, as you are either overlooking or ignoring, applies to all aspects of the game, not just micro.
And more of the inane blathering about the same talking points. Did you even bother to read my post? AlphaStar doesn't really care about rigging. Deepmind doesn't really care. Google doesn't really care.
Because "rigging" implies that there is a "proper" (human) way for the AI to play Starcraft, which there isn't, as far as Deepmind is concerned. Because the true goal of AlphaStar is not to play Starcraft like a human. It's to further understand AI decision-making with incomplete information. And if the optimal decision is to use superhuman micro, then that's a useful conclusion.
Any attention they pay towards Starcraft as a game, the fans, the fairness, the showmatches, is more or less entirely PR. This is a technical project with technical goals and Starcraft is simply the vehicle of choice.
That was an analogy meant to show how an AI can completely ignore the decision making aspect of a game that normally has one. Superhuman Stalker micro does not put the AI's decision making to the test, let alone decision making in an incomplete information environment (I'm pretty sure that AlphaStar's Stalker micro would look the same if they were to make SC2 a game of complete information). While there is decision making involved in Blink Stalker micro, it's the inhuman speed and precision of the execution that makes all the difference, not the decisions made. It doesn't matter which Stalker you choose to blink and where if you can blink five of them faster than a human would blink just one...
A terrible analogy that is either ignorant or disingenuous, given what you said earlier about circumventing the point. If the point is decision-making then deciding to use superhuman micro works just fine. Why would anything about human blink skill factor in? What bearing does that have on optimal decision-making from limited information?
Decisions on where and how to blink with superhuman micro don't matter in the context of winning the game. They matter in the context of choosing optimally given limited knowledge about current game state. Guess which one Deepmind cares about?
Anyway, not wasting my time any further with you.
Good to hear, correcting you was getting tiresome.
On February 02 2019 06:21 Grumbels wrote: @pvsnp, That’s really silly. AlphaStar is to a large extent a collaboration between Blizzard and Deepmind. [...] Of course you shouldn’t trust them, but they just aren’t beyond sentimental and moral considerations.
I can see how you got to your conclusion that I have a very low and/or cynical opinion of Deepmind/Google. Especially since that's not exactly an uncommon opinion. But you've got it totally backwards. I love Deepmind and Google. I think AlphaStar is both technically interesting and very entertaining.
What annoys me is seeing all the prejudice surrounding AlphaStar, the misconceptions on how ML works, and the general ignorance about anything technical. Of course there are legitimate criticisms to be levelled at the way Deepmind has approached Starcraft with AlphaStar. But it's annoying, to say the least, when laymen pretend at expertise.
Google and Deepmind are doing Starcraft a favor by bringing so much attention. And yet so many people react by immediately attacking them and their work, in many cases without the slightest understanding of the technical aspects involved.
|
This video was quite disappointing. The AI has numerous mechanical advantages over the human players. Under these circumstances we cannot learn anything about strategy or the hidden beauty that is left in SC2.
1. APM was up to 1000 in the blink Stalker battles as far as I have seen.
2. Click precision should be lowered to match humans.
3. Perception should be lowered so it can't detect invisible units immediately.
4. It shouldn't be able to perceive the whole map and act in more than one screen.
|
On February 02 2019 09:32 Greenei wrote: This video was quite disappointing. The AI has numerous mechanical advantages over the human players. Under these circumstances we can not learn anything about strategy or the hidden beauty that is left in SC2.
1. APM was up to 1000 in the blink Stalker battles as far as I have seen. 2. Click precision should be lowered to match humans. 3. Perception should be lowered so it can't detect invisible units immediately. 4. It shouldn't be able to perceive the whole map and act in more than one screen.
It's an AI research project, not a 100% fairness project. The AI is not intended to compete in professional leagues. It's intended to further the research in the field of AI - and as someone else said, StarCraft is merely the vessel of choice.
Complaining that the AI has superior micro is the equivalent of complaining that chess computers can calculate millions of positions per second (Deep Blue evaluated around 200 million positions per second when it beat Kasparov). Turning that argument around, you could also complain that the human can more easily outsmart the AI, and it isn't fair for the AI.
Both are stupid arguments. Humans and computers aren't the same, and they're not intended to compete - which is why they generally don't. In chess, humans compete in human-only tournaments, and AIs compete in AI-only tournaments. And when humans and AIs do compete against each other from time to time, it's either (1) for fun, (2) for show or (3) for learning. It's NOT for competition.
|
It’s actually a pretty big problem if AIs win due to outright superior mechanics because it makes the game a far easier problem to solve. If you wanna push the field of reinforcement learning using a strategy game that relies on mechanics as well as strategy, you need a relatively fair fight to do so.
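One concrete way to level the mechanical playing field is to cap the agent's action rate. A minimal sketch of such a throttle (a hypothetical wrapper for illustration; DeepMind's actual APM limiting averaged actions over multiple time windows rather than applying a hard cap like this):

```python
from collections import deque

class ApmLimiter:
    """Drop actions that would exceed a hard APM cap.

    Hypothetical wrapper around an agent's action stream; NOT
    AlphaStar's real mechanism, which used windowed averages.
    """
    def __init__(self, max_apm, window_seconds=60.0):
        self.max_actions = max_apm     # actions allowed per window
        self.window = window_seconds   # sliding-window length in game seconds
        self.times = deque()           # timestamps of recently allowed actions

    def allow(self, t):
        # Forget actions that have fallen out of the sliding window.
        while self.times and t - self.times[0] >= self.window:
            self.times.popleft()
        if len(self.times) < self.max_actions:
            self.times.append(t)
            return True   # action passes through to the game
        return False      # action is dropped

limiter = ApmLimiter(max_apm=300)
# 600 attempted actions in one minute -> only 300 get through.
allowed = sum(limiter.allow(t * 0.1) for t in range(600))
assert allowed == 300
```

A hard cap like this still lets an agent save its budget for superhuman bursts during battles, which is exactly the burst-APM criticism raised in this thread; a fairer limiter would also need to cap short-window peaks.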
|
On February 02 2019 02:32 Polypoetes wrote: [...]
So, no you don't play SC2. I thought so. And from the sounds of it no other game or sport either. It's fairly obvious that you are talking out of your arse.
|
On February 03 2019 01:40 Dangermousecatdog wrote: So, no you don't play SC2. I thought so. And from the sounds of it no other game or sport either. It's fairly obvious that you are talking out of your arse.
I think that would be you here.
|
On February 03 2019 00:53 Athinira wrote: Complaining that the AI has superior micro is the equivalent of complaining that chess computers can calculate millions of positions per second
It obviously isn't. There are no mechanics in chess, what we're testing is the decision making. This is also what we should be expecting to be tested in Starcraft. It's not impressive to know that a program functions quicker than a hand, anyone could have told you that.
|
inside this topic two kinds of blindness exist state and direction
|
On February 03 2019 00:53 Athinira wrote: It's an AI research project, not a 100% fairness project. [...] And when humans and AIs do compete against each other from time to time, it's either (1) for fun, (2) for show or (3) for learning. It's NOT for competition.
Why do you think they limited the agent's APM at all then? In the interview they said that they are happy if new strategies emerge from the AI and humans can learn something from it for their own game. This goal is inconsistent with the AI having large APM. My criticism is that they did not go far enough to ensure fair play.
Furthermore, I said that I was disappointed in it because we can't learn anything about strategy. I didn't say anything about fair competition. Microbots just aren't that fun to look at.
|
On February 03 2019 02:56 perturbaitor wrote: inside this topic two kinds of blindness exist state and direction
Cool haiku, but what does it mean?
One post, and you use it to be snarky in this thread? Are you polypoetes?
|