|
On July 28 2019 22:28 Goolpsy wrote: The purpose of AI research is not to "beat humans" or "accomplish task X". We've done that.
It's essentially to make a self-learning AI that can "solve problems". Why self-learning? Because there are problems we humans don't even understand (or have experience with yet), and we'd hope the AI would be able to solve them. (I am not talking about SKYNET here).
The problem is always that you need something to measure your product against. How good is it? When should we stop? How much power is required to train it? Chess and Go were good challenges because the games themselves are simple, with perfect information, AND humans are amazing at them. At the same time, the games have enough possible variations that they are not solvable by a brute-force approach.
Imagine using AIs for self-driving cars. Driving is easy. But what if a moron drives too close to you? Or the car in front drives in the middle of the road, or on your side of the road? What if a deer runs in front of your car? What if you get hit by a bird? Imagine you get hit by a bird and the AI goes: "uh-oh, weird sensor reading, unexpected error, abort abort..." and drives off the road. Or you program it to ignore such readings, and it hits a person: "weird sensor reading.. oh well, never mind".
Back to StarCraft AI. Humans are really good at StarCraft. Obviously macro and micro help a lot, but our main strength is being able to solve problems (or attempt to solve them). StarCraft is a complex game with imperfect information and many, many problems to continually solve.
This is why it is interesting to test AIs against humans in this area. We are sufficiently good at the game to be worth competing against (for problem-solving skills). Here we have a measure of "how good did we actually become". Winning with 1500 eAPM stalker micro is not solving problems. Figuring out what to do against an opponent who stalker rushes you and then goes "mass void rays" IS.
I think much of the "disappointment" many are feeling is not that the AI is unbeatable, but that it is so EASILY "abusable". It doesn't understand what AIR is, or where it is (bile drops). It doesn't understand that turrets can have upgraded range. It doesn't know what a widow mine is, even when it's visible.
As for worker scouting: it is not necessarily important (if you are doing aggressive strategies, you are getting information all the time and you can infer A LOT from it). But not scouting at all, doing a blind build, and not adapting to what it eventually sees is not problem solving :/
It is "funny" however, because we humans think scouting is the EASIEST way to solve the problem of "what to build" and "when to build", so it's amazing that the AI is still so 'dumb' and 'unrefined' and still doesn't use this "easy" way of overcoming that obstacle. it hasn't "beat humans"
|
On July 29 2019 04:12 Cyro wrote: DeepMind is very careful about what the community of the game they are trying to beat thinks of them. They are focused on public opinion. But once they think they have learned from SC2 what they needed, they will move on and that will be it. I heard that parts of the chess community were pretty upset about them trashing chess engines and then leaving again without exploring what it could mean for the game.
That's what I am referring to. Not sure what happened with Go, because I am not that tuned in with the community there. But in chess, if DeepMind started working with a select few chess players, those players would gain a huge advantage. Engine analysis is crucial to your play. So chess is actually being influenced (damaged) by chess engines/AI. The same issue will never happen in RTS, because RTS isn't a game where an engine/AI will come up with novel creative ideas or different ways of looking at things considered inferior/refuted.
So I would prepare for this in the RTS community.
|
On July 29 2019 05:20 Muliphein wrote: [quoting Cyro's 04:12 post above] The same issue will never happen in RTS because RTS isn't a game where an engine/AI will come up with novel creative ideas or different ways of looking at things considered inferior/refuted.
What? How do you come up with this claim?
|
RTS are games of execution. In chess, there are positions that are objectively winning but where the win is really hard, if not impossible, to find (whether by human or engine/AI). In SC2, this doesn't happen. It is straightforward to count economic input and to count army strength (assuming both sides perform optimally in a battle).
There are situations that are bifurcations/double-edged, like a base-trade scenario, where it can remain unclear what the right call is for a long time, until it has completely unfolded. But in general, in SC2, things are one-dimensional. In SC:BW, things are a bit different and more complicated because play is more positional. People have understood this for a long time, which is why we had the debate about automation when SC2 was announced (and we all know which side was vindicated). SC2 is a game with less strategy and fewer demands on execution, and this was by design.
And the second reason is the very strong AI we have right now. It beats top players, and it does so in a boring, straightforward manner. How decisively it beats them, and how well humans can exploit general AI tendencies (rather than finding a blind spot in a specific agent and exploiting that), are open questions.
So this AI research seems to support these views we in the community already had about the nature of RTS games and the nature of SC2 itself.
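The "just count economic input and army strength" claim above can be made concrete with a toy sketch. Unit costs below are real SC2 values, but the gas weighting and the idea of valuing an army purely by invested resources are simplifying assumptions for illustration, not anything AlphaStar actually computes:

```python
# Toy "count army strength" evaluation: value an army by summing the
# resource cost of its units. Gas is weighted higher than minerals on the
# (assumed) grounds that it is the scarcer resource.
UNIT_COST = {            # (minerals, gas)
    "roach":   (75, 25),
    "stalker": (125, 50),
    "marine":  (50, 0),
}

def army_value(army, gas_weight=1.5):
    """Resource-based army value for a {unit_name: count} dict."""
    total = 0.0
    for unit, count in army.items():
        minerals, gas = UNIT_COST[unit]
        total += count * (minerals + gas_weight * gas)
    return total

# Compare invested value of two armies.
print(army_value({"roach": 20}), army_value({"stalker": 12}))  # 2250.0 2400.0
```

This is exactly the kind of one-dimensional accounting the post describes: no positional factors, just totals.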
|
On July 29 2019 04:12 Cyro wrote: [quoted above] Not just that. They withheld games, originally shared only a few wins, and they put Stockfish on one move a minute, which severely handicaps its calculation depth. And they played the equivalent of a supercomputer versus a good desktop. The games were beautiful. The setup... completely unscientific.
They later corrected this with a 1,000-game match on comparable hardware. Stockfish did better (5 or 9 wins) but still got stomped.
Just remembered: they also used this ladder approach on the Chinese Go server before throat-punching the South Korean world champion.
Unlike chess, Go did not have a computer overlord until AlphaGo.
Maybe rustling so many feathers in other communities has caused them to listen more. Who knows.
For anyone from the project reading: anything I say that seems critical is because I am a big fan of the project and enthusiastic about the work you do.
Exciting times.
|
On July 29 2019 03:19 Muliphein wrote: The AI has exactly the same units as the player has. So saying the AI is playing rugby with tanks against human players is a false analogy. The analogy works (any analogy works up to a point), but it shows exactly why what AlphaGo is doing is fair, not why it is unfair.
AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors.
![[image loading]](https://i.imgur.com/1a5i8UQ.jpg)
The limitations are nice: locking the actions to a camera, lowering the APM. But AlphaStar can still do things at 120 APM that would take a human 600 APM.
The AI might as well be playing with three keyboards and mice. Not to mention clicking a single pixel in the top right corner to select a building or a unit it needs.
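The burst-versus-average APM point being argued here can be illustrated with a sliding-window rate limiter. The window size and action cap below are made-up numbers for the sketch, not DeepMind's published limits:

```python
from collections import deque

# Sketch of a sliding-window APM cap of the kind described for AlphaStar.
# The point of the thread: capping *average* APM still permits short
# superhuman bursts of actions.
class ApmLimiter:
    def __init__(self, max_actions, window_seconds):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps = deque()  # times of actions inside the window

    def try_act(self, now):
        """Return True if an action is allowed at time `now` (seconds)."""
        # Drop actions that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True
        return False

# 22 actions per 5-second window is only ~264 APM on average, yet the agent
# may fire all 22 within a tenth of a second during a crucial fight.
limiter = ApmLimiter(max_actions=22, window_seconds=5.0)
burst = sum(limiter.try_act(t * 0.005) for t in range(30))
print(burst)  # 22 actions pass; the remaining attempts are throttled
```

A human's 264 APM is spread out by mouse travel and reaction time; the limiter has no concept of either, which is the asymmetry the post is complaining about.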
|
On July 29 2019 08:22 Inrau wrote: [quoting Muliphein's 03:19 post above] AlphaStar does not have to box-select units to move... The AI might as well be playing with three keyboards and mice.
Yes, the AI is playing the same game, but without inherently human limitations. I have no idea what point you are trying to argue. You think it is unfair for an AI to solve an AI problem if human limitations aren't hard-coded in? Do you also think a chess AI needs to be forced to take a piss break because humans will inevitably have to do so under standard time control? Where do you draw the line? Why don't you support the view that for an AI to solve an AI problem, it needs to do so by modeling a human brain solving the problem?
All this comes from the delusion that SC2 is richer and more intellectually pleasing than it actually is. People cannot accept that the ideal play is to build mass roaches/stalkers combined with perfect macro, micro, and deciding when to engage. So the AI needs to be limited to play more like a human, and then the AI will either lose to humans or finally come up with genius, elegant strategies.
Yet all the facts we have yell the opposite at us. So please stop bringing up 'fairness', because there cannot be such a thing. As long as the AI doesn't get units with more HP, or free resources, or the ability to see through the FoW, it is not playing the wrong game. And when it seems stupid because it doesn't truly understand what is going on in the game, yet it is beating all the best human players (and yes, we are not quite there yet at all), maybe then you guys will accept that 'understanding the game' doesn't really matter for winning. (And let me note that AlphaZero (chess or Go) also doesn't really understand the game; it just happens to take the correct action. It cannot explain to you why it does what it does, and it takes quite a bit of effort from DeepMind engineering to figure that out.)
|
On July 29 2019 08:47 Muliphein wrote: [full post quoted above] ...So please stop bringing up 'fairness' because there cannot be such a thing.
Because having a mouse trail is part of playing StarCraft. What I am saying is that AlphaStar is essentially playing with three keyboards and mice. If we were playing an Xbox RTS where you didn't micro and only moved the camera around with preset controller actions commanding squads, I would buy your argument.
And I think your smartass comment about massing stalkers forgets that over time players adapt and defended 4-gates by squeezing out an immortal, or whatever the meta changes to. If the game were so simple, AlphaStar would have already found the exact build and rolled over everyone. But because the game is so complex and massive, they have to potty-train the AI to act like a human, because without that it can't function.
|
You sound like the type of person who would think it is fair to plug a keyboard and mouse into your Xbox and play an FPS against others using a standard controller. It is not that people can't accept that ideal play is perfect-micro rushes; it's that that type of play really isn't interesting. It's something humans can never emulate, and it doesn't show that the AI is really learning anything strategically. You could program a bot without deep learning to just rush with perfect micro, no fancy models or algorithms required.
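The closing claim here, that "perfect micro" needs no learning at all, can be sketched as a hand-scripted focus-fire rule. This is a toy model with dict-based units, not the actual SC2 API:

```python
# Scripted micro without any machine learning: every unit focus-fires the
# lowest-HP enemy within range. Units are plain dicts for illustration.
def pick_target(unit, enemies):
    """Lowest-HP living enemy within this unit's range, or None."""
    in_range = [e for e in enemies
                if e["hp"] > 0 and abs(e["pos"] - unit["pos"]) <= unit["range"]]
    return min(in_range, key=lambda e: e["hp"], default=None)

def attack_step(army, enemies):
    """One combat tick: each unit in `army` fires at its chosen target."""
    for unit in army:
        target = pick_target(unit, enemies)
        if target is not None:
            target["hp"] -= unit["damage"]

army = [{"pos": 0, "range": 5, "damage": 10} for _ in range(3)]
enemies = [{"pos": 3, "hp": 25}, {"pos": 4, "hp": 40}]
attack_step(army, enemies)
print([e["hp"] for e in enemies])  # [-5, 40]: all three shots pile onto the weaker unit
```

A few dozen lines of if/else like this already produce "inhuman" target selection, which is the poster's point about why flawless micro alone proves nothing about strategic learning.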
On July 29 2019 08:47 Muliphein wrote: [full post quoted above]
|
So AlphaStar is not truly playing SC2 because it isn't using a (virtual?) keyboard? If you want to hold that position, fine. But I think it would be a waste of time to debate against it.
So the AI is doing something that resembles playing SC2 and in doing so it is solving an open AI problem.
You think that AlphaStar is losing games because, while it is fighting out battles perfectly, it is using the wrong unit composition? That's not at all what I see. I see it play strategically straightforward games, and I see that, while it often wins, it is still making mistakes in microing and engaging battles. But most of the time its micro, macro, and decisions about when to fight are superior to its human opponents', so it mostly wins anyway. And in the games it loses, the human player is able to find a weakness or blind spot and exploit it, leaving the AI to repeat the same mistake over and over again.
Yes, SC2 is a game with a huge game state and input space, and that causes problems for machine learning. Which is why it is meaningful that DeepMind is able to find a way to beat strong humans (and why it doesn't matter that the AI looks stupid or exploitable as long as it is winning). But this complexity you speak of (it is not actual 'complexity'; it is complicated in having a huge phase space. Complexity is when a small change can completely upturn an outcome, and that is rarely the case in RTS) and 'outthinking the human player using superior strategy humans were unable to conceive' are completely disconnected.
The actual issue is whether the style of play it has right now can be streamlined to beat the top players, or whether neural networks are fundamentally incapable of outplaying humans because of a technical limitation (for example, treating the game essentially as a Markov chain, ignoring the game history).
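The Markov-chain point can be illustrated with two toy policies: one memoryless, one threading a hidden state through time. Purely illustrative; AlphaStar's real architecture is a deep network with an LSTM core, per DeepMind's write-ups, and far more elaborate than this:

```python
# A memoryless (Markov) policy maps the current observation straight to an
# action; a recurrent policy carries hidden state, so earlier scouting can
# influence later decisions.
def markov_policy(observation):
    # Decision depends only on what is visible right now.
    return "build_anti_air" if observation["air_seen_now"] else "build_ground"

def recurrent_policy(observation, hidden):
    # Hidden state remembers whether air was EVER scouted.
    hidden = hidden or observation["air_seen_now"]
    action = "build_anti_air" if hidden else "build_ground"
    return action, hidden

# Air units are scouted once, then leave vision.
obs_stream = [{"air_seen_now": True}, {"air_seen_now": False}]
hidden = False
for obs in obs_stream:
    markov_action = markov_policy(obs)
    recurrent_action, hidden = recurrent_policy(obs, hidden)

# The Markov policy forgets the air threat the moment it leaves vision;
# the recurrent one keeps preparing for it.
print(markov_action, recurrent_action)  # build_ground build_anti_air
```

Whether such a memory limitation is what actually caused the "doesn't know what AIR is" failures discussed earlier in the thread is, of course, speculation.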
|
On July 29 2019 09:21 cha0 wrote: You sound like the type of person who would think it is fair to plug a keyboard and mouse into your Xbox and play an FPS against others using a standard controller. It is not that people can't accept that ideal play is perfect-micro rushes; it's that that type of play really isn't interesting. It's something humans can never emulate, and it doesn't show that the AI is really learning anything strategically. You could program a bot without deep learning to just rush with perfect micro, no fancy models or algorithms required.
How can you say something like this after I said that there cannot possibly be such a thing as fairness in human vs. AI matches?
But you do admit that the way an AI plays SC2 isn't really interesting to you. Why do people have this strange idea? There is a reason why, in general, people avoid using chess engines while commentating chess games. What the engine sees is usually completely irrelevant to what is going to happen on the board, exactly because the AI plays in a way humans cannot emulate. And the AI's move suggestions also tell you nothing about the strategic themes in the game.
So your argumentum ad absurdum is exactly the state of AI in chess.
Then you end your post with an utterly false statement. Yes, in principle you could. But no one has, because it is extremely difficult. You act as if AlphaStar does something any AI was always capable of, while simultaneously claiming it will teach us new things about the game. Did you even read my posts? This is exactly the misunderstanding I argued against before you replied.
|
On July 29 2019 08:47 Muliphein wrote: [full post quoted above]
If this is DeepMind's goal with StarCraft 2, they are wasting time and money. If they believed, as you seem to, that beating every player with inhuman macro and micro would be the right way of playing SC2, I don't know why they would use a neural network for the task.
In Go or chess, whether or not it understands the game, AlphaZero takes the correct actions that would require a human mind to think and make a decision, and that makes it extremely interesting; an unlimited AlphaStar abusing its infinitely superior mechanics would be pointless, as it would just execute actions impossible for humans to replicate or even analyze.
Forcing AlphaStar to play like a human as much as possible is meant to stress-test its capability of winning games via "decision making" or "strategy" (it doesn't matter whether the AI perceives it as such; we would be able to regard the outcome as if it were), which is indeed the ambitious and interesting part of the project.
After reading your last answer, I gather that what interests you is whether neural networks can, by themselves, reach the point where their mechanics become impossible for humans to withstand? Is that so?
|
I love AlphaDepot micro where it blocks its own units out. I wonder if it thinks that depots are a good way of making a jail for the enemy army or something.
|
On July 29 2019 09:24 Muliphein wrote: You think that AlphaStar is losing games because while it is fighting out battles perfectly, it is using the wrong unit composition? That is correct. It has no idea what to do besides learning builds from other players and microing "perfectly." AlphaStar would get wrecked if players played against it over and over and over, like some sort of INSANE AI challenge. I see nothing special at this point. EDIT: Even with the APM/vision advantage AlphaStar has.
|
But clearly it is making a lot of mistakes in the micro and battle engage department.
And your saying that 'it has no idea' when it is a neural net, and that it 'learns builds from other players' when it is trained by playing against itself, makes any further debate useless.
On July 29 2019 09:41 Xain0n wrote: [quoting Muliphein's 08:47 post above] If this is DeepMind's goal with StarCraft 2, they are wasting time and money. If they believed, as you seem to, that beating every player with inhuman macro and micro would be the right way of playing SC2, I don't know why they would use a neural network for the task.
So because this disappointed your intellectual curiosity, for something that likely isn't even there to begin with, DeepMind is wasting their time and money, when in fact they set up an RTS game, until now played only by a bunch of scripts, as a math problem that gets solved by their neural net architecture and training methods, which generalize very well to similar real-world problems. Yeah, that makes sense. In my field of biophysics, DeepMind has a neural network that does better structure prediction of protein folding than any of the existing algorithms. That competition has been running since 1994. DeepMind entered it last year for the first time and immediately won.
Do you know how much money is invested each year in drug development that involves protein folding or protein-protein interactions? You have absolutely no idea what you are talking about.
In Go or Chess, understanding or not the game, Alpha Zero takes the correct action that would require a human mind to think and make a decision and that makes it extremely interesting; an unlimited Alphastar abusing its infinitely superior mechanics would be pointless as it would just execute actions impossible for humans to replicate and even analyze.
And in SC2, AlphaStar makes micro decisions superior to all humans and beats most humans, even before DeepMind has finalized a version to challenge the top players. And in chess/Go, AlphaZero sees patterns impossible for a human to see.
Forcing Alphastar to play like a human as much as possible is meant to stress out its capability of winning the games via "decision making" or "strategy"(it doesn't matter if it doesn't perceive as such, we would be able to regard the outcome as it were), which is indeed the ambitious and interesting part of the project.
SC2 isn't a game of strategy. It is a game of decision making and execution. DeepMind is only making their AI 'play like a human' to avoid offending the SC2 community too much. AlphaFold also doesn't fold proteins 'like a human'; it solves the problem. And in SC2, that problem is winning the game, not 'coming up with strategies that please Xain0n'. And this is achieved through superior micro, superior macro, superior multitasking, and superior battle-engage decisions. Not through hard-countering the enemy's build or trying to trick your opponent into hard-countering something you aren't actually doing.
After reading your last answer, I get that you are interested in knowing if neural networks can reach by themselves the very point where their mechanics become impossible for humans to hold? Is that so?
No. All I care about is seeing how strong an AI they are able to develop. Not an AI that can pass a Turing test through SC2 play. And in the meantime, I get annoyed by people who, for emotional, selfish reasons, decide to deliberately misunderstand SC2 (I assume you aren't truly ignorant) and are too lazy to learn the basics of ML and deep neural networks, while still believing their misunderstandings about the nature of AlphaStar are worthwhile for others to read.
|
On July 29 2019 11:53 Muliphein wrote:
SC2 isn't a game of strategy. It is a game of decision making and execution.
lol, hey just wanted to chime in here. Muliphein seems to be making the machine-learning case for the power behind DeepMind. While the developments being made in ML are incredible, I think you're missing the counter-point.
The people who want mouse trails, Muliphein, seem to be interested in the fairness of AlphaStar. Why would they be concerned about this you might ask? Well, it's probably because the DeepMind team keeps running marketing material stating how the AI is able to beat human players, how it is running on the ladder vs humans (and they will brag about how well it performed). If you are a player of a game that DeepMind has "beaten" then you probably want it to be on an even playing field. Machine learning aside, it seems disingenuous to advertise DeepMind as this triumphant algorithm when it's essentially brute-forcing wins with inhuman tactics.
|
Well, even with how inhuman it is, it's going to lose a lot more once people realize it's an opponent that doesn't scout.
|
On July 29 2019 15:20 loft wrote:
On July 29 2019 11:53 Muliphein wrote:
SC2 isn't a game of strategy. It is a game of decision making and execution.
lol, hey just wanted to chime in here. Muliphein seems to be making the machine-learning case for the power behind DeepMind. While the developments being made in ML are incredible, I think you're missing the counter-point. The people who want mouse trails, Muliphein, seem to be interested in the fairness of AlphaStar.
There can be no such thing as 'fairness' in a match between a human and a machine. They are different entities. Either you don't have a match because it would be unfair, or you have one and shut up about fairness.
Why would they be concerned about this you might ask? Well, it's probably because the DeepMind team keeps running marketing material stating how the AI is able to beat human players, how it is running on the ladder vs humans (and they will brag about how well it performed). If you are a player of a game that DeepMind has "beaten" then you probably want it to be on an even playing field. Machine learning aside, it seems disingenuous to advertise DeepMind as this triumphant algorithm when it's essentially brute-forcing wins with inhuman tactics.
This doesn't really matter because eventually the AI will be able to beat top players. As of today, Deepmind hasn't yet played the big showmatch to show their AI beats mankind at SC2. Obviously, it is still a work in progress. Can you please just wait for that? If the AI loses that match and Deepmind still claims their AI won, then you can complain. Or if, a year from today, we still haven't heard anything more about Alphastar.
But my suspicion is that even if Deepmind comes out with a stronger version, challenges the top SC2 player, and beats that player convincingly, there will still be people here claiming "Yeah, but if you let a bunch of top players play against Alphastar over and over, eventually they will find a way to wreck it every game." (and they may very well be correct) "... so Alphastar doesn't really understand the game, doesn't come up with strategies and just brute forces the game and isn't really intelligent."
And then Deepmind will move on, and people in SC2 can cling to their delusions and move on as well.
|
On July 29 2019 15:43 Muliphein wrote:
On July 29 2019 15:20 loft wrote:
On July 29 2019 11:53 Muliphein wrote:
SC2 isn't a game of strategy. It is a game of decision making and execution.
lol, hey just wanted to chime in here. Muliphein seems to be making the machine-learning case for the power behind DeepMind. While the developments being made in ML are incredible, I think you're missing the counter-point. The people who want mouse trails, Muliphein, seem to be interested in the fairness of AlphaStar.
There can be no such thing as 'fairness' in a match between a human and a machine. They are different entities. Either you don't have a match because it would be unfair, or you have one and shut up about fairness.
Why would they be concerned about this you might ask? Well, it's probably because the DeepMind team keeps running marketing material stating how the AI is able to beat human players, how it is running on the ladder vs humans (and they will brag about how well it performed). If you are a player of a game that DeepMind has "beaten" then you probably want it to be on an even playing field. Machine learning aside, it seems disingenuous to advertise DeepMind as this triumphant algorithm when it's essentially brute-forcing wins with inhuman tactics.
This doesn't really matter because eventually the AI will be able to beat top players. As of today, Deepmind hasn't yet played the big showmatch to show their AI beats mankind at SC2. Obviously, it is still a work in progress. Can you please just wait for that? If the AI loses that match and Deepmind still claims their AI won, then you can complain. Or if, a year from today, we still haven't heard anything more about Alphastar. But my suspicion is that even if Deepmind comes out with a stronger version, challenges the top SC2 player, and beats that player convincingly, there will still be people here claiming "Yeah, but if you let a bunch of top players play against Alphastar over and over, eventually they will find a way to wreck it every game." (and they may very well be correct) "... so Alphastar doesn't really understand the game, doesn't come up with strategies and just brute forces the game and isn't really intelligent." And then Deepmind will move on, and people in SC2 can cling to their delusions and move on as well.
Wasn't the point of this project to get an AI that can solve problems? Having inhuman micro is not solving problems.
It is like sending you to fight Superman. Superman will learn nothing from beating you 1,000,000 times, while all you might eventually do is somehow find kryptonite and beat him, without it ever being a fair fight.
|
On July 29 2019 11:53 Muliphein wrote:
But clearly it is making a lot of mistakes in the micro and battle engage department. And you saying that 'it has no idea' when it is a neural net, and 'learns builds from other players' when it is trained by playing against itself, makes any further debate useless.
On July 29 2019 09:41 Xain0n wrote:
On July 29 2019 08:47 Muliphein wrote:
On July 29 2019 08:22 Inrau wrote:
On July 29 2019 03:19 Muliphein wrote:
The AI has exactly the same units as the player has. So saying the AI is playing rugby with tanks rather than human players is a false analogy.
The analogy works; any analogy works up to a point, but it shows exactly why what AlphaGo is doing is fair, not why it is unfair.
AlphaStar does not have to box-select units to move. The AI does not have any mouse trail so to speak. All players paint the map with their cursors. ![image](https://i.imgur.com/1a5i8UQ.jpg) The limitations are nice, locking the actions to a camera, lowering the APM. But Alphastar can still do things at 120 APM that would take a human 600 APM. The AI might as well be playing with three keyboards and mice. Not to mention clicking one pixel in the top right corner to select a building or a unit it needs. Yes, the AI is playing the same game, but without inherently human limitations.
I have no idea what the point is that you are trying to argue. You think it is unfair for an AI to solve an AI problem if human limitations aren't hard-coded in? Do you also think a chess AI needs to be forced to take a piss break because humans will inevitably have to do this as well under standard time control? Where do you draw the line? Why don't you support the view that for any AI to beat an AI problem, it needs to solve the problem by modeling a human brain solving the problem? All this comes from the delusion that people believe SC2 is richer and more intellectually pleasing than it actually is. People cannot accept that the ideal play is to build mass roaches/stalkers combined with perfect macro, micro, and decisions about when to engage. So the AI needs to be limited to play more like a human, and then the AI will either lose to humans, or finally come up with genius elegant strategies. Yet, all the facts we have yell at us the opposite. So please stop bringing up 'fairness' because there cannot be such a thing. As long as the AI doesn't get units with more hp or free resources, or the ability to see through the FoW, it isn't playing a different game.
And when it seems stupid because it doesn't truly understand what is going on in the game, but it is beating all the best human players (and yes, we are not quite there yet at all), maybe then you guys will accept that 'understanding the game' doesn't really matter for winning. (And let me note that Alpha Zero (chess or go) also doesn't really understand the game. It just happens to take the correct action. It cannot explain to you why it does what it does, and it requires quite a bit of effort from Deepmind engineering to figure that out.)
If this is Deepmind's goal with Starcraft 2, they are wasting time and money. If they believed, as you seem to, that beating every player with inhuman macro and micro would be the right way of playing SC2, I don't know why they would use a neural network for the task.
So because this disappointed your intellectual curiosity, for something that likely isn't even there to begin with, Deepmind is wasting their time and money, when in fact they set up an RTS game, up to now played by only a bunch of scripts, as a math problem that gets solved by their neural net architecture and training methods, which generalize very well to similar real-world problems. Yeah, that makes sense. In my field of biophysics, Deepmind has a neural network that does better structure prediction of protein folding than any of the existing algorithms. And that specific competition has been running since 1994. Deepmind entered it last year for the first time and immediately won. Do you know how much money is invested each year in drug development that involves protein folding or protein-protein interactions? You have absolutely no idea what you are talking about.
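Since several posts above hinge on how an APM cap behaves, here is a rough sketch of one way such a limiter could work: a sliding one-minute window over the agent's actions. All names here (`APMLimiter`, `try_act`) are hypothetical, invented for the sketch; this is not DeepMind's actual implementation.

```python
import collections

class APMLimiter:
    """Hypothetical sketch: cap an agent's actions per minute using
    a sliding one-minute window of action timestamps."""

    def __init__(self, max_apm=120, window_s=60.0):
        self.max_actions = max_apm      # actions allowed per window
        self.window_s = window_s        # window length in seconds
        self.timestamps = collections.deque()

    def try_act(self, now_s):
        # Evict actions that have fallen out of the sliding window.
        while self.timestamps and now_s - self.timestamps[0] >= self.window_s:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now_s)
            return True   # action permitted
        return False      # budget exhausted: action dropped or delayed

limiter = APMLimiter(max_apm=120)
# 200 actions attempted within the first second: only 120 get through.
allowed = sum(limiter.try_act(t * 0.005) for t in range(200))
print(allowed)  # 120
```

Note that this illustrates exactly the complaint above: a windowed cap of 120 APM still permits the whole 120-action budget to be spent in a single second of a crucial battle, which is the "120 APM that would take a human 600 APM" burst problem; a burst-aware cap would also have to bound actions per second.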
In Go or Chess, whether or not it understands the game, Alpha Zero takes the correct action, an action that would require a human mind to think and make a decision, and that makes it extremely interesting; an unlimited Alphastar abusing its infinitely superior mechanics would be pointless, as it would just execute actions impossible for humans to replicate or even analyze.
And in SC2, Alphastar makes micro decisions superior to all humans and beats most humans, even before they finalized their version to challenge the top player. And in Chess/Go, Alphazero sees patterns impossible for a human to see.
Forcing Alphastar to play like a human as much as possible is meant to stress-test its capability of winning games via "decision making" or "strategy" (it doesn't matter whether the AI perceives it as such; we can still interpret the outcome that way), which is indeed the ambitious and interesting part of the project.
SC2 isn't a game of strategy. It is a game of decision making and execution. Deepmind is only making their AI 'play like a human' to not offend the SC2 community too much. Alphafold also doesn't fold proteins 'like a human'. It solves the problem. And in SC2, that problem is winning the game. Not 'coming up with strategies that please Xain0n'. And this is achieved through superior micro, superior macro, superior multitasking, and superior battle engage decisions. Not through hard countering the enemy's build or trying to trick your opponent into hard countering something you aren't actually doing.
After reading your last answer, I get that you are interested in knowing if neural networks can reach, on their own, the point where their mechanics become impossible for humans to keep up with? Is that so?
No. All I care about is to see how well they are able to develop the strongest playing AI possible. Not an AI that can pass a Turing test through SC2 play. And in the meantime, I get annoyed by people who for emotional, selfish reasons decide to deliberately misunderstand SC2 (I assume you aren't truly ignorant) and are too lazy to learn the basics of ML and deep neural networks, while still believing their misunderstandings about the nature of Alphastar are worthwhile for others to read.
Why are there so many low post count accounts acting superior while spilling semi-bs about how AI works on these DeepMind threads? It was the same on the other thread.
I’m pretty sure (idk if it’s that way for these ladder agents tho) that AlphaStar used imitation learning at the beginning, so it did use human replays, not only self-play. That was also why they guessed it spammed clicks: because humans do so.