|
On July 28 2019 09:30 AttackZerg wrote (quoting Xain0n, July 28 2019 08:59, who quoted AttackZerg, July 28 2019 06:57): I am really enjoying the thoughtful posts here. Thanks for the read.
I will disagree with the poster above me on their last statement.
Since no professional SC2 players are managing the project and the bots are clearly levels below our top human players.... I think calling any of AlphaStar's choices optimal is both premature and wrong.
It is unique but.... it is far from the stage where we can make that claim.
Different topic - It seems to me that the apm limit has had a detrimental effect that is manifest in scouting.
The zerg spreads creep poorly, often injects badly and has no concept of overlord placement in the early midgame; the strategies, imho, are the incestuous offspring of the decision to speed-limit.
I see a 350-550 APM range for most top humans. I think at a certain level, if you aren't 300+.... you are unable to properly manage zerg.
AlphaGo and AlphaZero were not required to lower their skillsets to match the playerbase.
I think they have catered too much to us and have thus moved away from their most historic achievements.
First make a bot that can beat Serral, Maru and Classic. Then consider the ethics and fairness of the approach.
First they smashed Stockfish with very unfair parameters; then months later (maybe a full year) they released a full match with a more level playing field.... even better victories for AlphaZero (almost none for Black) and even some losses.
I am not an expert in AI or anything else; I'm just watching them treat this playerbase very differently, and, not shockingly, this different treatment has yielded a less historic and dominant AI.
No human will ever beat AlphaGo or AlphaZero. There is no reason to limit the game approach by human limitation. Like I said, add that nonsense later.
God mode full speed zerg please. 8)
(I am thankful for this great project and the people behind it. Cheers)
That's because Chess and Go don't have huge mechanical requirements to be played, unlike SC2; we already had a glimpse of a machine capable of beating top players without limitations, AlphaStalker (which indeed DID have limitations). I don't think it would be worth spending time and money on a neural network that would beat humans by macroing with perfect timing while microing with immaculate precision every single unit on the map, all at once.
I think defining and limiting AI based upon human capabilities creates a worse network and is counter to the approach they used in other games. In Chess and Go, the mechanics are internal. They didn't limit the depth of thinking to the 7 to 11 short-term objects a human can hold simultaneously. In those genres, the goal was world-dominating AI. Here they beat non-top-tier players on a specific map. Very impressive, but not near the accomplishment of beating Stockfish, even in that first unfair match. They gave us a glimpse of true AI at the start. In chess, the only game for AlphaZero is against other machines. It is fine if that becomes true of this sport also. I love the project but am not convinced by or in love with their approach to SC2. In Chess and Go, they gave no fucks about the respective communities and cultures; they wanted to rip down and conquer. It seems they are less focused on total domination in this genre. I accept I may be wrong in both understanding and/or communicating. Still just want to see a god zerg AI.... Been waiting since '99. Still waiting.
Because you don't need some super AI to have insane mechanics and beat everyone. If you didn't limit its APM, it could just mass MMM and murder everyone with insane micro and 5000 APM.
What they are trying to do is create an AI which "OUTPLAYS" human opponents with similar mechanics.
|
On July 28 2019 09:30 AttackZerg wrote: [quoted in full above]
I disagree about "internal" mechanics for Go or Chess. That is simply part of the "intelligence" needed for playing those games well.
Moving your hands 5,000 times a minute with unerring precision isn't part of the "intelligence" needed for playing StarCraft; it's a limitation of the human body more than the human mind. Thus limiting the APM makes it more interesting as a challenge in creating "artificial intelligence", rather than just an artificial SC2 champion. In Go and Chess, the benchmark for intelligently playing those games was simply to beat the best human opponents. In SC2, the benchmark for intelligently playing the game is to beat the best human opponents with a similarly restrictive "interface".
You wouldn't say an AI had solved rugby if it was built like a tank and had an internal compartment to hide the ball, so that all it had to do was obtain the ball and then ride, invulnerable, to the back line. It'd be invincible, but not in any interesting way.
|
The ability to play out an endgame whose result even top masters already know is called 'mechanics' in chess. And AIs, whether traditional engines or deep neural networks, are really good at this.
Calculation is also kind of the analogue of (e)APM.
The ability to micro a battle is a difficult AI problem. Just ask the people programming BW AIs using BWAPI. What they actually do is simulate the outcome of a fight and decide whether their side is winning; if it is, they keep attacking. This is not a very good solution, but it is a really hard AI problem to crack.
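As a toy illustration of that simulate-then-decide loop (all unit stats here are invented; real BW bots built on BWAPI use far richer combat models):

```python
# Toy sketch of the "simulate the fight, then decide" heuristic used by
# many BW bots. Each unit is a (hp, dps) pair; numbers are hypothetical.

def simulate_fight(my_units, enemy_units):
    """Trade damage in fixed steps until one side is wiped out.
    Returns True if 'my' side survives."""
    my = [list(u) for u in my_units]
    enemy = [list(u) for u in enemy_units]
    while my and enemy:
        my_dps = sum(u[1] for u in my)
        enemy_dps = sum(u[1] for u in enemy)
        # Both sides focus-fire their total dps onto the front unit.
        enemy[0][0] -= my_dps
        my[0][0] -= enemy_dps
        enemy = [u for u in enemy if u[0] > 0]
        my = [u for u in my if u[0] > 0]
    return bool(my)

def decide(my_units, enemy_units):
    # The bot's whole "micro brain": attack iff the simulation says we win.
    return "attack" if simulate_fight(my_units, enemy_units) else "retreat"
```

For example, three 40 hp / 10 dps units against one should choose to attack, and the reverse should retreat; everything subtler (positioning, focus-fire targets, retreating mid-fight) is outside the model, which is exactly the weakness described above.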
Problem-wise, having an AI build corruptors when the protoss has a big air army is not a hard AI problem.
I understand that the way these AIs play is not optimal in the phase space of all possible plays. What I mean is that it is the optimal solution found when the matrix of weights converges. It is obvious that it is converging to weights that win a lot of games, and it converged to these weights and not to others. So in that sense, this is the optimum machine learning finds.
In the phase space of all playstyles, some playstyles will be islands of good playstyles surrounded by a vast sea of bad playstyles. Machine learning has a huge difficulty finding these islands, because when it is in the sea there is no reason to assume these islands exist, let alone where exactly they are. So it cannot use a gradient descent-type algorithm to converge on them.
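The islands metaphor can be made concrete with a toy hill-climbing sketch (the one-dimensional "strategy space" and its winrates are invented purely for illustration):

```python
import random

# A broad hill of decent strategies around x=0 and a narrow island of
# great ones around x=8, separated by a "sea" where winrate is zero.
def winrate(x):
    broad = 0.6 * max(0.0, 1 - abs(x) / 4)         # wide local optimum
    island = 0.9 * max(0.0, 1 - abs(x - 8) / 0.2)  # narrow global optimum
    return max(broad, island)

def hill_climb(x, steps=1000, step_size=0.05):
    # Greedy local search: accept a small random move only if it doesn't
    # lower the winrate (a crude stand-in for gradient ascent).
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if winrate(candidate) >= winrate(x):
            x = candidate
    return x

random.seed(0)
starts = [random.uniform(-4, 4) for _ in range(20)]
finishes = [round(hill_climb(s)) for s in starts]
# Every run converges onto the broad hill; the strictly better island at
# x=8 is never reached, because crossing the zero-winrate sea is never
# an improving move.
```

The island is objectively better (winrate 0.9 vs 0.6), but no local search started on the hill ever finds it, which is the point about gradient-descent-style learning above.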
An AI is by definition a tank playing rugby, an F1 car running a 400m sprint, a supercomputer playing chess. The challenge is in building the tank, building the F1 car, building the AI. Given enough time, technology, and resources, any machine is invincible by definition. This is already a known fact.
If you don't understand this, it is not a matter of agreeing or disagreeing; listen to or read Kasparov on Deep Blue and computer chess.
|
On July 28 2019 07:07 NinjaNight wrote: ? You mentioned yourself it doesn't reason or deduce because it's a number crunching neural network. So naturally it's not going to be able to take advantage of scouting which requires high level reasoning to be useful. Of course scouting is not going to increase its winrate.
This is not entirely true. They trained the neural net to first copy top human players, most of whom send out a scouting worker. So when an initial randomly generated neural net randomly sends a worker onto the map, it more closely resembles the replays it is trying to copy; this net is selected, and the algorithm adjusts the weights that caused this desired trait even further in the same direction. When the AI sends out two workers, it matches the replays less well than when it sends out one. And when the worker reaches the enemy base, it matches the replays more accurately still.
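A minimal sketch of that imitation step, on a single hypothetical game state with three made-up actions (the action names and learning rate are invented; AlphaStar's supervised stage is of course vastly larger):

```python
import math

# Behavioral cloning in miniature: nudge a softmax policy's logits so the
# probability of the action the human took in this state goes up.

ACTIONS = ["send_scout_worker", "build_drone", "attack"]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def clone_step(logits, human_action, lr=0.5):
    """One cross-entropy gradient step toward the human's choice.
    For softmax + cross-entropy, d(loss)/d(logit_i) = prob_i - target_i."""
    probs = softmax(logits)
    target = [1.0 if a == human_action else 0.0 for a in ACTIONS]
    return [l - lr * (p - t) for l, p, t in zip(logits, probs, target)]

logits = [0.0, 0.0, 0.0]          # untrained: all actions equally likely
for _ in range(50):               # many replays show a scout in this state
    logits = clone_step(logits, "send_scout_worker")
probs = softmax(logits)
# The policy now overwhelmingly "sends a worker" in this state, without
# any notion of *why* scouting is useful.
```

The net ends up scouting because scouting matched the replays, not because it understands information gathering, which is exactly the distinction the post is drawing.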
Now in PvP, when DTs are built you are going to lose if you have no detection. The AI doesn't know that the Dark Shrine means there are going to be DTs. But neural nets that build observers upon seeing the Dark Shrine are selected for. The AI doesn't know why, but it happens anyway, and the AI ends up making observers.
So now we have a scouting AI that builds observers when its probe sees the Dark Shrine. But what the AI does not do is think about whether its goal was achieved. Given a game state, it crunches the numbers and this leads to a certain output. The AI will not keep scouting until it has determined what the opponent is doing, which is what a human would do.
You can create an AI that tries to narrow down exactly the build of the opponent, guesses exactly the amount of units its opponent has, and will guess the future tech tree and unit composition. But that requires you to artificially force the AI to do this.
What we have learned now is that machine learning does not favour scouting. If not scouting really meant you would definitely lose, all AIs would continuously scout, since they learned the behavior by copying replays. They do not. Yes, this says something about the limitations and nature of the machine learning they are using. But it also says something about the game.
It's also still far below pro level and it still has very little intelligence and mostly relies on efficient mechanics. It's not telling us anything yet about how Starcraft should be played.
Yes it is. It is telling us that strategy and mindgames are not important, and that macro, mechanics, micro, and deciding when and where to fight are the qualities that decide whether you win or lose.
|
On July 28 2019 19:42 Muliphein wrote: [quoted in full above]
How are you going to mindgame it when you're playing a Bo1 on the ladder? People don't know they're playing against a monster who can only macro/micro and not think.
...if players know which agent they're playing, they're going to have insane winrates against it, despite its micro/macro advantage, unless an agent can actually learn to play the game rather than just doing a generic strong build: learn when to scout, what to scout, and how to react. Otherwise its only chance is to randomize which agent is playing, and even that is probably not going to be enough to face the likes of Serral/Maru....
|
On July 28 2019 18:01 skdsk wrote: [quoted in full above]
Maybe you are right, and without limiting the approach you get a billion bots that have no sense of logic or gameplay and never progress strategically.
As I said in an earlier post, I am not an expert in this field (or any), and maybe I'm dreaming of seeing something that is not possible or not coming in the near future.
So far micro has been the only impressive thing I have seen... Is our game just that complicated that a billion games and a neural net still can't get past micro and the rock-paper-scissors of build orders? Is there any game yet where AS directly and intentionally hard-counters a build or unit comp?
|
Is there any game yet, where AS directly and intent-fully hard counters a build or unit comp?
This mindset asserts that the "human hard counters" are better than just building a bunch of stalkers and killing the person trying to do X strategy, which is not necessarily true.
|
youtu.be
A game between Reynor (offracing T) and Alphastar Zerg.
Pretty funny game.
|
So it started off by copying top human pros? It didn't start from scratch? It would be stronger if it didn't copy human pros, I think, but I assume the hardware needed for that doesn't exist yet.
|
On July 28 2019 20:38 AttackZerg wrote: [quoted in full above]
I'm pretty sure in a game such as StarCraft 2 the AI is going to use the build that has the highest winrate at the highest frequency, but you can't do anything about the rock-paper-scissors of build orders; there is some variance in this game and you have to embrace it. It makes the game fun, actually.
|
The purpose of the AI research is not to "beat humans" or "accomplish task X". We've done that.
It's essentially to make a self-learning AI that can "solve problems". Why self-learning? Because there are problems we humans don't even understand (or have experience in yet), and we'd hope the AI would be able to solve them. (I am not talking about SKYNET here.)
The problem is always that you need something to measure your product against. How good is it? When should we stop? How much power is required to train it? Chess and Go were good challenges because the games themselves are simple with perfect information, AND humans are amazing at them. At the same time, the games have enough possible variations that they are not solvable by a brute-force approach.
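A quick back-of-the-envelope calculation shows why brute force is hopeless, using the standard textbook branching-factor estimates (roughly 35 legal moves over ~80 plies for chess, ~250 moves over ~150 plies for Go):

```python
import math

# Order-of-magnitude game-tree sizes, computed as log10(b^d) = d * log10(b).
chess = 80 * math.log10(35)    # chess: ~10**123 possible games
go = 150 * math.log10(250)     # Go:    ~10**359 possible games
```

Either number dwarfs anything enumerable, which is why these games made good benchmarks: the machine has to generalize rather than exhaustively search.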
Imagine using AIs for self-driving cars. Driving is easy. But what if a moron drives too close to you? Or the car in front drives in the middle of the road, or on your side of the road? What if a deer runs in front of your car? What if you get hit by a bird? Imagine you get hit by a bird and the AI goes: "uh-oh, weird sensor reading, unexpected error, abort abort..." and drives off the road. Or you program it to ignore such readings, and it hits a person: "weird sensor reading... oh well, never mind."
Back to StarCraft AI: humans are really good at StarCraft. Obviously macro and micro help a lot, but our main strength is being able to solve problems (or attempt to). StarCraft is a complex game with imperfect information and many, many problems to continually solve.
This is why it is interesting to test the AIs against humans in this area. We are sufficiently good at the game to be worth competing against (for problem-solving skills). Here we have a measure of "how good did we actually become". Winning with 1500 eAPM stalker micro is not solving problems. Figuring out what to do against an opponent who stalker-rushes you and then goes mass void rays IS.
I think much of the "disappointment" many are feeling is not that the AI is unbeatable, but that it is so EASILY abusable. It doesn't understand what AIR is, or where it is (bile drops). It doesn't understand that turrets can have upgraded range. It doesn't know what a Widow Mine is, even when it's visible.
As for worker scouting: it is not necessarily important (if you are doing aggressive strategies, you are getting information all the time and you can infer A LOT from it). But not scouting at all, doing a blind build, and not adapting to what it eventually sees is not problem solving :/
It is "funny", however, because we humans think scouting is the EASIEST way to solve the problem of "what to build" and "when to build", so it's amazing that the AI is still so 'dumb' and 'unrefined' and still doesn't use this "easy" way of overcoming that obstacle.
|
On a different note: the AI is playing 9 different matchups (+ Random?).
And they likely have several 'agents' for each matchup (let's imagine they are testing 5 different ones).
That's 45 games just to play one game per agent in each matchup.
They might even cycle agents on the same account. So if you played a PvP against the AI, the next PvP might actually be a different agent/net.
We don't know much of anything :D
|
On July 28 2019 21:46 ROOTFayth wrote: so it started off by copying top human pros? it didn't start from scratch? it would be stronger if it didn't copy human pros I think but I assume the hardware needed for that doesn't exist yet
I think they tried that but it failed miserably. The untrained AI would do stuff like getting lost on the map, never to return to its own base, and sending workers randomly around the map. The "rules" of StarCraft are actually very complicated!
The AI was very disappointing at dealing with counters, like continuing to build tanks against mass carriers, but it did come up with some interesting strategies (banshees instead of medivacs) and some nice harassment.
|
On July 28 2019 19:27 Muliphein wrote: [quoted in full above]
This is post-hoc reasoning, though. Of course an F1 car "sprinting" and an AI playing chess are analogous, but only because we have built an AI that is good at chess. We've known for hundreds, if not thousands, of years that we can build machines that do certain tasks better than humans; e.g. watermills and windmills do a much better job of grinding wheat than humans do. That machine superiority in many domains started, in the 1940s, to encroach on one of the areas where humans were undoubtedly superior: intelligence.
In the 1940s, we built the first machines that could compute faster than humans can (I guess the real credit here might even go to Babbage, but it's not really until the 1940s that these machines indisputably beat humans, and not until the 1960s that they were generally programmable for a large array of computation tasks). Soon after, it was expected that machines would very soon be "more intelligent" than humans. That prediction failed multiple times, as building intelligence was a harder task than we thought. We can build race cars that easily "outsprint" humans, and a tank that plays rugby also seems like a simple engineering task. But until very recently, Go seemed unsolvable, let alone games with uncertainty and incomplete information. Breakthroughs in AI research have put this within reach now, and the interesting part is obviously not in beating a human at doing lots of clicks very fast. The challenge is in dealing at least as well as the human with uncertain and incomplete information, without relying on an ability to click faster and more precisely.
At least, that is the challenge AlphaStar is interested in. No doubt perfect micro is a different challenge with its own interest.
|
This is super interesting. Having seen some replays of how it plays at the higher levels, it seems to have improved a bunch in terms of not being just a micro bot like last time.
|
On July 28 2019 22:37 Slydie wrote:Show nested quote +On July 28 2019 21:46 ROOTFayth wrote: so it started off by copying top human pros? it didn't start from scratch? it would be stronger if it didn't copy human pros I think but I assume the hardware needed for that doesn't exist yet I think they tried that but failed miserably. The untrained AI would do stuff like getting lost on the map never to return to its own base and sending workers randomly around the map. The "rules" of Starcraft are actually very complicated! The AI was very disappointing at dealing with counters, like continuing to build tanks vs mass carriers, but it did come up with some interesting strategies (banshees instead of medivacs) and some nice harassment.
It had to be pre-trained because the action space is far too large for pure reinforcement learning (with our current abilities). It could go a million games without learning a reasonable response to an event because there are too many responses to try.
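As a toy back-of-the-envelope sketch (my own illustration, not AlphaStar's actual numbers): if a useful response requires a specific sequence of actions, random exploration under a sparse reward almost never stumbles on it, and the expected number of episodes blows up exponentially in the sequence length.

```python
# Toy illustration of why pure RL with a sparse win/loss reward struggles
# in a huge action space. If a useful response requires a specific
# sequence of k actions, and the agent picks uniformly among A actions
# each step, the chance of finding it by random exploration is (1/A)**k.

def expected_episodes_to_discover(actions_per_step: int, sequence_length: int) -> float:
    """Expected number of random episodes before the sequence appears once."""
    p = (1.0 / actions_per_step) ** sequence_length
    return 1.0 / p

# Even a tiny action space and a short sequence blow up quickly:
print(expected_episodes_to_discover(10, 6))   # roughly 1e6 episodes
print(expected_episodes_to_discover(100, 6))  # roughly 1e12 episodes
```

StarCraft's real action space is orders of magnitude larger than either example, which is why "a million games without learning a reasonable response" is no exaggeration.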
|
On July 28 2019 18:50 Acrofales wrote: I disagree about "internal" mechanics for Go or Chess. That is simply part of the "intelligence" needed for playing those games well.
Moving your hands 5000 times a minute with unerring precision isn't a part of the "intelligence" needed for playing starcraft, it's a limitation of the human body, more so than the human mind. Thus limiting the apm makes it more interesting as a challenge in creating "artificial intelligence", rather than just an artificial sc2 champion. In Go and Chess, the benchmark for intelligently playing those games was simply to beat the best human opponents. In SC2 the benchmark for intelligently playing the game is to beat the best human opponents with a similarly restrictive "interface".
You wouldn't say an AI had solved rugby if it was built like a tank and had an internal compartment to hide the ball, so all it has to do is obtain the ball and then ride, invulnerable, to the back line. It'd be invincible, but not in any interesting way.
We are now trying to make a machine that is intelligent. In a philosophical sense, that is no different from making a machine that runs fast on wheels or that generates a lot of force. APM isn't limited by the human body. It is limited by the human mind. People cannot think fast enough and cannot think in parallel at all. Research shows that humans basically do not multitask.
Making a machine that is able to come up with 2000 actions a minute IS exactly like building a car with 2000 horsepower. Humans sustain only about 0.1 horsepower, so the machines win there by an even bigger margin. That this is not the type of intelligence where humans traditionally beat out machines is beside the point.
The AI has exactly the same units as the player has, so saying the AI is playing rugby with tanks rather than as a human player is a false analogy. To the extent the analogy works at all (any analogy only works up to a point), it shows exactly why what AlphaStar is doing is fair, not why it is unfair.
Soon after, it was expected that machines would very soon be "more intelligent" than humans. That prediction failed multiple times,
I don't think this is an accurate account of the consensus, if there was any, at that time. Decades ago, it was actually a minority that correctly recognized that the brain is a machine like any other, and that in principle a machine could be built that does the same thing as a brain, only better. Respectable scientists for a long time placed the brain outside of any biological context; general principles of biology were not applied to it. Only with the rise of cognitive science did this change.
But you are right that for the last decades it was just an issue of actually building a machine, because it proved to be quite challenging. Yes, it is true in some sense that just raw calculation wouldn't be enough. But it is very easy to calculate the phase space of Go and then see that raw calculation was never going to solve it. And we have known for a long time that humans use the pattern-recognition properties of neural networks to play these games so well.
In fact, the opposite is true: people thought chess and Go would be 'safe' from computers for a decade or two longer than they actually were.
...as building intelligence was a harder task than we thought. We can build race cars that easily "outsprint" humans, and a tank that plays rugby also seems like a simple engineering task.
This is beside the point, but I beg to differ. Doing complex tasks is quite challenging for robots. It would be extremely challenging to build a robot that a human top rugby player could control using some VR interface (like in Avatar) that would allow for a similar level of play as the actual rugby player playing himself. We are decades off from that. But you were actually trying to make another point. So be careful with your language.
But until very recently, Go seemed unsolvable, let alone games with uncertainty and incomplete information. Breakthroughs in AI research put this into reach now, and the interesting part is obviously not in beating a human at doing lots of clicks very fast. The challenge is in dealing at least as well as the human with uncertain and incomplete information without relying on an ability to click faster and more precisely.
So which one is it? Did we take way longer to solve these games? Or did we do it earlier than expected?
At least, that is the challenge AlphaStar is interested in. No doubt perfect micro is a different challenge with its own interest.
Perfect micro is an AI challenge, not a 'how fast can I issue commands through an embedded systems interface' challenge. That it is not the AI challenge most people are interested in, for the simple reason that it teaches human players nothing new about the game, is beside the point.
It may be the case that in SC2, unlike in chess and go, an AI can play way way above the best humans without doing anything that humans hadn't realized or discovered themselves.
This all comes back to one important point. RTS games are games of execution and small-scale decision making (tactics). They are not games of strategy, and their complexity is quite basic. There aren't layers upon layers that reshape how the game is played as you ascend the skill curve. Yes, the move space is huge and sparse, but in essence it is a straightforward game: build an army stronger than your opponent's, then force a fight and win the game. That's the entire game in a nutshell.
On July 28 2019 22:37 Slydie wrote:Show nested quote +On July 28 2019 21:46 ROOTFayth wrote: so it started off by copying top human pros? it didn't start from scratch? it would be stronger if it didn't copy human pros I think but I assume the hardware needed for that doesn't exist yet I think they tried that but failed miserably. The untrained AI would do stuff like getting lost on the map never to return to its own base and sending workers randomly around the map. The "rules" of Starcraft are actually very complicated! The AI was very disappointing at dealing with counters, like continuing to build tanks vs mass carriers, but it did come up with some interesting strategies (banshees instead of medivacs) and some nice harassment.
This is not quite correct. Yes, they had to train the AI to copy human replays first. But once they trained the AI to win rather than copy, the phase space was still equally large. The issue is not that it is large. It is large, and a large phase space makes training more difficult, but size isn't the main property to be worried about. It is the landscape of the phase space that matters. If you are placed randomly in the phase space and it has a definite curvature around where you are, you at least know in which direction to move. But if the landscape is completely flat and looks identical in all directions, you have no idea where to move and you will just be randomly wandering around.
If you have two random neural nets 'playing' against each other, they will be issuing random commands. The very few that happen to build extra workers or a pylon are closer to proper play, but with a win condition as the objective they score just as well as a network that does absolutely nothing. So the training won't be able to converge with that objective. But if the objective is to copy replays, then there is a much more gentle and gradual progression in how closely an AI matches a replay. So this is a superior scoring function.
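A minimal sketch of that point (hypothetical code, nothing like DeepMind's actual setup): under a win/loss objective, two bad policies are indistinguishable, while an imitation objective rewards every small step toward the demonstration.

```python
import math

def sparse_win_score(policy_actions, winning_sequence):
    """Win/loss objective: 1 only if the whole game was played out correctly."""
    return 1.0 if policy_actions == winning_sequence else 0.0

def imitation_score(policy_probs, demo_actions):
    """Average log-likelihood of the demonstrator's actions: improves
    smoothly as the policy moves toward the demonstration."""
    return sum(math.log(p[a]) for p, a in zip(policy_probs, demo_actions)) / len(demo_actions)

demo = [2, 0, 1]  # the demonstrator's action at each of three steps

# Two untrained policies; 'near' puts a bit more probability on the demo actions.
far  = [{0: 1/3, 1: 1/3, 2: 1/3}] * 3
near = [{0: 0.2, 1: 0.2, 2: 0.6}, {0: 0.6, 1: 0.2, 2: 0.2}, {0: 0.2, 1: 0.6, 2: 0.2}]

# Neither policy ever wins, so the win/loss objective can't tell them apart...
print(sparse_win_score([0, 0, 0], demo), sparse_win_score([2, 0, 0], demo))  # 0.0 0.0
# ...but the imitation objective already prefers 'near' over 'far'.
print(imitation_score(near, demo) > imitation_score(far, demo))  # True
```

The imitation objective gives a gradient everywhere in the phase space; the win condition gives one almost nowhere, which is the "flat landscape" problem described above.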
Personally, I think they should first have trained their neural networks to copy a simple script that just builds workers, supply, and marines/zerglings/zealots and moves them to the enemy's starting location, rather than top-level players. But maybe for that the phase space was too big. It might be that they thought the final result would be the same, but that using top-player replays would just be a faster method.
|
On July 29 2019 03:19 Muliphein wrote:Show nested quote +On July 28 2019 18:50 Acrofales wrote: I disagree about "internal" mechanics for Go or Chess. That is simply part of the "intelligence" needed for playing those games well.
Moving your hands 5000 times a minute with unerring precision isn't a part of the "intelligence" needed for playing starcraft, it's a limitation of the human body, more so than the human mind. Thus limiting the apm makes it more interesting as a challenge in creating "artificial intelligence", rather than just an artificial sc2 champion. In Go and Chess, the benchmark for intelligently playing those games was simply to beat the best human opponents. In SC2 the benchmark for intelligently playing the game is to beat the best human opponents with a similarly restrictive "interface".
You wouldn't say an AI had solved rugby if it was built like a tank and had an internal compartment to hide the ball, so all it has to do is obtain the ball and then ride, invulnerable, to the back line. It'd be invincible, but not in any interesting way.
We are now trying to make a machine that is intelligent. In a philosophical sense, that is no different from making a machine that runs fast on wheels or that generates a lot of force. APM isn't limited by the human body. It is limited by the human mind. People cannot think fast enough and cannot think in parallel at all. Research shows that humans basically do not multitask. Making a machine that is able to come up with 2000 actions a minute IS exactly like building a car with 2000 horsepower. Humans sustain only about 0.1 horsepower, so the machines win there by an even bigger margin. That this is not the type of intelligence where humans traditionally beat out machines is beside the point. The AI has exactly the same units as the player has, so saying the AI is playing rugby with tanks rather than as a human player is a false analogy. To the extent the analogy works at all (any analogy only works up to a point), it shows exactly why what AlphaStar is doing is fair, not why it is unfair.
I get both sides, but in an RTS, I don't think an AI changing views every millisecond and microing with 40k apm would be interesting on the SC2 ladder. Sure, they could do it, but it would test very different skills than the more limited version, which competes with humans on a more reasonable mechanical level and is forced to scout, react and position itself rather than just learning a strong push and microing superhumanly.
If you really want the unlimited bot I am sure you can request it, but I think it is a given that it would beat any human fairly easily.
|
On July 29 2019 03:33 Slydie wrote: I get both sides, but in an RTS, I don't think an AI changing views every millisecond and microing with 40k apm would be interesting on the SC2 ladder.
I get that it wouldn't be intellectually satisfying to many in the player base. But it would solve a currently unsolved AI problem using a generalized method. There are many real-world AI tasks that would benefit from this. You wouldn't limit an AI doing air traffic control.
Sure, they could do it, but it would test very different skills than the more limited version which competes with humans on a more reasonable mechanical level and is forced to scout, react and position itself rather than just learning a strong push and microing superhumanly.
People say that, but these new AIs, though severely limited, still don't play the 'scout and counter' game some people wanted the AI to play.
If you really want the unlimited bot I am sure you can request it, but I think it is a given it will easily beat any human fairly easily.
I can request anything I want. But I think you meant to say that Deepmind will give SC2 players what they want, which is a bit amazing to me. Are they handing out specific Go AIs to fans of Go? Deepmind is very careful about what the community of the game they are trying to beat thinks of them. They are focused on public opinion. But once they think they have learned from SC2 what they needed, they will move on and that will be it.
|
Deepmind is very careful about what the community of the game they are trying to beat thinks of them. They are focused on public opinion. But once they think they have learned from SC2 what they needed, they will move on and that will be it.
I heard that parts of the chess community were pretty upset about them trashing chess engines and then leaving again without exploring what it could mean for the game.
|