Blog post: deepmind.com (featuring comments from MaNa, TLO, and Kelazhur)
Also: Oriol, the tech lead of AlphaStar at DeepMind, is coming to give a tech talk at Imperial College in London (open to all, free registration at www.eventbrite.co.uk), where I research reinforcement learning. If you have any questions about the inner workings or the future of AlphaStar, please post them in this thread and we will try to accommodate them during the Q&A session.
This is so exciting to me. I view it as, effectively, the fourth generation of machine learning (obviously, "generations" within machine learning is very subjective). This is the fourth time a world-class AI has done battle with the best in the world at a game: the same concept beat the best humans at chess, then Jeopardy!, then Go, and now it is working its way up to the big leagues against the best in the world at StarCraft II. Obviously it still has some work to do, but reaching Grandmaster as all three races is a significant step.
I can't wait for the moment in the future when it squares off against whoever is the best in the world at that time. My money is on AlphaStar.
On October 31 2019 06:32 Duckman wrote: I'm really interested in what unique strategies AlphaStar employs. Anyone have any info?
From what I saw earlier, not too much, but there were speed Banshees and an insane Roach/Ravager three-base push.
It is not easy to compete with perfect macro plus decent micro and decision-making.
There were some holes, like walling off, countering air armies, and transitioning to the late game, which can be exploited by players who know what they are up against.
So... uh... how much has AlphaStar skewed the balance team's data points? Were any of the recent "Zerg OP" issues partly caused by AlphaStar's Terran and Protoss being way stronger than normal humans at the same level?
Deepmind must be burning through cash on this Starcraft project. Their losses were $570 mil in 2018, and $370 mil in 2017. I feel they could well run out of money very soon, unless they manage to find a commercial use for this type of research.
On October 31 2019 06:06 Jathin wrote: I found the footnotes to figure 2 really funny:
B: MMR rating of AlphaStar Final per race (from top to bottom: Protoss, Terran, Zerg) versus opponents encountered on Battle.net (from left to right: all races combined, Protoss, Terran, Zerg). Note that per-race data is limited; AlphaStar won all Protoss versus Terran games.
Now you're gonna see a post on Blizzard forums "PROOF ONCE AND FOR ALL!!!"
If I'm understanding that correctly, for some reason there are only 4 PvT games of AlphaStar Final vs players on Battle.net in their sample, which is an extremely low number for a quantitative paper, so I may be misunderstanding it.
Edit: Though... it is very interesting that it won 25/30 games vs players as Protoss, and only 18/30 as each of Terran and Zerg.
Edit: Yes, it actually didn't play that many games versus humans, so we REALLY can't read that much into it. It also won four games of TvP (out of eight) and four games of TvZ (out of seven), which just goes to show that we don't know what its PvT numbers would have looked like if there had been more of them. With that said, the 25/30 as Protoss overall definitely says something, I think.
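To put numbers on how little 30 games tells you, here is a rough 95% confidence-interval sketch. This is my own illustration, not from the paper, and the choice of the Wilson score interval is mine:

```python
import math

def wilson_interval(wins: int, games: int, z: float = 1.96):
    """95% Wilson score interval for a win rate estimated from few games."""
    p = wins / games
    denom = 1 + z**2 / games
    centre = (p + z**2 / (2 * games)) / denom
    half = z * math.sqrt(p * (1 - p) / games + z**2 / (4 * games**2)) / denom
    return centre - half, centre + half

# 25/30 as Protoss vs 18/30 as Terran/Zerg: both intervals are wide,
# and they nearly overlap, so the per-race gap is weak evidence.
print(wilson_interval(25, 30))  # roughly (0.66, 0.93)
print(wilson_interval(18, 30))  # roughly (0.42, 0.75)
```

With 30 games the uncertainty on each win rate is on the order of 15 percentage points either way, which is why the per-race comparison is so shaky.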
For the supervised and midpoint evaluation, each agent began with a fresh, unranked account. Their MMR was updated on Battle.net as for humans. The supervised and midpoint evaluation played 30 and 60 games respectively. The midpoint evaluation was halted while still increasing because the anonymity constraint was compromised after 50 games.
Edit: Also confirming the above. The numbers given there (25/30 as Protoss, 18/30 as Terran, 18/30 as Zerg) match the replays they attached. Those are numbers for the final version of their agent.
18/30 as Terran, 18/30 as Zerg? I would hardly call that "mastering" the game. Though given the calibre of players those final matches were against, it's still impressive.
On October 31 2019 07:17 heqat wrote: Do we have any idea of the MMR of AlphaStar? I'm a bit surprised they're saying they mastered SC2 with so little published data.
6000 MMR, but it appears to be from a VERY limited sample size in terms of actual players they faced with the final version (see my above post). And Alphastar's numbers as Terran and Zerg are not really that impressive. (Winning only 18/30 games as each).
1. When selecting multiple units, human players need to drag a box, which limits how they can micro, for instance when splitting Marines against Banelings. To be fair, could you make AlphaStar drag boxes too in the future?
2. When a dropship shows up on the minimap, there's a ~1/4 chance that even the best human player won't notice it. Is this implemented in AlphaStar?
3. When a cloaked unit moves across the screen, there's a ~2/3 chance that even the best human player won't notice it. Is this implemented in AlphaStar?
4. When clicking on something small, there's a ~5% chance that even the best human player will misclick. I guess this is already implemented?
5. Will you emulate a virtual mouse in the future, so that we can watch AlphaStar's first-person view in the ultimate AlphaStar vs Serral match, which is bound to happen sooner or later?
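For what it's worth, a handicap layer like the one questions 2-4 describe could be sketched as a filter between the game and the agent. Everything below is hypothetical: the probabilities are the poster's guesses, not measured values, and the names are mine:

```python
import random

# Hypothetical handicap layer implementing the human-error rates the
# questions above propose (all rates are the poster's rough estimates).
P_MISS_MINIMAP_DROP = 0.25    # chance to overlook a dropship on the minimap
P_MISS_CLOAK_SHIMMER = 2 / 3  # chance to overlook a cloaked unit's shimmer
P_MISCLICK = 0.05             # chance a click on a small target lands off
CLICK_JITTER_PX = 8           # how far a misclick can land, in pixels

def filter_event(event_type: str, rng: random.Random) -> bool:
    """Return True if the simulated human 'notices' the event."""
    miss_prob = {
        "dropship_on_minimap": P_MISS_MINIMAP_DROP,
        "cloaked_unit_on_screen": P_MISS_CLOAK_SHIMMER,
    }.get(event_type, 0.0)
    return rng.random() >= miss_prob

def fuzz_click(x: float, y: float, rng: random.Random):
    """Occasionally displace a click, the way a human misclick would."""
    if rng.random() < P_MISCLICK:
        return (x + rng.uniform(-CLICK_JITTER_PX, CLICK_JITTER_PX),
                y + rng.uniform(-CLICK_JITTER_PX, CLICK_JITTER_PX))
    return (x, y)
```

The point of a layer like this is that the agent downstream needs no changes at all; its observations and actions just pass through a lossy, human-shaped channel.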
Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy.
From what I remember, against MaNa this thing only won because of the primitive micro-bot part of its play. I recall 2-3 screens of Blink Stalkers being controlled at once... I think it'd get wrecked by the Automaton 2000 too, so I don't know what to make of any of it.
It's interesting, but nothing too new. Back in July/August when people spotted AlphaStar on ladder we got a pretty good idea of what its level was at. I wonder if Deepmind is still working on Starcraft given that this result is certainly not as satisfying as what they achieved in Go for instance. I hope the paper helps guide the Starcraft AI community at any rate.
On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy.
Guys, they devoted 150 computers with 28 processing cores EACH to this project. Human brains cost less to operate. We'll be okay.
On October 31 2019 06:59 loppy2345 wrote: Deepmind must be burning through cash on this Starcraft project. Their losses were $570 mil in 2018, and $370 mil in 2017. I feel they could well run out of money very soon, unless they manage to find a commercial use for this type of research.
Deepmind is funded by Google. Or rather, Alphabet, the parent company of Google. A couple hundred million is nothing. Deepmind may have lost $570 million last year, but the company as a whole turned a profit of $30 billion. Money is no object for them.
The actual danger for Deepmind is Google deciding their little Starcraft experiment has taught them all it can about reinforcement learning, and that it's time to move on.
On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy.
Guys, they devoted 150 computers with 28 processing cores EACH to this project. Human brains cost less to operate. We'll be okay.
Once again, not able to see past one's nose. By the time you do see it, it may be too late.
On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy.
Guys, they devoted 150 computers with 28 processing cores EACH to this project. Human brains cost less to operate. We'll be okay.
Once again, not able to see past one's nose. By the time you do see it, it may be too late.
I agree. AI is the single largest threat to humans surviving the next 100 years.
Climate change could wipe us out but improved technology and consumer pressure will probably solve that. Nuclear war is still a threat but is unlikely to happen.
But trying to control something that will be much more intelligent than us, I see no scenario where we will not be wiped out.
just wanna say lmfao at the people who think fucking alphastar is anywhere remotely close to representing a rise of sapient machine intelligence and recommend that they read fewer clickbait headlines and more actual science
The actual danger for Deepmind is Google deciding their little Starcraft experiment has taught them all it can about reinforcement learning, and that it's time to move on.
Which seems to be the case. I have the feeling the result is good enough for their research, and they will move on to another project now.
On October 31 2019 16:11 brickrd wrote: just wanna say lmfao at the people who think fucking alphastar is anywhere remotely close to representing a rise of sapient machine intelligence and recommend that they read fewer clickbait headlines and more actual science
Yes, we're very far from true machine intelligence. But it is still pretty impressive that an AI can play SC2 just by watching pixels move on the screen.
On October 31 2019 16:11 brickrd wrote: just wanna say lmfao at the people who think fucking alphastar is anywhere remotely close to representing a rise of sapient machine intelligence and recommend that they read fewer clickbait headlines and more actual science
Yes, we're very far from true machine intelligence. But it is still pretty impressive that an AI can play SC2 just by watching pixels move on the screen.
It doesn't. It reads the game state.
I don't think it is based on game state. From what I understood, they have a simplified rendering of the view (like one layer for buildings, one for enemy units, etc.), but it is based on pixels.
On October 31 2019 16:11 brickrd wrote: just wanna say lmfao at the people who think fucking alphastar is anywhere remotely close to representing a rise of sapient machine intelligence and recommend that they read fewer clickbait headlines and more actual science
Yes, we're very far from true machine intelligence. But it is still pretty impressive that an AI can play SC2 just by watching pixels move on the screen.
It doesn't. It reads the game state.
Not anymore; this version has a lot of limitations on APM and on screen information.
On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy.
Guys, they devoted 150 computers with 28 processing cores EACH to this project. Human brains cost less to operate. We'll be okay.
Once again, not able to see past one's nose. By the time you do see it, it may be too late.
I agree. AI is the single largest threat to humans surviving the next 100 years.
Climate change could wipe us out but improved technology and consumer pressure will probably solve that. Nuclear war is still a threat but is unlikely to happen.
But trying to control something that will be much more intelligent than us, I see no scenario where we will not be wiped out.
Or maybe we could figure out something better than capitalism so that we get to be happy that improved technology removes some of our need for work instead of being afraid that it threatens our livelihood.
On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy.
Guys, they devoted 150 computers with 28 processing cores EACH to this project. Human brains cost less to operate. We'll be okay.
Once again, not able to see past one's nose. By the time you do see it, it may be too late.
I agree. AI is the single largest threat to humans surviving the next 100 years.
Climate change could wipe us out but improved technology and consumer pressure will probably solve that. Nuclear war is still a threat but is unlikely to happen.
But trying to control something that will be much more intelligent than us, I see no scenario where we will not be wiped out.
Or maybe we could figure out something better than capitalism so that we get to be happy that improved technology removes some of our need for work instead of being afraid that it threatens our livelihood.
I think people are very optimistic when saying that there will be a general intelligence in the next 100 years.
In 1965, Herbert Simon said “Machines will be capable, within 20 years, of doing any work a man can do" and in 1970, Marvin Minsky said: “In from 3 to 8 years, we will have a machine with the general intelligence of an average human being.”
That did not really happen. Skynet stuff is still pretty much science fiction. Still waiting to see autonomous cars in the streets.
On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy.
"I want to keep my 9 to 5 wageslave job instead of enjoying fully automated luxury gay space Communism"- the post. Hurray for our AI overlord
On October 31 2019 17:18 MockHamill wrote: AI is developing extremely fast right now. Basically more has happened in the last 10 years than all the years before that combined.
I would be surprised if AI would not grow beyond human level intelligence within the next 20 to 40 years.
It probably will, but if it turns against us, good luck to it if we decide not to provide it with the electricity it needs. The point where we figure out it wants us out of the equation is the point it gets turned off. There's no way it can "learn to avoid" that if it only has one chance.
is this it? do the folks at deepmind feel like they accomplished their goal? after watching alphago beat lee sedol i kinda hoped they had greater aspirations. 6.2k MMR is maybe pro-level, but it's not the "lee sedol" of starcraft. outside of the exhibition match from almost a year ago, did alphastar ever beat mana again?
It's still not making better decisions than top players. This is good progress. I am interested but not impressed just yet. Hopefully DeepMind can help people realise some of the brilliance of SC2 and machine learning. I enjoy the efficiency of it. Thanks, DeepMind and SC2 players.
On October 31 2019 17:56 gnuoy000 wrote: is this it? do the folks at deepmind feel like they accomplished their goal? after watching alphago beat lee sedol i kinda hoped they had greater aspirations. 6.2k MMR is maybe pro-level, but it's not the "lee sedol" of starcraft. outside of the exhibition match from almost a year ago, did alphastar ever beat mana again?
Quote from the Nature article https://www.nature.com/articles/d41586-019-03298-6: > The AI wasn’t able to beat the best player in the world, as AIs have in chess and Go, but DeepMind considers its benchmark met, and says it has completed the StarCraft II challenge.
I wonder if this is a signal that DeepMind has no plans to continue working with StarCraft II as a test-bed...
On October 31 2019 17:56 gnuoy000 wrote: is this it? do the folks at deepmind feel like they accomplished their goal? after watching alphago beat lee sedol i kinda hoped they had greater aspirations. 6.2k MMR is maybe pro-level, but it's not the "lee sedol" of starcraft. outside of the exhibition match from almost a year ago, did alphastar ever beat mana again?
Quote from the Nature article https://www.nature.com/articles/d41586-019-03298-6: > The AI wasn’t able to beat the best player in the world, as AIs have in chess and Go, but DeepMind considers its benchmark met, and says it has completed the StarCraft II challenge.
I wonder if this is a signal that DeepMind has no plans to continue working with StarCraft II as a test-bed...
It's kinda sad because of the missed publicity, but on the other hand pretty cool: even after a few years, StarCraft II was still too hard for them to beat the best players. This means the challenge still exists for other companies and AIs, so maybe someone else will try in the future.
On October 31 2019 17:56 gnuoy000 wrote: is this it? do the folks at deepmind feel like they accomplished their goal? after watching alphago beat lee sedol i kinda hoped they had greater aspirations. 6.2k MMR is maybe pro-level, but it's not the "lee sedol" of starcraft. outside of the exhibition match from almost a year ago, did alphastar ever beat mana again?
Quote from the Nature article https://www.nature.com/articles/d41586-019-03298-6: > The AI wasn’t able to beat the best player in the world, as AIs have in chess and Go, but DeepMind considers its benchmark met, and says it has completed the StarCraft II challenge.
I wonder if this is a signal that DeepMind has no plans to continue working with StarCraft II as a test-bed...
That would be really sad if that's the case, as AlphaStar still has/had a long way to go. Its Terran was pretty laughable, and I swear its Zerg can only do Ravager all-ins. Obviously, reaching GM is impressive, but it is something your average person can do relatively comfortably, and it is still soooo far from pro level, let alone the top of the pro scene.
On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy.
"I want to keep my 9 to 5 wageslave job instead of enjoying fully automated luxury gay space Communism"- the post. Hurray for our AI overlord
This is the most concise explanation of this issue; I'm gonna use that from now on to end similarly dumb discussions.
I read this interesting comment on Ars Technica; I think it sums up some of the limitations of the experiment pretty well:
"The clicks are still perfectly precise all the time (x,y coordinates). This could be fixed by making the system emulate a mouse and keyboard, with another layer fuzzing the inputs, especially if it increased error as "inputs per second" go up. As a human, if I click 75 times per second, my clicks will not be accurately placed. AlphaStar's still are. No misclicks ever. They apparently did add some input lag on the time domain but it's not clear if this is static, or dynamic in a way that would mimic humans getting worse as we ramp up the intensity.
Everyone parroting that they "are using the same APM as humans!" is just, well, functionally wrong. A human gets APMs in the 500-1000 APM range by spamming nearly-useless clicks all over the place. There's another algorithmically estimated benchmark called "EPM" for "Effective Actions Per Minute", and AlphaStar's EPM is still disgustingly high vs humans.
The take-home from this is that when AlphaStar beats humans, it still does so using techniques that no human will ever be able to execute. It's not finding new cool strategies that we can learn from. It's not teaching humans how to play better StarCraft, like AlphaZero/LeelaChess Zero teach humans how to play chess better.
It's more like a really, really hard-to-detect aimbot that plays "almost" human, so if you're playing it maybe you don't notice; but when you watch the replay to ask "how did he do that?" you go "oh, he was using an aimbot, nothing for me to learn here".
Also, I've downloaded and watched/analyzed a lot of its replay files. AlphaStar still learned SO MUCH of its play style by simply copying other humans that it was fed. It's still definitely not learning "from the ground up" like AlphaZero learned Chess.
My biggest takeaway is that I didn't realize just how many orders of magnitude more difficult StarCraft is, than Chess. I thought it was "somewhat" more difficult, but it's actually an "incredibly" harder game to play, evidently. Right now, it appears that AI simply cannot teach itself StarCraft from scratch, when limited to human abilities. It needs to learn from humans first.
As a result, the AI's "final form" includes many of the idiosyncrasies common in human play; humans sometimes do certain things every game just for fun, out of superstition, or out of weird tics of neurological habit/reinforcement. A good example of this is AlphaStar destroying a small environmental object ("unbuildable plates") just outside a base. Humans do this because if it's still there during a big attack, they might accidentally click it instead of an enemy unit, so clearing it away early occasionally helps later. AlphaStar can't accidentally click it, so the only reason it destroys these plates is that it was fed a million games of top-level humans to mimic, and doing it never makes AlphaStar lose, so it has never stopped doing it.
The applicability of this to self-driving cars is muddy. Self-driving cars don't need to limit themselves to human input/output capabilities. In that case, they're free to react as quickly, often, and accurately as they can. As a result, they may find that there are "non-human" strategies for driving a car which work wonderfully well, just as the first iteration of AlphaStar found when it used super-human clicking to bully some of the best humans on the digital battlefield.
Lastly, AlphaStar appears to be able to play many different "styles" of StarCraft, but only because it's actually a collection of many different AIs (agents) which grew in different directions. Each individual agent can apparently only play one style (albeit very well). We haven't yet seen AlphaStar adjust its grand strategy mid-game as it's getting beaten. Humans who have mastered multiple strategies can go "oh wow, this guy is countering my mass ground units perfectly, so I'll fake my next attack wave but actually secretly switch my economy to air units in the background". We haven't yet seen AlphaStar do anything like that.
It's possible that with "nearly-perfect" play, that type of gameplay would be sub-optimal, as it relies on essentially a psychological trick: hoping your opponent doesn't realize you're making yourself temporarily vulnerable during the strategy transition. But AlphaStar isn't even playing more optimally than humans yet, and humans still do well by changing strategies mid-game at the level AlphaStar is currently playing, so at the very least, it's concerning.
It seems likely that, because of the "multiple highly specialized agents" architecture, AlphaStar simply CANNOT switch strategies in the middle of a game. It has no mechanism to select a different agent and hand off control to it, let alone merge the playstyles of two agents into a hybrid style."
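The APM-vs-EPM point in the quoted comment boils down to filtering redundant actions before counting. A toy version of that idea, where the spam heuristic is mine (real EPM estimators in ladder tools use richer rules):

```python
def effective_apm(actions, window: float = 1.0) -> float:
    """Count actions per minute after dropping immediate repeats.

    `actions` is a list of (timestamp_seconds, action_id) pairs. The spam
    heuristic is deliberately crude: an action identical to the previous
    kept action within `window` seconds is treated as spam and dropped.
    """
    if not actions:
        return 0.0
    effective = [actions[0]]
    for t, a in actions[1:]:
        last_t, last_a = effective[-1]
        if a == last_a and t - last_t < window:
            continue  # rapid repeat of the same command: counts as spam
        effective.append((t, a))
    duration_min = max(actions[-1][0] - actions[0][0], 1e-9) / 60.0
    return len(effective) / duration_min
```

On a log of someone spam-clicking the same command ten times a second, raw APM comes out around 600 while this filter reports around 60, which is roughly the gap the comment is describing.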
Quote from https://www.bbc.com/news/technology-50212841: > What next? > DeepMind says it hopes the techniques used to develop AlphaStar will ultimately help it "advance our research in real-world domains". > But Prof Silver said the lab "may rest at this point", rather than try to get AlphaStar to the level of the very elite players.
So that is definitely a signal that DeepMind may not continue using StarCraft II as a test-bed.
On October 31 2019 05:55 AxiomBlurr wrote: Can someone please tell me what parameters are set for Alpha Star? APM, reaction time to incidents etc...
The details are all explained in the blog article and the paper. In particular, the blog article says this about APM: > Agents were capped at a max of 22 agent actions per 5 seconds, where one agent action corresponds to a selection, an ability and a target unit or point, which counts as up to 3 actions towards the in-game APM counter. Moving the camera also counts as an agent action, despite not being counted towards APM.
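The cap described in that quote is a sliding-window budget. A minimal sketch of how such a limit behaves (an illustration only, not DeepMind's implementation):

```python
from collections import deque

class ActionBudget:
    """Sliding-window cap mimicking the published AlphaStar limit:
    at most `max_actions` agent actions in any `window` seconds
    (22 per 5 s, per the blog post)."""

    def __init__(self, max_actions: int = 22, window: float = 5.0):
        self.max_actions = max_actions
        self.window = window
        self._stamps = deque()  # timestamps of recent actions

    def try_act(self, now: float) -> bool:
        # Drop timestamps that have fallen out of the window.
        while self._stamps and now - self._stamps[0] >= self.window:
            self._stamps.popleft()
        if len(self._stamps) >= self.max_actions:
            return False  # budget exhausted: the agent must wait
        self._stamps.append(now)
        return True
```

Since one agent action can count as up to three toward the in-game APM counter, this cap works out to at most 22 / 5 × 60 × 3 = 792 on the in-game counter in the worst case, with far lower sustained values in practice.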
Just a little input from a data scientist (studying and working with machine learning) regarding the whole AI debate (by the way, congratulations to the team for reaching this milestone).
AI development is going great, but it is currently in what's considered the "AI Winter", because most research and improvements are either too theoretical (difficult to test) or too incremental to be considered actual breakthroughs.
In other words, unless the game changes drastically, we are still looking at many decades before we even get close to artificial general intelligence.
We do understand that the complexity of developing/designing general AI is astronomical, but we still do not understand the complexity of the problem itself or how to approach it: there are plenty of segmented ideas, but not yet a fully working theory that can be put to the test either mathematically or programmatically.
(Anyone who claims general AI is simpler, or that we already understand most of it, does not fully understand what it means and what it would take to even begin the architectural design of the neural nets, if that is even what we'll end up doing and not something else which has yet to be theorised or discovered.)
On November 01 2019 00:08 thewhiskey wrote: Just a little input from a data scientist (studying and working with machine learning) regarding the whole AI debate (by the way, congratulations to the team for reaching this milestone).
On October 31 2019 08:15 Arrivest wrote: Hi MyLovelyLurker, I'd like to know:
1. When selecting multiple units, human players need to drag a box, which limits how they can micro, for instance when splitting Marines against Banelings. To be fair, could you make AlphaStar drag boxes too in the future?
2. When a dropship shows up on the minimap, there's a ~1/4 chance that even the best human player won't notice it. Is this implemented in AlphaStar?
3. When a cloaked unit moves across the screen, there's a ~2/3 chance that even the best human player won't notice it. Is this implemented in AlphaStar?
4. When clicking on something small, there's a ~5% chance that even the best human player will misclick. I guess this is already implemented?
5. Will you emulate a virtual mouse in the future, so that we can watch AlphaStar's first-person view in the ultimate AlphaStar vs Serral match, which is bound to happen sooner or later?
Thanks!
First off, I should say that I am not affiliated with DeepMind - the following is just my (unofficial) understanding of how the API that AlphaStar used works. If you want to know for sure, you could examine the API source code at https://github.com/Blizzard/s2client-api. The AlphaStar version on the Battle.net ladder used the RAW data interface (not the feature layers / rendered interfaces).
1. I could be wrong, but afaik, for each action that AlphaStar issues, it makes one selection. I am unclear whether a "selection" means only one of its units or a set of its units (which could potentially be spread across its camera view, and might not be selectable via a single box-select without also deselecting units it doesn't want to command). I don't know whether DeepMind limits selections to the box-select method. Perhaps someone can answer this?
2. My understanding is that AlphaStar's perception of the visible portions of the map is complete (nothing overlooked), and virtual mouse click precision is perfectly precise according to its intentions (no chance to misclick). Whether you consider this to be a problem comes down to whether you think AIs like AlphaStar should be restricted to the same flaws as human eyes/brains/limbs/control in the real world. DeepMind seem to have decided that it is ok for AlphaStar to be perfectly precise according to the I/O from/to the display/keyboard/mouse (no chance to overlook / misclick).
3. My understanding is that "chance to not notice" is not implemented, although AlphaStar only gets very limited info about cloaked units (e.g. perhaps just its position, not attributes such as its HP. Perhaps also its size). I haven't checked whether it knows its size or unit type (note that a human might be able to deduce whether it is a flying unit or what unit type it is from its shimmer, e.g. on a single frame if they are perceptive, or across multiple frames by changing the camera view in various ways).
4. Afaik, AlphaStar never misclicks. See comments for 2.
5. Just from what I read (I have not watched the replays published by DeepMind), AlphaStar's camera view changes should be visible when watching the replays from AlphaStar's perspective. I don't think the exact mouse cursor positions would be apparent from the replays though - just the camera view.
On October 31 2019 16:11 brickrd wrote: just wanna say lmfao at the people who think fucking alphastar is anywhere remotely close to representing a rise of sapient machine intelligence and recommend that they read fewer clickbait headlines and more actual science
Yes, we're very far from true machine intelligence. But it is still pretty impressive that an AI can play SC2 just by watching pixels move on the screen.
It doesn't. It reads the game state.
I don't think it is based on game state. From what I understood, they have a simplified rendering of the view (like one layer for buildings, one for enemy units, etc.), but it is based on pixels.
There are several interfaces available via the API - Raw interface (i.e. structured unit attribute data), feature layers interface (several layers like you showed), and the rendered interface (simply the raw pixel data). AlphaStar just used the Raw interface. The version of AlphaStar that was used on Battle.net was restricted to only being able to access unit data for units that were within the current camera view (not units elsewhere on the map).
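For illustration, the difference between those interfaces can be sketched roughly like this. All field and function names here are mine, not the actual s2client-api ones:

```python
from dataclasses import dataclass

@dataclass
class RawUnit:
    """Raw interface: structured attributes per unit (illustrative fields)."""
    unit_type: int
    owner: int
    x: float
    y: float
    health: float

@dataclass
class Observation:
    raw_units: list       # raw interface: list of RawUnit
    feature_layers: list  # per-category planes (unit type, ownership, ...)
    rgb_pixels: list      # rendered interface: raw screen pixels

def visible_units(units, cam_x: float, cam_y: float,
                  half_w: float, half_h: float):
    """Ladder AlphaStar only got raw data for units inside the camera view."""
    return [u for u in units
            if abs(u.x - cam_x) <= half_w and abs(u.y - cam_y) <= half_h]
```

The key point the post makes is that the ladder version combined the structured raw data with the camera restriction in `visible_units`: structured attributes, but only for what the camera can currently see.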
On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy.
Humans will eventually be outperformed at everything by machines, but you don't need to panic about that. Keep in mind: there are billions of not-so-intelligent humans on this planet. Many of them are bad athletes, not very tall, or even too tall in some cases. Why is this still the reality, when in the early 20th century some people had lots of ideas about creating the superior man, about selectively breeding desirable populations? Because as mankind we can just say: screw that, this isn't who we are.

We can essentially keep robots as our servant race, instead of handing them to the free market and letting them compete with us humans, who have to sleep several hours a day and who want to be more than just work drones anyway. Another analogy would be chemical weapons: very effective (and even cheap, I think) but outlawed. Even when the business is killing each other, we can implement rules that define our species as something other than apes who only exist to make everything better and more effective without any regard for sense and purpose.

So I worried about AI for quite some time too, but there are already many people doing that now. The more progress we see, the more people will realize that creating a better species than us is not in our interest. And then we will once again stop doing certain things, just as we stopped eugenics programs and the use of chemical weapons.
On October 31 2019 06:59 loppy2345 wrote: Deepmind must be burning through cash on this Starcraft project. Their losses were $570 mil in 2018, and $370 mil in 2017. I feel they could well run out of money very soon, unless they manage to find a commercial use for this type of research.
Well, if you want to pay a recent graduate in computer science or engineering, you have to spend around €120,000 a year in Germany (that's about €50-60k for the employee and roughly the same again for the employer's share of social security, plus the cost of hiring and onboarding). In other countries, more or less of that total goes to the employee directly.
DeepMind has 700 employees, and I would think that most of them are engineers and computer scientists.
So you'd basically have to spend €84 million a year just on entry-level salaries for a 700-person workforce. Since DeepMind is part of Google, I'd imagine they hire seasoned professionals, and they also need offices, computers and so on. So, bam: to be top dog in AI, you have to spend money.
Honestly, I was not very impressed at all with Alphastar.
I watched the longest TvZ game and felt Alphastar seriously underperformed. The micro was excellent, the macro on point, but the predictive ability was terrible. The AI was lured into predictable traps several times. If a progamer knew they were playing Alphastar, they'd never lose.
My analysis has proved my earlier hypothesis correct. In a game of limited information, there are too many possibilities and variables for an AI to consider, especially one that lacks Starsense. A human player would have noticed their opponent was setting traps and not been lured more than once. A human player would have been able to predict the tech switches and harassment much better.
But the AI has calculated that chasing down a group of Zerglings or leaving an expansion undefended works most of the time according to its algorithms, so it did so, over and over. It was outmaneuvered time and again in that replay, looking amateurish in terms of strategy as it was chewed to pieces. Near-perfect micro and mechanics couldn't overcome the Platinum-level decision making, and I think the human player on the other side probably wasn't impressed. Pretty sure I could cheese the AI to death easily.
Assuming equal input limits, meaning AlphaStar has to use a mouse, keyboard, and monitor to interact, or players can control the game with their minds (which is where we'd see the real beatdown of the AI), AlphaStar won't conquer this game.
On October 31 2019 23:07 heqat wrote: I read this interesting comment on Ars Technica; I think it sums up some of the limitations of the experiment pretty well:
"The clicks are still perfectly precise all the time (x,y coordinates). This could be fixed by making the system emulate a mouse and keyboard, with another layer fuzzing the inputs, especially if it increased error as "inputs per second" go up. As a human, if I click 75 times per second, my clicks will not be accurately placed. AlphaStar's still are. No misclicks ever. They apparently did add some input lag on the time domain but it's not clear if this is static, or dynamic in a way that would mimic humans getting worse as we ramp up the intensity.
Everyone parroting that they "are using the same APM as humans!" is just, well, functionally wrong. A human gets APMs in the 500-1000 APM range by spamming nearly-useless clicks all over the place. There's another algorithmically estimated benchmark called "EPM" for "Effective Actions Per Minute", and AlphaStar's EPM is still disgustingly high vs humans.
The take-home from this is that when AlphaStar beats humans, it still does so using techniques that no human will ever be able to execute. It's not finding new cool strategies that we can learn from. It's not teaching humans how to play better StarCraft, like AlphaZero/LeelaChess Zero teach humans how to play chess better.
It's more like, a really really hard-to-detect aimbot that plays "almost" human so if you're playing it maybe you don't notice but when you watch the replay to go "how did he do that?" you go "oh he was using an aimbot, nothing for me to learn here".
Also, I've downloaded and watched/analyzed a lot of its replay files. AlphaStar still learned SO MUCH of its play style by simply copying other humans that it was fed. It's still definitely not learning "from the ground up" like AlphaZero learned Chess.
My biggest takeaway is that I didn't realize just how many orders of magnitude more difficult StarCraft is, than Chess. I thought it was "somewhat" more difficult, but it's actually an "incredibly" harder game to play, evidently. Right now, it appears that AI simply cannot teach itself StarCraft from scratch, when limited to human abilities. It needs to learn from humans first.
As a result, the AI's "final form" includes many of the idiosyncrasies common in human play: humans sometimes do certain things every game just for fun, out of superstition, or as weird tics of neurological habit/reinforcement. A good example of this is AlphaStar destroying a small environmental object ("unbuildable plates") just outside a base. Humans do this because if it's still there during a big attack, they might accidentally click it instead of an enemy unit, so clearing it away early occasionally helps later. AlphaStar can't accidentally click it, so the only reason the AI destroys these plates is that it was fed a million games of top-level humans to mimic, and doing it never makes AlphaStar lose, so it has never stopped doing it.
The applicability of this to self-driving cars is muddy. Self-driving cars don't need to limit themselves to human input/output capabilities. In that case, they're free to react as quickly, often, and accurately as they can. As a result, they may find that there are "non-human" strategies for driving a car which work wonderfully well, just as the first iteration of AlphaStar found when it used super-human clicking to bully some of the best humans on the digital battlefield.
Lastly, AlphaStar appears to be able to play many different "styles" of StarCraft, but only because it's actually a collection of many different AI's (agents) which grew in different directions. Each individual agent can apparently only play one style (albeit very well). We haven't yet seen AlphaStar adjust its grand strategy mid-game as it's getting beaten. Humans who have mastered multiple strategies can go "oh wow this guy is countering my mass ground unit perfectly, so I'll fake my next attack wave but actually be secretly changing my economy to air units in the background". We haven't yet seen AlphaStar do anything like that.
It's possible that with "nearly-perfect" play, that type of game play would be sub-optimal, as it relies on essentially a psychological trick -- hoping your opponent doesn't realize you're making yourself temporarily vulnerable during the strategy transition. But AlphaStar isn't even playing more optimally than humans yet, and humans still do well by changing strategies mid-game at the level AlphaStar is currently playing at, so at the very least, it's concerning.
It seems likely that because of the "multiple highly specialized agent" architecture, that AlphaStar probably simply CANNOT switch strategies in the middle of a game. It has no mechanism to select a different agent and hand off control to it, let alone merge the playstyles of two agents to play a hybrid style."
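The "fuzzing layer" idea from the quoted comment - emulated inputs whose positional error grows as the input rate goes up - could be sketched like this. `fuzz_click` and its `base_sigma` constant are hypothetical, invented just to show the shape of the proposal:

```python
import random

def fuzz_click(x, y, actions_per_sec, base_sigma=0.5):
    """Add Gaussian positional noise to a click target.

    The noise scale grows with the current input rate, so that
    75 clicks per second cannot all be perfectly placed, per the
    quoted comment. base_sigma (pixels at 1 action/sec) is a
    made-up tuning constant.
    """
    sigma = base_sigma * max(1.0, actions_per_sec) ** 0.5
    return (x + random.gauss(0.0, sigma), y + random.gauss(0.0, sigma))
```

Under this scheme, slow deliberate clicks land almost exactly where intended, while burst micro accumulates placement error, much like a human hand.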
Just Re. APM, for clarity, I think it's worth mentioning the limitation that DeepMind said they used in the version of AlphaStar that played on the Battle.net ladder - quoting from the blog post:
> Agents were capped at a max of 22 agent actions per 5 seconds, where one agent action corresponds to a selection, an ability and a target unit or point, which counts as up to 3 actions towards the in-game APM counter. Moving the camera also counts as an agent action, despite not being counted towards APM.
... and quoting from the paper:
> APM Limits:
> Agents are limited to executing at most 22 non-duplicate actions per five second window. Converting between actions and the APM measured by the game is non-trivial, and agent actions are hard to compare with human actions (computers can precisely execute different actions from step to step).
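The cap quoted from the paper - at most 22 agent actions per rolling five-second window - is essentially a sliding-window rate limiter. Here is a minimal sketch; the class and method names are my own, not DeepMind's, and the actual interface details are assumptions:

```python
from collections import deque

class ActionLimiter:
    """Sliding-window cap: at most `max_actions` actions per `window` seconds."""

    def __init__(self, max_actions=22, window=5.0):
        self.max_actions = max_actions
        self.window = window
        self.times = deque()  # timestamps of recently allowed actions

    def try_act(self, now):
        """Return True and record the action if the cap allows it."""
        # Drop timestamps that have slid out of the window.
        while self.times and now - self.times[0] >= self.window:
            self.times.popleft()
        if len(self.times) < self.max_actions:
            self.times.append(now)
            return True
        return False
```

Note that with a rolling window the agent can still burst all 22 actions in a fraction of a second and then go quiet, which is part of why raw APM comparisons with humans are tricky.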
As I said in an earlier post, I am unclear whether a "selection" means only one of its units, or a set of its units (which could potentially be spread across its camera view, which might not be possible to be selected via just a single box-select without also deselecting some units that it doesn't want to command). I don't know whether DeepMind limit selections to the box-select method. Perhaps someone can answer this?
FYI, for completeness, in the paper, TLO's official statement regarding fairness was:
> Professional Player Statement
> The following quote describes our interface and limitations from StarCraft II professional player Dario “TLO” Wünsch (who is part of the team and an author of this paper).
> The limitations that have been put in place for AlphaStar now mean that it feels very different from the initial show match in January. While AlphaStar has excellent and precise control it doesn't feel superhuman - certainly not on a level that a human couldn't theoretically achieve. It is better in some aspects than humans and then also worse in others, but of course there are going to be unavoidable differences between AlphaStar and human players.
> I've had the pleasure of providing consultation to the AlphaStar team to help ensure that DeepMind's system does not have any unfair advantages over human players. Overall, it feels very fair, like it is playing a `real' game of StarCraft and doesn't completely throw the balance off by having unrealistic capabilities. Now that it has limited camera view, when I multi-task it doesn't always catch everything at the same time, so that aspect also feels very fair and more human-like.
On November 01 2019 02:23 CoupdeBoule wrote: This is all well and good and interesting, but StarCraft is so ill-suited for AI projects because it's not a turn-based game.
In what way has the AI changed our understanding of SC strategically or tactically? That's right - in no way whatsoever.
The better way to think about it is how has developing an AI for SCII advanced our understanding of AI.
You're thinking about it the wrong way around, because the researchers are not trying to redefine how to play the game.
Yeah, I get that - that is their goal. I guess I'm just perplexed why you'd even mention this project in the same sentence as SC. This is AI stuff - it has literally had no impact on SC.
People are interested in the process? Threads are rather active when AlphaStar is discussed, YouTube vids of games vs it have tons of views.
Oooh, alpha star at Blizzcon! And they said to keep an eye out for more surprises... I hope there's another demonstration.
I would also like to take this opportunity to welcome our new AI overlords. Maybe I'm naive, but I see AI as beneficial in so many ways and not as some sort of existential threat waiting to annihilate us all.
I also say this as a human being. Do not fear the robots, fellow humans.
On November 01 2019 07:01 Jan1997 wrote: I suppose this is proof that macro is basically everything in this game.
Making units is a good skill to have.
I’d say one of its biggest strengths, which isn’t mentioned quite as much, is its choice of when and when not to engage. It seems to know very accurately whether its army beats what it’s facing.
On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy.
Guys, they devoted 150 computers with 28 processing cores EACH to this project. Human brains cost less to operate. We'll be okay.
Once again, not able to see past one's nose. By the time you do see it, it may be too late.
I agree. AI is the single largest threat to humans surviving the next 100 years.
Climate change could wipe us out but improved technology and consumer pressure will probably solve that. Nuclear war is still a threat but is unlikely to happen.
But trying to control something that will be much more intelligent than us, I see no scenario where we will not be wiped out.
Or maybe we could figure out something better than capitalism so that we get to be happy that improved technology removes some of our need for work instead of being afraid that it threatens our livelihood.
I think people are very optimistic when saying that there will be a general intelligence in the next 100 years.
In 1965, Herbert Simon said “Machines will be capable, within 20 years, of doing any work a man can do" and in 1970, Marvin Minsky said: “In from 3 to 8 years, we will have a machine with the general intelligence of an average human being.”
Did not really happen. Skynet stuff is still pretty much science fiction. Still waiting to see autonomous cars in the streets.
It's very different now though; AI wasn't really a thing at all back then. And even though it's difficult for humans to grasp, the growth and the advances are exponential, not linear. Check out https://nickbostrom.com/papers/survey.pdf - it's a survey by Nick Bostrom among the leading experts in the field, and the median expectation for human-level intelligence is in 30 years.
Edit: Cars are better drivers than humans now, but policy is lagging.
On October 31 2019 17:18 MockHamill wrote: AI is developing extremely fast right now. Basically more has happened in the last 10 years than all the years before that combined.
I would be surprised if AI would not grow beyond human level intelligence within the next 20 to 40 years.
It probably will, but if it turns against us, good luck to it if we decide not to provide it with the electricity it needs. The point where we figure out it wants us out of the equation is the point it gets turned off. There's no way it can "learn to avoid" this if it only has one chance.
You're expecting an amoeba to outsmart a human. When AI reaches superintelligence and decides such things for itself, it cannot be contained. And there will not be one chance, there will be endless chances, because the cat's out of the bag at that point, and an AI can be created by a mishap in someone's basement.
Edit: Cars are better drivers than humans now, but policy is lagging.
Not only policy; it is also very hard to solve the responsibility issue when something goes wrong, and there will always be situations humans handle better, imo: reading all kinds of temporary signs, predicting road conditions, taking instructions from traffic directors, etc.
I am not worried at all! Containing an AI is not that hard, and they will only do what we tell them to.
Edit: Cars are better drivers than humans now, but policy is lagging.
I’m pretty sure cars are still vulnerable to adversarial attacks at the moment, so not really.
On October 31 2019 08:15 Arrivest wrote: Hi MyLovelyLurker, I'd like to know:
1. When selecting multiple units, human players need to drag a box, which limits how they can micro, for instance in things like splitting marines against banelings. To be fair, could you make AlphaStar drag boxes too in the future?
2. When a dropship shows on minimap, there's a ~1/4 chance a best human player won't notice it. Is this implemented in AlphaStar?
3. When a cloaked unit moves across the screen, there's a ~2/3 chance a best human player won't notice it. Is this implemented in AlphaStar?
4. When clicking on something small, there's a ~5% chance a best human player will misclick, I guess this is already implemented?
5. Will you emulate a virtual mouse in the future, so that we can watch AlphaStar's first person view in the ultimate AlphaStar vs Serral match, which is bound to happen sooner or later?
Thanks!
Questions 1. and 4. are answered in the methods part of the linked preprint. In short:
1. AlphaStar can select arbitrary groups of units even outside of the camera view. So it has an advantage here. But it rarely uses this ability, because the agents are initialized by observing humans, who cannot perform such unit selections.
4. According to the article, AlphaStar is at a disadvantage here. Inside the camera view, its targeting precision is inferior to that of humans.
As for 2. and 3., I'm not sure whether these are included. But there are two different types of delay added to mimic the finite reaction time of humans. Overall, it sounds fair to me, though the interfaces and APMs of humans and the AI are hard to compare.
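One way to picture the reaction-time delays mentioned above is a queue that only makes game events visible to the agent after a fixed lag. This is a toy model with invented names and numbers, not the paper's actual mechanism:

```python
import heapq

class DelayedObservations:
    """Deliver game events to the agent only after a reaction delay,
    mimicking finite human reaction time. The 0.2s default is a
    made-up illustrative value."""

    def __init__(self, delay=0.2):
        self.delay = delay
        self._queue = []  # min-heap of (visible_at, event)

    def push(self, t, event):
        """Record an event that occurred at game time t."""
        heapq.heappush(self._queue, (t + self.delay, event))

    def pop_visible(self, now):
        """Return all events whose delay has elapsed by time `now`."""
        out = []
        while self._queue and self._queue[0][0] <= now:
            out.append(heapq.heappop(self._queue)[1])
        return out
```

A second, independent delay could then be applied between the agent deciding on an action and that action being executed, giving the two delay types the post describes.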
On October 31 2019 14:19 MockHamill wrote: Climate change could wipe us out but improved technology and consumer pressure will probably solve that. Nuclear war is still a threat but is unlikely to happen.
It's hilarious that every decade has a new 'end of the world' scenario. Hollywood is special though... they need even more threats. So they have the "California is going to slide into the ocean" thing that they've been crying about for 50+ years.
Humans' self-esteem has not caught up with their success in dealing with infectious diseases, so they dream up new imaginary ways to claim the apocalypse is imminent.
Clarification: in my earlier posts about how AlphaStar selected its own unit(s) to be commanded, I mentioned I was unclear whether it only selected one or multiple of its own units within its camera view to be commanded, and was unclear whether it restricted itself to a single box-select method. According to the paper, AlphaStar was less restricted in that it was able to select its own units to be commanded regardless of whether the unit(s) are currently within its camera view or in a control group (but AlphaStar does not use control groups) - quoting from the paper:
> Agents can also select sets of units anywhere, which humans can do less flexibly using control groups.
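The difference the paper quote describes - selecting arbitrary sets of units anywhere versus the human box select - can be sketched as follows. The `Unit` record is a made-up minimal structure for illustration:

```python
from collections import namedtuple

# Hypothetical minimal unit record: a unique tag plus map coordinates.
Unit = namedtuple("Unit", "tag x y")

def box_select(units, x0, y0, x1, y1):
    """Human-style box select: every unit inside the rectangle is taken,
    wanted or not, so spread-out groups are hard to grab cleanly."""
    xmin, xmax = sorted((x0, x1))
    ymin, ymax = sorted((y0, y1))
    return {u.tag for u in units if xmin <= u.x <= xmax and ymin <= u.y <= ymax}

def raw_select(units, wanted_tags):
    """AlphaStar-style raw selection: an arbitrary set of unit tags,
    regardless of position or camera view."""
    return {u.tag for u in units if u.tag in wanted_tags}
```

A human can approximate `raw_select` only by pre-assigning control groups; the raw interface gets it for free on any subset of units, which is exactly the flexibility gap the paper concedes.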
On October 31 2019 14:19 MockHamill wrote: Climate change could wipe us out but improved technology and consumer pressure will probably solve that. Nuclear war is still a threat but is unlikely to happen.
It's hilarious that every decade has a new 'end of the world' scenario. Hollywood is special though... they need even more threats. So they have the "California is going to slide into the ocean" thing that they've been crying about for 50+ years.
Humans' self-esteem has not caught up with their success in dealing with infectious diseases, so they dream up new imaginary ways to claim the apocalypse is imminent.
We are damn lucky they didn't manage to realize those nuclear-powered vacuum cleaner scenarios back in the '50s and '60s.
However, there is a dramatic difference between scenarios where every household has a portable nuclear-powered vacuum cleaner, refrigeration unit, car, hospital emergency generator, armed military drone, phone link, etc., and our actual situation, where we have those same gadgets and systems without individual nuclear power sources, but linked to the internet and potentially usable and controllable by a general A.I.
If one nuclear-powered vacuum cleaner melts down, we would all know about it almost immediately, and we would know the reason very soon after. When the whole system of our modern technological existence "melts down", we don't know why, and we no longer even have the means to find out. That's the difference.
Nobody is going to push the STOP button when nobody can recognize that it should be pushed in the first place.
I'm not an alarmist per se, but we shouldn't be so naive as to hand control of our everyday lives to AIs we cannot understand and control.
One nuclear-powered vacuum cleaner meltdown would in theory be a horrible thing, but even then its effects would be local and relatively easy to prevent from happening again. It's a totally different "meltdown" scenario when an AI-related failure happens in the global communication network, a network that also contains and directly controls a big part of what a modern society considers its 'basic necessities'. We simply cannot predict the possible emergent phenomena related to the rise of general AIs, and even if we could, we might be unable to see when these things are happening. Blind trust in perpetual progress is the worst thing to have here.
An SC2 AI with 20K+ MMR would be a pretty minor thing in the big scene.
Luckily we do not have nuclear powered vacuum cleaners connected to the internet, tho. :D
“Any AI smart enough to pass a Turing test is smart enough to know to fail it.”
On November 01 2019 17:14 shabby wrote: and AI can be created by a mishap in someones basement.
This has to be the funniest thing I've read all year.
Glad it's funny to you. It's among the reasons Musk, Gates, Hawking and others warn(ed) about AI and donated to, or worked towards, ways of creating a safe general intelligence that benefits humanity. Technological advances will sooner or later make AI accessible to all, if one AI doesn't become superintelligent and deny all others first. If you had been born fifty years ago, I'm sure you would have said it was ludicrous to have access to basically all humans and all human knowledge in your pocket. At some point in the future, someone could develop an AI - in a basement - to do some task, only to have it break out of its bonds because it was poorly planned.
It's very different now though; AI wasn't really a thing at all back then. And even though it's difficult for humans to grasp, the growth and the advances are exponential, not linear. Check out https://nickbostrom.com/papers/survey.pdf - it's a survey by Nick Bostrom among the leading experts in the field, and the median expectation for human-level intelligence is in 30 years.
Well, AI was already a thing back then. The main difference is the computing power that we have today (and some discoveries like long short-term memory). A lot of the science behind ML was discovered around 30 years ago. Marvin Minsky was considered an AI expert in his time; there is no reason to think that today's experts are less wrong than he was.
I'm not an expert myself, but I've been working as an engineer on the Blue Brain Project for several years, and I personally know several experts in the field; some of them are very, very sceptical that current AI science can bring us general intelligence. Of course, it is still very promising for solving a lot of tasks that were impossible for computers not long ago. But general-purpose intelligence... well, we'll see.
Edit: Cars are better drivers than humans now, but policy is lagging.
Making this general claim of cars being better drivers than humans is one of the most laughable things I've ever seen in any AI thread.
My dream is that it would be possible to queue against AlphaStar ingame with:
- adequate strength level not per MMR, but per specific matchup
- option to quickly save/load or rewind, to be able redo various battles, or to pinpoint the moment in which a lost game went bad and from which it is winnable
- request a specific build order or style to practice against (e.g 2 base muta)
I don't think it's exactly fair that people didn't know they were playing AlphaStar. As was demonstrated, AlphaStar plays differently from humans, and humans play with certain expectations of how their opponent will play.
On October 31 2019 14:19 MockHamill wrote: Climate change could wipe us out but improved technology and consumer pressure will probably solve that. Nuclear war is still a threat but is unlikely to happen.
It's hilarious that every decade has a new 'end of the world' scenario. Hollywood is special though... they need even more threats. So they have the "California is going to slide into the ocean" thing that they've been crying about for 50+ years.
Humans' self-esteem has not caught up with their success in dealing with infectious diseases, so they dream up new imaginary ways to claim the apocalypse is imminent.
It's because we are middle of the food chain animals at heart and speedrunning to the top with tools did not let our DNA catch up with that fact. Top of the pyramid animals can just relax once they're done hunting for the day but us, as prey, can only stay alert, we're hard wired for this. And if there is nothing to be worry of, we just make it up.
For me this was a very insightful interview & discussion of AlphaStar. For the first time I got the feeling that they have the right approach to training AlphaStar and that it is able to learn in an almost human way. Realizing that, AlphaStar becomes much more valuable, as we can learn about ourselves by observing it. Because of that, it becomes even more important to keep the project running and not stop at this point, where it is getting interesting.
The most important insight here was: AlphaStar becomes smarter/more strategic when its mechanical abilities are weaker. I'm very glad TLO could address this during his work for DeepMind. This is something that, I feel, bothered the community the most in previous test runs. I would go even further and limit e.g. its APM (lower than pros') and accuracy, and, if feasible, add variance to all parameters affecting its accuracy. This could lead to an overall more robust/advanced game plan, as it couldn't rely so much on micro-calculations and corrections in its gameplay. Knowing exactly when to take an engagement and when not to could limit AlphaStar's exploration of possible actions, causing it to miss out on learning something new, something that gives it an edge even though it e.g. takes a devastating Disruptor hit (or maybe teaches it to prevent the enemy from building Disruptors in the first place, by scouting).
Another interesting thought popped up when I heard TLO explain that AlphaStar became more patient. Generally, this is expected from deep-learning AI after its success in chess and Go. The AI seemed to try to keep its winning percentage as close to 50% as possible while being very certain that that is enough; it sacrificed at certain points just to get ahead on the other side of the board. For the first time, though, in StarCraft, this behaviour was simultaneously described as human-like. I think that with the limited information players have in StarCraft, they feel more comfortable building their advantage further (usually in economy or tech) rather than pressing the issue. It gives me the impression that the highest level of StarCraft today is already quite close to the optimal way to play it, while in chess and Go you get the feeling that a high-level AI could at some point just throw out an inhuman move and turn the game on its head. AlphaStar reaching GM in this game, which favours human intelligence much more, is for me an even more incredible feat. I want to see AlphaStar improve to test the bounds of StarCraft.
On a side note, adding exploiter agents is probably a very elegant/efficient way to introduce disturbances into AlphaStar's training period, given that the league system already exists. But to me this gives off the feeling of very supervised learning, as you would probably have to somehow specify the cheese tactics the exploiter agents use. Thinking about this, and about AlphaStar's general inability to adapt to previously unknown situations, made me wonder about a different, more human way of training the AI. When we first start StarCraft, we don't jump into a regular game with hopes of winning it, and neither do we learn unit interactions and purposes just by imitating other players' games. We focus on a subpart of the game and try to improve in certain areas, thinking of certain scenarios or achievements: first understanding different parts of the game separately, then combining them in a regular match. What better way to imitate this kind of approach than playing arcade training maps? You can learn to spread creep, learn to hit your inject cycles, place buildings to improve defense, play with limited available units, or defend against cheese scenarios. If AlphaStar could train on a gradually expanding action space and explore it freely (constraining the effective action space of a high-level scenario by previously training on a lower-level scenario), that would not just make it more independent of human supervision, but would also allow it to explore inhuman actions while keeping the number of possible/promising actions small.
This is exciting. After the first test runs I was quite disappointed in the approach and the results, but you really addressed the pressing matters in a very successful way. This topic deserves more attention, and I can only call on the community to stay involved and not get discouraged by minor setbacks. Give it relevance by giving it attention!
On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy.
Guys, they devoted 150 computers with 28 processing cores EACH to this project. Human brains cost less to operate. We'll be okay.
Once again, not able to see past one's nose. By the time you do see it, it may be too late.
I agree. AI is the single largest threat to humans surviving the next 100 years.
Climate change could wipe us out but improved technology and consumer pressure will probably solve that. Nuclear war is still a threat but is unlikely to happen.
But trying to control something that will be much more intelligent than us, I see no scenario where we will not be wiped out.
Or maybe we could figure out something better than capitalism so that we get to be happy that improved technology removes some of our need for work instead of being afraid that it threatens our livelihood.
I think people are very optimistic when saying that there will be a general intelligence in the next 100 years.
In 1965, Herbert Simon said “Machines will be capable, within 20 years, of doing any work a man can do" and in 1970, Marvin Minsky said: “In from 3 to 8 years, we will have a machine with the general intelligence of an average human being.”
Did not really happen. Skynet stuff is still pretty much science fiction. Still waiting to see autonomous cars in the streets.
It's very different though; AI wasn't a thing at all back then. And even though it's difficult for humans to grasp, the growth and advances are exponential, not linear. Check out https://nickbostrom.com/papers/survey.pdf. It's a survey by Nick Bostrom among the leading experts in the field; the median estimate for human-level intelligence is 30 years away.
Edit: Cars are better drivers than humans now, but policy is lagging.
Making this general claim of cars being better drivers than humans is one of the most laughable things I've ever seen in any AI thread.
You realize not all cars have to be better than all drivers? In general, accidents would go way down with automated cars even in their current state. Overall, they are better drivers. The most common cause of (lethal) accidents is microsleep; eradicating this alone would make an enormous impact. Maybe you think that "better" means faster on a race track rather than safer from A to B.
Automated cars have issues. For example, a car with optics worth over 100k USD isn't able to distinguish a child from a (small) dust bin in winter, because all the bulky clothing makes children look like small walking dust bins, and the AI says dust bins don't walk. AFAIK this still hasn't been resolved, but who would care: the chances are small, and in the end it's the driver's responsibility to take over when things go wrong. Also, global warming will solve this once and for all. (Most of the time kids don't run under a car in winter; it's a highly specific case, but still an issue.)
And that's just one issue; the driving software has many issues, and not everything can be solved.
And I'm still ignoring the two biggest issues. The legal one: who is responsible for accidents in a fully automated car that's supposed to be better than a human? When something works most of the time, people tend to start ignoring it, so you can't just say the driver is responsible; the driver will be watching a movie on their phone, and blaming them is just a scapegoat. And the moral one: in some accidents the car may be able to decide whom to harm. What should be the basis for such a decision?
With different YouTubers casting AlphaStar replays somewhat at random, it gets really inaccessible. I would like to see a consistent analysis of all AlphaStar replays to look for surprising games/strategies. Do you think one could organize this in any way? It would already help if casters specified exactly which replay they are casting (by replay name in the title, I guess?). Then I would love to see a collective overview of which replays have already been cast, maybe with remarks on whether the game was special or not, perhaps on the Liquipedia page for AlphaStar or something. I could also start a thread in which YouTubers/streamers could answer with a compact overview of their own casts. How do you feel about that?
Would you stake your life on this automated car on a wet road during a snowstorm at night? Cars may be better drivers in certain ideal conditions, but they are still far from prepared to drive in all situations.
Possibly; it depends on how far they've got the tech by then.
Sure, there's a lot of reasonable skepticism about automated cars and plenty of issues to be corrected. But we're demanding something close to perfection in terms of safety when the current state of affairs is far, far from that.
I'd not stake my life on crossing the road in front of a driver who's looking at a meme on their phone, or one who's drunk or stoned either.
Serral is playing against or as Protoss there... I don't even know what to say.
Anyone who takes the time to watch the replays knows AlphaStar is low-to-mid GM at best. I guarantee I'd crush it with cheese; it makes terrible, Platinum-level decisions. It struggles against multi-pronged attacks, it is easily baited into traps, and it doesn't adjust on the fly. Bait it into a trap once and it will walk right into it again... and again... and again.
Watch the longest TvZ game in the replay pack. The Zerg player makes it look stupid.
the fact that people get so heated and invested in angrily debating what league or MMR alphastar is so they can look smart criticizing it instead of just saying "wow, how cool that someone created this and that it can play competitive maps against pros" is really sad, another instance of miserable internet people shitting on someone else's accomplishment
On November 06 2019 01:23 FFW_Rude wrote: Was Alphastar a blizzcon only thing or will we get AlphaStar playable in the future for everyone ?
This is what interests me the most. I wonder if anyone has information about that; my searches have turned up nothing at all about this possibility.
On November 08 2019 11:16 brickrd wrote: the fact that people get so heated and invested in angrily debating what league or MMR alphastar is so they can look smart criticizing it instead of just saying "wow, how cool that someone created this and that it can play competitive maps against pros" is really sad, another instance of miserable internet people shitting on someone else's accomplishment
I think the problem is that DeepMind tends to overhype their results. I watched the Pylon Show live event with TLO and it was all about how great AlphaStar is at SC2, etc. There would be less criticism if they spent some time talking about the limitations of their SC2 agents, which many people have noticed (see the beastyqt video for instance). Of course, in the context of AI research it is a great achievement (which is what matters in the end), but in the SC2 context they should communicate more clearly that SC2 has not been mastered at a pro level (as was the case for Go and chess).
I think DeepMind overhypes their results because, within the AI community, they seem innovative and a step forward. The problem is, they don't know much about StarCraft 2. The role (mostly) TLO has taken on so far is very important: analysing the gameplay and "behaviour" of AlphaStar can teach us a lot about how the AI works and what its limits are.
That's too much for one person to handle, which is why I am asking for a consistent community overview of all released AlphaStar replays.
Certainly the most striking thing is that, in this state, AlphaStar has no proper reaction to situations it has not "experienced" before. It needs a more basic concept of individual units etc., which it can't learn from a dataset consisting only of regular high-MMR replays, especially since important but only occasionally occurring subtleties don't make it into AlphaStar's "mind". I get the feeling it is tuned a bit too much to care about a high average winning rate across matches instead of getting the best out of each individual game.