|
On October 31 2019 14:19 MockHamill wrote:On October 31 2019 12:59 tigon_ridge wrote:On October 31 2019 11:59 ThunderJunk wrote:On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy. Guys, they devoted 150 computers with 28 processing cores EACH to this project. Human brains cost less to operate. We'll be okay. Once again, not able to see past one's nose. By the time you do see it, it may be too late. I agree. AI is the single largest threat to humans surviving the next 100 years. Climate change could wipe us out but improved technology and consumer pressure will probably solve that. Nuclear war is still a threat but is unlikely to happen. But trying to control something that will be much more intelligent than us, I see no scenario where we will not be wiped out.
Or maybe we could figure out something better than capitalism so that we get to be happy that improved technology removes some of our need for work instead of being afraid that it threatens our livelihood.
|
On October 31 2019 17:05 Nebuchad wrote:On October 31 2019 14:19 MockHamill wrote:On October 31 2019 12:59 tigon_ridge wrote:On October 31 2019 11:59 ThunderJunk wrote:On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy. Guys, they devoted 150 computers with 28 processing cores EACH to this project. Human brains cost less to operate. We'll be okay. Once again, not able to see past one's nose. By the time you do see it, it may be too late. I agree. AI is the single largest threat to humans surviving the next 100 years. Climate change could wipe us out but improved technology and consumer pressure will probably solve that. Nuclear war is still a threat but is unlikely to happen. But trying to control something that will be much more intelligent than us, I see no scenario where we will not be wiped out. Or maybe we could figure out something better than capitalism so that we get to be happy that improved technology removes some of our need for work instead of being afraid that it threatens our livelihood.
I think people are very optimistic when saying that there will be a general intelligence in the next 100 years.
In 1965, Herbert Simon said “Machines will be capable, within 20 years, of doing any work a man can do" and in 1970, Marvin Minsky said: “In from 3 to 8 years, we will have a machine with the general intelligence of an average human being.”
Did not really happen. Skynet stuff is still pretty much science fiction. Still waiting to see autonomous cars in the streets.
|
AI is developing extremely fast right now. Basically more has happened in the last 10 years than all the years before that combined.
I would be surprised if AI would not grow beyond human level intelligence within the next 20 to 40 years.
|
Alphastar is like SKYNET and Serral is like John Connor!
|
On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy. "I want to keep my 9 to 5 wageslave job instead of enjoying fully automated luxury gay space Communism"- the post. Hurray for our AI overlord
|
On October 31 2019 17:18 MockHamill wrote: AI is developing extremely fast right now. Basically more has happened in the last 10 years than all the years before that combined.
I would be surprised if AI would not grow beyond human level intelligence within the next 20 to 40 years.
It probably will, but if it turns against us, good luck to it if we decide not to provide it with the electricity it needs. The point where we figure out it wants us out of the equation is the point where it gets turned off. There's no way it can "learn to avoid" if it only has one chance.
|
is this it? do the folks at deepmind feel like they accomplished their goal? after watching alphago beat lee sedol i kinda hoped they had greater aspirations. 6.2k MMR is maybe pro-level, but it's not the "lee sedol" of starcraft. outside of the exhibition match from almost a year ago, did alphastar ever beat mana again?
|
It's still not making better decisions than top players. This is good progress. I am interested but not impressed just yet. Hopefully DeepMind can help people realise some of the brilliance of SC2 & machine learning. I enjoy the efficiency of it. Thanks DeepMind & SC2 players.
|
On October 31 2019 17:56 gnuoy000 wrote: is this it? do the folks at deepmind feel like they accomplished their goal? after watching alphago beat lee sedol i kinda hoped they had greater aspirations. 6.2 mmr is maybe pro-level, but it's not the "lee sedol" of starcraft. outside of the exhibition match from almost a year ago, did alphastar ever beat mana again? Quote from the Nature article https://www.nature.com/articles/d41586-019-03298-6: > The AI wasn’t able to beat the best player in the world, as AIs have in chess and Go, but DeepMind considers its benchmark met, and says it has completed the StarCraft II challenge.
I wonder if this is a signal that DeepMind has no plans to continue working with StarCraft II as a test-bed...
|
On October 31 2019 18:30 Quatari wrote:On October 31 2019 17:56 gnuoy000 wrote: is this it? do the folks at deepmind feel like they accomplished their goal? after watching alphago beat lee sedol i kinda hoped they had greater aspirations. 6.2 mmr is maybe pro-level, but it's not the "lee sedol" of starcraft. outside of the exhibition match from almost a year ago, did alphastar ever beat mana again? Quote from the Nature article https://www.nature.com/articles/d41586-019-03298-6:> The AI wasn’t able to beat the best player in the world, as AIs have in chess and Go, but DeepMind considers its benchmark met, and says it has completed the StarCraft II challenge. I wonder if this is a signal that DeepMind has no plans to continue working with StarCraft II as a test-bed... It's kinda sad because of the missed publicity, but on the other hand pretty cool. StarCraft II was still too hard for them to beat the best players in a few years. This means the challenge still exists for other companies and AIs, so maybe someone else will try in the future.
|
On October 31 2019 18:30 Quatari wrote:On October 31 2019 17:56 gnuoy000 wrote: is this it? do the folks at deepmind feel like they accomplished their goal? after watching alphago beat lee sedol i kinda hoped they had greater aspirations. 6.2 mmr is maybe pro-level, but it's not the "lee sedol" of starcraft. outside of the exhibition match from almost a year ago, did alphastar ever beat mana again? Quote from the Nature article https://www.nature.com/articles/d41586-019-03298-6:> The AI wasn’t able to beat the best player in the world, as AIs have in chess and Go, but DeepMind considers its benchmark met, and says it has completed the StarCraft II challenge. I wonder if this is a signal that DeepMind has no plans to continue working with StarCraft II as a test-bed...
That would be really sad if that's the case, as AlphaStar still has/had a long way to go. Its Terran was pretty laughable, and I swear its Zerg can only do ravager all-ins. Obviously, reaching GM is impressive, but it is something your average person can do relatively comfortably, and it is still soooo far from pro level, let alone the top of the pro scene.
|
It’s been pretty fascinating to see both the real-time and lack of perfect information and what a hurdle that is to overcome via machine learning etc.
In a way it also rams home how difficult balancing such a game actually is as well.
|
On October 31 2019 17:32 algue wrote:On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy. "I want to keep my 9 to 5 wageslave job instead of enjoying fully automated luxury gay space Communism"- the post. Hurray for our AI overlord
This is the most concise explanation of this issue. I'm gonna use that from now on to end similarly dumb discussions
|
I like it so much! AlphaStar is a way to immortalize StarCraft as a game as epic as chess and Go. That's awesome.
|
On October 31 2019 06:15 MockHamill wrote: I have a high IQ but can not escape diamond. AlphaStar has less intelligence than a dog but is GM.
Conclusion: I must balance whine harder. Blizzard is clearly at fault. Do you watch Rick & Morty?
|
I read this interesting comment on Ars Technica; I think it sums up pretty well some of the limitations of the experiment:
"The clicks are still perfectly precise all the time (x,y coordinates). This could be fixed by making the system emulate a mouse and keyboard, with another layer fuzzing the inputs, especially if it increased error as "inputs per second" go up. As a human, if I click 75 times per second, my clicks will not be accurately placed. AlphaStar's still are. No misclicks ever. They apparently did add some input lag on the time domain but it's not clear if this is static, or dynamic in a way that would mimic humans getting worse as we ramp up the intensity.
Everyone parroting that they "are using the same APM as humans!" is just, well, functionally wrong. A human gets APMs in the 500-1000 APM range by spamming nearly-useless clicks all over the place. There's another algorithmically estimated benchmark called "EPM" for "Effective Actions Per Minute", and AlphaStar's EPM is still disgustingly high vs humans.
The take-home from this is that when AlphaStar beats humans, it still does so using techniques that no human will ever be able to execute. It's not finding new cool strategies that we can learn from. It's not teaching humans how to play better StarCraft, like AlphaZero/LeelaChess Zero teach humans how to play chess better.
It's more like, a really really hard-to-detect aimbot that plays "almost" human so if you're playing it maybe you don't notice but when you watch the replay to go "how did he do that?" you go "oh he was using an aimbot, nothing for me to learn here".
Also, I've downloaded and watched/analyzed a lot of its replay files. AlphaStar still learned SO MUCH of its play style by simply copying other humans that it was fed. It's still definitely not learning "from the ground up" like AlphaZero learned Chess.
My biggest takeaway is that I didn't realize just how many orders of magnitude more difficult StarCraft is, than Chess. I thought it was "somewhat" more difficult, but it's actually an "incredibly" harder game to play, evidently. Right now, it appears that AI simply cannot teach itself StarCraft from scratch, when limited to human abilities. It needs to learn from humans first.
As a result, the AI's "final form" includes many of the idiosyncrasies of human play: humans sometimes do certain things every game "just for fun", out of superstition, or just weird tics of neurological habit/reinforcement. A good example of this is AlphaStar destroying a small environmental object ("unbuildable plates") just outside a base. Humans do this because if it's still there during a big attack, they might accidentally click it instead of an enemy unit, so clearing it away early occasionally helps later. AlphaStar can't accidentally click it, so the only reason the AI destroys these plates is because it was fed a million games of top-level humans to mimic, and doing it never makes AlphaStar lose, so it hasn't ever stopped doing it.
The applicability of this to self-driving cars is muddy. Self-driving cars don't need to limit themselves to human input/output capabilities. In that case, they're free to react as quickly, often, and accurately as they can. As a result, they may find that there are "non-human" strategies for driving a car which work wonderfully well, just as the first iteration of AlphaStar found when it used super-human clicking to bully some of the best humans on the digital battlefield.
Lastly, AlphaStar appears to be able to play many different "styles" of StarCraft, but only because it's actually a collection of many different AI's (agents) which grew in different directions. Each individual agent can apparently only play one style (albeit very well). We haven't yet seen AlphaStar adjust its grand strategy mid-game as it's getting beaten. Humans who have mastered multiple strategies can go "oh wow this guy is countering my mass ground unit perfectly, so I'll fake my next attack wave but actually be secretly changing my economy to air units in the background". We haven't yet seen AlphaStar do anything like that.
It's possible that with "nearly-perfect" play, that type of game play would be sub-optimal, as it relies on essentially a psychological trick -- hoping your opponent doesn't realize you're making yourself temporarily vulnerable during the strategy transition. But AlphaStar isn't even playing more optimally than humans yet, and humans still do well by changing strategies mid-game at the level that AlphaStar is currently playing, so at the very least, it's concerning.
It seems likely that because of the "multiple highly specialized agent" architecture, AlphaStar probably simply CANNOT switch strategies in the middle of a game. It has no mechanism to select a different agent and hand off control to it, let alone merge the playstyles of two agents to play a hybrid style."
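The input-fuzzing layer the quoted commenter proposes could be sketched roughly like this. The specific error model (Gaussian noise whose spread grows linearly with the action rate) and all the names and parameters here are my own assumptions for illustration, not anything DeepMind has published:

```python
import random

def fuzz_click(x: float, y: float, actions_per_sec: float,
               base_sigma: float = 1.0, scale: float = 0.5) -> tuple:
    """Perturb a click the way a human hand would: the faster you act,
    the larger the placement error. Sigma grows linearly with the action
    rate (a simplifying assumption; real human error curves are messier).
    """
    sigma = base_sigma + scale * actions_per_sec
    return (x + random.gauss(0.0, sigma),
            y + random.gauss(0.0, sigma))

# At a calm 2 actions/sec the click lands near the target;
# at a frantic 15 actions/sec it can miss by many pixels.
calm = fuzz_click(100.0, 200.0, actions_per_sec=2.0)
frantic = fuzz_click(100.0, 200.0, actions_per_sec=15.0)
```

A layer like this, sitting between the agent and the game, would make super-human click precision impossible at super-human click rates, which is exactly the "aimbot" complaint above.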
|
Quote from https://www.bbc.com/news/technology-50212841: > What next? > DeepMind says it hopes the techniques used to develop AlphaStar will ultimately help it "advance our research in real-world domains". > But Prof Silver said the lab "may rest at this point", rather than try to get AlphaStar to the level of the very elite players.
So that is definitely a signal that DeepMind may not continue using StarCraft II as a test-bed.
|
The headline threw me before I remembered what AlphaStar is
Think Day9 beat the machine to the punch on this years ago though!
|
On October 31 2019 05:55 AxiomBlurr wrote: Can someone please tell me what parameters are set for Alpha Star? APM, reaction time to incidents etc...
The details are all explained in the blog article and the paper. In particular, the blog article says this about APM: > Agents were capped at a max of 22 agent actions per 5 seconds, where one agent action corresponds to a selection, an ability and a target unit or point, which counts as up to 3 actions towards the in-game APM counter. Moving the camera also counts as an agent action, despite not being counted towards APM.
It also details some stats about reaction times.
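As a rough illustration of what a cap like that looks like mechanically, here is a minimal sliding-window limiter. Only the numbers (22 agent actions per rolling 5 seconds) come from the blog article; the class and its names are hypothetical:

```python
from collections import deque

class ActionLimiter:
    """Sliding-window limiter mimicking the quoted cap of 22 agent
    actions per rolling 5-second window (timestamps in milliseconds)."""

    def __init__(self, max_actions: int = 22, window_ms: int = 5000):
        self.max_actions = max_actions
        self.window_ms = window_ms
        self.stamps = deque()  # timestamps of recently granted actions

    def try_act(self, now_ms: int) -> bool:
        # Evict timestamps that have aged out of the rolling window.
        while self.stamps and now_ms - self.stamps[0] >= self.window_ms:
            self.stamps.popleft()
        if len(self.stamps) < self.max_actions:
            self.stamps.append(now_ms)
            return True
        return False  # over the cap: the agent has to wait

limiter = ActionLimiter()
# An agent spamming 10 attempted actions per second for 10 seconds
granted = sum(limiter.try_act(t) for t in range(0, 10_000, 100))
```

Under this sketch, a spamming agent gets at most 22 actions through per rolling 5-second window and everything else is rejected, which is why the cap bites hardest during intense micro moments rather than during calm macro play.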
|
Just a little input from a data scientist (studying and working with machine learning) regarding the whole AI debate (by the way, congratulations to the team for reaching this milestone).
AI development is going great, but it is currently in what's considered the "AI Winter", because most research and improvements are either too theoretical (difficult to test) or too incremental to be considered actual breakthroughs.
In other words, unless the game changes drastically, we are still looking at many more decades before we even get close to general artificial intelligence.
We do understand that the complexity of developing/designing General AI is astronomical, but we still do not understand the complexity of the problem itself or how to approach it - there are plenty of piecemeal ideas, but not yet a fully working theory that can be put to the test either mathematically or programmatically.
(Anyone who claims General AI is simpler, or that we already understand most of it, does not fully understand what it means and what it would take to even begin the architectural design of the neural nets - if that is even what we'll end up doing, and not something else which has yet to be theorised or discovered.)
|