|
On October 31 2019 17:13 heqat wrote:Show nested quote +On October 31 2019 17:05 Nebuchad wrote:On October 31 2019 14:19 MockHamill wrote:On October 31 2019 12:59 tigon_ridge wrote:On October 31 2019 11:59 ThunderJunk wrote:On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy. Guys, they devoted 150 computers with 28 processing cores EACH to this project. Human brains cost less to operate. We'll be okay. Once again, not able to see past one's nose. By the time you do see it, it may be too late. I agree. AI is the single largest threat to humans surviving the next 100 years. Climate change could wipe us out but improved technology and consumer pressure will probably solve that. Nuclear war is still a threat but is unlikely to happen. But trying to control something that will be much more intelligent than us, I see no scenario where we will not be wiped out. Or maybe we could figure out something better than capitalism so that we get to be happy that improved technology removes some of our need for work instead of being afraid that it threatens our livelihood. I think people are very optimistic when saying that there will be a general intelligence in the next 100 years. In 1965, Herbert Simon said “Machines will be capable, within 20 years, of doing any work a man can do" and in 1970, Marvin Minsky said: “In from 3 to 8 years, we will have a machine with the general intelligence of an average human being.” Did not really happen. Skynet stuff is still pretty much science fiction. Still waiting to see autonomous cars in the streets.
It's very different though; AI wasn't really a thing at all back then. And even though it's difficult for humans to grasp, the growth and advances are exponential, not linear. Check out this survey: https://nickbostrom.com/papers/survey.pdf. It's a survey by Nick Bostrom among leading experts in the field; the median expected arrival of human-level intelligence is in about 30 years.
Edit: Cars are better drivers than humans now, but policy is lagging.
|
On October 31 2019 17:43 IMSupervisor wrote:On October 31 2019 17:18 MockHamill wrote: AI is developing extremely fast right now. Basically more has happened in the last 10 years than all the years before that combined.
I would be surprised if AI would not grow beyond human-level intelligence within the next 20 to 40 years. It probably will, but if it turns against us, good luck to it if we decide not to provide it with the electricity it needs. The point where we figure out it wants us out of the equation is the point it gets turned off. There's no way it can "learn to avoid" that if it only has one chance.
You're expecting an amoeba to outsmart a human. When AI reaches superintelligence and decides such things for itself, it cannot be contained. And there will not be one chance, there will be endless chances, because the cat's out of the bag at that point, and an AI can be created by a mishap in someone's basement.
|
Edit: Cars are better drivers than humans now, but policy is lagging.
Not only policy; it is very hard to solve the responsibility issue when something goes wrong, and there will always be situations humans handle better, imo: reading all kinds of temporary signs, predicting road conditions, taking instructions from traffic directors, etc.
I am not worried at all! Containing an AI is not that hard, and they will only do what we tell them to.
|
On November 01 2019 17:14 shabby wrote: and an AI can be created by a mishap in someone's basement.
This has to be the funniest thing I've read all year.
|
On November 01 2019 17:10 shabby wrote:Show nested quote +On October 31 2019 17:13 heqat wrote:On October 31 2019 17:05 Nebuchad wrote:On October 31 2019 14:19 MockHamill wrote:On October 31 2019 12:59 tigon_ridge wrote:On October 31 2019 11:59 ThunderJunk wrote:On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy. Guys, they devoted 150 computers with 28 processing cores EACH to this project. Human brains cost less to operate. We'll be okay. Once again, not able to see past one's nose. By the time you do see it, it may be too late. I agree. AI is the single largest threat to humans surviving the next 100 years. Climate change could wipe us out but improved technology and consumer pressure will probably solve that. Nuclear war is still a threat but is unlikely to happen. But trying to control something that will be much more intelligent than us, I see no scenario where we will not be wiped out. Or maybe we could figure out something better than capitalism so that we get to be happy that improved technology removes some of our need for work instead of being afraid that it threatens our livelihood. I think people are very optimistic when saying that there will be a general intelligence in the next 100 years. In 1965, Herbert Simon said “Machines will be capable, within 20 years, of doing any work a man can do" and in 1970, Marvin Minsky said: “In from 3 to 8 years, we will have a machine with the general intelligence of an average human being.” Did not really happen. Skynet stuff is still pretty much science fiction. Still waiting to see autonomous cars in the streets. Its very different though, AI wasnt a thing at all back then. And even though its difficult for humans to understand, the growth and advances are exponential, not linear. 
Check out this survey: https://nickbostrom.com/papers/survey.pdf. It's a survey by Nick Bostrom among leading experts in the field; the median expected arrival of human-level intelligence is in about 30 years. Edit: Cars are better drivers than humans now, but policy is lagging. I'm pretty sure cars are still vulnerable to adversarial attacks at the moment, so not really.
|
On October 31 2019 08:15 Arrivest wrote: Hi MyLovelyLurker, I'd like to know:
1. When selecting multiple units, human players need to drag a box, which limits how they can micro, for instance in things like splitting marines against banelings. To be fair, could you make AlphaStar drag boxes too in the future?
2. When a dropship shows on the minimap, there's a ~1/4 chance that even the best human player won't notice it. Is this implemented in AlphaStar?
3. When a cloaked unit moves across the screen, there's a ~2/3 chance that even the best human player won't notice it. Is this implemented in AlphaStar?
4. When clicking on something small, there's a ~5% chance that even the best human player will misclick. I guess this is already implemented?
5. Will you emulate a virtual mouse in the future, so that we can watch AlphaStar's first person view in the ultimate AlphaStar vs Serral match, which is bound to happen sooner or later?
Thanks!
Questions 1 and 4 are answered in the methods section of the linked preprint. In short:
1. AlphaStar can select arbitrary groups of units, even outside of the camera view, so it has an advantage here. But it rarely uses this ability, because the agents are initialized by observing humans, who cannot perform such unit selections.
4. According to the article, AlphaStar has a disadvantage here: inside the camera view, its targeting precision is inferior to that of humans.
As for 2 and 3, I'm not sure whether that is included, but there are two different types of delay added to mimic humans' finite reaction time. Overall, it sounds fair to me, though the interfaces and APM of humans and the AI are hard to compare.
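To make the discussion concrete, here is a minimal sketch of what "reaction delay plus imprecise targeting" constraints could look like. All names and the specific numbers (200 ms delay, 5% misclick rate, ±4 pixel error) are assumptions for illustration, not the paper's actual implementation or values.

```python
import random

# Hypothetical human-like action constraints: a reaction delay before an
# observed event can be acted on, plus a small chance of a misclick.
REACTION_DELAY_MS = 200     # assumed observation-to-action delay
MISCLICK_PROB = 0.05        # assumed per-click targeting error rate

def constrained_click(target_xy, game_time_ms, observed_at_ms):
    """Return the (time, position) at which the click actually lands."""
    # The action cannot execute before the reaction delay has elapsed.
    execute_at = max(game_time_ms, observed_at_ms + REACTION_DELAY_MS)
    x, y = target_xy
    if random.random() < MISCLICK_PROB:
        # Perturb the click slightly to mimic imprecise targeting.
        x += random.randint(-4, 4)
        y += random.randint(-4, 4)
    return execute_at, (x, y)
```

Under such a scheme, even a perfectly informed agent cannot respond to a drop or a cloaked unit instantly, and its clicks carry human-like noise.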
|
On October 31 2019 14:19 MockHamill wrote: Climate change could wipe us out but improved technology and consumer pressure will probably solve that. Nuclear war is still a threat but is unlikely to happen.
It's hilarious that every decade has a new 'end of the world' scenario. Hollywood is special, though; they need even more threats, so they have the "California is going to slide into the ocean" thing that they've been crying about for 50+ years.
Humans' self-esteem has not caught up with their success in dealing with infectious diseases, so they dream up new imaginary ways to claim the apocalypse is imminent.
|
Sarah Connor still plays Terran, right?
|
Clarification: in my earlier posts about how AlphaStar selected its own unit(s) to command, I mentioned I was unclear whether it selected only one or multiple of its own units within its camera view, and whether it restricted itself to a single box-select method. According to the paper, AlphaStar was less restricted: it was able to select its own units to command regardless of whether they were currently within its camera view or in a control group (though AlphaStar does not use control groups). Quoting from the paper:
> Agents can also select sets of units anywhere, which humans can do less flexibly using control groups.
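To illustrate the difference the paper describes, here is a hypothetical sketch (the names and structures are my assumptions, not the paper's or the game's API): a human-style selection is constrained to a dragged rectangle on screen, while the agent can pick an arbitrary subset of its units anywhere on the map.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    uid: int
    x: float
    y: float

def box_select(units, left, top, right, bottom):
    """Human-style selection: only units inside the dragged rectangle."""
    return [u for u in units if left <= u.x <= right and top <= u.y <= bottom]

def set_select(units, wanted_uids):
    """Agent-style selection: any subset of units, anywhere on the map."""
    wanted = set(wanted_uids)
    return [u for u in units if u.uid in wanted]
```

Humans approximate `set_select` with control groups, but those must be assigned in advance and are limited in number, which is the flexibility gap the quoted sentence refers to.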
|
On November 01 2019 20:20 JimmyJRaynor wrote:On October 31 2019 14:19 MockHamill wrote: Climate change could wipe us out but improved technology and consumer pressure will probably solve that. Nuclear war is still a threat but is unlikely to happen.
It's hilarious that every decade has a new 'end of the world' scenario. Hollywood is special, though; they need even more threats, so they have the "California is going to slide into the ocean" thing that they've been crying about for 50+ years. Humans' self-esteem has not caught up with their success in dealing with infectious diseases, so they dream up new imaginary ways to claim the apocalypse is imminent.
Black Swans can easily lurk there.
We are damn lucky they didn't manage to realize those nuclear-powered vacuum cleaner scenarios back in the '50s and '60s.
However, there is a dramatic difference between a scenario where every household has a portable nuclear-powered vacuum cleaner or refrigeration unit, a car, a hospital emergency generator, an armed military drone, phone links, etc., and one where we have those same gadgets and systems without an individual nuclear power source, but linked to the internet and potentially usable and controllable by a general A.I.
If one nuclear-powered vacuum cleaner happens to melt down, we would all know it nearly immediately and know the reason very soon after. When the whole system of our modern technological existence happens to "melt down", we won't know why, and we won't even have the means to find out anymore. That's the difference.
Nobody is going to push a STOP button when nobody can recognize that it should be pushed in the first place.
I'm not an alarmist per se, but we shouldn't be so naive as to hand control of our everyday lives to AIs we cannot understand and control.
One nuclear-powered vacuum cleaner meltdown could in theory be a horrible thing, but even then its effects would be local and relatively easy to prevent from happening again. It's a totally different 'meltdown' scenario when an AI-related 'meltdown' happens in the global communication network, a network that also contains and directly controls a big part of what are considered the 'basic necessities' of a modern society. We just cannot predict the possible emergent phenomena related to the rise of general AIs, and even if we could, we might be unable to see when these things are happening. Blind trust in perpetual progress is the worst attitude to have here.
An SC2 AI with 20K+ MMR would be a pretty minor thing in the big picture.
Luckily we do not have nuclear-powered vacuum cleaners connected to the internet, though. :D
“Any AI smart enough to pass a Turing test is smart enough to know to fail it.”
|
On November 01 2019 18:36 kmh wrote:On November 01 2019 17:14 shabby wrote: and an AI can be created by a mishap in someone's basement. This has to be the funniest thing I've read all year.
Glad it's funny to you. It's among the reasons Musk, Gates, Hawking and others warn(ed) about AI and donated to, or worked towards, creating a safe general intelligence that benefits humanity. Technological advances will sooner or later make AI accessible to everyone, unless one AI becomes superintelligent first and denies all others. If you had been born fifty years ago, I'm sure you would have said it was ludicrous to have access to basically all humans and all human knowledge in your pocket. At some point in the future, someone could develop an AI in a basement to do some task, just to have it break out of its bonds because it was poorly planned.
|
Holy crap, congratulations on the Nature paper (and GM, I guess)!
|
On November 01 2019 17:10 shabby wrote: It's very different though; AI wasn't a thing at all back then. And even though it's difficult for humans to understand, the growth and advances are exponential, not linear. Check out this survey: https://nickbostrom.com/papers/survey.pdf. It's a survey by Nick Bostrom among leading experts in the field; the median expected arrival of human-level intelligence is in about 30 years.
Well, AI was already a thing back then. The main difference is the computing power we have today (and some discoveries, like long short-term memory). A lot of the science behind ML was discovered some 30 years ago. Marvin Minsky was considered an AI expert in his time; there is no reason to think that today's experts are less wrong than he was.
I'm not an expert myself, but I've been working as an engineer on the BlueBrain project for several years, and I personally know several experts in the field, some of whom are very, very sceptical that current AI science can bring us general intelligence. Of course, it is still very promising for solving a lot of tasks that were impossible for computers to handle not long ago. But general-purpose intelligence, well, we'll see...
|
On October 31 2019 06:15 MockHamill wrote: I have a high IQ but can not escape diamond. AlphaStar has less intelligence than a dog but is GM.
Conclusion: I must balance whine harder. Blizzard is clearly at fault.
Claiming to have a high IQ is the first sign of a low IQ.
|
On November 02 2019 06:05 MeSaber wrote:On October 31 2019 06:15 MockHamill wrote: I have a high IQ but can not escape diamond. AlphaStar has less intelligence than a dog but is GM.
Conclusion: I must balance whine harder. Blizzard is clearly at fault. Claiming to have a high IQ is the first sign of a low IQ. Depends on the claim. Properly tested = fine; took some ridiculous online test = no.
MockHamill has attracted flak for a post that was inherently self-deprecating to begin with.
|
On November 01 2019 17:10 shabby wrote:Show nested quote +On October 31 2019 17:13 heqat wrote:On October 31 2019 17:05 Nebuchad wrote:On October 31 2019 14:19 MockHamill wrote:On October 31 2019 12:59 tigon_ridge wrote:On October 31 2019 11:59 ThunderJunk wrote:On October 31 2019 08:16 tigon_ridge wrote: Why are you people so excited? This organization, which is a child of your big brother google, is engineering the alpha phase of your replacement. How do you people have such little foresight? Skynet isn't just fiction—it's also prophecy. Guys, they devoted 150 computers with 28 processing cores EACH to this project. Human brains cost less to operate. We'll be okay. Once again, not able to see past one's nose. By the time you do see it, it may be too late. I agree. AI is the single largest threat to humans surviving the next 100 years. Climate change could wipe us out but improved technology and consumer pressure will probably solve that. Nuclear war is still a threat but is unlikely to happen. But trying to control something that will be much more intelligent than us, I see no scenario where we will not be wiped out. Or maybe we could figure out something better than capitalism so that we get to be happy that improved technology removes some of our need for work instead of being afraid that it threatens our livelihood. I think people are very optimistic when saying that there will be a general intelligence in the next 100 years. In 1965, Herbert Simon said “Machines will be capable, within 20 years, of doing any work a man can do" and in 1970, Marvin Minsky said: “In from 3 to 8 years, we will have a machine with the general intelligence of an average human being.” Did not really happen. Skynet stuff is still pretty much science fiction. Still waiting to see autonomous cars in the streets. Its very different though, AI wasnt a thing at all back then. And even though its difficult for humans to understand, the growth and advances are exponential, not linear. 
Check out this survey: https://nickbostrom.com/papers/survey.pdf. It's a survey by Nick Bostrom among leading experts in the field; the median expected arrival of human-level intelligence is in about 30 years. Edit: Cars are better drivers than humans now, but policy is lagging.
Making this general claim of cars being better drivers than humans is one of the most laughable things I've ever seen in any AI thread.
|
If you're living in London, you better get your ass down there and attend.
|
My dream is that it would be possible to queue against AlphaStar in-game with:
- an adequate strength level, not per MMR but per specific matchup
- an option to quickly save/load or rewind, to redo various battles, or to pinpoint the moment at which a lost game went bad and from which it is still winnable
- a way to request a specific build order or style to practice against (e.g. 2-base muta)
|
I don't think it's exactly fair that people didn't know they were playing AS. As was demonstrated, AS plays differently from humans, and humans play with certain expectations of how their opponent will play.
|
On November 01 2019 20:20 JimmyJRaynor wrote:On October 31 2019 14:19 MockHamill wrote: Climate change could wipe us out but improved technology and consumer pressure will probably solve that. Nuclear war is still a threat but is unlikely to happen.
It's hilarious that every decade has a new 'end of the world' scenario. Hollywood is special, though; they need even more threats, so they have the "California is going to slide into the ocean" thing that they've been crying about for 50+ years. Humans' self-esteem has not caught up with their success in dealing with infectious diseases, so they dream up new imaginary ways to claim the apocalypse is imminent. It's because we are middle-of-the-food-chain animals at heart, and speedrunning to the top with tools did not let our DNA catch up with that fact. Top-of-the-pyramid animals can just relax once they're done hunting for the day, but we, as prey, can only stay alert; we're hard-wired for this. And if there is nothing to worry about, we just make it up.
|