|
On October 03 2012 16:48 xmungam wrote: When "all" of humanity unites itself... when "all" of us become one... when we all see the light... when 100% of people are happy... when we all understand everything + life + more.
This is not really correct, nor is the OP, which says the singularity is when artificial intelligence surpasses our own. What it really is, is the same as a singularity in space: a point where our logic breaks down and we can't understand what's beyond it. The idea is, however, best exemplified by artificial intelligence... if we make an AI which can make itself smarter, and it does so exponentially, there will eventually be a point where it's so smart that it's beyond our comprehension, and it will STILL develop exponentially, so we will never be able to catch up.
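To make the compounding intuition concrete, here is a toy sketch in Python. All the numbers (growth rates, starting capabilities) are invented for illustration; the only point is that a capability which improves in proportion to itself outruns one that improves by a fixed amount, and the gap never closes.

    # Toy model of recursive self-improvement (all parameters invented).
    # The AI's gain each step is proportional to its current capability
    # (compounding); the human baseline gains a fixed amount per step.
    ai = 1.0
    human = 1.0
    AI_RATE = 0.10      # assumed 10% self-improvement per step
    HUMAN_GAIN = 0.10   # assumed fixed human progress per step

    for step in range(1, 101):
        ai *= 1 + AI_RATE      # exponential: gain scales with capability
        human += HUMAN_GAIN    # linear: constant gain
        if step % 25 == 0:
            print(f"step {step:3d}: ai = {ai:12.1f}, human = {human:5.1f}")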
It doesn't have to be AI though. You could make the case that the internet is a form of singularity: people without the concept can't really understand all that becomes possible once it's there; it changes everything.
The idea about "utopia" and everything being awesome after a singularity event is just an assumption: IF an AI reaches the point where we can't understand how it develops anymore, and it's still developing exponentially, it will be able to do things for us that we can't do ourselves, and it could solve our problems in ways we can't possibly imagine.
That, or it could become skynet and kill us all.
|
It's a nice thought, but then, people thought that by the year 2012 we'd have rocket boots and robot butlers. Computers are the shit right now, so when people picture the future, they just extrapolate current trends. In reality what happens is that computer development slows down due to inherent limitations while some unexplored field suddenly bursts with development, enriching life in very basic but practical ways. My money is on genetics and biotech. People are already "connected" to an inane degree; it's time to focus on smart bacteria and genetic enhancement.
|
In general, the next step in technological development involves ways to efficiently turn information into things.
|
I don't think it is possible to reach this "technological singularity", and even if it is, it will take a lot longer than 15-30 years.
The problem is, you can't grow exponentially forever. Everything has a limit; we live in a finite world, and you can't grow to infinity within it. We are already starting to see this with Moore's law: it's breaking down because we are reaching the limit of what we can do with current technology. At some point we will need to invent new technology, like quantum computing; then eventually we might discover the limits of that as well, and the cycle repeats. But no matter how many new technologies we find, there will always be a limit to how much we can improve them, and at some point there might be a limit to how many new technologies we discover.
In summary, this is why I believe the technological singularity isn't possible: even if an AI could be smart enough to understand and modify its own code, eventually it will reach the limit of how much it can improve itself without outside sources, like new processors or a whole new technology.
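The limits argument can be sketched numerically: a hypothetical comparison in Python between unbounded exponential growth and a logistic S-curve that saturates at a ceiling. The ceiling and rate are made-up numbers; only the shapes of the curves matter.

    import math

    CEILING = 1000.0  # hypothetical hard limit of one technology
    RATE = 0.5        # arbitrary growth rate

    def exponential(t):
        # No limit: keeps compounding forever.
        return math.exp(RATE * t)

    def logistic(t):
        # S-curve: starts near 1, flattens out at CEILING.
        return CEILING / (1 + (CEILING - 1) * math.exp(-RATE * t))

    for t in range(0, 31, 5):
        print(f"t = {t:2d}: exponential = {exponential(t):12.1f}, "
              f"logistic = {logistic(t):7.1f}")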
|
The problem with technology is that someone made it and someone can modify it, so they can always put backdoors into it.
|
On October 03 2012 18:44 Destructicon wrote: I don't think it is possible to reach this "technological singularity", and even if it is, it will take a lot longer than 15-30 years.
The problem is, you can't grow exponentially forever. Everything has a limit; we live in a finite world, and you can't grow to infinity within it. We are already starting to see this with Moore's law: it's breaking down because we are reaching the limit of what we can do with current technology. At some point we will need to invent new technology, like quantum computing; then eventually we might discover the limits of that as well, and the cycle repeats. But no matter how many new technologies we find, there will always be a limit to how much we can improve them, and at some point there might be a limit to how many new technologies we discover.
In summary, this is why I believe the technological singularity isn't possible: even if an AI could be smart enough to understand and modify its own code, eventually it will reach the limit of how much it can improve itself without outside sources, like new processors or a whole new technology.
That's the whole point. The AI improves exponentially; when it reaches limitations, it creates new technologies to improve itself. It evolves, but it evolves faster than we do, exponentially, so the gap becomes bigger and bigger. The idea of the singularity isn't based around infinity or a lack of limitations; it's based around our level of understanding, evolution, and progress, and the idea that something which progresses faster will eventually be beyond our understanding.
You could think of it as a god concept: Every "entity" beyond the singularity is like a god to us. We can't understand them, what they are doing, why or how.
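The "jumping to a new technology when the old one saturates" idea can also be sketched: stack several hypothetical S-curves, each with a higher ceiling and a later start, and the combined capability keeps climbing even though every individual technology has a hard limit. All the curve parameters below are invented.

    import math

    def logistic(t, ceiling, rate, midpoint):
        # One technology's S-curve: saturates at its ceiling.
        return ceiling / (1 + math.exp(-rate * (t - midpoint)))

    # (ceiling, rate, midpoint) for three made-up successive technologies,
    # each starting roughly where the previous one flattens out.
    TECHS = [(100.0, 0.8, 5), (1000.0, 0.8, 15), (10000.0, 0.8, 25)]

    for t in range(0, 31, 3):
        total = sum(logistic(t, c, r, m) for c, r, m in TECHS)
        print(f"t = {t:2d}: total capability = {total:9.1f}")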
|
I was recently in an open panel with some professors and tech industry guys, and the one thing they did agree on is that 30 years is a very, very optimistic estimate. Basically, it's not a date based on how close we are, but more a date based on the rate of technological discoveries and general scientific advancement. As far as it being a good thing, the jury is still out on that one.
|
Can I ask, why is it a good thing for this to happen? People talk about AI as if it's obviously something that will benefit the human race - why would it be good to have something disconnected from us all yet self-aware, immortal and self-sustaining? What happens if it decides humanity is too unstable and too big a threat to be allowed to continue?
|
On October 03 2012 19:02 Sanctimonius wrote: Can I ask, why is it a good thing for this to happen? People talk about AI as if it's obviously something that will benefit the human race - why would it be good to have something disconnected from us all yet self-aware, immortal and self-sustaining? What happens if it decides humanity is too unstable and too big a threat to be allowed to continue?
Well, in general scientific advancement is a good thing, but it's still too early to know whether this will be; that's why it's called a singularity, as no one knows what's beyond that point. As for your second fear, it's not realistic; the easy answer is Isaac Asimov's Three Laws of Robotics. There are more complex reasons too, and the majority professional consensus is that it's not a practical fear.
|
On October 03 2012 19:02 Sanctimonius wrote: Can I ask, why is it a good thing for this to happen? People talk about AI as if it's obviously something that will benefit the human race - why would it be good to have something disconnected from us all yet self-aware, immortal and self-sustaining? What happens if it decides humanity is too unstable and too big a threat to be allowed to continue?
It is not a good thing at all; it would only be safe for humanity if we invested more money into research on safe AI than into the actual production of AI itself. Otherwise, what you are describing is the most likely possibility. By the time it happens, the hardware will be sufficient for it to make billions of copies of itself and learn EVERYTHING overnight. Given how radically dependent humanity is on machines, that will be the end of us at that point.
|
This is extremely interesting. I love technology.
|
Recently, there have been a couple of prominent people raising concerns about computers taking over from humans: Stephen Hawking, Bill Gates, Steve Wozniak, and Elon Musk.
"There are two different threat models for AI. One is simply the labor substitution problem. That, in a certain way, seems like it should be solvable because what you are really talking about is an embarrassment of riches. But it is happening so quickly. It does raise some very interesting questions given the speed with which it happens.
Then you have the issue of greater-than-human intelligence. That one, I’ll be very interested to spend time with people who think they know how we avoid that. I know Elon [Musk] just gave some money. A guy at Microsoft, Eric Horvitz, gave some money to Stanford. I think there are some serious efforts to look into could you avoid that problem." - Bill Gates
|
This is a bit of a cult, isn't it?
|
On March 25 2015 00:35 Roe wrote: This is a bit of a cult, isn't it?
Somewhat.
It's also got a counter-cult: the tinfoil-hat, anti-everything crowds claim the NWO will eventually have the few enslave the many via AI, since AI wouldn't have moral problems oppressing humanity.
As with all things, it's likely somewhere in the middle. After all, an awful lot of police departments are looking into, or have already purchased, aerial drones for various reasons.
|
this is absolutely enthralling! the future hold so much promise and potential for human growth!
|
On March 25 2015 02:19 darkscream wrote: Somewhat. It's also got a counter-cult: the tinfoil-hat, anti-everything crowds claim the NWO will eventually have the few enslave the many via AI, since AI wouldn't have moral problems oppressing humanity. As with all things, it's likely somewhere in the middle. After all, an awful lot of police departments are looking into, or have already purchased, aerial drones for various reasons.
The problem with using AI to enslave people is "Why?" If you have an AI smart enough to enslave people, it's also smart enough to do any job you'd need those slaves for.
I think it'll end up being like the Federation in Star Trek. Most people would be content just being pampered by the AI, while a few people wouldn't, and would go on to do great things.
|
Humanity and technology share a symbiotic relationship: we have different strengths and weaknesses and complement each other well. Any hypothetical hyper-intelligent AI would be able to recognize that.
|
On March 25 2015 04:29 l3loodraven wrote: Humanity and technology share a symbiotic relationship: we have different strengths and weaknesses and complement each other well. Any hypothetical hyper-intelligent AI would be able to recognize that.
I am also of that opinion, but it isn't set in stone. Once technology is advanced enough (hundreds of years from now), humans no longer serve a purpose. The raw materials taken up by them could be better used in other ways if you look at it from a purely logical standpoint.
http://en.wikipedia.org/wiki/Composition_of_the_human_body#/media/File:201_Elements_of_the_Human_Body-01.jpg
|
I don't think this will take only 15-30 years. There are still major and complex issues regarding how to actually use the computational power, assuming it progresses smoothly (I think it will hit the size and error-checking ceiling until there are more breakthroughs in physics/chemistry).
So I agree with others who say it's more of a software problem (and even an architectural one), which is NOT progressing at the speed of Moore's law.
|