I'm sure this has lots of typos and grammatical errors, so please bear with me.
Six or seven years ago I typed into Google "what is the meaning of life". The first result was a page essentially about the technological singularity. Looking back on it, I wouldn't be surprised if it ranked first because one or more people working for Google at the time were supporters of the concept.
I believed it would happen then, and I still do today... even more so, actually. In those six or seven years technology has advanced a lot, and now I feel like I can post about the idea here without a huge number of people calling me an idiot or making posts asking why this idea is something they should care about. So here goes.
What is the technological singularity?
There are many trends in technology, and almost all of them work on an exponential basis. A fairly well-known example is Moore's Law, which deals with chip complexity: essentially, exponential growth, roughly a doubling every couple of years, in chip speed versus chip price. For the sake of simplicity this is the only example I will give, but trust me, there are more.
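To make "exponential" concrete, here's a toy sketch in Python (the two-year doubling period and the 1971 Intel 4004 baseline of roughly 2,300 transistors are the usual rule-of-thumb figures, not exact data):

```python
# Toy Moore's Law projection: transistor count doubling every two years.
# Baseline and doubling period are rule-of-thumb assumptions, not measurements.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{transistors(year):,.0f}")
```

Run it and the point is obvious: a constant doubling period multiplies the count by about 32 every decade.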
The reason I bring this up is that all of these trends point at one thing: technology is becoming more and more advanced, and it's doing so faster and faster.
The technological singularity is the point at which an intelligence is created that surpasses human intelligence.
So how do we reach the singularity?
It could happen in many different ways, but two of the most likely and well-known methods are bionics and seed AI.
I'm going to use seed AI as an example first.
Seed AI is essentially the idea of an artificial intelligence with the ability to understand its code, modify its code, and improve its code. If a program could do this, then it could increase its own intelligence at an exponential rate, eventually learn everything in the universe, and life as we know it would change. Seed AI is not fantasy; very brilliant people in the field of AI say it can be done, and projects are underway. In fact, the human brain and seed AI both follow the same basic structure (and the human brain is slowly but surely being reverse engineered). The only difference is that seed AI would be synthetic, not biological. Therefore it would not get bored, it would not sleep, it would not die, it could gain power simply by adding more processors, and I'm sure there are many other benefits that I cannot think of.
Bionics, meanwhile, is essentially the mixture of machinery and biology. In this sense, it means replacing or improving upon organic parts of humans with machine parts. This has already started happening, and there are MANY examples. Here are just a few that I know currently exist:
BrainGate: a computer chip that can be implanted in your brain to let you do things remotely, such as control a mouse cursor.
Bionic ear: allows deaf people to hear. As long as the problem is with your ear and not your brain, it will work.
IIP-Tec retinal implants: let people with damaged retinas see.
Spinal Cord Stimulation System: It's a small chip inserted into your body that helps control pain signals sent through your spinal cord.
BioHybrid Arm: a synthetic arm that you can move through thought. Recently, researchers were able to add the sense of touch to synthetic limbs as well.
Cyberhand: Like the arm I was talking about, except it's a hand.
Rheo Knee: A synthetic knee that adapts to how the user walks.
And this is just what is currently on the market. MIT has developed nanotechnology that can repair damaged neurons in the spine; they've already gotten it to work in mice. Researchers at USC are developing a synthetic hippocampus replacement, which also works in mice. Synthetic brain parts, seriously, that's crazy. One of the most fervent believers in the singularity, as well as a very brilliant man, Ray Kurzweil, recently got an implant in his finger that allows him to sense electromagnetic fields. It works just like any other sense. Just imagine the behind-the-scenes technology that exists in the military. From reading Ray's book (he's one of three or four advisors to the military regarding the budgeting of money for technological research), I know that the military is very close to having nanotech shots that can enhance muscle performance for soldiers, allowing them to carry huge guns or machinery for long distances. And this kind of technology is just the start.
So if the singularity happens, what then?
We will most likely be able to live forever, should we choose. Our bodies will be vastly different, if we even exist within bodies. We will have programmable blood: no more blood cells; instead they will be replaced with nanomachinery. In fact, we most likely would not keep most of the organic parts of our bodies.
We will have virtual reality that most likely will not have restrictions. We will be able to do anything and experience anything.
If warfare still exists, it will be on an entirely different scale (precision-wise). With technology comes more precise warfare: if you look at trends regarding casualties in war, fewer and fewer people have died as technology has advanced, despite the fact that more and more people are living on the planet.
Ray Kurzweil hypothesizes that eventually the entire cosmos will become aware, intelligent, whatever you want to call it. The whole issue of the singularity raises a lot of philosophical questions about the nature of consciousness, however. I plan on making a thread about consciousness soon!
So if this happens, how long until it does?
Most supporters in a position to speculate estimate 15-30 years.
On September 28 2006 18:30 travis wrote: One of the most fervent believers in the singularity, as well as a very brilliant man, Ray Kurzweil, recently got an implant in his finger that allows him to sense electromagnetic fields. It works just like any other sense.
...
So if this happens, how long until it does?
Most supporters in a position to speculate estimate 15-30 years.
But how realistic is this? And what practical benefits would there be to having a machine that can think and improve on itself? I've probably just seen too many sci-fi movies, but what if this super-smart computer tries to kill everyone?
Also, I heard about nanotechnology in my economics class, and how they were planning on developing microscopic machines that would be able to go into a human body, find bacteria or some other problem, and then work on it.
These kinds of things are extremely interesting. I, however, do not believe that a machine can learn/understand more than its creator, only do it faster. Or can a machine understand anything? It can be instructed to give a detailed description of a topic on command, but what is understanding? Can a machine ever comprehend? I don't know, I'm not a brilliant researcher.
On September 28 2006 18:37 rpf289 wrote: This is pretty damn interesting.
But how realistic is this? And what practical benefits would there be to having a machine that can think and improve on itself? I've probably just seen too many sci-fi movies, but what if this super-smart computer tries to kill everyone?
Also, I heard about nanotechnology in my economics class, and how they were planning on developing microscopic machines that would be able to go into a human body, find bacteria or some other problem, and then work on it.
This is nuts.
It's very easy to question the point of doing anything whatsoever.
But this isn't just about increasing machine intelligence. The purpose is to increase human intelligence as well, and the overall goal is to increase quality of life.
On September 28 2006 18:44 WOstick wrote: These kinds of things are extremely interesting. I, however, do not believe that a machine can learn/understand more than its creator, only do it faster. Or can a machine understand anything? It can be instructed to give a detailed description of a topic on command, but what is understanding? Can a machine ever comprehend? I don't know, I'm not a brilliant researcher.
There are problems with semantics. Try as we might, most words are not properly defined. By "understand," do you mean "be aware of the meaning of," or do you mean "be able to respond properly to"?
I'm gonna have to make that other post I was talking about now.
Rofl, I'm imagining a microscopic Duke Nukem that gets implanted in my nose and runs around in my body just owning up any bad guys, making me feel healthier as the days go on. Pretty useful.
The thing that's cool is just how exponential the tech increase really is. As new tech comes in, that tech can be used to research new tech, and THAT tech can be used to research more tech... much like the way humans or rats reproduce at an exponential rate.
Unless we choose to destroy all attempts at building better AI. Which we won't. It's just like the Matrix story; people will only recognise it when it's too late.
And my guess would be a bit longer. In 15-30 years, maybe some organs can be replaced or stuff like that; maybe walking robots will be more common. The mechanical part is not to be underestimated, though; having complete human-like robots (either with human brains or full AI) walking the streets will take a bit longer, like 50-70 years. Also, whether the robots cause accidents or are dangerous will greatly determine the effort put into building them; if someone gets killed too soon, development may be stopped for a good while.
Actroid. This thread needs some chicks at least^^.
Great thread, travis. I love reading about things like this. I've read that in the next 12 years there will probably be a new surgery that will allow humans to see new colors. This is because right now we are only able to distinguish three (red, blue, and I forget the third), but the new surgery will allow a fourth color to be distinguished.
Travis, here's a link to a related story you'll probably enjoy. An inventor machine.
On September 28 2006 18:53 aseq wrote: Unless we choose to destroy all attempts at building better AI. Which we won't. It's just like the Matrix story; people will only recognise it when it's too late.
And my guess would be a bit longer. In 15-30 years, maybe some organs can be replaced or stuff like that; maybe walking robots will be more common. The mechanical part is not to be underestimated, though; having complete human-like robots (either with human brains or full AI) walking the streets will take a bit longer, like 50-70 years. Also, whether the robots cause accidents or are dangerous will greatly determine the effort put into building them; if someone gets killed too soon, development may be stopped for a good while.
Actroid. This thread needs some chicks at least^^.
You're doing what so many other people do: forgetting about the effect of future technology on the ability to develop future technology. It's happened to experts in all sorts of technological fields, and it's why so many predictions from 30 years ago are so far off regarding where technology is today.
How exactly would AI be able to learn something that humans don't know? Like, what are we talking about? How could it learn everything in the universe? Very vague.
It's very vague because I wanted to write a few paragraphs, not an entire book.
AI has already learned things that humans don't know. I'm pretty sure it was AI that computed pi to as many digits as we currently know. There, that's an example.
Yeah, I've been reading this stuff for some time now. I planned to post about it weeks ago, but what do you know? Bam! Temp-banned for trolling.
[warning: slightly biased text ahead] Anyway, I believe the singularity is inevitable, and I see it as a major change in human nature comparable to the rise of Homo sapiens. Also, rpf saying he heard about nanotechnology in economics class, that's pretty cool. I think economists are realizing now that this new tech will have a major impact, to the point of making capitalism (and current production systems) obsolete.
Disclaimer: I am a hater and not up to date on all the Kurzweilians' modern-day thinking.
So... yeah, I hate Kurzweil and the idea of upgrading the human body. I don't think it will work in the next 50 years.
I don't even believe in God, but this is God's territory. It is impossible to trust some human with control of it. It's kind of the same as how many feel about people living forever. Who gets to live forever? If it is me, then I'm for it; if not, I'm very much against it.
I also dislike it because it is practically a religion. Besides the nearly religious belief that it will happen, the thing that I *really* dislike is that, like certain other religions, it attracts people who aren't happy being human. The worst is when people want to get a computer for self-improvement instead of actually trying to improve themselves.
In an ideal scenario, I am attracted to the idea. That scenario was actually in Deus Ex 2, where one of the characters tries to convince you to side with an AI and integrate humanity into it. I can't remember his exact words, but I'll give the gist even though it isn't as appealing. He says that power and influence in current society are determined by things like the circumstances you are born into and how much money you have. But in their ideal society, where every human has the same intelligence and ability, what separates us is our choices, our personal integrity.
If it could happen like that I'd be interested :p
Kurzweil is also an enthusiastic advocate of using technology to achieve immortality. He advocates using nanobots to maintain the human body, but given their present non-existence he adheres instead to a strict daily routine involving ingesting "250 supplements, eight to 10 glasses of alkaline water and 10 cups of green tea" to extend his life until more effective technology is available.
Damn life extending old people. Die already and stop taking up my space. Thankfully, most people are too lazy to take that many supplements :O
I can't believe this was posted in 2006 - that was only 6 years ago.
At this point I would say we are very, very close to a singularity.
I want to know TeamLiquid.net's opinion on starting some sort of global society (via the internet?). I think this is the next step for humanity.
I had a crazy, crazy weekend and I had some really crazy thoughts (obviously). I honestly think we may be about to witness a second coming of Christ or whatever.
The reality is that about 30% of the world KNOWS what the singularity is. That means for every one person who knows, they have to tell two other people.
I realize this is a ridiculous bump, and my post itself is kind of bullshit, but I think this is a topic TL.net needs to address OFFICIALLY.
PS: I think I might be one of the most attractive people in the world... and I also might be psychic.
I can't fucking believe you've known about this for six years, travis... wtf dude, that's so long. I started thinking about this stuff only a year or two ago, I guess.
AAAAAAAAAAA I'm freaking out, but I'm gonna hit post, so GG GN GL HF DQ REAL LIFE
If humanity survives the Singularity, it will probably be like the novel "I Have No Mouth, and I Must Scream" by Harlan Ellison. I read an interesting piece on this matter called "AI will kill our grandchildren." Here is the link:
It will be the end of everything we know, and sadly I doubt there will be room for carbon-based humans. I quote a snippet from the link:
What is certain is that an intelligence that was good at world domination would, by definition, be good at world domination. So if there were a large number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. That is just Darwin's evolution taken to the next level.
Ray Kurzweil wants to resurrect his dead father. Nothing wrong with that, but don't think that guy isn't slightly delusional.
There's no way the singularity will happen in 15-30 years. Moore's Law actually shows that the growth of computing power will slow down in the future if we stick with silicon technology; Michio Kaku explains why below.
I'm an avid reader of the Singularity Institute's www.lesswrong.com, and while they distance themselves from the Kurzweil crowd, I think everyone should check it out. There are great articles on information theory, rationality, and other topics that are only tangentially related to the singularity.
On October 02 2012 17:58 mememolly wrote: Ray Kurzweil wants to resurrect his dead father. Nothing wrong with that, but don't think that guy isn't slightly delusional.
There's no way the singularity will happen in 15-30 years. Moore's Law actually shows that the growth of computing power will slow down in the future if we stick with silicon technology; Michio Kaku explains why below http://youtu.be/bm6ScvNygUU
On October 02 2012 17:31 HowardRoark wrote: If humanity survives the Singularity, it will probably be like the novel "I Have No Mouth, and I Must Scream" by Harlan Ellison. I read an interesting piece on this matter called "AI will kill our grandchildren." Here is the link:
It will be the end of everything we know, and sadly I doubt there will be room for carbon-based humans. I quote a snippet from the link:
What is certain is that an intelligence that was good at world domination would, by definition, be good at world domination. So if there were a large number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. That is just Darwin's evolution taken to the next level.
What's to stop us from simply allowing it to self-replicate to a degree, and stopping at a certain point before it becomes uncontrollable? I don't see a need for anyone to allow it to self-replicate infinitely.
On October 02 2012 17:10 xmungam wrote: AAAAAAAAAAA I'm freaking out, but I'm gonna hit post, so GG GN GL HF DQ REAL LIFE
Are you sure you weren't looking for the High Thread?
While the singularity is an interesting idea, there's no reason to believe it's actually imminent, and it certainly has nothing to do with Christian mythology.
On October 02 2012 17:58 mememolly wrote: Ray Kurzweil wants to resurrect his dead father. Nothing wrong with that, but don't think that guy isn't slightly delusional.
There's no way the singularity will happen in 15-30 years. Moore's Law actually shows that the growth of computing power will slow down in the future if we stick with silicon technology; Michio Kaku explains why below http://youtu.be/bm6ScvNygUU
That's a big if.
The alternatives that could achieve what silicon chips can do are miles off; it's not like we just switch to molecular computers and technological growth spurts again. It will take until the end of the century to achieve anything significant with non-silicon.
On October 02 2012 17:31 HowardRoark wrote: If humanity survives the Singularity, it will probably be like the novel "I Have No Mouth, and I Must Scream" by Harlan Ellison. I read an interesting piece on this matter called "AI will kill our grandchildren." Here is the link:
It will be the end of everything we know, and sadly I doubt there will be room for carbon-based humans. I quote a snippet from the link:
What is certain is that an intelligence that was good at world domination would, by definition, be good at world domination. So if there were a large number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. That is just Darwin's evolution taken to the next level.
I disagree with this; I don't think it's really been thought through. There's way too much apocalyptic sensationalism going around when people talk about AI. I have no idea why people seem to take the Terminator films so literally...
The premise here is that an AI would eventually want to dominate the world, and if it was capable of it, it would succeed. This is another case of humans thinking computers think like humans. We are driven by survival instincts to reproduce and make sure our offspring are as successful as possible. This is the result of millions of years of DNA being copied and recopied, with small differences cropping up every so often. The DNA that codes for characteristics allowing it to reproduce most effectively is the DNA still around today. The article assumes a similar thing would happen with technology, except that technology doesn't reproduce in quite the same way as biology.
However, I think the article misses the basic fact that humans think in terms of war and domination because we are simply vehicles for our genetic code, which commands us to survive and reproduce. If a truly intelligent AI is ever created, it would not have this purpose. It would in fact have no purpose at all other than whatever tasks it is set. It wouldn't even need to survive, other than to perform whatever tasks it's given. It would have no fear of death, since fear of death arises from our need to reproduce and protect offspring. So for an intelligent AI to try to dominate the world, a number of conditions must be met (assuming a human doesn't simply design an AI with the goal of world domination):
1. Multiple AIs would need to be created.
2. At least some of these AIs would have a survival instinct.
3. Some of those AIs would feel that their survival is threatened by other AIs or humans.
4. Some of those would feel that the best response to no. 3 was attacking (not peaceful resolution).
5. Attacking would be done via hardware, not software.
6. AIs wishing to attack via hardware would have a physical way to manifest their will (killer robots, nanobots, etc.).
7. Any AI emerging victorious would have to feel that all of humanity was in some way a threat to it.
To compare with biology again: the leap from life arising to life attacking other life took an incredibly long time, and that was with life already able to act in the physical world. I don't think that humans will create an exponentially increasing intelligence and then immediately be wiped out. Simply having an instinct to survive would be unlikely, and even if it did, why shouldn't it be content (though "content" is not the right word if it is emotionless) simply existing?
We are also overlooking the decisions a being vastly more intelligent than humans might make. Human morality is derived from our evolution, so we have no idea what kind of morality (if any) an AI will have. Maybe none; maybe something left over from an earlier iteration of itself, when it was given some command by humans; maybe it would inherit our morality; or perhaps it would know some kind of higher morality that we can't yet conceive of.
If indeed an AI did develop in a hyper-accelerated evolution like that article says, then how is it different from a new species arising, or indeed a new nation? There is only war if one side feels threatened or wants something the other side has; otherwise, peace is the better course of action.
This is a very interesting idea that I came across a few years ago. However, I'm of the opinion that 10-25 years (accounting for this thread's age) is an extremely generous estimation. As fast as technology is improving, as the Michio Kaku video hinted at, research is a very slow process. Having participated in it (I'm a graduate engineering student) for about a year now, I can tell you that materials research is no exception. A future like (and perhaps even more advanced than) what is seen in Ghost in the Shell does seem like an inevitability, but I don't think it'll be anytime soon.
On September 28 2006 19:36 travis wrote: It's very vague because I wanted to write a few paragraphs, not an entire book.
AI has already learned things that humans don't know. I'm pretty sure it was AI that computed pi to as many digits as we currently know. There, that's an example.
I don't think determining the value of pi to tens of trillions of digits is what BlackJack was referring to as "things that humans don't know," and I agree with him. In terms of an AI discovering "new knowledge," I think along the lines of a completely new idea, or at the very least, something more creative than determining extra digits of a number that humanity has known about for perhaps over 4,000 years.
Also, AI has had nothing to do with calculations of pi (at least any that have been made public). Computers (built and developed by people) using infinite series and iterative algorithms (developed by people) are what we have to thank for the increasing precision of recent (in the history of pi, the mid-1900s is very recent) calculations of pi.
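For the curious, this is roughly what those calculations look like; a minimal sketch (in Python, with plain integer arithmetic) of Machin's 1706 formula, pi = 16*arctan(1/5) - 4*arctan(1/239). Note there's no AI anywhere in it, just a series people worked out by hand centuries ago:

```python
def arctan_inv(x, unity):
    # Gregory series for arctan(1/x), scaled by 'unity', in pure integers:
    # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ...
    power = total = unity // x
    n, sign = 3, -1
    while power:
        power //= x * x
        total += sign * (power // n)
        n += 2
        sign = -sign
    return total

def pi_digits(digits):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
    unity = 10 ** (digits + 10)   # ten guard digits absorb truncation error
    pi = 16 * arctan_inv(5, unity) - 4 * arctan_inv(239, unity)
    return pi // 10 ** 10         # pi * 10**digits as an integer: 31415...

print(pi_digits(50))
```

Record-setting computations use far faster series (Chudnovsky-type), but the principle is the same: a human-derived formula, cranked by a machine.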
On October 02 2012 17:31 HowardRoark wrote: If humanity survives the Singularity, it will probably be like the novel "I Have No Mouth, and I Must Scream" by Harlan Ellison. I read an interesting piece on this matter called "AI will kill our grandchildren." Here is the link:
It will be the end of everything we know, and sadly I doubt there will be room for carbon-based humans. I quote a snippet from the link:
What is certain is that an intelligence that was good at world domination would, by definition, be good at world domination. So if there were a large number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. That is just Darwin's evolution taken to the next level.
You, HowardRoark, have brought up an interesting point about artificial intelligence, but I for one don't believe a computer will ever be smarter than humanity.
I think the real benefit from computers comes in the form of linkage. We live in the age of information, where you can literally type in any question and get an answer.
I think that by linking all of our brains (via the internet) we can solve every problem. Trying to create an AI to do this would take way too long and not work / we would all die.
Also, I agree with Kickboxer.
Humans will always be better than computers.
If you are old, you don't understand: the new generation is coming. The #1 fastest-growing type of information is computer literacy and technology, so think about what the world would be like if you started learning to code in first grade (which is now reality for first graders).
The people who are graduating from college now will change everything.
The reason I bumped this thread was NOT to talk about the "technological singularity"; it was to talk about "THE SINGULARITY." Does this mean I should create a new topic on TL.net for it?
The technological singularity is a pipe dream brought forth by aging pseudo-scientists who can't cope with their mortality and thus postulate that mankind will beat it before they die...
If you believe stuff à la Ray Kurzweil and all these other utopians, you might just as well go back to reading popular science magazines and believing the stuff they promise you for the next 5-10 years.
To believe ANYTHING can keep growing exponentially is just plain silly. Leave it to economists and futurologists to fall for that...
Oh, and I feel like I have to puke every time I see this Michio Kaku guy, with his overblown ego, simplifying things for the layman and putting everything in a way that will get as much attention as possible. Typical US TV personality...
A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo-randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine could ever do anything that we cannot understand, when we built it and it is not alive.
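To illustrate the "set of logic gates" picture with an actual toy example (a minimal sketch, not an argument for either side): here is XOR built from four NAND gates. The same inputs always produce the same output, which is exactly the determinism being described:

```python
def nand(a, b):
    # NAND is a universal gate: any boolean function can be built from it.
    return 1 - (a & b)

def xor(a, b):
    # XOR assembled from four NAND gates; fully deterministic.
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} = {xor(a, b)}")
```

Whether stacking billions of such gates can produce something we no longer understand is, of course, the actual point of contention in this thread.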
On October 02 2012 17:31 HowardRoark wrote: If humanity survives the Singularity, it will probably be like the novel "I Have No Mouth, and I Must Scream" by Harlan Ellison. I read an interesting piece on this matter called "AI will kill our grandchildren." Here is the link:
It will be the end of everything we know, and sadly I doubt there will be room for carbon-based humans. I quote a snippet from the link:
What is certain is that an intelligence that was good at world domination would, by definition, be good at world domination. So if there were a large number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. That is just Darwin's evolution taken to the next level.
What's to stop us from simply allowing it to self-replicate to a degree, and stopping at a certain point before it becomes uncontrollable? I don't see a need for anyone to allow it to self-replicate infinitely.
How can we stop it from getting over the limits we've set for it?
On October 03 2012 08:14 HowitZer wrote: A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo-randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine could ever do anything that we cannot understand, when we built it and it is not alive.
Exactly; we would never be able to create something smarter than ourselves.
Also, think about the possibility of linking up all human minds; that would be so much more powerful than any standalone computer we could build.
What makes tools powerful is the user; this applies to weapons AND the internet.
On October 03 2012 08:14 HowitZer wrote: A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo-randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine could ever do anything that we cannot understand, when we built it and it is not alive.
Exactly; we would never be able to create something smarter than ourselves.
Don't be silly. Every time you read a book you create something smarter than yourself.
On October 03 2012 08:14 HowitZer wrote: A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo-randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine could ever do anything that we cannot understand, when we built it and it is not alive.
All parts of the brain derive their function and program themselves from their surroundings. This is more or less proven by people recovering from strokes, for example, where parts of the brain reprogram themselves to replace the lost parts.
Imagine someone's brain successively being replaced by more and more artificial parts that learn from their surroundings what they should do, the process being done slowly enough that the person's character does not noticeably change. Accomplishing this theoretically only depends on engineering a tiny artificial part that can replace and interface with neurological tissue.
At the end you would have a human person, with a completely artificial brain, and no one would have had to know how to create an actual AI.
On October 03 2012 08:14 HowitZer wrote: A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo-randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine could ever do anything that we cannot understand, when we built it and it is not alive.
Exactly; we would never be able to create something smarter than ourselves.
Also, think about the possibility of linking up all human minds; that would be so much more powerful than any standalone computer we could build.
What makes tools powerful is the user; this applies to weapons AND the internet.
What makes you guys think human brains are different? We are just incredibly complex machines, and our brains are incredibly complex computers. The thought that human intelligence is even close to the upper bound of possible intelligences is untenable.
The definition of AI needed to reach the technological singularity is very different from "robot with feelings/experiences/self-awareness." From an article written by Luke Muehlhauser:
"we will not assume that human-level intelligence can be realized by a classical Von Neumann computing architecture, nor that intelligent machines will have internal mental properties such as consciousness or human-like “intentionality,” nor that early AIs will be geographically local or easily “disembodied.” These properties are not required to build AI, so objections to these claims (Lucas 1961; Dreyfus 1972; Searle 1980; Block 1981; Penrose 1994; van Gelder and Port 1995) are not objections to AI (Chalmers 1996, chap. 9; Nilsson 2009, chap. 24; McCorduck 2004, chap. 8 and 9; Legg 2008; Heylighen 2012) or to the possibility of intelligence explosion (Chalmers, forthcoming). For example: a machine need not be conscious to intelligently reshape the world according to its preferences, as demonstrated by goal-directed “narrow AI” programs such as the leading chess-playing programs."
The quote specifically refers to the Chinese Room objection by John Searle, which states that machines can never truly "understand" the processes they undertake. However, understanding, experience, and feelings are all specific to HUMAN intelligence. There are multiple types of intelligence that could get us to the technological singularity, even intelligence that does not resemble us in the slightest. It is rather inevitable that we will see this singularity occur before the end of the 21st century, given the progress in raw computing power/hardware we make each year; software is the actual bottleneck, and if we don't invest in research on SAFE AI (who cares about safety, amirite?), it will literally be the end of humanity.
Everyone I know in AI research is highly pessimistic with regards to these wishlist items. Economic reality and computational limitations make this kind of future society very very very unlikely to occur in the near future.
Many many many times more likely is nuclear annihilation within 100 years. Sorry.
Also, Moore's Law is running out of time. Chips are already more energy-dense than the sun.
New advances will come in the form of alternative media (biological, massively parallel synthetic, quantum, photonic/Feynman?), and will require new ways of thinking about computing. There are plenty of great ideas that have already been researched, but the cost of bringing them to market is extremely steep when traditional hardware gets the job done.
You'll see cheaper -- more ubiquitous -- machinery in coming years, but the problems of complexity are still wide open.
But keep in mind that economic realities hinder the development of new technology, as well as limit access to technology. Everything has a price.
On September 28 2006 18:30 travis wrote: Just imagine the behind-the-scenes technology that exists in the military. From reading Ray's book (he's one of three or four advisors to the military regarding the budgeting of money for technological research), I know that the military is very close to having nanotech shots that can enhance muscle performance for soldiers, allowing them to carry huge guns or machinery for long distances. And this kind of technology is just the start.
STIMPACKS!!! It will be OP in real life too ^^
I still don't think we will reach the singularity. As people in the thread have previously stated, there might be a machine that can reason faster than a human, but never one that reasons in a way a human cannot.
Whoa, cool bump. This is a very interesting topic for me as well... so many possibilities in the future. Though AI, while certainly possible, will probably take a while to achieve as we imagine it.
However, there is the distinct possibility that there may not be enough latent energy obtainable on Earth (i.e., the sun/available resources on our planet) for us to do certain things, e.g., be self-sustaining over a prolonged period of time, travel to another habitable planet, or create AI.
I still don't know what he means by singularity. I gather it is some form of future utopia. I find it more likely (and safer) to increase human memory with actual computer parts. Human augmentation all the way (Deus Ex > Metal Gear Solid). Imagine a world where you wouldn't forget things you were just thinking about. Human creativity is way beyond anything any AI is ever expected to attain. However, the human brain definitely has flaws.
So the singularity, what is that supposed to mean exactly? That we get to a point where we cannot improve further or what?
On October 03 2012 12:08 Mataza wrote: I still don't know what he means by singularity. I gather it is some form of future utopia. I find it more likely (and safer) to increase human memory with actual computer parts. Human augmentation all the way (Deus Ex > Metal Gear Solid). Imagine a world where you wouldn't forget things you were just thinking about. Human creativity is way beyond anything any AI is ever expected to attain. However, the human brain definitely has flaws.
So the singularity, what is that supposed to mean exactly? That we get to a point where we cannot improve further or what?
It is supposed to mean that humans will be entirely replaced by machines. With technology slowing down, however, I don't see it happening anytime soon. CPU cores are NOT getting much faster now; Moore's law no longer works for single cores, and simply adding more cores requires more energy and space. So this kind of growth will stop, or at least dramatically slow down, soon.
Quite a lot of tech is even getting reversed. We are no longer flying to the moon. We don't even have Concordes anymore. Energy is getting more expensive, since the low-hanging fruit of cheap fossil fuels (especially oil) has already been burned; what remains is harder and slower to get. We don't even have the expertise in nuclear energy anymore: the proponents of thorium or fusion will tell you how frustrated they are by the barely existent support in these areas.
Is singularity possible? Perhaps... but definitely not this century. Likely not the next one either.
On October 03 2012 08:14 HowitZer wrote: A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo-randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine could ever do anything that we cannot understand, when we built it and it is not alive.
Exactly; we would never be able to create something smarter than ourselves.
Don't be silly. Every time you read a book you create something smarter than yourself.
I don't create something smarter than myself; I simply become smarter than I already was.
Really important point here: a book ISN'T SMART. A book, by itself, can do NOTHING. It is only when I, the reader, look at the book that the information it holds becomes something that can be used.
This is the same for computers, in the sense that they are not real agents. A computer, no matter how smart it is, will NEVER go unless we say "go." This is completely untrue of humans, who are SELF-DETERMINED and can make choices for ourselves (even if we use the same processes as computers (e.g., 'gates')).
So the singularity, what is that supposed to mean exactly?
When "all" of humanity units itself... when "all" of us become one... when we all see the light... when 100% of people are happy. when we all understand everything + life + more.
I find it more likely(and safe) to increase human memory with actual computer parts.
So you mean like the 500 gigabyte harddrive on my computer? where i can store 10000000000000 definitions and statements and never have to look back? Or do you mean the internet, where i can type anything into google and get 10000000 results??
On October 03 2012 08:14 HowitZer wrote: A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo-randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine could ever do anything that we cannot understand, when we built it and it is not alive.
Exactly; we would never be able to create something smarter than ourselves.
Don't be silly. Every time you read a book you create something smarter than yourself.
I don't create something smarter than myself; I simply become smarter than I already was.
So you claim that, at t1, you are identical to yourself at t2?
(Edit: why would anyone want to bother making an artificial intelligence when there are already so many intelligences running around?)
The thing I want to talk about right now is this: THE INTERNET!!!!!!!!!!
Do you realize that we are communicating using mostly our minds?
I am writing this down... and now you can read it and respond... what the fuck? We don't even know each other or have ANY idea where the other person is, and yet we can TALK and DISCUSS and LEARN -- RIGHT NOW!!!!!!!!!
LOOK AT REDDIT: literally millions of people log in and COMMENT. This is undeniably a conversation happening at the scale of 100,000 people. W - T - F.
I dream of a day where everyone goes on the internet at once.
On October 03 2012 08:14 HowitZer wrote: A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo-randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine could ever do anything that we cannot understand, when we built it and it is not alive.
Exactly; we would never be able to create something smarter than ourselves.
Don't be silly. Every time you read a book you create something smarter than yourself.
I don't create something smarter than myself; I simply become smarter than I already was.
So you claim that, at t1, you are identical to yourself at t2?
On October 03 2012 17:02 xmungam wrote: LOOK AT REDDIT: literally millions of people log in and COMMENT. This is undeniably a conversation happening at the scale of 100,000 people. W - T - F.
Ah, yes, but what are the discursive characteristics of this medium? (I think it is too much noise.)
On October 03 2012 08:14 HowitZer wrote: A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo-randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine could ever do anything that we cannot understand, when we built it and it is not alive.
Exactly; we would never be able to create something smarter than ourselves.
Don't be silly. Every time you read a book you create something smarter than yourself.
I don't create something smarter than myself; I simply become smarter than I already was.
So you claim that, at t1, you are identical to yourself at t2?
On October 03 2012 16:48 xmungam wrote: When "all" of humanity unites itself... when "all" of us become one... when we all see the light... when 100% of people are happy... when we all understand everything + life + more.
This is not really correct, nor is the OP, which says the singularity is when artificial intelligence surpasses our own. What it really is, is the same as a singularity in space: a point where our logic breaks down and we can't understand what's beyond it. The idea is best exemplified by artificial intelligence, though: if we make an AI which can make itself smarter, and it does so exponentially, there will eventually be a point where it's so smart that it's beyond our comprehension, and it will STILL be developing exponentially, so we will never be able to catch up.
It doesn't have to be AI, though. You could make the case that the internet is a form of singularity: people without the concept can't really understand all that is possible once it's there; it changes everything.
The idea of "utopia" and everything being awesome after a singularity event is just an assumption: IF an AI reached the point where we could no longer understand how it develops, and it was still developing exponentially, it would be able to do things for us that we can't do ourselves, and it could solve our problems in ways we can't possibly imagine.
It's a nice thought, but then, people once thought that by 2012 we'd have rocket boots and robot butlers. Computers are the shit right now, so when people picture the future, they just extrapolate current trends. In reality, what happens is that computer development slows down due to inherent limitations while some unexplored field suddenly bursts with development, enriching life in very basic but practical ways. My money is on genetics and biotech. People are already "connected" to an inane degree; it's time to focus on smart bacteria and genetic enhancement.
I don't think it is possible to reach this "technological singularity," and even if it is, it will take a lot longer than 15-30 years.
The problem is, you can't grow exponentially forever. Everything has a limit; we live in a finite world, and you can't grow to infinity within it. We are already starting to see this with Moore's law: it's breaking down because we are reaching the limit of what we can do with current technology. At some point we will need to invent new technology, like quantum computing; then eventually we might discover the limits of that as well, and the cycle repeats. But no matter how many new technologies we find, there will always be a limit to how much we can improve them, and at some point there might be a limit to how many new technologies we discover.
In summary, this is why I believe the technological singularity isn't possible: even if an AI were smart enough to understand and modify its own code, it would eventually reach the limit of how much it can improve itself without outside resources, like new processors or a whole new technology.
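The "everything has a limit" argument is the classic S-curve: growth that looks exponential early on flattens as it approaches a ceiling. A toy comparison (the parameter values here are arbitrary, purely for illustration):

```python
import math

def exponential(t, x0=1.0, r=0.5):
    # Unbounded exponential growth.
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, limit=1000.0):
    # Same early growth rate, but saturating at 'limit' (an S-curve).
    return limit / (1 + (limit / x0 - 1) * math.exp(-r * t))

for t in range(0, 31, 5):
    print(f"t={t:2d}  exp={exponential(t):12.1f}  logistic={logistic(t):8.1f}")
```

The two curves are nearly indistinguishable at first, which is why extrapolating an exponential from its early phase is so tempting, and so unreliable.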
On October 03 2012 18:44 Destructicon wrote: I don't think it is possible to reach this "technological singularity," and even if it is, it will take a lot longer than 15-30 years.
The problem is, you can't grow exponentially forever. Everything has a limit; we live in a finite world, and you can't grow to infinity within it. We are already starting to see this with Moore's law: it's breaking down because we are reaching the limit of what we can do with current technology. At some point we will need to invent new technology, like quantum computing; then eventually we might discover the limits of that as well, and the cycle repeats. But no matter how many new technologies we find, there will always be a limit to how much we can improve them, and at some point there might be a limit to how many new technologies we discover.
In summary, this is why I believe the technological singularity isn't possible: even if an AI were smart enough to understand and modify its own code, it would eventually reach the limit of how much it can improve itself without outside resources, like new processors or a whole new technology.
That's the whole point. The AI improves exponentially; when it reaches limitations, it creates new technologies to improve itself. It evolves, but it evolves faster than we do, exponentially, so the gap becomes bigger and bigger. The idea of the singularity isn't based on infinity or a lack of limitations; it's based on our level of understanding, evolution, and progress, and the idea that something which progresses faster will eventually be beyond our understanding.
You could think of it as a god concept: every "entity" beyond the singularity is like a god to us. We can't understand them, what they are doing, why, or how.
I was recently at an open panel with some professors and tech industry guys, and the one thing they did agree on is that 30 years is a very, very optimistic estimate. Basically, it's not a date based on how close we are, but rather one based on the rate of technological discoveries and general scientific advancement. As for it being a good thing, the jury is still out on that one.
Can I ask, why is it a good thing for this to happen? People talk about AI as if it's obviously something that will benefit the human race - why would it be good to have something disconnected from us all yet self-aware, immortal and self-sustaining? What happens if it decides humanity is too unstable and too big a threat to be allowed to continue?
On October 03 2012 19:02 Sanctimonius wrote: Can I ask, why is it a good thing for this to happen? People talk about AI as if it's obviously something that will benefit the human race - why would it be good to have something disconnected from us all yet self-aware, immortal and self-sustaining? What happens if it decides humanity is too unstable and too big a threat to be allowed to continue?
Well, in general, scientific advancement is a good thing, but it's still too early to know whether this will be; that's why it's called a singularity, as no one knows what's beyond that point. As for your second fear, it's not realistic; the easy answer is Isaac Asimov's three laws. There are more complex reasons too, and the majority professional consensus is that it's not a practical fear.
On October 03 2012 19:02 Sanctimonius wrote: Can I ask, why is it a good thing for this to happen? People talk about AI as if it's obviously something that will benefit the human race - why would it be good to have something disconnected from us all yet self-aware, immortal and self-sustaining? What happens if it decides humanity is too unstable and too big a threat to be allowed to continue?
It is not a good thing at all; it would only be safe for humanity if we invested more money in research on safe AI than in the actual production of AI itself. Otherwise, what you are describing is the most likely possibility. By the time it happens, the hardware will be sufficient for it to make billions of copies of itself and learn EVERYTHING overnight. Given how radically dependent humanity is on machines, it will be the end of us at that point.
"There are two different threat models for AI. One is simply the labor substitution problem. That, in a certain way, seems like it should be solvable because what you are really talking about is an embarrassment of riches. But it is happening so quickly. It does raise some very interesting questions given the speed with which it happens.
Then you have the issue of greater-than-human intelligence. That one, I’ll be very interested to spend time with people who think they know how we avoid that. I know Elon [Musk] just gave some money. A guy at Microsoft, Eric Horvitz, gave some money to Stanford. I think there are some serious efforts to look into could you avoid that problem." - Bill Gates
On March 25 2015 00:35 Roe wrote: This is a bit of a cult isn't it?
somewhat
It's also got a counter-cult: the tinfoil-hat, anti-everything crowds claim the NWO will eventually have the few enslave the many via AI, since AI wouldn't have moral problems with oppressing humanity.
As with all things, it's likely somewhere in the middle. After all, an awful lot of police departments are looking into, or have already purchased, aerial drones for various reasons.
On March 25 2015 00:35 Roe wrote: This is a bit of a cult isn't it?
somewhat
It's also got a counter-cult: the tinfoil-hat, anti-everything crowds claim the NWO will eventually have the few enslave the many via AI, since AI wouldn't have moral problems with oppressing humanity.
As with all things, it's likely somewhere in the middle. After all, an awful lot of police departments are looking into, or have already purchased, aerial drones for various reasons.
The problem with using AI to enslave people is "Why?" If you have AI smart enough to enslave people, it's also smart enough to do any job you'd need those slaves for.
I think it'll end up being like the Federation in Star Trek. Most people would be content just being pampered by the AI, while a few people wouldn't, and would go on to do great things.
Humanity and technology share a symbiotic relationship; we have different strengths and weaknesses and complement each other well. Any hypothetical hyper-intelligent AI would be able to recognize that.
On March 25 2015 04:29 l3loodraven wrote: Humanity and technology share a symbiotic relationship; we have different strengths and weaknesses and complement each other well. Any hypothetical hyper-intelligent AI would be able to recognize that.
I am also of that opinion, but it isn't set in stone. Once technology is advanced enough (hundreds of years from now), humans no longer serve a purpose. The raw materials taken up by them could be better used in other ways, if you go purely from a logical standpoint.
I don't think this will take 15-30 years. There are still major and complex issues regarding how to actually use the computational power, assuming that it progresses smoothly (I think it will hit the size and error-checking ceiling until there are more breakthroughs in physics/chemistry).
So I agree with others who say it's more of a software problem (and even an architectural one), which is NOT progressing at the speed of Moore's law.
The whole singularity debate is ridiculous; it assumes that our computers are on their way to becoming intelligent. Keep perfecting them, and one day, "tadaaa," they will be thinking by themselves.
The problem is that a computer is just a powerful calculating machine, and intelligence and consciousness probably have very, very little to do with computing and calculating. Computers can mimic (and it's interesting that Kurzweil, who is the pope of the transhumanist bullshit, works in speech recognition these days: something that looks like intelligence but has nothing to do with it), but they don't think one little bit.
The other problem is that we are probably at point zero in understanding intelligence and consciousness. Unlike computing, consciousness is not algorithm-based. Penrose came up with the idea that quantum mechanics could explain consciousness, but that's very controversial. And we can't build a machine that does something we have absolutely no clue about.
Now, transhumanists say, "oh, but we are going so fast, it has to happen by (insert a completely random date)." The thing is, you can make predictions about curing cancer, because we are on our way; you might be wrong, but you can. But you can't make predictions about something for which we are at point zero. It could take 15 years or three thousand.
I know Hawking, Gates, and some other people talk about it for the publicity, because of people's "Frankenstein complex," to quote Asimov. Everybody loves the chill of Skynet stories. It doesn't make them right.
On March 25 2015 17:11 Biff The Understudy wrote: The whole singularity debate is ridiculous; it assumes that our computers are on their way to becoming intelligent. Keep perfecting them, and one day, "tadaaa," they will be thinking by themselves.
The problem is that a computer is just a powerful calculating machine, and intelligence and consciousness probably have very, very little to do with computing and calculating. Computers can mimic (and it's interesting that Kurzweil, who is the pope of the transhumanist bullshit, works in speech recognition these days: something that looks like intelligence but has nothing to do with it), but they don't think one little bit.
The other problem is that we are probably at point zero in understanding intelligence and consciousness. Unlike computing, consciousness is not algorithm-based. Penrose came up with the idea that quantum mechanics could explain consciousness, but that's very controversial. And we can't build a machine that does something we have absolutely no clue about.
Now, transhumanists say, "oh, but we are going so fast, it has to happen by (insert a completely random date)." The thing is, you can make predictions about curing cancer, because we are on our way; you might be wrong, but you can. But you can't make predictions about something for which we are at point zero. It could take 15 years or three thousand.
I know Hawking, Gates, and some other people talk about it for the publicity, because of people's "Frankenstein complex," to quote Asimov. Everybody loves the chill of Skynet stories. It doesn't make them right.
I think one of the issues with this line of thinking is that it's based on human intelligence as we know it. AI doesn't have to be smart "like us." AI usually involves some form of self-hacking, where it can take information, interpret it, and modify outcomes accordingly.
It's not hard to see how that takes it to a place where it sees humanity as a threat to both humanity and the AI, and acts accordingly to preserve itself and humanity through means we don't agree with (The Matrix / A.I.).
I mean, there is a very real possibility of unmanned military vehicles/weapons frames being programmed with limited forms of automated self-preservation. If that automated self-preservation goes haywire and misidentifies friend and foe, you can see how we're already on that road.
Imagine a quantum computer with a more advanced self-preservation directive, and one can see pretty easily how, even without "consciousness," such a system could become a threat. Or how, even if there weren't a self-preservation directive, one might arise as a defense against getting corrupted/breached/etc.
It's not something that's likely 10, 20, or even 30 years out, but 100 is totally possible, maybe as few as 50. But I'd bet WWIII puts the brakes on that before then.
Consciousness is just a consequence of sensory inputs, neuron anatomy, neuronal wiring, peptide signalling, synaptic plasticity, hormonal states and neuronal integration. It is not a mechanism per se.
Getting this right depends on literally billions of molecular parameters that are the result of evolution. Each nerve cell has its own unique composition and identity, thanks to transcription factors that define which genes it expresses. Each nerve cell is connected to 10-20 other nerve cells via thousands of synapses. Each. Single. One. We have 100 billion nerve cells. That's 1000 trillion synaptic connections.
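For what it's worth, the arithmetic checks out, assuming the roughly 10,000 synapses per cell that those totals imply:

```python
neurons = 100e9               # ~100 billion nerve cells, per the post above
synapses_per_neuron = 10_000  # assumption implied by the quoted totals
total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.0e}")  # 1e+15, i.e. one thousand trillion
```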
Each synaptic connection has its own meaning as a result of millions of years of evolution. And the whole thing is highly adaptive: synaptic connections can become stronger and weaker, even whole new neurons can form in special regions.
It is quite clear that the von Neumann architecture cannot even begin to capture this complexity. To build a computer like our brain, we would have to first understand our brain entirely. However, that is not going to happen within the next 100 years.
There are attempts to mimic the architecture of our brain (see IBM's brain-like chip), but these capture almost nothing of what's really going on.
I think one of the issues with this line of thinking is that it's based on human intelligence as we know it. [...] There is a very real possibility of unmanned military vehicles or weapons platforms being programmed with limited forms of automated self-preservation. [...]
Some sort of military robot turning against its owner is quite a stretch from a technological singularity.
On March 25 2015 18:23 excitedBear wrote: To build a computer like our brain, we would have to first understand our brain entirely.
But in theory, if you could scan the entire brain and simulate it in a computer program, you would only have to understand how it works in general, rather than precisely what every single synaptic connection means. Sure, there could be other unforeseen complications, so nothing is certain, but reverse-engineering the brain can come after.
Of course, the scanning equipment and computing power aren't there yet. But they're working on it.
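As a rough illustration of the "simulate it generically" idea, here's a minimal leaky integrate-and-fire sketch in Python. Everything here is an assumption for illustration: a real emulation would use connectivity read out of a scan rather than random weights, vastly more cells, and a far richer neuron model.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000                      # toy network; a brain has ~100 billion cells
# Stand-in for scanned wiring: in a real emulation this matrix would come
# from the brain scan, not from a random number generator.
weights = rng.normal(0.0, 0.5, (n, n)) / np.sqrt(n)

v = np.zeros(n)               # membrane potentials
threshold, leak = 1.0, 0.95   # illustrative constants

for step in range(100):
    spikes = v >= threshold                  # cells at threshold fire
    v[spikes] = 0.0                          # fired cells reset
    # Leak a little charge, integrate input from firing neighbours, add noise.
    v = leak * v + weights @ spikes + rng.normal(0.0, 0.1, n)

print(f"{int(spikes.sum())} cells firing at the final step")
```

The neuron model here is completely generic; all the specificity would live in the weights, which is exactly the point being argued above.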
Some sort of military robot turning against its owner is quite a stretch from a technological singularity.
Oh no, I was saying that, provided we avoid WW III, they have a better chance of turning against us with a "dumb" AI before they're "smart like us".
The emphasis being that AI doesn't need to be 'like' our brains to perform well enough to be a significant threat.
I'm not convinced the way our brain functions is the 'best' way to think, either. So I'm not sure AI has to think like we do for it to be practically more intelligent in many ways. Ants aren't extremely intelligent, but they get quite a bit done and will probably be here after we're gone.