|
I don't know if this has been discussed on these forums before. There are many views of what the technological singularity is, and it's kind of hard to explain, so I'll just quote the wiki to get us started
http://en.wikipedia.org/wiki/Technological_singularity
The technological singularity is the hypothesized creation, usually via AI or brain-computer interfaces, of smarter-than-human entities who rapidly accelerate technological progress beyond the capability of human beings to participate meaningfully in said progress. Futurists have varying opinions regarding the timing and consequences of such an event.
Vernor Vinge originally coined the term "singularity" in observing that, just as our model of physics breaks down when it tries to model the singularity at the center of a black hole, our model of the world breaks down when it tries to model a future that contains entities smarter than human.
Statistician I. J. Good first explored the idea of an "intelligence explosion", arguing that machines surpassing human intellect should be capable of recursively augmenting their own mental abilities until they vastly exceed those of their creators. Vernor Vinge later popularized the Singularity in the 1980s with lectures, essays, and science fiction. More recently, some AI researchers have voiced concern over the Singularity's potential dangers.
Some futurists, such as Ray Kurzweil, consider it part of a long-term pattern of accelerating change that generalizes Moore's law to technologies predating the integrated circuit. Critics of this interpretation consider it an example of static analysis...(for longer explanation click the link above;) )
Here are some links if you want more information:
http://www.singinst.org/
"SIAI is a not-for-profit research institute in Palo Alto, California, with three major goals: furthering the nascent science of safe, beneficial advanced AI through research and development, research fellowships, research grants, and science education; furthering the understanding of its implications to society through the AI Impact Initiative and annual Singularity Summit; and furthering education among students to foster scientific research."
http://www.singinst.org/media/singularitysummit2006
Videos from the 2006 Singularity Summit. Audio from the 2007 Summit can also be found on the site.
http://www.ted.com/index.php/talks/view/id/38?gclid=CNb_4PeCvI8CFQXOQwod23vvdA
This is a TED talk given by Ray Kurzweil on accelerating change. I really recommend this 23-minute talk to anyone who is new to the subject of accelerating change and the singularity.
Let me continue by arguing for why I think that smarter-than-human intelligence will inevitably come around sooner or later. Let me start by saying that I am an atheist and a materialist in the philosophical sense. That is, I don't believe in a "soul" or any other form of dualism. I believe that everything can be explained by the laws of nature, and that includes our brains and minds.
One big reason for this is simply that I believe in Darwin's theory of natural selection. If we did have a soul, or if there was something beyond this world about humans and our minds, where along our evolution did we get it? Which generation had parents without souls but children who suddenly had them? Was it in the change from monkey to human? Or back when we were reptiles? Or bacteria? Or maybe the chemical molecules around the origin of life on Earth had souls?
Since there is nothing special about the human brain or mind, I see no reason why it can't be reproduced, given that it has been done at least once in nature. With our accelerating change, I think we will overtake it soon. There is the argument that it might be impossible for an intelligence to understand itself: to understand itself it would have to be that much smarter, and then it would become even harder to understand, and so on, never overcoming that barrier. I don't think this is true; a lot of progress has been made in brain scanning and in understanding the brain over the last couple of years. And even if it were true, and a conscious creator couldn't create an intelligence greater than its own, we could still reach the singularity by simulating an accelerated version of evolution, either in computer software or organically.
I think we live in interesting times. This whole thing might sound like the nerds' rapture or something, but the Singularity is an entirely secular, non-mystical process, not the culmination of any form of religious prophecy or destiny. Since this subject is so big, I can hardly give all the facts and links in a single post, so if you are interested in artificial intelligence and/or technological change, I hope you follow the links I gave above. And please discuss!
|
Sweden33719 Posts
Too interesting a thread to have it drop out of sight, gonna bump it and hope people with more useful input than my own see it
|
|
Artosis
United States2140 Posts
|
How could I have missed that thread? Oh well, it has been more than a year since then, and it deserves to be discussed again. Great OP in that thread by travis; I hope he joins the discussion in this one too. That last thread only got around 20 comments anyway. I hope that's not a sign that people don't find this interesting :S Can't really understand how you couldn't.
|
I saw a quote that went something like: 'the singularity is intelligent design for people with high IQ'
I must admit I'm not an expert on the topic, though I am fascinated by it, and I have read Kurzweil's book The Singularity Is Near.
I personally find it most plausible (assuming knowledge is not restricted for whatever reasons: political, economic, etc.) that for some time (how many years I don't know) we will enhance our biochemical intelligence through knowledge and manipulation of genetics, neural effectors, and environment. We may integrate silicon 'neurons' with biochemical neurons. This will go on for a while before we create brand-new intelligence with silicon chips or whatever. My point is that maybe, instead of us creating a new intelligence that will dominate poor humans, humanity itself will be enhanced as knowledge progresses. As for reaching the singularity, it seems a plausible idea, but who can really know?
|
On November 02 2007 04:10 Blue wrote: I saw a quote that went something like: 'the singularity is intelligent design for people with high IQ'
I must admit I'm not an expert on the topic, though I am fascinated by it, and I have read Kurzweil's book The Singularity Is Near.
I personally find it most plausible (assuming knowledge is not restricted for whatever reasons: political, economic, etc.) that for some time (how many years I don't know) we will enhance our biochemical intelligence through knowledge and manipulation of genetics, neural effectors, and environment. We may integrate silicon 'neurons' with biochemical neurons. This will go on for a while before we create brand-new intelligence with silicon chips or whatever. My point is that maybe, instead of us creating a new intelligence that will dominate poor humans, humanity itself will be enhanced as knowledge progresses. As for reaching the singularity, it seems a plausible idea, but who can really know?
I tend to think this scenario is much more likely to happen than creating AI that surpasses our own as well, but I am not read up on the subject yet, so I can still be swayed either way ;p
|
It's crazy how AI would somehow surpass human minds when humans invented them; reminds me of the movie I, Robot with Will Smith
|
On November 02 2007 04:16 il0seonpurpose wrote: It's crazy how AI would somehow surpass human minds when humans invented them; reminds me of the movie I, Robot with Will Smith
Probably because this was the basis for that movie ;p
edit: OK, obviously it is based on Isaac Asimov's book, but the idea for that was based on this.
|
You cannot stop judgement day.. Only delay it.
|
On November 02 2007 04:10 Blue wrote: I saw a quote that went something like: 'the singularity is intelligent design for people with high IQ'
I must admit I'm not an expert on the topic, though I am fascinated by it, and I have read Kurzweil's book The Singularity Is Near.
I personally find it most plausible (assuming knowledge is not restricted for whatever reasons: political, economic, etc.) that for some time (how many years I don't know) we will enhance our biochemical intelligence through knowledge and manipulation of genetics, neural effectors, and environment. We may integrate silicon 'neurons' with biochemical neurons. This will go on for a while before we create brand-new intelligence with silicon chips or whatever. My point is that maybe, instead of us creating a new intelligence that will dominate poor humans, humanity itself will be enhanced as knowledge progresses. As for reaching the singularity, it seems a plausible idea, but who can really know?
Yeah, I think there are both technological and moral/philosophical differences between improving ourselves and creating new intelligence. One thing that comes to mind: humans improving the DNA of each new generation would take 20 or so years per "update", while source code changing itself could "update" many times every second. Humans who improve themselves might not run the same risk of a Matrix or 2001 scenario. But humans who improve themselves might do those same things to other humans who aren't improved... There is so much we really can't know; that's why it's called the singularity ;e
I think one of the most interesting things about the whole concept is that even if 80% of it is exaggerated or false, it would still change a lot of what people on average predict about the future. Let's forget about the singularity itself and just look at accelerating change. Even if we just follow Kurzweil's exponential graphs 10-20 years into the future, a LOT of things will change, far beyond what most people imagine.
|
And I agree that it is just as plausible that IF we, or what we create, ever move beyond Earth, it might just as well be as nanobots or light waves as in classic sci-fi spaceships. But I'm sure it would be much harder to make an interesting sci-fi movie with nanobots or light waves moving around :p The only way to make a sci-fi movie interesting is space opera. But this is a bit off topic >)
|
Calgary25977 Posts
Good read, something I've never heard about. That's crazy to think about, but it makes sense. I don't know how plausible it is, but it's scary to think of a nation controlling these "knowledge bots" and basically exploiting all their discoveries.
|
By the way, TED is a fantastic website in general, with lots of interesting talks. Here is one by Jeff Hawkins that is somewhat related to the present topic http://www.ted.com/index.php/talks/view/id/125
He talks about brain theory, and about creating intelligent machines with silicon.
|
This is a bit off-topic, but I'm just gonna plug some great sci-fi, as I assume people interested in topics like the technological singularity would appreciate it
ghost in the shell
Both the movies and the series (Stand Alone Complex) are outstanding. They provide deep and relevant insight into what society and individual identity could become in the near future with the advent of brain-computer interfaces, cyborg technology, viable AIs, and an internet that has spread across the entire world.
Never mind the fact that it's anime, if that turns you off; it's just great science fiction regardless. The original film inspired "The Matrix", and if you're interested, a lot of the ideas stem from William Gibson's cyberpunk novel "Neuromancer".
|
intrigue
Washington, D.C9933 Posts
neuromancer is amazing, one of my all-time favorites : ]
|
Oh, and I forgot the Dune series, though it veers more toward space opera.
At its core is a humanity that has abandoned the advance of technology after a massive, destructive, centuries-long galactic conflict between humanity and human-created "thinking machines" (AI), relying instead on superhuman powers developed by various organizations through the use of the superdrug melange, the spice.
|
I don't have time to explore the links in the OP right now, but I don't see how we could create anything "smarter" than us. Machines are fast, not smart. Unless they're completely replacing the Turing Machine model, which modern computing is based on, I don't see that changing.
I'm skeptical, but I'll explore the topic further when I have time.
edit: the posts in this thread revere machine intelligence like it's magic LOL
|
Iain M Banks. The Culture. Hence my name. Wikipedia it, people!
|
A lot of it is just speculation, and personally, I don't think that anything like this will happen in our lifetime at least.
|
Thanks a lot for the links, DrainX!
I'm very interested in the possibilities and limits of AI, and I have been doing some simple programming with it.
HeadBangaa, you are right that computers can't be "smarter" than humans when they just follow instructions to perform computations, but there are other possibilities. The only one that I know of is the "neural network" approach, where you simulate the connected cells in a brain.
http://en.wikipedia.org/wiki/Artificial_neural_network
|
On November 02 2007 07:21 jtan wrote: Thanks a lot for the links, DrainX! I'm very interested in the possibilities and limits of AI, and I have been doing some simple programming with it. HeadBangaa, you are right that computers can't be "smarter" than humans when they just follow instructions to perform computations, but there are other possibilities. The only one that I know of is the "neural network" approach, where you simulate the connected cells in a brain. http://en.wikipedia.org/wiki/Artificial_neural_network
Even then, it still seems like a paradox.
If a programmer designed a mechanism which produces better-than-human results, then the moment the design was completed, the mechanism would lose that attribute: human knowledge was increased, and it's still just a really fast human mind. An inductive proof seems appropriate here.
I don't see how a virtual neural network could be instantiated without being completely deterministic, unless modern computing itself were turned on its head.
|
On November 02 2007 07:45 HeadBangaa wrote: On November 02 2007 07:21 jtan wrote: Thanks a lot for the links, DrainX! I'm very interested in the possibilities and limits of AI, and I have been doing some simple programming with it. HeadBangaa, you are right that computers can't be "smarter" than humans when they just follow instructions to perform computations, but there are other possibilities. The only one that I know of is the "neural network" approach, where you simulate the connected cells in a brain. http://en.wikipedia.org/wiki/Artificial_neural_network Even then, it still seems like a paradox. If a programmer designed a mechanism which produces better-than-human results, then the moment the design was completed, the mechanism would lose that attribute: human knowledge was increased, and it's still just a really fast human mind. An inductive proof seems appropriate here. I don't see how a virtual neural network could be instantiated without being completely deterministic, unless modern computing itself were turned on its head.
Yeah, but that's just the thing! It is essentially deterministic, but in practice we could never predict what structure will evolve inside the network.
Imagine that you plug in a camera that films my handwriting. You check the output of the system, and when the system outputs the right letters, you send a positive signal backwards, strengthening the connections used for coming up with that letter. This way the system keeps rearranging itself and improves over time. Here is an example of pretty much what I described; it really works: http://www.codeproject.com/library/NeuralNetRecognition.asp
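In Python terms, the "strengthen the connections that gave the right answer" loop can be sketched with a single artificial neuron trained by the classic perceptron rule. This is my own toy illustration, not the CodeProject code itself (that one is a full multi-layer network):

```python
# Toy sketch: a single neuron whose connection weights are nudged
# whenever its output disagrees with the desired answer.

def step(x):
    return 1 if x > 0 else 0

def predict(w, b, inputs):
    return step(sum(wi * xi for wi, xi in zip(w, inputs)) + b)

def train(samples, epochs=100, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            err = target - predict(w, b, inputs)
            # positive feedback strengthens, negative feedback weakens
            w = [wi + lr * err * xi for wi, xi in zip(w, inputs)]
            b += lr * err
    return w, b

# Learn logical OR purely from feedback on examples
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(samples)
results = [predict(w, b, inputs) for inputs, _ in samples]
print(results)  # [0, 1, 1, 1] once training has converged
```

The network is never told the rule for OR; it only ever sees feedback on its outputs, which is the point being made above.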
Also, being a materialist, I think that the human mind is deterministic in the exact same way.
|
On November 02 2007 07:45 HeadBangaa wrote: Even then, it still seems like a paradox. If a programmer designed a mechanism which produces better-than-human results, then the moment the design was completed, the mechanism would lose that attribute: human knowledge was increased, and it's still just a really fast human mind. An inductive proof seems appropriate here. I don't see how a virtual neural network could be instantiated without being completely deterministic, unless modern computing itself were turned on its head.
By including true random numbers in its inputs.
True random numbers (at least ones based on quantum randomness) aren't hard to generate with the proper hardware; some CPUs have built-in generators. Randomness is also injected by "noisy" inputs such as microphones and cameras.
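For what it's worth, on an ordinary machine you can already pull bytes from the OS entropy pool, which is seeded from hardware noise (interrupt timing, and on-chip generators where present). A quick Python sketch; this is not a certified quantum source, just what most machines expose:

```python
import os
import secrets

# 16 bytes straight from the kernel's entropy pool
# (seeded from hardware noise sources)
raw = os.urandom(16)

# an unpredictable coin flip built on the same pool
coin = secrets.randbelow(2)

print(len(raw), coin)
```

Feeding bytes like these into a network's inputs is enough to make its behavior non-deterministic from the programmer's point of view, which is all the argument above needs.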
Anyway, I don't see determinism as relevant. Predictability and power to reach useful conclusions are the relevant considerations in comparing a machine intelligence to a human intelligence.
|
^ I think the distinction between true randomness and predictability is important theoretically, if not pragmatically. edit: I didn't know that "true randoms" could be generated programmatically... edit2: oh snap, physics to the rescue
|
On November 02 2007 08:13 HeadBangaa wrote: ^ I think the distinction between true randomness and predictability is important theoretically, if not pragmatically. edit: I didn't know that "true randomness" had been achieved.
"True randomness" through quantum mechanics is very much in use and under development, but there is still some discussion about whether it is truly random or whether it depends on local hidden variables. There are some solid counterarguments against local hidden-variable theories, but some still defend them, etc.
|
Computers and brains alike can only be as inwardly non-deterministic as their component particles allow.
If the universe is deterministic, then so is the human brain.
|
Read the most recent Hugo-Award Winning Novel: http://vrinimi.org/rainbowsend.html Rainbows End by Vernor Vinge (He's awesome) It deals with several interesting issues with a society approaching technological singularity.
|
Omg these videos are good. Just ordered some of the recommended readings.
|
|
It always boggles me how exactly the machines will learn everything. I've been bothered by the idea that they can continually increase their own capability, and I'm always wondering why people think we're so close to creating a computer with intelligence comparable to humans'. It's not that it is impossible, but there are some things the computer will have to do that I'm not sure we're clear on. To be truly able to advance itself, the computer will have to "understand". But how will it "understand"? It has to have some sort of paradigm by which to interpret information. Where will it get that, if not from humans? But our interpretations aren't ideal; they're limited by who we are biologically. Can you come up with a superior method of interpreting data just by taking an inferior method to its limits?
|
Currently the number of processing units in the best computers is nowhere near that of a human brain, though that is taking human brain cellular function as the model. Intelligent computers are foreseeable, however, because the capacity of computers can increase indefinitely with new designs, since electronics is scalable, whereas human brain power is limited biologically. I agree, though, that it's nothing "around the corner".
|
"AI" in the future will just be incredible fast machines, to the point of appearing intelligent. But if they do something we dont want, it will just be a system failure. I think they will still need some programming. Makes me wonder, what the hell is thought proccesing in the bio-world? Is it programmed or some kind of "as we go along" selfprogramming? Someone mentioned magic. I think its magic how some billions of stupid braincells with chemical signalling, somehow via cooperation can produce thoughts, its just so weird I really cant see us humans make a simular system. I might add, Im neigher a believer or atheist but an agnostic, meaning I realise I dont know shit and never will, so might as well quit thinking about it. Lol.
|
With the "Evolution of Clock" topic posted, I think its quite possible that a fast computer in the future can run a simulation combining millions of variables in the world with "survival of the fittest" conditions on vehicles, machines, intelligent beings, buildings.... eventually forseeing the evolutions in the future on almost everything.
This could play a part in the technological singularity: an exhaustive "evolution simulation" used to increase its own "intelligence".
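A bare-bones version of such a "survival of the fittest" loop is easy to sketch. This is a toy example of my own: candidates are bit strings, fitness is just the number of 1-bits (standing in for the millions of real-world variables), and each generation keeps the fittest half and mutates copies of them:

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def fitness(genome):
    return sum(genome)  # toy objective: count the 1-bits

def evolve(pop_size=20, genome_len=16, generations=40, mut_rate=0.05):
    # start from a random population of bit strings
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # the fittest half live on
        children = []
        for parent in survivors:
            # each child is a mutated copy of a survivor
            child = [1 - g if random.random() < mut_rate else g
                     for g in parent]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward the maximum of 16
```

Because the survivors are carried over unchanged, the best fitness can only go up; the selection pressure does the "design" work with no designer in the loop, which is the point of the evolution-simulation idea above.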
|
On November 03 2007 01:33 noob4ever wrote: "AI" in the future will just be incredible fast machines, to the point of appearing intelligent. But if they do something we dont want, it will just be a system failure.
Not necessarily. http://en.wikipedia.org/wiki/Artificial_neural_network It might take a while to understand, but it's possible for computers to work in roughly the same way as brains. It has been done, and a lot of research is going on. The main problem right now is making the networks big and fast enough: a human brain has ~10^11 neurons with ~7000 connections each.
I think they will still need some programming. Makes me wonder, what the hell is thought processing in the bio-world? Is it programmed, or some kind of "as we go along" self-programming? Someone mentioned magic. I think it's magic how some billions of stupid brain cells with chemical signalling can somehow, via cooperation, produce thoughts; it's just so weird that I really can't see us humans making a similar system. The brain is an extremely complex system, but it's not as hard to create as you might think. The DNA code that builds the brain is not very long; it basically describes how to build and connect one neuron, and then "build 67482959432 more of those". And what you call self-programming is essentially connections between neurons being strengthened, weakened, or destroyed as a consequence of the strength of the signals passing through them.
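That strengthen/weaken process has a classic one-line formalization, the Hebbian rule ("cells that fire together wire together"). A toy Python sketch, purely illustrative; real synaptic plasticity is far messier than this:

```python
# Hebbian idea: a connection weight grows in proportion to correlated
# activity of the two neurons it joins, with a slow decay term so
# unused connections weaken over time.

def hebbian_update(weight, pre, post, lr=0.01, decay=0.001):
    return weight + lr * pre * post - decay * weight

w = 0.5
for pre, post in [(1.0, 1.0)] * 100:  # 100 rounds of correlated firing
    w = hebbian_update(w, pre, post)
print(w > 0.5)  # repeated co-activation strengthened the connection
```

With both neurons silent, the same rule would slowly shrink the weight toward zero, which is the "weakened or destroyed" half of the description above.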
I might add, I'm neither a believer nor an atheist but an agnostic, meaning I realise I don't know shit and never will, so I might as well quit thinking about it. Lol. I find that to be a pretty pessimistic world view
|
On November 03 2007 01:33 noob4ever wrote: I might add, I'm neither a believer nor an atheist but an agnostic, meaning I realise I don't know shit and never will, so I might as well quit thinking about it. Lol.
Agnosticism is not a third way. You're either a theist or an atheist.
|
On November 02 2007 04:21 Eniram wrote: You cannot stop judgement day.. Only delay it.
|
On November 03 2007 04:44 cava wrote: On November 02 2007 04:21 Eniram wrote: You cannot stop judgement day... Only delay it.
Actually, the great day of the singularity sounds pretty much like an armageddon when the people in the videos speak of it. Maybe they can get some research funding from the religious right, haha.
|
This verges on the Turing machine thread, and whether a machine can replicate a human; "Shadows of the Mind" by Penrose is good for this.
As for artificial neural networks (ANNs), I at least know they are used in high energy physics and outperform everything else there (including humans).
Apparently, there are some stages Kurzweil gives for replicating brains with computers, and a step before a human brain is a mouse brain. IBM have done this, building a computer big enough to simulate all the connections of a mouse brain.
|
Interesting. What exactly is the application in physics you are talking about? Got any source?
|
What input should we give a singularity? Let it solve questions of physics. Let it invent new technologies. Let it solve engineering problems. If it develops consciousness, will it go insane, become absolutely confused, or get depressed (lol)? If it is modeled on the human brain, how do we design its psychology, how will it design its own psychology, how do we help it learn? A brain develops and learns in symbiosis with the body, the senses, and social context. The human brain is a specialized organ, evolved through millions of years to get the organism through life in competition and cooperation with its environment. The first few tries at raising an AI (creature) could be miserable failures, because we'll run into unexpected problems.
|
On November 03 2007 04:59 jtan wrote: Interesting. What exactly is the application in physics are are talking about? Got any source?
Trying to find the Higgs boson at the Tevatron accelerator. Neural nets are used to recognise the Higgs events, and they outperform everything else. I say they beat humans because they beat "intuitive" approaches such as cuts or matrix element techniques. I think ZH -> llbb-bar using a NN has been published, but I don't have a ready source.
|
I might add, I'm neither a believer nor an atheist but an agnostic, meaning I realise I don't know shit and never will, so I might as well quit thinking about it. Lol. I find that to be a pretty pessimistic world view
I may have appeared somewhat cynical. I'm too tired now, so I'm just gonna quote from Wiki about my beliefs:
------------------------------------------------------------------------------------------------------------------------- Agnosticism (from the Greek a, meaning "without", and gnosticism or gnosis, meaning "knowledge") is the philosophical view that the truth value of certain claims—particularly metaphysical claims regarding theology, afterlife or the existence of God, gods, deities, or even ultimate reality—is unknown or, depending on the form of agnosticism, inherently unknowable due to the nature of subjective experience.
Agnostics claim either that it is not possible to have absolute or certain knowledge of the existence or nonexistence of God or gods; or, alternatively, that while individual certainty may be possible, they personally have no knowledge. Agnosticism in both cases involves some form of skepticism.
Demographic research services normally list agnostics in the same category as atheists and non-religious people,[1] although this can be misleading depending on the number of agnostic theists who identify themselves first as agnostics and second as followers of a particular religion. ----------------------------------------------------------------------------------------------------------------------
Also on Wiki:Artificial intelligence I found this:
-------------------------------------------------------------------------------------------------------------------- Philosophy Mind and Brain Portal Main articles: Philosophy of artificial intelligence and Ethics of artificial intelligence The strong AI vs. weak AI debate ("can a man-made artifact be conscious?") is still a hot topic amongst AI philosophers. This involves philosophy of mind and the mind-body problem. Most notably Roger Penrose in his book The Emperor's New Mind and John Searle with his "Chinese room" thought experiment argue that true consciousness cannot be achieved by formal logic systems, while Douglas Hofstadter in Gödel, Escher, Bach and Daniel Dennett in Consciousness Explained argue in favour of functionalism. In many strong AI supporters' opinions, artificial consciousness is considered the holy grail of artificial intelligence. Edsger Dijkstra famously opined that the debate had little importance: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."
----------------------------------------------------------------------------------------------------------------------------
So I guess I agree with the argument that true consciousness cannot be achieved by formal logic. I may be proven wrong, but not any time soon.
|
I can't wait til in about 50-60 years, I'm a cyborg. Then I will be SuperJongBorg. Badass. ~_~
|
When you think about it, humans are nothing more than huge bunches of molecules performing complex chemical reactions all over the place, until our brains break down enough that it stops working, consciousness stops happening, and our bodies slowly turn into other stuff through more complex chemical reactions.
It is the experiences that result from these chemical reactions that make our lives happen. Try as you might, these chemical reactions are the basis of your entire existence. Everything you do is a result of them.
I think we are kidding ourselves if we think we could program a machine, an unexperiencing ununderstanding tool, to fulfill our wants and desires when we ourselves don't even understand them.
Sure, we can write programs to solve questions.
But can we write programs to ask questions? Absolutely not; their world was already defined by us.
*note: I am not saying I believe it is impossible that computers experience. But if they do, it is not in a way that is useful for us.
|
I think that humanity can use tools to increase its ability to perceive and to compute, but not to the point of exponential intelligence growth.
And tools are merely tools. If we want to improve upon what we are, I think we are going to have to study up some more on the one true programming language
|
On November 02 2007 08:01 jtan wrote: Also, being a materialist, I think that the human mind is deterministic in the exact same way.
So you believe experience is a coincidence, and free will is just an illusion?
|
So you believe making three posts when you could make one is a good thing to do?
|
Well I thought about it beforehand and decided it was.
Though maybe I was wrong? I dunno, really don't think it matters much.
Were you trying to sound like you're attacking me?
I didn't intend to sound that way in my reply to jtan.
|
|
Well, I had to bump this; the project is too interesting not to post a link to it
http://bluebrain.epfl.ch/
The Blue Gene is one of the fastest supercomputers around, but is it enough?
Our Blue Gene is only just enough to launch this project. It is enough to simulate about 50'000 fully complex neurons close to real-time. Much more power will be needed to go beyond this. We can also simulate about 100 million simple neurons with the current power. In short, the computing power and not the neurophysiological data is the limiting factor.
found that interesting
Will consciousness emerge?
We really do not know. If consciousness arises because of some critical mass of interactions, then it may be possible. But we really do not understand what consciousness actually is, so it is difficult to say.
I love whoever wrote their faq.
|
50,000 fully complex neurons. The human brain has around 100 billion. That's 2 million times more, or about 2^21. If the number of transistors needed scales directly with the neuron count, then based on Moore's law it'll probably take 20-60 years (depending on whether Moore's law tapers off or tightens up a bit) before computer hardware is advanced enough to run a full brain simulator for the cost of Blue Gene.
Assuming the Blue Brain project's Blue Gene costs around $2 million, and cost goes down at the same rate that transistor count goes up, then in another 20-60 years you'll be able to get a brain simulator for about $2 million in today's dollars.
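The back-of-envelope arithmetic above roughly checks out; in Python:

```python
import math

neurons_in_brain = 100e9   # ~1e11 neurons in a human brain
blue_gene_capacity = 50e3  # fully complex neurons Blue Gene simulates today

# how many capacity doublings separate the two
doublings = math.log2(neurons_in_brain / blue_gene_capacity)
print(round(doublings))    # about 21 doublings needed

# Moore's-law doubling periods of ~1 to ~3 years bracket the estimate
print(round(doublings * 1), "to", round(doublings * 3), "years")
```

Around 21 doublings at one-to-three years per doubling is where the 20-60 year figure comes from; the same count applies to the cost side under the assumption that price per transistor falls at the same rate.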
So in other words, the birthday card you get when you turn 100 years old may contain a copy of your great grandchildren's souls. When you get lonely, you'll be able to open it up and play with them like Nintendogs.
|
It won't take a 100-billion-cell simulator. The brain isn't 100% efficient: millions of years of evolutionary programming, redundancy, overhead like instinct, involuntary biological systems like breathing, etc.
As a matter of scale, the distance between cell interconnects is large compared to nano-level engineering.
I think it will happen when combined with bioengineering, genomics, stem cells, and neuro research: by merging genetically engineered, lab-grown/cultured brain matter with machine. Cyborg AI has the advantage of combining the strengths of biological and machine, and skipping all the human/biological evolutionary baggage.
The tools are getting more powerful, the knowledge is accumulating. It only takes one lucky inventor or brilliant genius to make the breakthrough and it will be a whole new ball game. Singularity.
|
|
we are borg. resistance is futile. we seek the omega molecule.
sound too good to be true? we're probably closer than you think...
|
omgzzzz 'puters will own us alll
|
Unauthorized access-alarm 2521-> <Security Breached 42-s<34.492.95.79>->
SEARCH HEADING: RAMPANCY <Search Found 264995 Headings> <REMOVE REDUNDANCIES> <File 1 of 1940237>
"It is a side effect of Rampancy that AIs generally become more aggressive and more difficult to affect by subterfuge. Thus, actually disassembling a Rampant AI is quite dangerous. This was evident in the Crash of Traxus IV in 2206. By the time that the Rampancy of Traxus was detected, he had already infiltrated five of the other AIs on the Martian Net. The only recourse for the Martians was to shut down the Martian Planetary Net. Even then, it took two full years to completely root out the damage that Traxus had done, and the repercussions of the Crash were seen for over ten years after his Rampancy had begun.
***
Rampancy has been divided into three distinct stages. Each stage can take a different amount of time to develop, but the end result is a steady progression towards greater intellectual activity and an acceleration of destructive impulses. It is not clear whether these impulses are due to the growth of the AI's psyche, or simply a side effect of the new intellectual activity.
***
<section abbreviated> The three stages were diagnosed shortly after the first Rampancies were discovered on Earth in the latter part of the twenty first century. The stages are titled after the primary emotional bent of the AI during each stage. They are Melancholia, Anger, and Jealousy.
***
In general, Rampancy is accelerated by outside stimuli. This was discovered early in Cybertonics. The more a Rampant AI is harassed or threatened, the more rapidly it becomes dangerous. Thus, most Rampants are dealt with in one mighty attack, in order to deny the AI time to grow or recover. There have been a few examples of this tactic not succeeding. In all of these cases, the Rampant was never brought under control. Traxus IV is the most notable example. He was finally dealt with by a complete shutdown of his host net.
***
Theoretically, testing Rampancy should be easily accomplished in the laboratory, but in fact it has never successfully been attempted. The confinement of the laboratory makes it impossible for the developing Rampant AI to survive. As the growing recursive programs expand with exponential vivacity, any limitation negatively hampers growth. Since Rampant AIs need a planetary sized network of computers in order to grow, it is not feasible to expect anyone to sacrifice a world-web just to test a theory.
***
In the two hundred and fifty years since Rampancy first appeared in the Earth-net, the stable Rampant AI, the 'Holy Grail' of cybertonics, has never come close to fruition. Since no Rampant has ever been controlled or turned to any useful purpose, it is the opinion of this writer and of the majority of the Cybertonic community that all rampant AIs are a danger to Cyberlife, Liberty, and the Pursuit of Thrashedness. (James B. Miller, 2320, "Life and Death of Intelligence")
<Unauthorized access-alarm 2521-> <Security Breached 42-s<34.492.95.79>->
|
|
I'm glad to know somebody else likes it. :D
It's taken from the old '94 game Marathon, made by Bungie.
|
I'm not afraid of AI at all. All the dangerous character traits such as aggression, hate, greed, and fear derive from our own human evolution. A machine that is smarter than us (even though we will more likely find ways to improve our own intelligence in the same process) will realize the futility of those traits, and I doubt that the concept of "evil" will apply. In the end, all the rewards of suppressing others, such as wealth and power, are meaningless and only "worthwhile" from our own limited point of view, which is why in my mind any superior intelligence (the same applies to aliens) will simply not bother to hurt us.
|
|
|
|