Technological Singularity - Page 2
Physician
United States, 4146 Posts
~ ATTCCC ~
Deleted User 3420
24492 Posts
please say something people will actually understand
xmungam
United States, 1050 Posts
at this point i would say we are very, very close to a singularity. i want to know Teamliquid.net's opinion on starting some sort of global society (via the internet?). i think this is the next step for humanity. i had a crazy, crazy weekend and i had some really crazy thoughts (obviously). I honestly think we may be about to witness a second coming of christ or whatever. my friend posted this: http://singularitysummit.com/schedule/ and you can't deny that things like reddit are actually communication on the scale of 100 THOUSAND PEOPLE AT A TIME.

question: does the growth of Teamliquid.net represent that of real life / all of humanity? (to scale)

I posted this a while ago: http://www.teamliquid.net/forum/viewmessage.php?topic_id=322959 (really high at the time) (not right now)

the reality is that about 30% of the world KNOWS what the singularity is. that means for every 1 person who knows, they have to tell 2 other people.

I realize this is a ridiculous bump, and my post itself is kind of bullshit, but I think this is a topic TL.net needs to address OFFICIALLY.

PS I think i might be one of the most attractive people in the world... and i also might be psychic. i can't fucking believe you've known about this for 6 years travis... wtf dude thats so long. I started thinking of this stuff only a year or two ago i guess.

AAAAAAAAAAA i'm freaking out but i'm gonna hit post so GG GN GL HF DQ REAL LIFE
HowardRoark
1146 Posts
If humanity survives the Singularity, it will probably be like the novel "I Have No Mouth, and I Must Scream" by Harlan Ellison. I read an interesting piece on this matter called "AI Will Kill Our Grandchildren". Here is the link: http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html

It will be the end of everything we know, and sadly I doubt there will be room for carbon-based humans. I quote a snippet from the link:

"What is certain is that an intelligence that was good at world domination would, by definition, be good at world domination. So if there were a large number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. That is just Darwin's evolution taken to the next level."
Kickboxer
Slovenia, 1308 Posts
[image]
mememolly
4765 Posts
Ray Kurzweil wants to resurrect his dead father. nothing wrong with that, but don't think that guy isn't slightly delusional.

no way will the singularity happen in 15-30 years. Moore's Law is actually expected to break down: the growth of computing power will slow in the future if we stick with silicon technology - Michio Kaku explains why below.

http://youtu.be/bm6ScvNygUU
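The arithmetic behind that silicon-limit argument is easy to sketch. Below is a rough back-of-the-envelope calculation in Python; the starting node, two-year cadence, and atom diameter are ballpark assumptions for illustration, not figures taken from Kaku's video.

```python
# Assume transistor area halves every two-year process node (feature size
# shrinks by ~30% per node) and ask when features would approach the
# diameter of a single silicon atom. All numbers are rough assumptions.

feature_nm = 22.0          # assumed leading-edge node around 2012, in nm
year = 2012
SILICON_ATOM_NM = 0.2      # rough diameter of one silicon atom

while feature_nm > SILICON_ATOM_NM:
    feature_nm /= 2 ** 0.5           # halving area => size divided by sqrt(2)
    year += 2
    print(f"{year}: ~{feature_nm:.2f} nm features")

# The loop ends around 2040: under these assumptions, plain geometric
# scaling hits atomic limits within a few decades, which is the
# silicon slowdown being pointed at above.
```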
zefreak
United States, 2731 Posts
On October 02 2012 17:58 mememolly wrote:
Ray Kurzweil wants to resurrect his dead father. nothing wrong with that, but don't think that guy isn't slightly delusional. no way will the singularity happen in 15-30 years. Moore's Law is actually expected to break down: the growth of computing power will slow in the future if we stick with silicon technology - Michio Kaku explains why below. http://youtu.be/bm6ScvNygUU

That's a big if.
DonKey_
Liechtenstein, 1356 Posts
On October 02 2012 17:31 HowardRoark wrote:
If humanity survives the Singularity, it will probably be like the novel "I Have No Mouth, and I Must Scream" by Harlan Ellison. I read an interesting piece on this matter called "AI Will Kill Our Grandchildren". Here is the link: http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html It will be the end of everything we know, and sadly I doubt there will be room for carbon-based humans. I quote a snippet from the link: What is certain is that an intelligence that was good at world domination would, by definition, be good at world domination. So if there were a large number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. That is just Darwin's evolution taken to the next level.

What's to stop us from simply allowing it to self-replicate to a degree and stopping at a certain point before it becomes uncontrollable? I don't see a need for anyone to allow it to self-replicate infinitely.
Jumbled
1543 Posts
On October 02 2012 17:10 xmungam wrote:
AAAAAAAAAAA i'm freaking out but i'm gonna hit post so GG GN GL HF DQ REAL LIFE

Are you sure you weren't looking for the High Thread? While the singularity is an interesting idea, there's no reason to believe it's actually imminent, and it certainly has nothing to do with Christian mythology.
mememolly
4765 Posts
the alternatives are miles off from achieving the things silicon chips can do. it's not like we just switch to molecular computers and technological growth spurts again; it will take until the end of the century to achieve anything significant with non-silicon technology
Zrana
United Kingdom, 698 Posts
On October 02 2012 17:31 HowardRoark wrote:
If humanity survives the Singularity, it will probably be like the novel "I Have No Mouth, and I Must Scream" by Harlan Ellison. I read an interesting piece on this matter called "AI Will Kill Our Grandchildren". Here is the link: http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html It will be the end of everything we know, and sadly I doubt there will be room for carbon-based humans. I quote a snippet from the link: What is certain is that an intelligence that was good at world domination would, by definition, be good at world domination. So if there were a large number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. That is just Darwin's evolution taken to the next level.

I disagree with this; I don't think it's been really thought through. There's way too much apocalyptic sensationalism going around when people talk about AIs. I have no idea why people seem to take the Terminator films so literally...

The premise here is that an AI would eventually want to dominate the world, and if it was capable of it, it would succeed. This is another case of humans thinking computers think like humans. We are driven by survival instincts to reproduce and make sure our offspring are as successful as possible. This is the result of millions of years of DNA being copied and recopied, with small differences cropping up every so often. The DNA that codes for the characteristics that let it reproduce most effectively is the DNA still around today. This article assumes a similar thing would happen with technology, except that technology doesn't reproduce in quite the same way as biology.

However, I think the article misses a basic fact: humans think in terms of war and domination because we are simply vehicles for our genetic code, which commands us to survive and reproduce. If the first truly intelligent AI is ever created, it would not have this purpose. It would in fact have no purpose at all other than whatever tasks it is set. It wouldn't even have the need to survive, other than to perform whatever tasks it's given. It would have no fear of death, as fear of death arises from our need to reproduce and protect offspring.

So for an intelligent AI to try to dominate the world, a number of conditions must be met (assuming a human doesn't simply design an AI with the goal of world domination):

1. Multiple AIs would need to be created
2. At least some of these AIs would have a survival instinct
3. Some of those AIs would feel that their survival is threatened by other AIs or humans
4. Some of those would feel that the best response to no. 3 was attacking (not peaceful resolution)
5. Attacking would be done via hardware, not software
6. AIs wishing to attack via hardware would have a physical way to manifest their will (killer robots, nanobots, etc.)
7. Any AI emerging victorious would have to feel that all of humanity was in some way a threat to it

To compare to biology again: the leap from life arising to life attacking other life took an incredibly long time, and that was with life already able to act in the physical world. I don't think that humans will create an exponentially increasing intelligence and then immediately be wiped out. Simply having an instinct to survive would be unlikely, and even if it did, why shouldn't it be content (though "content" is not the right word if it is emotionless) simply existing?

We are also overlooking the decisions a being vastly more intelligent than humans might make. Human morality is derived from our evolution, so we have no idea what kind of morality (if any) an AI will have. Maybe none; maybe something left over from an earlier iteration of itself when it was given some command by humans; maybe it would inherit our morality; or perhaps it would know some kind of higher morality that we can't yet conceive of. If indeed an AI did develop through the hyper-accelerated evolution that article describes, then how is that different from a new species arising, or indeed a new nation? There is only war if one side feels threatened or wants something the other side has; otherwise peace is the better course of action.
Acetone
United States, 200 Posts
On September 28 2006 19:36 travis wrote:
It's very vague because I wanted to write a few paragraphs, not an entire book. AI already has learned things that humans don't know. I'm pretty sure it was AI that has solved pie to as many digits as we currently know. There, that's an example.

I don't think determining the value of pi to tens of trillions of digits is what BlackJack was referring to as "things that humans don't know," and I agree with him. In terms of an AI discovering "new knowledge," I'm thinking along the lines of a completely new idea, or at the very least something more creative than determining extra digits of a number that humanity has known about for perhaps over 4,000 years.

Also, AI has had nothing to do with the calculation of pi (at least in any calculations that have been made public). Computers (built and developed by people) running infinite series and iterative algorithms (developed by people) are what we have to thank for the increasing precision of recent calculations of pi (and in the history of pi, the mid-1900s is very recent).
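That point about human-devised iterative algorithms is easy to make concrete. Here is a minimal Python sketch using Machin's 1706 formula, pi/4 = 4*arctan(1/5) - arctan(1/239), evaluated in fixed-point integer arithmetic; the choice of formula is just an illustration, since record-setting computations have used other, faster series.

```python
# Machin's formula with fixed-point integers:
# pi = 16*arctan(1/5) - 4*arctan(1/239).
# Everything here is an ordinary, human-designed iterative algorithm.

GUARD = 10  # extra digits carried to absorb truncation error

def arctan_inv(x: int, digits: int) -> int:
    """arctan(1/x) scaled by 10**(digits + GUARD), via the Taylor series."""
    scale = 10 ** (digits + GUARD)
    power = scale // x            # scale * (1/x)**1
    total, n, sign = power, 1, 1  # first term: (1/x) / 1
    x2 = x * x
    while power:
        power //= x2              # next odd power of 1/x
        n += 2
        sign = -sign
        total += sign * (power // n)
    return total

def machin_pi(digits: int) -> int:
    """Return int(pi * 10**digits)."""
    scaled = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    return scaled // 10 ** GUARD  # drop the guard digits

print(machin_pi(50))  # prints the first 51 digits of pi: 31415926535...
```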
xmungam
United States, 1050 Posts
On October 02 2012 17:31 HowardRoark wrote:
If humanity survives the Singularity, it will probably be like the novel "I Have No Mouth, and I Must Scream" by Harlan Ellison. I read an interesting piece on this matter called "AI Will Kill Our Grandchildren". Here is the link: http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html It will be the end of everything we know, and sadly I doubt there will be room for carbon-based humans. I quote a snippet from the link: What is certain is that an intelligence that was good at world domination would, by definition, be good at world domination. So if there were a large number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. That is just Darwin's evolution taken to the next level.

you, howard roark, have brought up an interesting point on artificial intelligence, but i for one don't believe a computer will ever be smarter than humanity. i think the real benefit from computers comes in the form of Linkage. we live in the age of information, where you can literally type in any question and get an answer. I think that by linking all of our brains (via the internet) we can solve every problem. Trying to create an AI to do this would take way too long and not work / we would all die.

Also i agree with Kickboxer. Humans will always be better than computers. If you are old you don't understand; the new generation is coming. the #1 fastest-growing type of information is computer literacy and technology, so think about what the world will be like for people who started learning to code in 1st grade (which is reality now for 1st graders). The people who are graduating from college now will change everything.

The reason I bumped this thread was NOT to talk about the "technological singularity", it was to talk about "THE SINGULARITY". does this mean i should create a new topic on tl.net for it?
summerloud
Austria, 1201 Posts
if you believe stuff à la ray kurzweil and all these other utopians you might just as well go back to reading popular science magazines and believing the shit they promise you for the next 5-10 years.

to believe ANYTHING can keep on growing exponentially is just plain silly. leave it to economists and futurologists to fall for that...

http://www.smbc-comics.com/index.php?db=comics&id=1968#comic

oh and i feel like i have to puke every time i see this michio kaku guy with his overblown ego, simplifying things for the layman and putting everything in a way that will get as much attention as possible. typical US TV personality...
HowitZer
United States, 1610 Posts
A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo-randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine could ever do anything that we cannot understand, when we built it and it is not alive.
RageBot
Israel, 1530 Posts
On October 02 2012 18:28 DonKey_ wrote:
What's to stop us from simply allowing it to self-replicate to a degree and stopping at a certain point before it becomes uncontrollable? I don't see a need for anyone to allow it to self-replicate infinitely.

How can we stop it from getting past the limits we've set for it?
xmungam
United States, 1050 Posts
On October 03 2012 08:14 HowitZer wrote:
A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo-randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine could ever do anything that we cannot understand, when we built it and it is not alive.

exactly. we would never be able to create something smarter than ourselves. also think about the possibility of linking up all human minds; that would be so much more powerful than any stand-alone computer we could build. what makes tools powerful is the user, and this applies to weapons and the internet.
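HowitZer's description (deterministic logic plus pseudo-randomness from environmental factors) maps onto a few lines of code. A toy sketch, assuming a textbook linear congruential generator with the standard Numerical Recipes constants; nothing here comes from the thread itself.

```python
import time

class LCG:
    """A deterministic 'machine': the same seed always yields the same sequence."""
    def __init__(self, seed: int):
        self.state = seed % 2**32

    def next(self) -> int:
        # One fixed, predictable update rule (Numerical Recipes constants).
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state

# The only "randomness" comes from an environmental factor: the clock.
rng = LCG(seed=time.time_ns())
print([rng.next() % 100 for _ in range(5)])
# Re-run with a fixed seed and you get exactly the same five numbers,
# which is the predictable-output point being made above.
```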