Technological Singularity

Forum Index > General Forum
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
Last Edited: 2006-09-28 09:35:39
September 28 2006 09:30 GMT
#1
I'm sure this has lots of typos and grammatical errors, so please bear with it.


Six or seven years ago I typed "what is the meaning of life" into Google. The first result I found was a page essentially about technological singularity. Looking back on it, I wouldn't be surprised if it was the first result because one or more people working at Google at the time were supporters of this concept.

I was a believer then that it will happen, and I still am today... even more so, actually.
In those six or seven years technology has advanced a lot, and now I feel like I can post about the idea here without a huge number of people calling me an idiot or asking why they should care about it. So here goes.





What is the technological singularity?

There are many trends in technology, and almost all of them are exponential. A fairly well-known example of one of these trends is Moore's Law, which deals with chip complexity: roughly every couple of years, the amount of computing power you get per dollar doubles. For the sake of simplicity this is the only example I will give, but trust me, there are more.
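As a toy illustration of that kind of exponential trend (the starting value and two-year doubling period here are illustrative assumptions, not real chip data):

```python
# Toy model of a Moore's Law-style trend: a quantity that doubles
# every fixed period. Starting value and period are made-up numbers.
def transistors_per_dollar(years, start=1000.0, doubling_period=2.0):
    return start * 2 ** (years / doubling_period)

# After 20 years the figure has doubled 10 times: a 1024x increase.
print(transistors_per_dollar(20) / transistors_per_dollar(0))  # 1024.0
```

The striking feature of exponential growth is that each fixed interval multiplies the total, so most of the progress always happens in the most recent stretch.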

The reason I say this is that all of these trends point to one thing: technology is becoming more and more advanced, and it is doing so faster and faster.

The technological singularity is the point at which an intelligence is created that surpasses human intelligence.



So how do we reach singularity?

It could happen in many different ways, but two of the most likely and well-known methods are bionics and seed AI.

I'm going to use Seed AI as an example first.

Seed AI is essentially the idea of an artificial intelligence with the ability to understand, modify, and improve its own code. If a program could do this, it could increase its own intelligence at an exponential rate, eventually learn everything in the universe, and life as we know it would change. Seed AI is not fantasy: very brilliant people in the field of AI say it can be done, and projects are underway. In fact, the human brain and seed AI follow the same basic structure (and the human brain is slowly but surely being reverse engineered). The only difference is that seed AI would be synthetic, not biological. Therefore it would not get bored, it would not sleep, it would not die, it could gain power simply by adding more processors, and I'm sure there are many other benefits I cannot think of.
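The "exponential rate" claim can be made concrete with a toy model: if each self-modification multiplies capability by a fixed factor, growth compounds. The 10% per-cycle rate below is a purely illustrative assumption, not a prediction about any real system:

```python
# Toy model of recursive self-improvement: each cycle, the system's
# capability grows in proportion to what it already has, so growth
# compounds exponentially. The rate is an arbitrary illustration.
def self_improvement_curve(cycles, capability=1.0, rate=0.10):
    history = [capability]
    for _ in range(cycles):
        capability *= 1 + rate   # the improved system improves itself further
        history.append(capability)
    return history

curve = self_improvement_curve(50)
# After 50 compounding cycles, capability is roughly 117x the start.
```

The point of the sketch is only the shape of the curve: constant-rate self-improvement gives exponential growth, which is why small per-step gains are claimed to matter so much.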


Bionics, meanwhile, is essentially the mixture of machinery and biology. In this sense, it means replacing or augmenting organic parts of humans with machine parts. This has already started happening, and there are MANY examples. Here are just a few that currently exist, that I know of:

BrainGate: a computer chip that can be inserted into your brain to let you do things remotely, such as control a mouse cursor.

Bionic ear: allows deaf people to hear. As long as the problem is with your ear and not your brain, it will work.

IIP-Tec retinal implants: let people with damaged retinas see.

Spinal Cord Stimulation System: It's a small chip inserted into your body that helps control pain signals sent through your spinal cord.

BioHybrid Arm: A synthetic arm that you can move through thought. Recently, they were able to add the sense of touch to synthetic limbs as well.

AbioMed's artificial heart: self explanatory.

Victhom's urinary implant: controls bladder problems.

Synthetic hip replacements: self explanatory.

Cyberhand: Like the arm I was talking about, except it's a hand.

Rheo Knee: A synthetic knee that adapts to how the user walks.


And this is just what is currently on the market. MIT has developed nanotechnology that can repair damaged neurons in the spine; they've already gotten it to work successfully in mice. USC students are developing a synthetic hippocampus replacement, which also works in mice. Synthetic brain parts: seriously, that's crazy. One of the most fervent believers in the Singularity, as well as a very brilliant man, Ray Kurzweil recently got an implant in his finger that allows him to sense electromagnetic fields. It works just like any other sense. Just imagine the behind-the-scenes technology that exists in the military. From reading Ray's book (he's one of three or four advisors to the military regarding the budgeting of money for technological research), I know that the military is very close to having nanotech shots that can enhance muscle performance for soldiers, allowing them to carry huge guns or machinery for long distances. And this kind of technology is just the start.



So if singularity happens, what then?

We will most likely be able to live forever, should we choose. Our bodies will be vastly different, if we even exist within bodies. We will have programmable blood: no more blood cells; instead they will be replaced with nanomachinery. In fact, we most likely would not retain most of the organic parts of our bodies.

We will have virtual reality that most likely will not have restrictions. We will be able to do anything and experience anything.

If warfare still exists, it will be on an entirely different scale (precision-wise). With technology comes more precise warfare: if you look at trends in war casualties, fewer and fewer people have died as technology has advanced, despite the fact that more and more people are living on the planet.

Ray Kurzweil hypothesizes that eventually the entire cosmos will become aware, intelligent, whatever you want to call it. The whole issue of the singularity brings up a lot of philosophical questions about the nature of consciousness, however. I plan on making a thread about consciousness soon!



So if this happens, how long until it does?

Most supporters in a position to speculate estimate 15-30 years.
Locked
Profile Joined September 2004
United States4182 Posts
Last Edited: 2006-09-28 09:37:48
September 28 2006 09:37 GMT
#2
On September 28 2006 18:30 travis wrote:One of the most fervent believers in the Singularity, as well as a very brilliant man, Ray Kurzweil recently got an implant in his finger that allows him to sense electromagnetic fields. It works just like any other sense.


...

So if this happens, how long until it does?

Most supporters in a position to speculate estimate 15-30 years.



haha wtf (@ both)
UMS map pack http://teamliquid.net/forum/viewmessage.php?topic_id=50442
rpf289
Profile Joined October 2004
United States3524 Posts
September 28 2006 09:37 GMT
#3
This is pretty damn interesting.

But how realistic is this? And what practical benefits would there be to having a machine that can think and improve on itself? I've probably just seen too many sci-fi movies, but what if this super-smart computer tries to kill everyone?

Also, I heard about nanotechnology in my economics class, and how they were planning on developing microscopic machines that would be able to go into a human body and find bacteria or some other problem and then work on it.

This is nuts.
Morzas
Profile Joined August 2005
United States387 Posts
September 28 2006 09:41 GMT
#4
So if singularity happens, what then?

That's easy, the machines will enslave us, put us in jars and eat us.
What has four wheels and flies? Stephen Hawking on LSD!
WOstick
Profile Joined June 2005
Norway433 Posts
September 28 2006 09:44 GMT
#5
These kinds of things are extremely interesting. I do not, however, believe that a machine can learn or understand more than its creator; it can only do it faster. Or can a machine understand anything? It can be instructed to give a detailed description of a topic on command, but what is understanding? Can a machine ever comprehend? I don't know, I'm not a brilliant researcher.
Are you suggestion that a cocunut is migrating?
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
Last Edited: 2006-09-28 09:48:31
September 28 2006 09:45 GMT
#6
On September 28 2006 18:37 rpf289 wrote:
This is pretty damn interesting.

But how realistic is this? And what practical benefits would there be to having a machine that can think and improve on itself? I've probably just seen too many sci-fi movies, but what if this super-smart computer tries to kill everyone?

Also, I heard about nanotechnology in my economics class, and how they were planning on developing microscopic machines that would be able to go into a human body and find bacteria or some other problem and then work on it.

This is nuts.


It's very easy to question the point of doing anything whatsoever.

But this isn't just about increasing machine intelligence. The purpose is to increase human intelligence as well, and the overall goal is to increase quality of life.
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
September 28 2006 09:47 GMT
#7
On September 28 2006 18:44 WOstick wrote:
These kind of things are extremely interesting. I however do not believe that a machine can learn/understand more than it's creator. Unly do it faster. Or can a machine understand anything? It can be instructed to give a detailed description about a topic on command, but what is understanding? Can a machine ever comprehend? I don't know, im not a brilliant researcher.


There are problems with semantics here. Try as we might, most words are not properly defined. By "understand", do you mean "be aware of the meaning of", or do you mean "be able to respond properly to"?

I'm gonna have to make that other post I was talking about now.
prOxi.swAMi
Profile Blog Joined November 2004
Australia3091 Posts
September 28 2006 09:48 GMT
#8
Rofl, I'm imagining a microscopic Duke Nukem that gets implanted in my nose. And he runs around in my body just owning up any bad guys, making me feel healthier as the days go on. Pretty useful.

The thing that's cool is just how exponential the tech increase really is. As new tech comes in, that tech can be used to research new tech, and THAT tech can be used to research more tech... much like the way humans or rats reproduce at an exponential rate.

I believe it
Oh no
aseq
Profile Joined January 2003
Netherlands3983 Posts
September 28 2006 09:53 GMT
#9
Unless we choose to destroy all attempts at building better AI. Which we won't. It's just like the Matrix story, people will only recognise it when it's too late.

And my guess would be a bit longer. In 15-30 years, maybe some organs can be replaced or stuff like that; maybe walking robots will be more common. The mechanical part is not to be underestimated, though: having complete human-like robots (either with human brains or full AI) walking the streets will take a bit longer, like 50-70 years. Also, whether the robots cause accidents or are dangerous will greatly determine the effort put into building them; if the first person gets killed too soon, development may be stopped for a good while.

Actroid. This thread needs some chicks at least^^.
HeavenS
Profile Joined August 2004
Colombia2259 Posts
September 28 2006 09:53 GMT
#10
Great thread travis. I love reading about things like this. I've read that in the next 12 years there will probably be a new surgery that will allow humans to see new colors. This is because right now we are only able to distinguish three (red, blue, and I forget the third), but the new surgery will allow a fourth color to be distinguished.

Travis here's a link to a related story you'll probably enjoy. An inventor machine.

http://www.popsci.com/popsci/thenewexplorers/0e13af26862ba010vgnvcm1000004eecbccdrcrd.html

P.S. I found the surgery I'm talking about on that website, along with some other things, so maybe you'll run into them if you search ^^
Im cooler than the other side of the pillow.
new_construct
Profile Blog Joined September 2005
Canada1041 Posts
September 28 2006 09:54 GMT
#11
Matrix, Terminator, anyone?
Servolisk
Profile Blog Joined February 2003
United States5241 Posts
September 28 2006 09:57 GMT
#12
"They promised us robots, Kitty!"
wtf was that signature
Haemonculus
Profile Blog Joined November 2004
United States6980 Posts
September 28 2006 10:05 GMT
#13
Yeah, I see these machines finishing the "make a sandwich" function and then executing "kill all humans". Pretty creepy if you ask me.

A guy with like a terminator arm would creep the hell out of me.
I admire your commitment to being *very* oily
HeavenS
Profile Joined August 2004
Colombia2259 Posts
September 28 2006 10:06 GMT
#14
Here's a link to an article regarding the future of robots.
http://www.popsci.com/popsci/technology/d6a188432263d010vgnvcm1000004eecbccdrcrd.html

Seriously, just browse the site; if you click on medicine you'll see some articles on nanotechnology. Great site, I think.
Im cooler than the other side of the pillow.
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
September 28 2006 10:09 GMT
#15
On September 28 2006 18:53 aseq wrote:
Unless we choose to destroy all attempts at building better AI. Which we won't. It's just like the Matrix story, people will only recognise it when it's too late.

And my guess would be a bit longer. In 15-30 years, maybe some organs can be replaced or stuff like that; maybe walking robots will be more common. The mechanical part is not to be underestimated, though: having complete human-like robots (either with human brains or full AI) walking the streets will take a bit longer, like 50-70 years. Also, whether the robots cause accidents or are dangerous will greatly determine the effort put into building them; if the first person gets killed too soon, development may be stopped for a good while.

Actroid. This thread needs some chicks at least^^.


You're doing what so many other people do: forgetting the effect of future technology on our ability to develop further technology. It's happened to experts in all sorts of technological fields, and it's why so many predictions from 30 years ago are so far off about where technology would be today.
BlackJack
Profile Blog Joined June 2003
United States10574 Posts
September 28 2006 10:13 GMT
#16
How exactly would AI be able to learn something that humans don't know? Like, what are we talking about? How could they learn everything in the universe? Very vague.
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
September 28 2006 10:36 GMT
#17
It's very vague because I wanted to write a few paragraphs, not an entire book.

AI already has learned things that humans don't know. I'm pretty sure it was AI that computed pi to as many digits as we currently know. There, that's an example.
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
September 28 2006 10:38 GMT
#18
ty for the links heavens

the inventing computer was really cool
whatever
Profile Joined July 2005
Mexico693 Posts
September 28 2006 10:38 GMT
#19
Ya, been reading this stuff for some time now. I planned to post about it weeks ago, but what do you think? Bam! Temp-banned for trolling.

[warning: slightly biased text ahead]
Anyway, I believe the singularity is inevitable, and I see it as a major change in human nature comparable to the rise of Homo sapiens. Also, rpf saying he heard about nanotechnology in economics class is pretty cool; I think economists are realizing now that this new tech will have a major impact, to the point of making capitalism (and current production systems) obsolete.
Time is always on my side
Servolisk
Profile Blog Joined February 2003
United States5241 Posts
September 28 2006 11:29 GMT
#20
Disclaimer: I am a hater and not up to date on all the Kurzweilians' modern-day thinking.

So.. yeah, I hate Kurzweil, and the idea of upgrading the human body. I don't think it will work in the next 50 years.

I don't even believe in God, but this is God's territory. It is impossible to trust any human with control of it. It's much the same as how many feel about people living forever: who gets to live forever?
If it's me, then I'm for it; if not, very against.

I also dislike it because it is practically a religion. Besides the nearly religious belief that it will happen, the thing that I *really* dislike is that, like certain other religions, it attracts people who aren't happy to be human. The worst is when people want to get a computer for self-improvement instead of actually trying to do it themselves.

In an ideal scenario, I am attracted to the idea. That scenario was actually in Deus Ex 2, where it is explained as one of the characters tries to convince you to side with an AI and integrate humanity into it. I can't remember his exact words, but I'll give the gist even though it isn't as appealing. He says that power and influence in current society are determined by things like the family you are born into and how much money you have. But in their ideal society, where every human has the same intelligence and ability, what separates us is our choices, our personal integrity.

If it could happen like that I'd be interested :p

Kurzweil is also an enthusiastic advocate of using technology to achieve immortality. He advocates using nanobots to maintain the human body, but given their present non-existence he adheres instead to a strict daily routine involving ingesting "250 supplements, eight to 10 glasses of alkaline water and 10 cups of green tea" to extend his life until more effective technology is available.


Damn life extending old people. Die already and stop taking up my space. Thankfully, most people are too lazy to take that many supplements :O
wtf was that signature
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
September 28 2006 12:23 GMT
#21
There's no reason that anyone who chooses to wouldn't be able to live forever.
Physician *
Profile Blog Joined January 2004
United States4146 Posts
Last Edited: 2006-09-28 12:36:00
September 28 2006 12:25 GMT
#22
~ TTAGGG ~
~ ATTCCC ~
"I have beheld the births of negative-suns and borne witness to the entropy of entire realities...."
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
September 28 2006 12:50 GMT
#23
Are you quoting the human genome?

Please say something people will actually understand.
xmungam
Profile Joined July 2012
United States1050 Posts
October 02 2012 08:10 GMT
#24
I can't believe this was posted in 2006 - that was only 6 years ago.

at this point i would say we are very very close to a singularity.

i want to know Teamliquid.net's opinion on starting some sort of global society, (via the internet?) i think this is the next step for humanity.

i had a crazy crazy weekend and i had some really crazy thoughts (obviously) . I honestly think we may be about to witness a second coming of christ or whatever.

my friend posted this :

http://singularitysummit.com/schedule/

and you can't deny that things like reddit are actually communication on the scale of 100 THOUSAND PEOPLE AT A TIME

question: does the growth of Teamliquid.net represent that of real life / all of humanity? (to scale)

I posted this a while ago: http://www.teamliquid.net/forum/viewmessage.php?topic_id=322959 (really high at the time) (not right now)

the reality is that about 30% of the world KNOWS what the singularity is. that means for every one person who knows, they have to tell two other people

I realize this is a ridiculous bump, and my post itself is kind of bull shit but I think this is a topic TL.net needs to address OFFICIALLY.

PS I think i might be one of the most attractive people in the world... and i also might be psychic.

i can't fucking believe you've known about this for 6 years travis... wtf dude thats so long. I started thinking of this stuff only a year or two ago i guess.

AAAAAAAAAAA i'm freaking out but i'm gonna hit post so GG GN GL HF DQ REAL LIFE
youtube.com/xmungam ~~ twitch.tv/thenessman
HowardRoark
Profile Blog Joined February 2010
1146 Posts
October 02 2012 08:31 GMT
#25
If humanity survives the Singularity, it will probably be like in the novel "I Have No Mouth, and I Must Scream" by Harlan Ellison. I read an interesting piece on this matter called "AI Will Kill Our Grandchildren". Here is the link:

http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html

It will be the end of everything we know, and sadly I doubt there will be room for carbon-based humans. I quote a snippet from the link:

What is certain is that an intelligence that was good at world domination would, by definition, be good at world domination. So if there were a large number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. That is just Darwin's evolution taken to the next level.
"It is really good to get the double observatory if you want to get the speed and sight range for the observer simultaneously. It's a little bit of an advanced tactic, and by advanced, I mean really fucking bad."
Kickboxer
Profile Blog Joined November 2010
Slovenia1308 Posts
October 02 2012 08:38 GMT
#26
Wake me up when they develop a robot that can appreciate art the way a 10-year-old kid can
mememolly
Profile Joined December 2011
4765 Posts
Last Edited: 2012-10-02 08:58:31
October 02 2012 08:58 GMT
#27
Ray Kurzweil wants to resurrect his dead father; nothing wrong with that, but don't think that guy isn't slightly delusional.

No way the singularity will happen in 15-30 years. Moore's Law actually suggests that the growth of computing power will slow down in the future if we stick with silicon technology; Michio Kaku explains why below.


zefreak
Profile Blog Joined December 2011
United States2731 Posts
October 02 2012 09:11 GMT
#28
I'm an avid reader of the Singularity Institute's www.lesswrong.com, and while they distance themselves from the Kurzweil crowd, I think everyone should check it out. There are great articles on information theory, rationality, and other topics that are only tangentially related to the Singularity.
www.gosu-sc.com - Starcraft News, Strategy and Merchandise
zefreak
Profile Blog Joined December 2011
United States2731 Posts
October 02 2012 09:12 GMT
#29
On October 02 2012 17:58 mememolly wrote:
Ray Kurzweil wants to resurrect his dead father; nothing wrong with that, but don't think that guy isn't slightly delusional.

No way the singularity will happen in 15-30 years. Moore's Law actually suggests that the growth of computing power will slow down in the future if we stick with silicon technology; Michio Kaku explains why below: http://youtu.be/bm6ScvNygUU



That's a big if.
www.gosu-sc.com - Starcraft News, Strategy and Merchandise
DonKey_
Profile Joined May 2010
Liechtenstein1356 Posts
Last Edited: 2012-10-02 09:30:18
October 02 2012 09:28 GMT
#30
On October 02 2012 17:31 HowardRoark wrote:
If humanity survives the Singularity, it will probably be like in the novel "I Have No Mouth, and I Must Scream" by Harlan Ellison. I read an interesting piece on this matter called "AI Will Kill Our Grandchildren". Here is the link:

http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html

It will be the end of everything we know, and sadly I doubt there will be room for carbon-based humans. I quote a snippet from the link:

What is certain is that an intelligence that was good at world domination would, by definition, be good at world domination. So if there were a large number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. That is just Darwin's evolution taken to the next level.

What's to stop us from simply allowing it to self-replicate to a degree and stopping at a certain point, before it becomes uncontrollable? I don't see a need for anyone to allow it to self-replicate infinitely.
`Oh, you can't help that,' said the Cat: `we're all mad here. I'm mad. You're mad.'
Jumbled
Profile Joined September 2010
1543 Posts
October 02 2012 09:34 GMT
#31
On October 02 2012 17:10 xmungam wrote:
AAAAAAAAAAA i'm freaking out but i'm gonna hit post so GG GN GL HF DQ REAL LIFE

Are you sure you weren't looking for the High Thread?

While the singularity is an interesting idea, there's no reason to believe it's actually imminent, and it certainly has nothing to do with Christian mythology.
mememolly
Profile Joined December 2011
4765 Posts
October 02 2012 09:54 GMT
#32
On October 02 2012 18:12 zefreak wrote:
On October 02 2012 17:58 mememolly wrote:
Ray Kurzweil wants to resurrect his dead father; nothing wrong with that, but don't think that guy isn't slightly delusional.

No way the singularity will happen in 15-30 years. Moore's Law actually suggests that the growth of computing power will slow down in the future if we stick with silicon technology; Michio Kaku explains why below: http://youtu.be/bm6ScvNygUU



That's a big if.


The alternatives are miles off from achieving the things silicon chips can do. It's not like we just switch to molecular computers and technological growth spurts again; it will take until the end of the century to achieve anything significant with non-silicon.
Zrana
Profile Blog Joined August 2010
United Kingdom698 Posts
October 02 2012 10:26 GMT
#33
On October 02 2012 17:31 HowardRoark wrote:
If humanity survives the Singularity, it will probably be like in the novel "I Have No Mouth, and I Must Scream" by Harlan Ellison. I read an interesting piece on this matter called "AI Will Kill Our Grandchildren". Here is the link:

http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html

It will be the end of everything we know, and sadly I doubt there will be room for carbon-based humans. I quote a snippet from the link:

What is certain is that an intelligence that was good at world domination would, by definition, be good at world domination. So if there were a large number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. That is just Darwin's evolution taken to the next level.



I disagree with this; I don't think it's been really thought through. There's way too much apocalyptic sensationalism going around when people talk about AIs. I have no idea why people seem to take the Terminator films so literally...

The premise here is that an AI eventually would want to dominate the world, and if it was capable of it, it would succeed.
This is another case of humans assuming computers think like humans. We are driven by survival instincts to reproduce and make sure our offspring are as successful as possible. This is the result of millions of years of DNA being copied and recopied, with small differences cropping up every so often. The DNA that codes for characteristics allowing it to reproduce most effectively is the DNA still around today. The article assumes a similar thing would happen with technology, except that technology doesn't reproduce in quite the same way biology does.

However, I think the article misses a basic fact: humans think in terms of war and domination because we are simply vehicles for our genetic code, which commands us to survive and reproduce. If the first truly intelligent AI is ever created, it would not have this purpose. It would in fact have no purpose at all other than the tasks it is set. It wouldn't even have the need to survive, other than to perform whatever tasks it's given. It would have no fear of death, as fear of death arises from our need to reproduce and protect offspring. So for an intelligent AI to try to dominate the world, a number of conditions must be met (assuming a human doesn't simply design an AI with the goal of world domination):

1. Multiple AIs would need to be created
2. At least some of these AIs would have a survival instinct
3. Some of those AIs would feel that their survival is threatened by other AIs or humans
4. Some of those would feel that the best course of action against no.3 was attacking (not peaceful resolution)
5. Attacking would be done via hardware, not software
6. AIs wishing to attack via hardware would have a physical way to manifest their will (killer robots, nanobots, etc.)
7. Any AI emerging victorious would have to feel that all humanity was in some way a threat to it.

To compare with biology again: the leap from life arising to life attacking other life took an incredibly long time, and that was with life already able to act in the physical world. I don't think that humans will create an exponentially increasing intelligence and then immediately be wiped out. Simply having an instinct to survive would be unlikely, and even if it did, why shouldn't it be content (though "content" is not the right word if it is emotionless) simply existing?

We are also overlooking the decisions a being vastly more intelligent than humans might make. Human morality is derived from our evolution so we have no idea what kind of (if any) morality an AI will have. Maybe none, maybe something left over from an earlier iteration of itself when it was given some command by humans, maybe it would inherit our morality or perhaps it would know some kind of higher morality that we can't yet think of.

If indeed an AI did develop in a hyper-accelerated evolution as that article says, then how is it different from a new species arising, or indeed a new nation? There is only war if one side feels threatened or wants something the other side has; otherwise peace is the better course of action.

Acetone
Profile Blog Joined February 2012
United States200 Posts
Last Edited: 2012-10-02 10:50:39
October 02 2012 10:49 GMT
#34
This is a very interesting idea that I came across a few years ago. However, I'm of the opinion that 10-25 years (accounting for this thread's age) is an extremely generous estimation. As fast as technology is improving, as the Michio Kaku video hinted at, research is a very slow process. Having participated in it (I'm a graduate engineering student) for about a year now, I can tell you that materials research is no exception. A future like (and perhaps even more advanced than) what is seen in Ghost in the Shell does seem like an inevitability, but I don't think it'll be anytime soon.

On September 28 2006 19:36 travis wrote:
It's very vague because I wanted to write a few paragraphs, not an entire book.

AI already has learned things that humans don't know. I'm pretty sure it was AI that solved pi to as many digits as we currently know. There, that's an example.

I don't think determining the value of pi to tens of trillions of digits is what BlackJack was referring to as "things that humans don't know," and I agree with him. In terms of an AI discovering "new knowledge," I think along the lines of a completely new idea, or at the very least, something more creative than determining extra digits of a number that humanity has known about for perhaps over 4,000 years.

Also, AI has had nothing to do with calculations of pi (at least ones that have been made public). Computers (built and developed by people) running infinite series and iterative algorithms (developed by people) are what we have to thank for the increasing precision of recent (in the history of pi, the mid-1900s is very recent) calculations of pi.
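To make that concrete, here is a rough sketch of the kind of human-designed iterative method involved. It uses Machin's 1706 arctangent formula with plain integer arithmetic; record-setting computations use much faster series (e.g. Chudnovsky), but the principle is the same: people supply the algorithm, the machine just grinds.

```python
# Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
# Plain integer (fixed-point) arithmetic, so precision is arbitrary.

def arctan_series(x: int, scale: int) -> int:
    """arctan(1/x) * scale, summed from the alternating Taylor series."""
    term = scale // x
    total, n, sign = term, 1, 1
    while term:
        term //= x * x              # next power of 1/x^2
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total

def pi_digits(digits: int) -> str:
    guard = 10                      # extra digits to absorb truncation error
    scale = 10 ** (digits + guard)
    pi = (16 * arctan_series(5, scale) - 4 * arctan_series(239, scale)) // 10 ** guard
    s = str(pi)
    return s[0] + "." + s[1:digits]  # drop the last digit, which may be off by one

print(pi_digits(30))                # -> 3.14159265358979323846264338327
```

No learning, no discovery: every digit that comes out was implicit in a formula a person wrote down three centuries ago.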
Where's my rtzW option for favorite Dota 2 team
xmungam
Profile Joined July 2012
United States1050 Posts
Last Edited: 2012-10-02 22:28:52
October 02 2012 22:27 GMT
#35
On October 02 2012 17:31 HowardRoark wrote:
If humanity will survive the Singularity it will probably be like in the novel "I Have No Mouth, and I Must Scream" by Harlan Ellison. I read an interesting piece on this matter called AI will kill our grand children. Here is the link:

http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html

It will be the end of everything we know, and sadly I doubt there will be room for carbon based humans. I quote a snippet from the link:

What is certain is that an intelligence that was good at world domination would, by definition, be good at world domination. So if there were a large number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. That is just Darwin's evolution taken to the next level.


you, howard roark, have brought up an interesting point on artificial intelligence, but i for one don't believe a computer will ever be smarter than humanity.

i think the real benefit from computers comes in the form of Linkage. we live in the age of information, where you can literally type in any question and get an answer.

I think that by linking all of our brains (via the internet) we can solve every problem. Trying to create an AI to do this would take way too long and not work / we would all die.

Also i agree with Kickboxer.

Humans will always be better than computers.

If you are old you don't understand: the new generation is coming. The #1 fastest-growing type of information is computer literacy and technology, so think about what the world will be like for people who started learning to code in 1st grade (which is reality now for 1st graders)

The people who are graduating from college now will change everything.

The reason I bumped this thread was NOT to talk about the "technological singularity", it was to talk about "THE SINGULARITY". Does this mean i should create a new topic on tl.net for it?
youtube.com/xmungam ~~ twitch.tv/thenessman
summerloud
Profile Joined March 2010
Austria1201 Posts
Last Edited: 2012-10-02 22:56:59
October 02 2012 22:52 GMT
#36
technological singularity is a pipe dream brought forth by aging pseudo-scientists that can't cope with their mortality and thus postulate that mankind will beat it before they die...

if you believe stuff à la ray kurzweil and all these other utopians you might just as well go back to reading popular science magazines and believing the shit they promise you for the next 5-10 years

to believe ANYTHING can keep on growing exponentially is just plain silly. leave it to economists and futurologists to fall for that...

http://www.smbc-comics.com/index.php?db=comics&id=1968#comic

oh and i feel like i have to puke every time i see this michio kaku guy with his overblown ego simplifying things for the layman and putting everything in a way that will get as much attention as possible. typical US TV personality...
Maxd11
Profile Joined July 2011
United States680 Posts
October 02 2012 22:54 GMT
#37
Only 11 years left!
I looked in the mirror and saw biupilm69t
HowitZer
Profile Joined February 2003
United States1610 Posts
October 02 2012 23:14 GMT
#38
A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo-randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine could ever do anything that we cannot understand, given that we built it and it is not alive.
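To illustrate: any fixed wiring of gates maps each input to exactly one output. A minimal sketch (NAND alone is enough to build everything else):

```python
# A gate is a pure function: the same inputs always give the same output.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# Classic result: every other gate can be wired up from NAND alone.
def xor(a: int, b: int) -> int:
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# A "machine" is just a fixed wiring of such gates, e.g. a half adder.
def half_adder(a: int, b: int) -> tuple:
    return xor(a, b), 1 - nand(a, b)   # (sum bit, carry bit); AND = NOT NAND

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```

Run it once or a million times: the truth table never changes, which is exactly the predictability in question.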
Human teleportation, molecular decimation, breakdown and reformation is inherently purging. It makes a man acute.
RageBot
Profile Joined November 2010
Israel1530 Posts
October 02 2012 23:35 GMT
#39
On October 02 2012 18:28 DonKey_ wrote:
On October 02 2012 17:31 HowardRoark wrote:
If humanity will survive the Singularity it will probably be like in the novel "I Have No Mouth, and I Must Scream" by Harlan Ellison. I read an interesting piece on this matter called AI will kill our grand children. Here is the link:

http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html

It will be the end of everything we know, and sadly I doubt there will be room for carbon based humans. I quote a snippet from the link:

What is certain is that an intelligence that was good at world domination would, by definition, be good at world domination. So if there were a large number of artificial intelligences, and just one of them wanted to and was capable of dominating the world, then it would. That is just Darwin's evolution taken to the next level.

What's to stop us from simply allowing it to self replicate to a degree and stopping at a certain point before it becomes uncontrollable. I don't see a need for anyone to allow it to infinitely self replicate.

How can we stop it from getting over the limits we've set for it?
xmungam
Profile Joined July 2012
United States1050 Posts
October 02 2012 23:44 GMT
#40
On October 03 2012 08:14 HowitZer wrote:
A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine can ever do anything that we cannot understand when we built them and they are not alive.


exactly, we would never be able to create something smarter than ourselves.

also think about the possibility of linking up all human minds; that would be so much more powerful than any stand-alone computer we could build.

what makes tools powerful is the user, this applies to weapons And the internet.
youtube.com/xmungam ~~ twitch.tv/thenessman
sam!zdat
Profile Blog Joined October 2010
United States5559 Posts
October 02 2012 23:45 GMT
#41
On October 03 2012 08:44 xmungam wrote:
On October 03 2012 08:14 HowitZer wrote:
A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine can ever do anything that we cannot understand when we built them and they are not alive.


exactly, we would never be able to create something smarter than ourselves.


Don't be silly. Every time you read a book you create something smarter than yourself.
shikata ga nai
Ropid
Profile Joined March 2009
Germany3557 Posts
October 02 2012 23:47 GMT
#42
On October 03 2012 08:14 HowitZer wrote:
A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine can ever do anything that we cannot understand when we built them and they are not alive.

All parts of the brain derive their function and program themselves from their surroundings. This is more or less proven by people recovering from strokes, for example, where parts of the brain reprogram themselves to replace the lost parts.

Imagine someone's brain successively being replaced by more and more artificial parts that learn from their surroundings what they should do, the process being done slow enough that the person's character does not noticeably change. Accomplishing this process theoretically only depends on engineering a tiny artificial part that can replace and interface with neurological tissue.

At the end you would have a human person, with a completely artificial brain, and no one would have had to know how to create an actual AI.
"My goal is to replace my soul with coffee and become immortal."
zefreak
Profile Blog Joined December 2011
United States2731 Posts
October 02 2012 23:49 GMT
#43
On October 03 2012 08:44 xmungam wrote:
On October 03 2012 08:14 HowitZer wrote:
A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine can ever do anything that we cannot understand when we built them and they are not alive.


exactly, we would never be able to create something smarter than ourselves.

also think about the possibility of linking up all human minds, that would be so much more powerful than any stand alone computer we could build.

what makes tools powerful is the user, this applies to weapons And the internet.


What makes you guys think human brains are different? We are just incredibly complex machines, and our brains are incredibly complex computers. The thought that human intelligence is even close to the upper bound of possible intelligences is untenable.
www.gosu-sc.com - Starcraft News, Strategy and Merchandise
eshlow
Profile Joined June 2008
United States5210 Posts
October 03 2012 00:01 GMT
#44
This will develop very rapidly once there are huge breakthroughs in quantum computing.

That's pretty much what is going to be the determining factor, as that will increase processing power and storage to an enormous degree
Overcoming Gravity: A Systematic Approach to Gymnastics and Bodyweight Strength
biology]major
Profile Blog Joined April 2010
United States2253 Posts
October 03 2012 01:16 GMT
#45
The definition of AI needed to reach the technological singularity is very different from "robot with feelings/experiences/self-awareness". In an article written by Luke Muehlhauser:

"we will not assume that human-level intelligence can be realized by a classical Von Neumann computing architecture, nor that intelligent machines will have internal mental properties such as consciousness or human-like “intentionality,” nor that early AIs will be geographically local or easily “disembodied.” These properties are not required to build AI, so objections to these claims (Lucas 1961; Dreyfus 1972; Searle 1980; Block 1981; Penrose 1994; van Gelder and Port 1995) are not objections to AI (Chalmers 1996, chap. 9; Nilsson 2009, chap. 24; McCorduck 2004, chap. 8 and 9; Legg 2008; Heylighen 2012) or to the possibility of intelligence explosion (Chalmers, forthcoming). For example: a machine need not be conscious to intelligently reshape the world according to its preferences, as demonstrated by goal-directed “narrow AI” programs such as the leading chess-playing programs."

The quote was specifically referring to the Chinese Room objection by John Searle, which states that machines can never truly "understand" the processes they undertake. However, understanding, experience, and feelings all belong to HUMAN intelligence. There are multiple types of intelligence that could take us to the technological singularity, even intelligence that does not resemble us in the slightest. It is rather inevitable that we will see this singularity occur before the end of the 21st century, given the progress in raw computing power/hardware we make each year; software is the actual bottleneck, and if we don't invest in research on SAFE AI (who cares about safety amirite?), it will literally be the end of humanity.
Question.?
mmp
Profile Blog Joined April 2009
United States2130 Posts
October 03 2012 01:26 GMT
#46
Everyone I know in AI research is highly pessimistic with regards to these wishlist items. Economic reality and computational limitations make this kind of future society very very very unlikely to occur in the near future.

Many many many times more likely is nuclear annihilation within 100 years. Sorry.
I (λ (foo) (and (<3 foo) ( T_T foo) (RAGE foo) )) Starcraft
mmp
Profile Blog Joined April 2009
United States2130 Posts
Last Edited: 2012-10-03 01:40:23
October 03 2012 01:37 GMT
#47
Also, Moore's Law is running out of time. Chips are already more energy-dense than the sun.
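The "energy-dense" comparison is per unit volume. A back-of-envelope check, assuming a ~100 W die of roughly 1 cm × 1 cm × 1 mm and textbook values for the Sun's luminosity and radius:

```python
from math import pi

# Assumed figures: a ~100 W desktop CPU die (1 cm x 1 cm x 1 mm)
# vs. the Sun averaged over its entire volume.
cpu_power = 100.0                          # watts
cpu_volume = 0.01 * 0.01 * 0.001           # m^3 (= 1e-7)
cpu_density = cpu_power / cpu_volume       # ~1e9 W/m^3

sun_power = 3.8e26                         # watts (solar luminosity)
sun_radius = 6.96e8                        # m
sun_volume = 4 / 3 * pi * sun_radius ** 3  # ~1.4e27 m^3
sun_density = sun_power / sun_volume       # ~0.27 W/m^3

print(f"CPU: {cpu_density:.1e} W/m^3, Sun: {sun_density:.2f} W/m^3")
print(f"ratio: {cpu_density / sun_density:.1e}")
```

Averaged over its volume the Sun is a feeble heater (fusion is that slow); the chip wins by nine-plus orders of magnitude, which is why heat, not transistor count, is the wall.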

New advances will come in the form of alternative media (biological, massively parallel synthetic, quantum, photonic/Feynman?), and will require new ways of thinking about computing. There are plenty of great ideas that have already been researched, but the cost of bringing them to market is extremely steep when traditional hardware gets the job done.

You'll see cheaper -- more ubiquitous -- machinery in coming years, but the problems of complexity are still wide open.

But keep in mind that economic realities hinder the development of new technology, as well as limit access to technology. Everything has a price.
I (λ (foo) (and (<3 foo) ( T_T foo) (RAGE foo) )) Starcraft
Nevermind86
Profile Joined August 2009
Somalia429 Posts
October 03 2012 01:56 GMT
#48
This whole thing sounds like Metal Gear Solid's nanomachines and stuff. Pretty cool subject; good job bringing this thread back, it was a great read.
Interviewer: Many people hate you and would like to see you dead. How does that make you feel? Trevor Goodchild: Those people should get to know me a little better. Then they'd know I don't indulge in feelings.
dannystarcraft
Profile Blog Joined October 2011
United States179 Posts
October 03 2012 01:58 GMT
#49
On September 28 2006 18:30 travis wrote:
Just imagine the behind-the-scenes technology that exists in the military. From reading Ray's book (he's one of 3 or 4 advisors to the military regarding the budgeting of money for technological research), I know that the military is very close to having nanotech shots that can enhance muscle performance for soldiers, allowing them to carry huge guns or machinery for long distances. And this kind of technology is just the start.

STIMPACKS!!! It will be OP in real life too ^^

I still don't think we will reach the singularity. As people earlier in the thread have stated, there might be a machine that can reason faster than a human, but never one that reasons in a way a human cannot.
Aerisky
Profile Blog Joined May 2012
United States12129 Posts
October 03 2012 02:11 GMT
#50
Whoa, cool bump. This is a very interesting topic for me as well... so many possibilities in the future. AI, while certainly possible, will probably take a while to achieve as we imagine it.

However, there is the distinct possibility that there may not be enough latent energy obtainable on Earth (i.e. the sun/available resources on our planet) for us to do certain things, e.g. be self-sustaining over a prolonged period of time, travel to another habitable planet, or create AI.
Jim while Johnny had had had had had had had; had had had had the better effect on the teacher.
GnarlyArbitrage
Profile Blog Joined October 2011
575 Posts
October 03 2012 02:33 GMT
#51
Moore's law is falling apart, guys.
Mataza
Profile Blog Joined August 2010
Germany5364 Posts
October 03 2012 03:08 GMT
#52
I still don't know what he means by singularity. I gather it is some form of future utopia.
I find it more likely (and safer) to increase human memory with actual computer parts. Human augmentation all the way (Deus Ex > Metal Gear Solid). Imagine a world where you wouldn't forget things you were just thinking about.
Human creativity is way beyond anything any AI is ever expected to attain. However, the human brain definitely has flaws.

So the singularity, what is that supposed to mean exactly? That we get to a point where we cannot improve further, or what?
If nobody hates you, you´re doing something wrong. However someone hating you doesn´t make you right
Alex1Sun
Profile Blog Joined April 2010
494 Posts
Last Edited: 2012-10-03 07:48:02
October 03 2012 03:52 GMT
#53
On October 03 2012 12:08 Mataza wrote:
I still don´t know what he means by singularity. I could gather it is some form of future utopia.
I find it more likely(and safe) to increase human memory with actual computer parts. Human augmentation all the way(Deus Ex>Metal Gear Solid). Imagine a world where you wouldn´t forget things you were just thinking about.
Human creativity is way beyond anything any AI is ever expected to gain. However the human brain definitely has flaws.

So the singularity, what is that supposed to mean exactly? That we get to a point where we cannot improve further or what?

It is supposed to mean that humans will be entirely replaced by machines. With technology slowing down, however, I don't see it happening anytime soon. CPU cores are NOT getting much faster now; Moore's law no longer works for single cores, and simply adding more cores requires more energy and space. So this kind of growth will stop, or at least dramatically slow down, soon.

Quite a lot of tech is even getting reversed. We are no longer flying to the moon. We don't even have Concordes anymore. Energy is getting more expensive, since the low-hanging fruit of cheap fossil fuels (especially oil) has already been burned; what remains is harder and slower to get. We don't even have the expertise in nuclear energy anymore: proponents of thorium or fusion will tell you how frustrated they are by the near-total lack of support in these areas.

Is the singularity possible? Perhaps... but definitely not this century. Likely not the next one either.
This is not Warcraft in space!
xmungam
Profile Joined July 2012
United States1050 Posts
October 03 2012 07:48 GMT
#54
On October 03 2012 08:45 sam!zdat wrote:
On October 03 2012 08:44 xmungam wrote:
On October 03 2012 08:14 HowitZer wrote:
A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine can ever do anything that we cannot understand when we built them and they are not alive.


exactly, we would never be able to create something smarter than ourselves.


Don't be silly. Every time you read a book you create something smarter than yourself.


I don't create something smarter than myself, I simply Become smarter than I already was.

really important point here: A book ISN'T SMART. A book, by itself, can do Nothing. It is only when I, the reader, look at this book that the information it holds becomes something that can be used.

This is the same for computers in the sense that they are not real. A computer, no matter how smart it is, will NEVER go unless we say "go". This is completely untrue of humans, who are SELF-DETERMINED and can make choices for ourselves (even if we use the same processes as computers (e.g. 'gates'))

So the singularity, what is that supposed to mean exactly?

When "all" of humanity units itself... when "all" of us become one... when we all see the light... when 100% of people are happy. when we all understand everything + life + more.


I find it more likely(and safe) to increase human memory with actual computer parts.

So you mean like the 500-gigabyte hard drive in my computer? where i can store 10000000000000 definitions and statements and never have to look back? Or do you mean the internet, where i can type anything into google and get 10000000 results??

ok let me take a hit brb
youtube.com/xmungam ~~ twitch.tv/thenessman
sam!zdat
Profile Blog Joined October 2010
United States5559 Posts
Last Edited: 2012-10-03 08:03:27
October 03 2012 08:00 GMT
#55
On October 03 2012 16:48 xmungam wrote:
On October 03 2012 08:45 sam!zdat wrote:
On October 03 2012 08:44 xmungam wrote:
On October 03 2012 08:14 HowitZer wrote:
A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine can ever do anything that we cannot understand when we built them and they are not alive.


exactly, we would never be able to create something smarter than ourselves.


Don't be silly. Every time you read a book you create something smarter than yourself.


I don't create something smarter than myself, i simply Become smarter than i already was .


So you claim that, at t1, you are identical to yourself at t2?

(edit: why would anyone want to bother making an artificial intelligence when there are already so many intelligences running around)
shikata ga nai
xmungam
Profile Joined July 2012
United States1050 Posts
October 03 2012 08:02 GMT
#56
the thing I want to talk about right now is this : THE INTERNET!!!!!!!!!!

Do you realize that we are communicating using mostly our minds?

I am writing this down... and now you can read it and respond... what the fuck? we don't even know each other or have ANY idea where the other person is , and yet we can TALK and DISCUSS and LEARN -- RIGHT NOW!!!!!!!!!

LOOK AT REDDIT: literally millions of people log in and COMMENT , this is Undeniably a conversation happening at the scale of 100,000 people. W - T - F.

I dream of a day, where everyone goes on the internet at once
youtube.com/xmungam ~~ twitch.tv/thenessman
xmungam
Profile Joined July 2012
United States1050 Posts
October 03 2012 08:03 GMT
#57
On October 03 2012 17:00 sam!zdat wrote:
On October 03 2012 16:48 xmungam wrote:
On October 03 2012 08:45 sam!zdat wrote:
On October 03 2012 08:44 xmungam wrote:
On October 03 2012 08:14 HowitZer wrote:
A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine can ever do anything that we cannot understand when we built them and they are not alive.


exactly, we would never be able to create something smarter than ourselves.


Don't be silly. Every time you read a book you create something smarter than yourself.


I don't create something smarter than myself, i simply Become smarter than i already was .


So you claim that, at t1, you are identical to yourself at t2?

lol

Reading(t1) = 5
R(t2) = 6

me(t1) < Me(t2)

or intelligence ++
youtube.com/xmungam ~~ twitch.tv/thenessman
Jockmcplop
Profile Blog Joined February 2012
United Kingdom9712 Posts
October 03 2012 08:04 GMT
#58
RIP Meatloaf <3
sam!zdat
Profile Blog Joined October 2010
United States5559 Posts
Last Edited: 2012-10-03 08:05:28
October 03 2012 08:05 GMT
#59
On October 03 2012 17:02 xmungam wrote:
LOOK AT REDDIT: literally millions of people log in and COMMENT , this is Undeniably a conversation happening at the scale of 100,000 people. W - T - F.


Ah, yes, but what are the discursive characteristics of this medium? (I think there is too much noise)
shikata ga nai
sam!zdat
Profile Blog Joined October 2010
United States5559 Posts
October 03 2012 08:06 GMT
#60
On October 03 2012 17:03 xmungam wrote:
On October 03 2012 17:00 sam!zdat wrote:
On October 03 2012 16:48 xmungam wrote:
On October 03 2012 08:45 sam!zdat wrote:
On October 03 2012 08:44 xmungam wrote:
On October 03 2012 08:14 HowitZer wrote:
A machine can be described as a set of logic gates that takes input and gives predictable output. At best, pseudo randomness can be simulated by automatically changing the machine based on environmental factors. I completely fail to see how a machine can ever do anything that we cannot understand when we built them and they are not alive.


exactly, we would never be able to create something smarter than ourselves.


Don't be silly. Every time you read a book you create something smarter than yourself.


I don't create something smarter than myself, i simply Become smarter than i already was .


So you claim that, at t1, you are identical to yourself at t2?

lol

Reading(t1) = 5
R(t2) = 6

me(t1) < Me(t2)

or intelligence ++


error: function me() undefined
shikata ga nai
Tobberoth
Profile Joined August 2010
Sweden6375 Posts
October 03 2012 08:10 GMT
#61
On October 03 2012 16:48 xmungam wrote:
When "all" of humanity units itself... when "all" of us become one... when we all see the light... when 100% of people are happy. when we all understand everything + life + more.

This is not really correct, nor is the OP, which says the singularity is when artificial intelligence surpasses our own. What it really is is the same as a singularity in space: a point where our logic breaks down and we can't understand what's beyond it. The idea is best exemplified by artificial intelligence, though: if we make an AI which can make itself smarter, and it does so exponentially, there will eventually be a point where it's so smart that it's beyond our comprehension, and it will STILL develop exponentially, so we will never be able to catch up.

It doesn't have to be AI, though. You could make the case that the internet is a form of singularity: people without the concept can't really understand all that is possible once it's there; it changes everything.

The idea of "utopia" and everything being awesome after a singularity event is just an assumption: IF an AI reached the point where we can't understand how it develops anymore, and it kept developing exponentially, it would be able to do things for us that we can't do ourselves and could solve our problems in ways we can't possibly imagine.

That, or it could become skynet and kill us all.
Zahir
Profile Joined March 2012
United States947 Posts
October 03 2012 08:12 GMT
#62
It's a nice thought, but then, people thought that by the year 2012 we'd have rocket boots and robot butlers. Computers are the shit right now, so when people picture the future, they just extrapolate current trends. In reality what happens is that computer development slows down due to inherent limitations while some unexplored field suddenly bursts with development, enriching life in very basic but practical ways. My money is on genetics and biotech. People are already "connected" to an insane degree; it's time to focus on smart bacteria and genetic enhancement.
What is best? To crush the Zerg, see them driven before you, and to hear the lamentations of the Protoss.
sam!zdat
Profile Blog Joined October 2010
United States5559 Posts
October 03 2012 08:14 GMT
#63
In general, the next step in technological development involves ways to efficiently turn information into things.
shikata ga nai
Destructicon
Profile Blog Joined September 2011
4713 Posts
October 03 2012 09:44 GMT
#64
I don't think it is possible to reach this "technological singularity", and even if it is, it will take a lot longer than 15-30 years.

The problem is, you can't grow exponentially forever. Everything has a limit; we live in a finite world, and you can't grow to infinity within it. We are already starting to see it with Moore's law: it's breaking down because we are reaching the limit of what we can do with current technology. At some point we will need to invent new technology like quantum computing, then eventually we might discover the limits of that as well, and the cycle repeats.
But no matter how many new technologies we find, there will always be a limit to how much we can improve them, and at some point there might be a limit to how many new technologies we can discover.

In summary, this is why I believe the technological singularity isn't possible: even if an AI could be smart enough to understand and modify its own code, eventually it will reach the limit of how much it can improve itself without outside resources, like new processors or a whole new technology.
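The limits argument in a toy model: the same growth rate with and without a hard resource ceiling (a logistic cap). The numbers are arbitrary; the point is the diverging shapes.

```python
# Toy comparison: unbounded exponential growth vs. growth throttled
# by a finite carrying capacity K (discrete logistic growth).

def exponential(x0: float, r: float, steps: int) -> float:
    x = x0
    for _ in range(steps):
        x += r * x                 # grows by a fixed fraction forever
    return x

def capped(x0: float, r: float, K: float, steps: int) -> float:
    x = x0
    for _ in range(steps):
        x += r * x * (1 - x / K)   # growth stalls as x approaches K
    return x

for steps in (10, 50, 100):
    print(f"{steps:3d} steps: exponential {exponential(1, 0.5, steps):.3g}, "
          f"with ceiling {capped(1, 0.5, 1000, steps):.3g}")
```

Early on the two curves are nearly indistinguishable, which is why extrapolating an exponential always looks plausible; the difference only shows up once the ceiling starts to bite.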
WriterNever give up, never surrender! https://www.youtube.com/user/DestructiconSC
GizmoPT
Profile Joined May 2010
Portugal3040 Posts
October 03 2012 09:47 GMT
#65
The problem with technology is that someone made it and someone can modify it, so they can always put backdoors into it
Snipers Promod & Micro Arena Creator in SC2 Arcade - Portuguese Community Admin for SC2, HotS and Overwatch - Ex-Portugal SC2 Team Manager, Ex- Copenhagen Wolves and Grow uP Gaming Manager in SC2. Just Playing games now!
Tobberoth
Profile Joined August 2010
Sweden6375 Posts
Last Edited: 2012-10-03 09:53:21
October 03 2012 09:52 GMT
#66
On October 03 2012 18:44 Destructicon wrote:
I don't think it is possible to reach this "technological singularity" and even if it is, it will take a lot longer than 15-30 years.

The problem is, you can't grow exponentially forever. Everything has a limit, we live in a finite world, you can't grow to infinity within it. We are already starting to see it with Moore's law, its breaking down because we are reaching the limit to what we can do with current technology, at some point we will need to invent new technology like quantum computing, then eventually we might discover the limits to that as well and the cycle repeats.
But no matter how many new technologies we find, there will always be a limit to how much we can improve them, and at some point there might be a limit to how many new technologies we discover.

In summary, this is why I believe it technological singularity isn't possible, even if an AI could be smart enough to understand and modify its own code, eventually it will reach the limit of how much it can improve itself without outside sources, like new processors or a whole new technology.

That's the whole point. The AI improves exponentially; when it reaches limitations, it creates new technologies to improve itself. It evolves, but it evolves faster than we do, exponentially, so the gap becomes bigger and bigger. The idea of the singularity isn't based on infinity or a lack of limitations; it's based on our level of understanding, evolution and progress, and the idea that something which progresses faster will eventually be beyond our understanding.

You could think of it as a god concept: every "entity" beyond the singularity is like a god to us. We can't understand them, what they are doing, why, or how.
Goozen
Profile Joined February 2012
Israel701 Posts
October 03 2012 09:53 GMT
#67
I was recently on an open panel with some professors and tech industry people, and the one thing they did agree on is that 30 years is a very, very optimistic estimate. Basically it's not a date based on how close we are, but more a date based on the rate of technological discoveries and general scientific advancement. As for whether it would be a good thing, the jury is still out on that one.
Sanctimonius
Profile Joined October 2010
United Kingdom861 Posts
October 03 2012 10:02 GMT
#68
Can I ask, why is it a good thing for this to happen? People talk about AI as if it's obviously something that will benefit the human race - why would it be good to have something disconnected from us all yet self-aware, immortal and self-sustaining? What happens if it decides humanity is too unstable and too big a threat to be allowed to continue?
You live the life you choose.
Goozen
Profile Joined February 2012
Israel701 Posts
October 03 2012 10:27 GMT
#69
On October 03 2012 19:02 Sanctimonius wrote:
Can I ask, why is it a good thing for this to happen? People talk about AI as if it's obviously something that will benefit the human race - why would it be good to have something disconnected from us all yet self-aware, immortal and self-sustaining? What happens if it decides humanity is too unstable and too big a threat to be allowed to continue?


Well, in general scientific advancement is a good thing, but it's still too early to know whether this will be; that's why it's called a singularity, as no one knows what lies beyond that point. As for your second fear, it's not realistic. The easy answer is Isaac Asimov's Three Laws. There are more complex reasons as well, and the majority professional consensus is that it's not a practical fear.
biology]major
Profile Blog Joined April 2010
United States2253 Posts
October 03 2012 16:54 GMT
#70
On October 03 2012 19:02 Sanctimonius wrote:
Can I ask, why is it a good thing for this to happen? People talk about AI as if it's obviously something that will benefit the human race - why would it be good to have something disconnected from us all yet self-aware, immortal and self-sustaining? What happens if it decides humanity is too unstable and too big a threat to be allowed to continue?


It is not a good thing at all; it would only be safe for humanity if we invested more money into research on safe AI than into the actual production of AI itself. Otherwise what you are describing is the most likely possibility. By the time it happens, the hardware will be sufficient for it to make billions of copies of itself and learn EVERYTHING overnight. Given how radically dependent humanity is on machines, it will be the end of us at that point.
Question.?
EffervescentAureola
Profile Blog Joined June 2012
United States410 Posts
October 03 2012 16:57 GMT
#71
This is extremely interesting. I love technology
excitedBear
Profile Joined March 2015
Austria120 Posts
March 24 2015 12:49 GMT
#72
Recently, there have been a couple of prominent people raising concerns about computers taking over humans:

Stephen Hawking
Bill Gates
Steve Wozniak
Elon Musk

"There are two different threat models for AI. One is simply the labor substitution problem. That, in a certain way, seems like it should be solvable because what you are really talking about is an embarrassment of riches. But it is happening so quickly. It does raise some very interesting questions given the speed with which it happens.

Then you have the issue of greater-than-human intelligence. That one, I’ll be very interested to spend time with people who think they know how we avoid that. I know Elon [Musk] just gave some money. A guy at Microsoft, Eric Horvitz, gave some money to Stanford. I think there are some serious efforts to look into could you avoid that problem." - Bill Gates
Roe
Profile Blog Joined June 2010
Canada6002 Posts
March 24 2015 15:35 GMT
#73
This is a bit of a cult isn't it?
darkscream
Profile Blog Joined December 2010
Canada2310 Posts
March 24 2015 17:19 GMT
#74
On March 25 2015 00:35 Roe wrote:
This is a bit of a cult isn't it?


Somewhat.

It's also got a counter-cult: the tinfoil-hat, anti-everything crowd claims the NWO will eventually have the few enslave the many via AI, since AI wouldn't have moral problems oppressing humanity.

As with all things, it's likely somewhere in the middle. After all, an awful lot of police departments are looking into aerial drones, or have already purchased them, for various reasons.
fluffy_pylon
Profile Blog Joined November 2014
United States79 Posts
March 24 2015 18:16 GMT
#75
This is absolutely enthralling! The future holds so much promise and potential for human growth!
Millitron
Profile Blog Joined August 2010
United States2611 Posts
March 24 2015 18:28 GMT
#76
On March 25 2015 02:19 darkscream wrote:
On March 25 2015 00:35 Roe wrote:
This is a bit of a cult isn't it?


Somewhat.

It's also got a counter-cult: the tinfoil-hat, anti-everything crowd claims the NWO will eventually have the few enslave the many via AI, since AI wouldn't have moral problems oppressing humanity.

As with all things, it's likely somewhere in the middle. After all, an awful lot of police departments are looking into aerial drones, or have already purchased them, for various reasons.

The problem with using AI to enslave people is "Why?" If you have AI smart enough to enslave people, it's also smart enough to do any job you'd need those slaves for.

I think it'll end up being like the Federation in Star Trek. Most people would be content just being pampered by the AI, while a few people wouldn't, and would go on to do great things.
Who called in the fleet?
l3loodraven
Profile Joined July 2013
2753 Posts
March 24 2015 19:29 GMT
#77
Humanity and technology share a symbiotic relationship; we have different strengths and weaknesses and complement each other well. Any hypothetical hyper-intelligent AI would be able to recognize that.
"fear.dankness cuts deeper than swords"
Yurie
Profile Blog Joined August 2010
11912 Posts
Last Edited: 2015-03-25 07:25:58
March 25 2015 07:25 GMT
#78
On March 25 2015 04:29 l3loodraven wrote:
Humanity and technology share a symbiotic relationship; we have different strengths and weaknesses and complement each other well. Any hypothetical hyper-intelligent AI would be able to recognize that.


I am also of that opinion, but it isn't set in stone. Once technology is advanced enough (hundreds of years from now), humans no longer serve a purpose; from a purely logical standpoint, the raw materials taken up by humans could be better used in other ways.

http://en.wikipedia.org/wiki/Composition_of_the_human_body#/media/File:201_Elements_of_the_Human_Body-01.jpg
_fool
Profile Joined February 2011
Netherlands678 Posts
March 25 2015 07:48 GMT
#79
Dennett's Frame Problem would like a word
"News is to the mind what sugar is to the body"
rabidch
Profile Joined January 2010
United States20289 Posts
Last Edited: 2015-03-25 07:54:53
March 25 2015 07:51 GMT
#80
I don't think this will take only 15-30 years. There are still major and complex issues regarding how to actually use the computational power, even assuming that it progresses smoothly (I think it will hit size and error-checking ceilings until there are more breakthroughs in physics/chemistry).

So I agree with others who say it's more of a software (and even architectural) problem, which is NOT progressing at the speed of Moore's law.
LiquidDota Staff | Only a true king can play the King.
Biff The Understudy
Profile Blog Joined February 2008
France7916 Posts
Last Edited: 2015-03-25 08:12:51
March 25 2015 08:11 GMT
#81
The whole singularity debate is ridiculous; it assumes that our computers are on their way to becoming intelligent. Keep perfecting them, and one day, "tadaaa", they will start thinking by themselves.

The problem is that a computer is just a powerful calculating machine, and intelligence and consciousness probably have very, very little to do with computing and calculating. Computers can mimic (it's telling that Kurzweil, the pope of transhumanist bullshit, works on speech recognition these days, which looks like intelligence but has nothing to do with it), but they don't think one little bit.

The other problem is that we are probably at point zero in our understanding of intelligence and consciousness. Unlike computers, consciousness is not algorithm-based. Penrose suggested that quantum mechanics could explain consciousness, but that's very controversial. And we can't build a machine that does something we have absolutely no clue about.

Now, transhumanists say "oh, but we are going so fast, it has to happen by (insert a completely random date)". The thing is, you can make predictions about us curing cancer, because we are on our way; you might be wrong, but you can. You can't make predictions about something for which we are at point zero. It could take 15 years or three thousand.

I know Hawking, Gates and some other people talk about it to get some publicity, playing on people's "Frankenstein complex", to quote Asimov. Everybody loves the chill of Skynet stories. It doesn't make them right.
The fellow who is out to burn things up is the counterpart of the fool who thinks he can save the world. The world needs neither to be burned up nor to be saved. The world is, we are. Transients, if we buck it; here to stay if we accept it. ~H.Miller
GreenHorizons
Profile Blog Joined April 2011
United States23435 Posts
March 25 2015 09:18 GMT
#82
On March 25 2015 17:11 Biff The Understudy wrote:
The whole singularity debate is ridiculous; it assumes that our computers are on their way to becoming intelligent. Keep perfecting them, and one day, "tadaaa", they will start thinking by themselves.

The problem is that a computer is just a powerful calculating machine, and intelligence and consciousness probably have very, very little to do with computing and calculating. Computers can mimic (it's telling that Kurzweil, the pope of transhumanist bullshit, works on speech recognition these days, which looks like intelligence but has nothing to do with it), but they don't think one little bit.

The other problem is that we are probably at point zero in our understanding of intelligence and consciousness. Unlike computers, consciousness is not algorithm-based. Penrose suggested that quantum mechanics could explain consciousness, but that's very controversial. And we can't build a machine that does something we have absolutely no clue about.

Now, transhumanists say "oh, but we are going so fast, it has to happen by (insert a completely random date)". The thing is, you can make predictions about us curing cancer, because we are on our way; you might be wrong, but you can. You can't make predictions about something for which we are at point zero. It could take 15 years or three thousand.

I know Hawking, Gates and some other people talk about it to get some publicity, playing on people's "Frankenstein complex", to quote Asimov. Everybody loves the chill of Skynet stories. It doesn't make them right.



I think one of the issues with this line of thinking is that it's based on human intelligence as we know it. AI doesn't have to be smart "like us". AI usually involves some form of self-hacking, where it can take information, interpret it, and modify outcomes accordingly.

It's not hard to see how that takes it to a place where it sees humanity as a threat to both itself and to the AI, and acts to preserve itself and humanity through means we don't agree with (The Matrix / A.I.).

I mean, there is a very real possibility of unmanned military vehicles/weapons platforms being programmed with limited forms of automated self-preservation. That automated self-preservation goes haywire, misidentifies friend and foe, and you can see how we're already on that road.

Imagine a quantum computer with a more advanced self-preservation directive, and one can see pretty easily how, even without "consciousness", such a system could become a threat. Or how, even if there weren't a self-preservation directive, one might arise as a defense against getting corrupted/breached/etc.

It's not something that's likely 10-20 or even 30 years out, but 100 is totally possible, maybe as few as 50. But I'd bet WW III puts the brakes on that before then.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
excitedBear
Profile Joined March 2015
Austria120 Posts
March 25 2015 09:23 GMT
#83
Consciousness is just a consequence of sensory inputs, neuron anatomy, neuronal wiring, peptide signalling, synaptic plasticity, hormonal states and neuronal integration. It is not a mechanism per se.

Getting this right depends on literally billions of molecular parameters that are the result of evolution.
Each nerve cell has its unique composition and identity due to transcription factors that define the expression patterns of genes.
Each nerve cell is connected to 10-20 other nerve cells via 1000s of synapses.
Each. Single. One.
We have 100 billion nerve cells. That's 1000 trillion synaptic connections.
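A quick order-of-magnitude check on those figures (my sketch, not from the post; the ~10,000 synapses per neuron is an assumed value consistent with "1000s of synapses"):

```python
# Sanity check: 100 billion neurons at ~10^4 synapses each
# gives 10^15 synapses, i.e. 1000 trillion.
neurons = 100e9                 # ~100 billion nerve cells
synapses_per_neuron = 10_000    # "1000s of synapses" -> assume ~10^4
total_synapses = neurons * synapses_per_neuron
print(total_synapses)           # 1e+15
```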

Each synaptic connection has its own meaning as a result of millions of years of evolution.
And the whole thing is highly adaptive: synaptic connections can become stronger and weaker, even whole new neurons can form in special regions.

It is quite clear that the von Neumann architecture cannot even begin to capture this complexity.
To build a computer like our brain, we would have to first understand our brain entirely.
However, that is not going to happen within the next 100 years.

There are attempts to mimic the architecture of our brain (see IBM's brain-like chip); however, these are really nothing in comparison to what's really going on.
Daray
Profile Blog Joined August 2010
6006 Posts
March 25 2015 09:38 GMT
#84
On March 25 2015 18:18 GreenHorizons wrote:
On March 25 2015 17:11 Biff The Understudy wrote:
The whole singularity debate is ridiculous; it assumes that our computers are on their way to becoming intelligent. Keep perfecting them, and one day, "tadaaa", they will start thinking by themselves.

The problem is that a computer is just a powerful calculating machine, and intelligence and consciousness probably have very, very little to do with computing and calculating. Computers can mimic (it's telling that Kurzweil, the pope of transhumanist bullshit, works on speech recognition these days, which looks like intelligence but has nothing to do with it), but they don't think one little bit.

The other problem is that we are probably at point zero in our understanding of intelligence and consciousness. Unlike computers, consciousness is not algorithm-based. Penrose suggested that quantum mechanics could explain consciousness, but that's very controversial. And we can't build a machine that does something we have absolutely no clue about.

Now, transhumanists say "oh, but we are going so fast, it has to happen by (insert a completely random date)". The thing is, you can make predictions about us curing cancer, because we are on our way; you might be wrong, but you can. You can't make predictions about something for which we are at point zero. It could take 15 years or three thousand.

I know Hawking, Gates and some other people talk about it to get some publicity, playing on people's "Frankenstein complex", to quote Asimov. Everybody loves the chill of Skynet stories. It doesn't make them right.



I think one of the issues with this line of thinking is that it's based on human intelligence as we know it. AI doesn't have to be smart "like us". AI usually involves some form of self-hacking, where it can take information, interpret it, and modify outcomes accordingly.

It's not hard to see how that takes it to a place where it sees humanity as a threat to both itself and to the AI, and acts to preserve itself and humanity through means we don't agree with (The Matrix / A.I.).

I mean, there is a very real possibility of unmanned military vehicles/weapons platforms being programmed with limited forms of automated self-preservation. That automated self-preservation goes haywire, misidentifies friend and foe, and you can see how we're already on that road.

Imagine a quantum computer with a more advanced self-preservation directive, and one can see pretty easily how, even without "consciousness", such a system could become a threat. Or how, even if there weren't a self-preservation directive, one might arise as a defense against getting corrupted/breached/etc.

It's not something that's likely 10-20 or even 30 years out, but 100 is totally possible, maybe as few as 50. But I'd bet WW III puts the brakes on that before then.


Some sort of military robot turning against its owner is quite a stretch from the technological singularity.
Faggatron
Profile Joined April 2011
United Kingdom65 Posts
March 25 2015 09:41 GMT
#85
On March 25 2015 18:23 excitedBear wrote:
To build a computer like our brain, we would have to first understand our brain entirely.


But in theory, if you were able to scan the entire brain and simulate it within a computer program, you would only have to understand how it works generally, rather than precisely what every single synaptic connection means. Sure, there could be other unforeseen complications, so obviously nothing is certain, but reverse-engineering the brain can come after.

Of course, the scanning equipment and computing power aren't there yet. But they're working on it.
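The "simulate it without decoding it" idea can be illustrated with a toy sketch (my own, not from the post; a random weight matrix stands in for real scan data). You can step the network's dynamics without knowing what any individual connection means:

```python
import random

# Toy "scanned connectome": a random weight matrix we treat as opaque data.
random.seed(0)
N = 50
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
state = [random.uniform(0, 1) for _ in range(N)]

def step(state, weights):
    """One update: each unit sums its weighted inputs, then thresholds."""
    nxt = []
    for i in range(N):
        total = sum(weights[i][j] * state[j] for j in range(N))
        nxt.append(1.0 if total > 0 else 0.0)
    return nxt

for _ in range(10):          # run the dynamics blindly for 10 steps
    state = step(state, weights)
print(sum(state))            # number of active units afterwards
```

The point is only that the simulation loop needs the connectivity data, not an interpretation of it; whether a real scan could capture enough (synaptic plasticity, chemistry, etc.) is the open question raised above.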

Just wait... by 2045 we'll have hologram bodies:
Hollow
Profile Blog Joined July 2005
Canada2180 Posts
Last Edited: 2015-03-25 10:06:52
March 25 2015 10:06 GMT
#86
^ That video was hilarious. Thanks for the laugh. Are there any other joke videos like this that make fun of naive, desperate transhumanists?
GreenHorizons
Profile Blog Joined April 2011
United States23435 Posts
March 25 2015 10:42 GMT
#87
On March 25 2015 18:38 Daray wrote:
On March 25 2015 18:18 GreenHorizons wrote:
On March 25 2015 17:11 Biff The Understudy wrote:
The whole singularity debate is ridiculous; it assumes that our computers are on their way to becoming intelligent. Keep perfecting them, and one day, "tadaaa", they will start thinking by themselves.

The problem is that a computer is just a powerful calculating machine, and intelligence and consciousness probably have very, very little to do with computing and calculating. Computers can mimic (it's telling that Kurzweil, the pope of transhumanist bullshit, works on speech recognition these days, which looks like intelligence but has nothing to do with it), but they don't think one little bit.

The other problem is that we are probably at point zero in our understanding of intelligence and consciousness. Unlike computers, consciousness is not algorithm-based. Penrose suggested that quantum mechanics could explain consciousness, but that's very controversial. And we can't build a machine that does something we have absolutely no clue about.

Now, transhumanists say "oh, but we are going so fast, it has to happen by (insert a completely random date)". The thing is, you can make predictions about us curing cancer, because we are on our way; you might be wrong, but you can. You can't make predictions about something for which we are at point zero. It could take 15 years or three thousand.

I know Hawking, Gates and some other people talk about it to get some publicity, playing on people's "Frankenstein complex", to quote Asimov. Everybody loves the chill of Skynet stories. It doesn't make them right.



I think one of the issues with this line of thinking is that it's based on human intelligence as we know it. AI doesn't have to be smart "like us". AI usually involves some form of self-hacking, where it can take information, interpret it, and modify outcomes accordingly.

It's not hard to see how that takes it to a place where it sees humanity as a threat to both itself and to the AI, and acts to preserve itself and humanity through means we don't agree with (The Matrix / A.I.).

I mean, there is a very real possibility of unmanned military vehicles/weapons platforms being programmed with limited forms of automated self-preservation. That automated self-preservation goes haywire, misidentifies friend and foe, and you can see how we're already on that road.

Imagine a quantum computer with a more advanced self-preservation directive, and one can see pretty easily how, even without "consciousness", such a system could become a threat. Or how, even if there weren't a self-preservation directive, one might arise as a defense against getting corrupted/breached/etc.

It's not something that's likely 10-20 or even 30 years out, but 100 is totally possible, maybe as few as 50. But I'd bet WW III puts the brakes on that before then.


Some sort of military robot turning against its owner is quite a stretch from the technological singularity.


Oh no, I was saying that, provided we avoid WW III, they have a better chance of turning against us with a "dumb" AI before they're "smart like us".

The emphasis being on AI not needing to be 'like' our brains to perform well enough to be a significant threat.

I'm not convinced the way our brain functions is the 'best' way to think, either. So I'm not sure AI has to think like we do for it to be practically more intelligent in many ways. Ants aren't extremely intelligent, but they can get quite a bit done and will probably be here after we're gone.

"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."