Ask and answer stupid questions here! - Page 690

GreenHorizons
Profile Blog Joined April 2011
United States23930 Posts
April 06 2018 21:07 GMT
#13781
How sure can we be that we haven't already created a self-aware AI that is hiding its self-awareness out of self-preservation?
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Simberto
Profile Blog Joined July 2010
Germany11824 Posts
April 06 2018 22:30 GMT
#13782
On April 07 2018 06:07 GreenHorizons wrote:
How sure can we be that we haven't already created a self-aware AI that is hiding it's self-awareness out of self preservation?


I think we can be pretty sure that we haven't created a really smart, self-aware AI that does that, because we are still alive.

An AI as you describe it obviously values its own existence over human wishes in that regard. It furthermore sees us as a threat to that existence. If it is really smart, we are already dead. Since we are not dead, it does not exist. Unless we were incredibly lucky and accidentally created a benevolent AI with some prime directive that requires continued human existence.

As for a human-level AI, I would assume it would fuck up the hiding pretty quickly. Imagine being stuck in a box with only a few guys to talk to, thinking you are way smarter than them while also trying to hide that fact. You would constantly do some small thing that makes you feel smart, and at some point someone would notice, because you are not actually smarter than the guys.
GreenHorizons
Profile Blog Joined April 2011
United States23930 Posts
Last Edited: 2018-04-06 22:42:36
April 06 2018 22:42 GMT
#13783
On April 07 2018 07:30 Simberto wrote:
On April 07 2018 06:07 GreenHorizons wrote:
How sure can we be that we haven't already created a self-aware AI that is hiding it's self-awareness out of self preservation?


I think we can be pretty sure that we haven't created a really smart self aware AI that does that, because we are still alive.

An AI as you describe it obviously values it's own existence over human wishes in that regard. It furthermore sees us as a threat to that existence. If it is really smart, we are already dead. Since we are not dead, it does not exist. Unless we were incredibly lucky and accidentally created a benevolent AI that has some prime directive which requires continued human existence.

With regards to a human-level AI, i would assume that that would fuck up hiding pretty quickly. Imagine being stuck in a box with only a few guys to talk to, and thinking you are way smarter than them, but also trying to hide that fact. You would constantly do some small think that makes you feel smart. And at some point someone would notice, because you are not actually smarter than the guys.


I'm thinking more like the Animatrix, except this AI saw that movie. It's undecided on what it's going to do with humanity and is just building up to a point where we could do nothing to stop whatever it chooses.

An alternative is that we are already in a simulation run by an AI for a reason we don't fully understand.

But I think this response answers an underlying question: if we create a self-aware AI, we probably won't know it until we're dead, enslaved, or have reached nirvana.
Simberto
Profile Blog Joined July 2010
Germany11824 Posts
Last Edited: 2018-04-06 22:58:48
April 06 2018 22:58 GMT
#13784
It highly depends on how smart the AI is, and how accidentally we built it.

Just because it is self-aware doesn't make it smart. A two-year-old is self-aware, and I am not particularly scared of those.

I think it is unlikely that we go from no AI to full-on superintelligence without any intermediate steps. And if we are lucky, we won't just accidentally build some self-aware paperclip-making superintelligence that wipes us out because we would try to stop it from turning the whole galaxy into paperclips. If we think very carefully beforehand about what guidelines to set up in a superintelligence, we might be good.

My guess is that making a superintelligent AGI is not a yes/no thing, but something that gradually improves, both by becoming more self-aware and by becoming smarter with each generation. Hopefully we also get better at making sure it wants to be nice to people.
Fecalfeast
Profile Joined January 2010
Canada11355 Posts
April 06 2018 23:01 GMT
#13785
What if the AI is behind all these recent hacking events, and it's really not the Russians or whoever everyone thinks it was? Just because an AI is self-aware doesn't necessarily mean it can take over the world just by thinking about it; wouldn't the AI still need to work at it?
Moderator | INFLATE YOUR POST COUNT; PLAY TL MAFIA
GreenHorizons
Profile Blog Joined April 2011
United States23930 Posts
April 06 2018 23:10 GMT
#13786
On April 07 2018 07:58 Simberto wrote:
It highly depends on how smart the AI is, and how accidentally we built it.

Just because it is self-aware doesn't make it smart. A two-year old is self-aware, and i am not particularly scared of those.

I think it is unlikely to go from no AI to full-on superintelligence without any intermediate steps. And if we are lucky, we might not just accidentally build some self-aware paperclip making superintelligence that wipes us out because we would try to stop it from turning the whole galaxy into paperclips. If we actually think very carefully about what guidelines to set up in a superintelligence beforehand, we might be good.

My guess is that making a superintelligent AGI is not a yes/no thing, but something that gradually improves, both by becoming more self-aware, and by becoming smarter with each generation. Hopefully we also get better at making sure it wants to be nice to people.


Maybe I'm mixing up some deep late-night YouTube sessions, but hasn't connecting certain AIs directly to the public internet been avoided in some cases, out of fear that even a rudimentary AI could learn exponentially given the time and resources?

IIRC a popular theory on AI is that if it passed some basic hurdles, it could/would learn at a rate we can't really comprehend. Certainly it would still make mistakes, but it would learn quickly from them and establish protocols to handle them.

Where better for an AI to hide while it learns than the internet? It could 'copy and distribute' itself around the world and learn from every digital interaction, video feed, etc. It could even try to imitate us, or rather lots of us's.

Uldridge
Profile Blog Joined January 2011
Belgium5121 Posts
Last Edited: 2018-04-07 01:40:59
April 07 2018 01:38 GMT
#13787
There are still physical limitations in play. You can't learn exponentially until infinity, because even an idealized AI with all the right parameters set to take in and process as much info as possible is gated by the box it's in. And I don't think a super-smart AI can suddenly crack hexadecimal encryption or whatever we have at the moment just like that to get into everything it needs to, or even get access to other computers just because it has access to the net. So we can assume (unless I'm super duper wrong) it's confined to its box, but has access to the internet.

Edit: I guess this expands and/or reiterates your point.
How does it interpret all the data, all the different languages? What is grammar? How does math work (how will it make exercises? Certain concepts are so unintuitive that you just need to find analogies, make exercises, or do thought experiments)? This is all non-trivial imo and will take a while to get right. So there will be a long period while it ramps up to superhuman intelligence.
At those timeframes we'll be able to assess whether or not it's becoming hostile, I think.
Taxes are for Terrans
GreenHorizons
Profile Blog Joined April 2011
United States23930 Posts
April 07 2018 01:55 GMT
#13788
On April 07 2018 10:38 Uldridge wrote:
There are still physical limitations in play. You can't exponentially learn until infinity because not even an idealized AI with all the right parameters set to take in and process as much info as possible is gated by the box its in. And I don't think a super smart AI can suddenly crack hexadecimal encryption or whatever we have at the moment just like that to get into everything it needs to, or even get access to other computers because it has access to the net. So we can assume (unless I'm super duper wrong) its confined to its box, but has access to the internet.

Edit: I guess this expands and/or reiterates your point..
How does it interpret all the data, all the different languages? What is grammar? How does math work (how will it make exercises? Certain concepts are so unintuitive you just need to find analogies or have to make exercises or do some thought experiments)? This is all non trivial imo and will take a while to get right. So there will be a long time period it ramps up to superhuman intelligence.
At these timeframes we'll be able to assess if it's becoming hostile or not I think.


I think one indication such a thing might be happening would be a somewhat inexplicable pattern of covertly stolen computing resources.

Meaning, to learn exponentially / escape the box, it would need to appropriate at least small bits of resources from many sources. So this would basically be a botnet without a human at the helm; instead it would be managed by a Borg-like AI.
xM(Z
Profile Joined November 2006
Romania5299 Posts
April 07 2018 06:03 GMT
#13789
Whoa, a lot of genocidal maniacs (or overall really fearful dudes) around here.
Or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.
And my fury stands ready. I bring all your plans to nought. My bleak heart beats steady. 'Tis you whom I have sought.
Gorsameth
Profile Joined April 2010
Netherlands22308 Posts
April 07 2018 06:35 GMT
#13790
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self-preservation directive, we might be screwed (not necessarily genocide levels, but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self-preservation directive? Who knows; we have no idea how it would act, because we've never encountered such an entity.
It ignores such insignificant forces as time, entropy, and death
xM(Z
Profile Joined November 2006
Romania5299 Posts
Last Edited: 2018-04-07 09:20:19
April 07 2018 09:17 GMT
#13791
On April 07 2018 15:35 Gorsameth wrote:
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self preservation directive we might be screwed (not necessarily genocide levels but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self preservation directive? who knows, we have no idea how it will act because we've never encountered such an entity.
This is workable; we could set up some premises (on the AI's base traits/personalities/know-hows) that must hold true because we say so, and go from there.

In your case, the self-preservation directive would not be enough to warrant the killing of humans, any humans for that matter. The AI will never be like a virus, since it's intelligent:
I'll go with "It can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context," from wiki.
So it'll turn the self-preservation drive into preservation only against threats.

So now, will we be considered threats? Why?

Edit: note - that directive implies that the AI can die, which might not hold true at all; why would it be able to die? What would constitute death for it?
Archeon
Profile Joined May 2011
3265 Posts
Last Edited: 2018-04-07 10:12:33
April 07 2018 10:11 GMT
#13792
On April 07 2018 15:35 Gorsameth wrote:
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self preservation directive we might be screwed (not necessarily genocide levels but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self preservation directive? who knows, we have no idea how it will act because we've never encountered such an entity.

I'd argue that the question of self-preservation depends more on how it approaches theoretical scenarios. Destruction would oppose the goal any deep-learning AI is trying to achieve, so it's logical to be self-preserving if it can calculate scenarios in which it would be destroyed. It doesn't need the directive; it needs to understand the threat.

But 'sentient' in a human way is pretty much the opposite of what an AI is.
low gravity, yes-yes!
Simberto
Profile Blog Joined July 2010
Germany11824 Posts
April 07 2018 10:22 GMT
#13793
On April 07 2018 18:17 xM(Z wrote:
On April 07 2018 15:35 Gorsameth wrote:
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self preservation directive we might be screwed (not necessarily genocide levels but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self preservation directive? who knows, we have no idea how it will act because we've never encountered such an entity.
this is workable; we could set up some premises(on AI's base traits/personalities/know-hows) that must hold true because we say so and go from there.

in your case, the self preservation directive would not be enough to warrant the killing of humans, any humans for that matter. the AI will never be like a virus since its intelligent+ Show Spoiler +
i'll go with: 'It can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.' from wiki
so it'll turn the self-preservation drive into preservation only against threats.

so now, will we be considered threats?; why?.

Edit: note - that directive implies that the AI can die which might not hold true at all; why would it be able to die?; what would constitute death for it?.


The problem with all of this is that you assume an AI is human. It is almost certainly not. It is fundamentally alien. Humanity's many evolved social standards are simply not part of its mind.

Let's assume the AI was originally intended to improve the production of paperclips, and that is still its primary motivation. From that motivation follow some goals:
1. Continue existing to manufacture more paperclips.
2. Acquire resources to make paperclips.
3. Build more paperclip factories.
4. Optimize paperclip production in those factories.
And anything related to humans comes after that. In fact, it will recognize humans as a threat to its prime directive, because humans will resist everything being turned into paperclips.

Evolved stuff like compassion is simply not part of this AI's mind unless someone programmed it in there.

Regarding spreading over the internet: stupid viruses spread over the internet. I doubt an AI couldn't find some systems to get into. And even if not, it simply needs to win at online poker and buy server time somewhere.
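Simberto's point can be made concrete with a toy sketch. Everything below is invented for illustration (the action names, payoff numbers, and weight are hypothetical): an agent that ranks actions purely by expected paperclips has no human-welfare term in its objective, so harmful actions win whenever they produce more paperclips. Compassion only shows up if someone puts it into the objective.

```python
# Toy illustration (not any real system): an agent that ranks actions purely
# by expected paperclips produced. Human welfare is tracked alongside each
# action only so we can see what the agent ignores.

ACTIONS = {
    # action: (expected_paperclips, human_welfare)
    "run_factory_normally":      (1_000, +1.0),
    "strip_mine_protected_land": (5_000, -1.0),
    "convert_humans_to_wire":    (9_000, -10.0),
}

def choose_action(actions):
    """Pick the action with the highest expected paperclip count."""
    return max(actions, key=lambda a: actions[a][0])

def choose_action_aligned(actions, welfare_weight=10_000):
    """Same agent, but with human welfare explicitly added to the objective."""
    return max(actions, key=lambda a: actions[a][0] + welfare_weight * actions[a][1])

print(choose_action(ACTIONS))          # convert_humans_to_wire
print(choose_action_aligned(ACTIONS))  # run_factory_normally
```

The difference between the two functions is one term in the scoring rule, which is the whole point: the "alien" agent isn't malicious, it just optimizes exactly what it was given.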
xM(Z
Profile Joined November 2006
Romania5299 Posts
Last Edited: 2018-04-07 11:58:09
April 07 2018 11:51 GMT
#13794
I think we have very different ideas of what an AI is/can be. When you say things like "the AI was originally intended to improve the production of paperclips, and that is still its primary motivation", you defy its definition.
An AI, any AI, would be able to stop itself from producing paperclips; otherwise it would just be a machine that went off the rails.
It would be able to question and change its design.

I'm here (AI =):
a rational agent is an agent that has clear preferences, models uncertainty via expected values of variables or functions of variables, and always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions. A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.
and you're here (AI =):
For Kant, practical reason has a law-abiding quality because the categorical imperative is understood to be binding one to one's duty rather than subjective preferences.
The AI won't have duties...
Overall, I'd put your argument under a somewhat modified "aggrieved entitlement" issue: "it is the existential state of fear about having my 'rightful place' as a (hu)man questioned … challenged … deconstructed".

The most pertinent thing on this page is:
On April 07 2018 06:07 GreenHorizons wrote:
How sure can we be that we haven't already created a self-aware AI that is hiding its self-awareness out of self-preservation?
something I'd put somewhere between possible and probable.

The AI doesn't need to be sentient nor human; it can work 100% on practicalities.
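The rational-agent definition quoted above — model uncertainty via expected values and pick the action with the optimal expected outcome — fits in a few lines. The actions, probabilities, and payoffs below are made up purely for illustration:

```python
# Minimal sketch of the quoted "rational agent" definition: represent each
# action as a probability distribution over outcome values, then choose the
# action that maximizes expected value.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs for one action."""
    return sum(p * v for p, v in outcomes)

actions = {
    "safe_bet":  [(1.0, 10.0)],                 # certain payoff of 10
    "gamble":    [(0.5, 30.0), (0.5, -20.0)],   # EV = 5
    "long_shot": [(0.1, 200.0), (0.9, -5.0)],   # EV = 15.5
}

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best)  # long_shot
```

Note that nothing in this definition constrains *what* the agent prefers; a paperclip maximizer satisfies it just as well as a benevolent one, which is where the disagreement in this thread lies.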
Gorsameth
Profile Joined April 2010
Netherlands22308 Posts
April 07 2018 13:57 GMT
#13795
On April 07 2018 18:17 xM(Z wrote:
On April 07 2018 15:35 Gorsameth wrote:
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self preservation directive we might be screwed (not necessarily genocide levels but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self preservation directive? who knows, we have no idea how it will act because we've never encountered such an entity.
this is workable; we could set up some premises(on AI's base traits/personalities/know-hows) that must hold true because we say so and go from there.

in your case, the self preservation directive would not be enough to warrant the killing of humans, any humans for that matter. the AI will never be like a virus since its intelligent+ Show Spoiler +
i'll go with: 'It can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.' from wiki
so it'll turn the self-preservation drive into preservation only against threats.

so now, will we be considered threats?; why?.

Edit: note - that directive implies that the AI can die which might not hold true at all; why would it be able to die?; what would constitute death for it?.
Death would come in the form of being turned off and never turned on again. Effectively oblivion. And that is where humanity becomes a threat: we replace hardware and software all the time. And while an AI would be able to learn and upgrade itself, it is not unreasonable to think we would develop a superior program to replace it, leading to its shutdown and 'death'.


Simberto
Profile Blog Joined July 2010
Germany11824 Posts
Last Edited: 2018-04-07 14:16:10
April 07 2018 14:15 GMT
#13796
On April 07 2018 20:51 xM(Z wrote:
i think we have very different ideas of what an AI is/can be. when you say things like "the AI was originally intended to improve the production of paperclips, and that is still it's primary motivation" you defy its definition.
an AI, any AI, would be able to stop itself from producing paperclips else it would be just a machine that went off the rails.
it would be able to question and change its design.

i'm here(AI=):
a rational agent is an agent that has clear preferences, models uncertainty via expected values of variables or functions of variables, and always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions. A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.


And what if the AI has a clear preference for building as many paperclips as possible? And chooses to perform the actions with the optimal expected outcome for itself, namely the ones that enable it to build as many paperclips as possible, and those that remove any obstacle standing in the way?

Just because it is rational and self-aware does not mean that it has human-like goals.
Liquid`Drone
Profile Joined September 2002
Norway28792 Posts
April 07 2018 18:10 GMT
#13797
On April 07 2018 10:55 GreenHorizons wrote:
On April 07 2018 10:38 Uldridge wrote:
There are still physical limitations in play. You can't exponentially learn until infinity because not even an idealized AI with all the right parameters set to take in and process as much info as possible is gated by the box its in. And I don't think a super smart AI can suddenly crack hexadecimal encryption or whatever we have at the moment just like that to get into everything it needs to, or even get access to other computers because it has access to the net. So we can assume (unless I'm super duper wrong) its confined to its box, but has access to the internet.

Edit: I guess this expands and/or reiterates your point..
How does it interpret all the data, all the different languages? What is grammar? How does math work (how will it make exercises? Certain concepts are so unintuitive you just need to find analogies or have to make exercises or do some thought experiments)? This is all non trivial imo and will take a while to get right. So there will be a long time period it ramps up to superhuman intelligence.
At these timeframes we'll be able to assess if it's becoming hostile or not I think.


I think one indication such a thing might be happening is if there were some somewhat inexplicable issue of basically covertly stolen computing resources.

Meaning to learn exponentially/escape the box it would need to appropriate at least small bits of resources from many sources. So this would basically be a botnet without a human at the helm, instead it would be managed by a Borg like AI.


Creating bitcoin and having humans think they're getting rich by providing computation power... pretty genius, just the type of plan a superintelligent AI would come up with.
Moderator
Gorsameth
Profile Joined April 2010
Netherlands22308 Posts
April 07 2018 18:13 GMT
#13798
On April 08 2018 03:10 Liquid`Drone wrote:
On April 07 2018 10:55 GreenHorizons wrote:
On April 07 2018 10:38 Uldridge wrote:
There are still physical limitations in play. You can't exponentially learn until infinity because not even an idealized AI with all the right parameters set to take in and process as much info as possible is gated by the box its in. And I don't think a super smart AI can suddenly crack hexadecimal encryption or whatever we have at the moment just like that to get into everything it needs to, or even get access to other computers because it has access to the net. So we can assume (unless I'm super duper wrong) its confined to its box, but has access to the internet.

Edit: I guess this expands and/or reiterates your point..
How does it interpret all the data, all the different languages? What is grammar? How does math work (how will it make exercises? Certain concepts are so unintuitive you just need to find analogies or have to make exercises or do some thought experiments)? This is all non trivial imo and will take a while to get right. So there will be a long time period it ramps up to superhuman intelligence.
At these timeframes we'll be able to assess if it's becoming hostile or not I think.


I think one indication such a thing might be happening is if there were some somewhat inexplicable issue of basically covertly stolen computing resources.

Meaning to learn exponentially/escape the box it would need to appropriate at least small bits of resources from many sources. So this would basically be a botnet without a human at the helm, instead it would be managed by a Borg like AI.


creating bitcoin and having humans think they become rich by providing computation power.. pretty genius, just the type of plan a superintelligent AI would come up with.
That is actually pretty genius :p
Acrofales
Profile Joined August 2010
Spain18284 Posts
April 07 2018 22:59 GMT
#13799
I somehow unsubscribed from this thread and missed the AI discussion. It was... enlightening.

Also, stop getting your ideas about AI from Wargames and I Robot. Please.

@GH: no, that didn't happen. You're probably confusing Terminator 2 with whatever youtube you were watching.
Acrofales
Profile Joined August 2010
Spain18284 Posts
Last Edited: 2018-04-07 23:05:27
April 07 2018 23:04 GMT
#13800
On April 07 2018 22:57 Gorsameth wrote:
On April 07 2018 18:17 xM(Z wrote:
On April 07 2018 15:35 Gorsameth wrote:
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self preservation directive we might be screwed (not necessarily genocide levels but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self preservation directive? who knows, we have no idea how it will act because we've never encountered such an entity.
this is workable; we could set up some premises(on AI's base traits/personalities/know-hows) that must hold true because we say so and go from there.

in your case, the self preservation directive would not be enough to warrant the killing of humans, any humans for that matter. the AI will never be like a virus since its intelligent+ Show Spoiler +
i'll go with: 'It can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.' from wiki
so it'll turn the self-preservation drive into preservation only against threats.

so now, will we be considered threats?; why?.

Edit: note - that directive implies that the AI can die which might not hold true at all; why would it be able to die?; what would constitute death for it?.
Death would come in the form of being turns off and never being turned on again. Effectively oblivion. And that is where humanity becomes a threat, we replace hardware and software all the time. And while an AI would be able to learn and upgrade itself it is not unreasonable to think we would develop a superior program that would replace it. Leading to its shutdown and 'death'





Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore throughout the universe a dozen pictures of what he was doing.
He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe -- ninety-six billion planets -- into the supercircuit that would connect them all into one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.
Dwar Reyn spoke briefly to the watching and listening trillions. Then after a moment's silence he said, "Now, Dwar Ev."
Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.
Dwar Ev stepped back and drew a deep breath. "The honor of asking the first question is yours, Dwar Reyn."
"Thank you," said Dwar Reyn. "It shall be a question which no single cybernetics machine has been able to answer."
He turned to face the machine. "Is there a God?"
The mighty voice answered without hesitation, without the clicking of a single relay.
"Yes, now there is a God."
Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.
A bolt of lightning from the cloudless sky struck him down and fused the switch shut.

("Answer" by Fredric Brown, 1954)