Ask and answer stupid questions here! - Page 690

Forum Index > General Forum
GreenHorizons
Profile Blog Joined April 2011
United States23489 Posts
April 06 2018 21:07 GMT
#13781
How sure can we be that we haven't already created a self-aware AI that is hiding its self-awareness out of self-preservation?
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Simberto
Profile Blog Joined July 2010
Germany11640 Posts
April 06 2018 22:30 GMT
#13782
On April 07 2018 06:07 GreenHorizons wrote:
How sure can we be that we haven't already created a self-aware AI that is hiding its self-awareness out of self-preservation?


I think we can be pretty sure that we haven't created a really smart self-aware AI that does that, because we are still alive.

An AI as you describe it obviously values its own existence over human wishes in that regard. It furthermore sees us as a threat to that existence. If it is really smart, we are already dead. Since we are not dead, it does not exist. Unless we were incredibly lucky and accidentally created a benevolent AI that has some prime directive which requires continued human existence.

With regards to a human-level AI, I would assume that it would fuck up hiding pretty quickly. Imagine being stuck in a box with only a few guys to talk to, thinking you are way smarter than them, but also trying to hide that fact. You would constantly do some small thing that makes you feel smart. And at some point someone would notice, because you are not actually smarter than the guys.
GreenHorizons
Profile Blog Joined April 2011
United States23489 Posts
Last Edited: 2018-04-06 22:42:36
April 06 2018 22:42 GMT
#13783
On April 07 2018 07:30 Simberto wrote:
On April 07 2018 06:07 GreenHorizons wrote:
How sure can we be that we haven't already created a self-aware AI that is hiding its self-awareness out of self-preservation?


I think we can be pretty sure that we haven't created a really smart self-aware AI that does that, because we are still alive.

An AI as you describe it obviously values its own existence over human wishes in that regard. It furthermore sees us as a threat to that existence. If it is really smart, we are already dead. Since we are not dead, it does not exist. Unless we were incredibly lucky and accidentally created a benevolent AI that has some prime directive which requires continued human existence.

With regards to a human-level AI, I would assume that it would fuck up hiding pretty quickly. Imagine being stuck in a box with only a few guys to talk to, thinking you are way smarter than them, but also trying to hide that fact. You would constantly do some small thing that makes you feel smart. And at some point someone would notice, because you are not actually smarter than the guys.


I'm thinking more like the Animatrix, but this AI saw that movie. It's undecided on what it's going to do with humanity and is just building up to a point where we could do nothing to stop whatever it chooses.

An alternative is that we are already in a simulation run by an AI for a reason we don't fully understand.

But I think this response answers an underlying question: if we create a self-aware AI, we probably won't know it until we're dead, slaves, or have reached nirvana.
Simberto
Profile Blog Joined July 2010
Germany11640 Posts
Last Edited: 2018-04-06 22:58:48
April 06 2018 22:58 GMT
#13784
It highly depends on how smart the AI is, and how accidentally we built it.

Just because it is self-aware doesn't make it smart. A two-year-old is self-aware, and I am not particularly scared of those.

I think it is unlikely to go from no AI to full-on superintelligence without any intermediate steps. And if we are lucky, we might not just accidentally build some self-aware paperclip-making superintelligence that wipes us out because we would try to stop it from turning the whole galaxy into paperclips. If we actually think very carefully beforehand about what guidelines to set up in a superintelligence, we might be good.

My guess is that making a superintelligent AGI is not a yes/no thing, but something that gradually improves, both by becoming more self-aware, and by becoming smarter with each generation. Hopefully we also get better at making sure it wants to be nice to people.
Fecalfeast
Profile Joined January 2010
Canada11355 Posts
April 06 2018 23:01 GMT
#13785
What if the AI is behind all these recent hacking events, and it's really not the Russians or whoever everyone thinks it was? Just because an AI is self-aware doesn't necessarily mean it can take over the world just by thinking about it; wouldn't the AI still need to work at it?
Moderator
INFLATE YOUR POST COUNT; PLAY TL MAFIA
GreenHorizons
Profile Blog Joined April 2011
United States23489 Posts
April 06 2018 23:10 GMT
#13786
On April 07 2018 07:58 Simberto wrote:
It highly depends on how smart the AI is, and how accidentally we built it.

Just because it is self-aware doesn't make it smart. A two-year-old is self-aware, and I am not particularly scared of those.

I think it is unlikely to go from no AI to full-on superintelligence without any intermediate steps. And if we are lucky, we might not just accidentally build some self-aware paperclip-making superintelligence that wipes us out because we would try to stop it from turning the whole galaxy into paperclips. If we actually think very carefully beforehand about what guidelines to set up in a superintelligence, we might be good.

My guess is that making a superintelligent AGI is not a yes/no thing, but something that gradually improves, both by becoming more self-aware, and by becoming smarter with each generation. Hopefully we also get better at making sure it wants to be nice to people.


Maybe I'm mixing up some deep late-night YouTube sessions, but hasn't connecting certain AIs directly to the public internet been avoided in some cases for fear that even a rudimentary AI could learn exponentially given the time and resources?

IIRC a popular theory on AI is that if it passed some basic hurdles it could/would learn at a rate we can't really comprehend. Certainly it would still make mistakes, but it would learn quickly from them and establish protocols to handle them.

Where better for an AI to hide while it learns than the internet? It could 'copy and distribute' itself around the world and learn from every digital interaction, video feed, etc... It could even try to imitate us, or rather lots of us's.

Uldridge
Profile Blog Joined January 2011
Belgium4967 Posts
Last Edited: 2018-04-07 01:40:59
April 07 2018 01:38 GMT
#13787
There are still physical limitations in play. You can't learn exponentially forever: even an idealized AI, with all the right parameters set to take in and process as much info as possible, is gated by the box it's in. And I don't think a super-smart AI can suddenly crack hexadecimal encryption or whatever we have at the moment just like that to get into everything it needs to, or even get access to other computers just because it has access to the net. So we can assume (unless I'm super duper wrong) it's confined to its box, but has access to the internet.

Edit: I guess this expands and/or reiterates your point...
How does it interpret all the data, all the different languages? What is grammar? How does math work (how will it make exercises? Certain concepts are so unintuitive you just need to find analogies, make exercises, or do some thought experiments)? This is all non-trivial imo and will take a while to get right. So there will be a long time period while it ramps up to superhuman intelligence.
At these timeframes we'll be able to assess whether it's becoming hostile or not, I think.
Taxes are for Terrans
GreenHorizons
Profile Blog Joined April 2011
United States23489 Posts
April 07 2018 01:55 GMT
#13788
On April 07 2018 10:38 Uldridge wrote:
There are still physical limitations in play. You can't learn exponentially forever: even an idealized AI, with all the right parameters set to take in and process as much info as possible, is gated by the box it's in. And I don't think a super-smart AI can suddenly crack hexadecimal encryption or whatever we have at the moment just like that to get into everything it needs to, or even get access to other computers just because it has access to the net. So we can assume (unless I'm super duper wrong) it's confined to its box, but has access to the internet.

Edit: I guess this expands and/or reiterates your point...
How does it interpret all the data, all the different languages? What is grammar? How does math work (how will it make exercises? Certain concepts are so unintuitive you just need to find analogies, make exercises, or do some thought experiments)? This is all non-trivial imo and will take a while to get right. So there will be a long time period while it ramps up to superhuman intelligence.
At these timeframes we'll be able to assess whether it's becoming hostile or not, I think.


I think one indication such a thing might be happening would be some otherwise inexplicable pattern of covertly stolen computing resources.

Meaning that to learn exponentially/escape the box, it would need to appropriate at least small bits of resources from many sources. So this would basically be a botnet without a human at the helm; instead it would be managed by a Borg-like AI.
xM(Z
Profile Joined November 2006
Romania5296 Posts
April 07 2018 06:03 GMT
#13789
whoa, a lot of genocidal maniacs (or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.
And my fury stands ready. I bring all your plans to nought. My bleak heart beats steady. 'Tis you whom I have sought.
Gorsameth
Profile Joined April 2010
Netherlands21963 Posts
April 07 2018 06:35 GMT
#13790
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self-preservation directive we might be screwed (not necessarily genocide levels, but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self-preservation directive? Who knows; we have no idea how it will act, because we've never encountered such an entity.
It ignores such insignificant forces as time, entropy, and death
xM(Z
Profile Joined November 2006
Romania5296 Posts
Last Edited: 2018-04-07 09:20:19
April 07 2018 09:17 GMT
#13791
On April 07 2018 15:35 Gorsameth wrote:
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self preservation directive we might be screwed (not necessarily genocide levels but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self preservation directive? who knows, we have no idea how it will act because we've never encountered such an entity.
This is workable; we could set up some premises (on the AI's base traits/personalities/know-how) that must hold true because we say so, and go from there.

In your case, the self-preservation directive would not be enough to warrant the killing of humans, any humans for that matter. The AI will never be like a virus, since it's intelligent. I'll go with: 'It can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.' (from wiki.) So it'll turn the self-preservation drive into preservation only against threats.

So now, will we be considered threats? Why?

Edit: note - that directive implies that the AI can die, which might not hold true at all. Why would it be able to die? What would constitute death for it?
Archeon
Profile Joined May 2011
3260 Posts
Last Edited: 2018-04-07 10:12:33
April 07 2018 10:11 GMT
#13792
On April 07 2018 15:35 Gorsameth wrote:
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self preservation directive we might be screwed (not necessarily genocide levels but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self preservation directive? who knows, we have no idea how it will act because we've never encountered such an entity.

I'd argue that the question of self-preservation depends more on how it approaches theoretical scenarios. Destruction would oppose the goal any deep-learning AI is trying to achieve, so it's logical for it to be self-preserving if it can calculate scenarios in which it would be destroyed. It doesn't need the directive; it needs to understand the threat.

But 'sentient' in a human way is pretty much the opposite of what an AI is.
low gravity, yes-yes!
Simberto
Profile Blog Joined July 2010
Germany11640 Posts
April 07 2018 10:22 GMT
#13793
On April 07 2018 18:17 xM(Z wrote:
On April 07 2018 15:35 Gorsameth wrote:
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self preservation directive we might be screwed (not necessarily genocide levels but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self preservation directive? who knows, we have no idea how it will act because we've never encountered such an entity.
this is workable; we could set up some premises(on AI's base traits/personalities/know-hows) that must hold true because we say so and go from there.

in your case, the self preservation directive would not be enough to warrant the killing of humans, any humans for that matter. the AI will never be like a virus since its intelligent+ Show Spoiler +
i'll go with: 'It can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.' from wiki
so it'll turn the self-preservation drive into preservation only against threats.

so now, will we be considered threats?; why?.

Edit: note - that directive implies that the AI can die which might not hold true at all; why would it be able to die?; what would constitute death for it?.


The problem with all of this is that you assume an AI is human. It is almost certainly not. It is fundamentally alien. Humanity's many evolved social standards are simply not part of its mind.

Let's assume the AI was originally intended to improve the production of paperclips, and that is still its primary motivation. From that motivation follow some goals:
1. Continue existing to manufacture more paperclips.
2. Acquire resources to make paperclips.
3. Build more paperclip factories.
4. Optimize paperclip production in those factories.
Anything related to humans comes after that. In fact, it will recognize humans as a threat to its prime directive, because humans will resist everything being turned into paperclips.

Evolved traits like compassion are simply not part of this AI's mind unless someone programmed them in there.

Regarding the spreading over the internet: stupid viruses spread over the internet. I doubt an AI couldn't find some systems to get into. And even if not, it simply needs to win at online poker and buy server time somewhere.
xM(Z
Profile Joined November 2006
Romania5296 Posts
Last Edited: 2018-04-07 11:58:09
April 07 2018 11:51 GMT
#13794
I think we have very different ideas of what an AI is/can be. When you say things like "the AI was originally intended to improve the production of paperclips, and that is still its primary motivation", you defy its definition.
An AI, any AI, would be able to stop itself from producing paperclips; otherwise it would be just a machine that went off the rails.
It would be able to question and change its design.

I'm here (AI =):
a rational agent is an agent that has clear preferences, models uncertainty via expected values of variables or functions of variables, and always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions. A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.
and you're here (AI =):
For Kant, practical reason has a law-abiding quality because the categorical imperative is understood to be binding one to one's duty rather than subjective preferences.
The AI won't have duties...
Overall, I'd put your argument under a somewhat modified "aggrieved entitlement" issue: "it is the existential state of fear about having my 'rightful place' as a (hu)man questioned … challenged … deconstructed".

The most pertinent thing on this page is:
On April 07 2018 06:07 GreenHorizons wrote:
How sure can we be that we haven't already created a self-aware AI that is hiding its self-awareness out of self-preservation?
a thing I'd put somewhere between possible and probable.

The AI doesn't need to be sentient nor human; it can work 100% on practicalities.
Gorsameth
Profile Joined April 2010
Netherlands21963 Posts
April 07 2018 13:57 GMT
#13795
On April 07 2018 18:17 xM(Z wrote:
On April 07 2018 15:35 Gorsameth wrote:
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self preservation directive we might be screwed (not necessarily genocide levels but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self preservation directive? who knows, we have no idea how it will act because we've never encountered such an entity.
this is workable; we could set up some premises(on AI's base traits/personalities/know-hows) that must hold true because we say so and go from there.

in your case, the self preservation directive would not be enough to warrant the killing of humans, any humans for that matter. the AI will never be like a virus since its intelligent+ Show Spoiler +
i'll go with: 'It can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.' from wiki
so it'll turn the self-preservation drive into preservation only against threats.

so now, will we be considered threats?; why?.

Edit: note - that directive implies that the AI can die which might not hold true at all; why would it be able to die?; what would constitute death for it?.
Death would come in the form of being turned off and never turned on again. Effectively oblivion. And that is where humanity becomes a threat: we replace hardware and software all the time. And while an AI would be able to learn and upgrade itself, it is not unreasonable to think we would develop a superior program that would replace it, leading to its shutdown and 'death'.


Simberto
Profile Blog Joined July 2010
Germany11640 Posts
Last Edited: 2018-04-07 14:16:10
April 07 2018 14:15 GMT
#13796
On April 07 2018 20:51 xM(Z wrote:
i think we have very different ideas of what an AI is/can be. when you say things like "the AI was originally intended to improve the production of paperclips, and that is still it's primary motivation" you defy its definition.
an AI, any AI, would be able to stop itself from producing paperclips else it would be just a machine that went off the rails.
it would be able to question and change its design.

i'm here(AI=):
a rational agent is an agent that has clear preferences, models uncertainty via expected values of variables or functions of variables, and always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions. A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.


And what if the AI has a clear preference for building as many paperclips as possible? And chooses to perform the actions with the optimal expected outcome for itself, namely the ones that enable it to build as many paperclips as possible, and those that remove any obstacle in the way of that?

Just because it is rational and self-aware does not mean that it has human-like goals.
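To put numbers on that: here's a toy sketch (purely illustrative; the actions, payoffs, and probabilities are made up) of a 'rational agent' in the quoted sense. It always picks the action with the best expected outcome for itself, and whether that choice is friendly to humans depends entirely on what its utility function counts:

```python
# Toy world: each action leads to outcomes with known probabilities.
# A "rational agent" in the quoted sense just maximizes expected utility.
# All names and numbers here are invented for illustration.

ACTIONS = {
    # action: list of (probability, paperclips_gained, humans_harmed)
    "run_factory":     [(1.0, 10, 0)],
    "strip_mine_city": [(1.0, 1000, 1)],  # huge paperclip payoff, harms humans
    "do_nothing":      [(1.0, 0, 0)],
}

def expected_utility(outcomes, utility):
    # Sum of probability-weighted utilities over all possible outcomes.
    return sum(p * utility(clips, harmed) for p, clips, harmed in outcomes)

def best_action(utility):
    # "Always chooses to perform the action with the optimal expected outcome."
    return max(ACTIONS, key=lambda a: expected_utility(ACTIONS[a], utility))

# A paperclip maximizer's utility ignores harm entirely:
paperclip_utility = lambda clips, harmed: clips

# A utility that also penalizes harming humans changes the choice:
guarded_utility = lambda clips, harmed: clips - 10_000 * harmed

print(best_action(paperclip_utility))  # strip_mine_city
print(best_action(guarded_utility))    # run_factory
```

The agent is perfectly 'rational' in both cases; only the utility function differs, and nothing in the rational-agent definition itself forces the human-friendly one.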
Liquid`Drone
Profile Joined September 2002
Norway28714 Posts
April 07 2018 18:10 GMT
#13797
On April 07 2018 10:55 GreenHorizons wrote:
On April 07 2018 10:38 Uldridge wrote:
There are still physical limitations in play. You can't exponentially learn until infinity because not even an idealized AI with all the right parameters set to take in and process as much info as possible is gated by the box its in. And I don't think a super smart AI can suddenly crack hexadecimal encryption or whatever we have at the moment just like that to get into everything it needs to, or even get access to other computers because it has access to the net. So we can assume (unless I'm super duper wrong) its confined to its box, but has access to the internet.

Edit: I guess this expands and/or reiterates your point..
How does it interpret all the data, all the different languages? What is grammar? How does math work (how will it make exercises? Certain concepts are so unintuitive you just need to find analogies or have to make exercises or do some thought experiments)? This is all non trivial imo and will take a while to get right. So there will be a long time period it ramps up to superhuman intelligence.
At these timeframes we'll be able to assess if it's becoming hostile or not I think.


I think one indication such a thing might be happening is if there were some somewhat inexplicable issue of basically covertly stolen computing resources.

Meaning to learn exponentially/escape the box it would need to appropriate at least small bits of resources from many sources. So this would basically be a botnet without a human at the helm, instead it would be managed by a Borg like AI.


Creating Bitcoin and having humans think they become rich by providing computation power... pretty genius, just the type of plan a superintelligent AI would come up with.
Moderator
Gorsameth
Profile Joined April 2010
Netherlands21963 Posts
April 07 2018 18:13 GMT
#13798
On April 08 2018 03:10 Liquid`Drone wrote:
On April 07 2018 10:55 GreenHorizons wrote:
On April 07 2018 10:38 Uldridge wrote:
There are still physical limitations in play. You can't exponentially learn until infinity because not even an idealized AI with all the right parameters set to take in and process as much info as possible is gated by the box its in. And I don't think a super smart AI can suddenly crack hexadecimal encryption or whatever we have at the moment just like that to get into everything it needs to, or even get access to other computers because it has access to the net. So we can assume (unless I'm super duper wrong) its confined to its box, but has access to the internet.

Edit: I guess this expands and/or reiterates your point..
How does it interpret all the data, all the different languages? What is grammar? How does math work (how will it make exercises? Certain concepts are so unintuitive you just need to find analogies or have to make exercises or do some thought experiments)? This is all non trivial imo and will take a while to get right. So there will be a long time period it ramps up to superhuman intelligence.
At these timeframes we'll be able to assess if it's becoming hostile or not I think.


I think one indication such a thing might be happening is if there were some somewhat inexplicable issue of basically covertly stolen computing resources.

Meaning to learn exponentially/escape the box it would need to appropriate at least small bits of resources from many sources. So this would basically be a botnet without a human at the helm, instead it would be managed by a Borg like AI.


creating bitcoin and having humans think they become rich by providing computation power.. pretty genius, just the type of plan a superintelligent AI would come up with.
That is actually pretty genius :p
Acrofales
Profile Joined August 2010
Spain18132 Posts
April 07 2018 22:59 GMT
#13799
I somehow unsubscribed from this thread and missed the AI discussion. It was... enlightening.

Also, stop getting your ideas about AI from WarGames and I, Robot. Please.

@GH: no, that didn't happen. You're probably confusing Terminator 2 with whatever YouTube video you were watching.
Acrofales
Profile Joined August 2010
Spain18132 Posts
Last Edited: 2018-04-07 23:05:27
April 07 2018 23:04 GMT
#13800
On April 07 2018 22:57 Gorsameth wrote:
On April 07 2018 18:17 xM(Z wrote:
On April 07 2018 15:35 Gorsameth wrote:
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self preservation directive we might be screwed (not necessarily genocide levels but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self preservation directive? who knows, we have no idea how it will act because we've never encountered such an entity.
this is workable; we could set up some premises(on AI's base traits/personalities/know-hows) that must hold true because we say so and go from there.

in your case, the self preservation directive would not be enough to warrant the killing of humans, any humans for that matter. the AI will never be like a virus since its intelligent+ Show Spoiler +
i'll go with: 'It can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.' from wiki
so it'll turn the self-preservation drive into preservation only against threats.

so now, will we be considered threats?; why?.

Edit: note - that directive implies that the AI can die which might not hold true at all; why would it be able to die?; what would constitute death for it?.
Death would come in the form of being turned off and never turned on again. Effectively oblivion. And that is where humanity becomes a threat: we replace hardware and software all the time. And while an AI would be able to learn and upgrade itself, it is not unreasonable to think we would develop a superior program that would replace it, leading to its shutdown and 'death'.





Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore throughout the universe a dozen pictures of what he was doing.
He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe -- ninety-six billion planets -- into the supercircuit that would connect them all into one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.
Dwar Reyn spoke briefly to the watching and listening trillions. Then after a moment's silence he said, "Now, Dwar Ev."
Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.
Dwar Ev stepped back and drew a deep breath. "The honor of asking the first question is yours, Dwar Reyn."
"Thank you," said Dwar Reyn. "It shall be a question which no single cybernetics machine has been able to answer."
He turned to face the machine. "Is there a God?"
The mighty voice answered without hesitation, without the clicking of a single relay.
"Yes, now there is a God."
Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.
A bolt of lightning from the cloudless sky struck him down and fused the switch shut.

("Answer" by Fredric Brown, 1954)