• Log InLog In
  • Register
Liquid`
Team Liquid Liquipedia
EDT 00:41
CEST 06:41
KST 13:41
  • Home
  • Forum
  • Calendar
  • Streams
  • Liquipedia
  • Features
  • Store
  • EPT
  • TL+
  • StarCraft 2
  • Brood War
  • Smash
  • Heroes
  • Counter-Strike
  • Overwatch
  • Liquibet
  • Fantasy StarCraft
  • TLPD
  • StarCraft 2
  • Brood War
  • Blogs
Forum Sidebar
Events/Features
News
Featured News
[ASL20] Ro24 Preview Pt1: Runway112v2 & SC: Evo Complete: Weekend Double Feature2Team Liquid Map Contest #21 - Presented by Monster Energy9uThermal's 2v2 Tour: $15,000 Main Event18Serral wins EWC 202549
Community News
Weekly Cups (Aug 11-17): MaxPax triples again!9Weekly Cups (Aug 4-10): MaxPax wins a triple6SC2's Safe House 2 - October 18 & 195Weekly Cups (Jul 28-Aug 3): herO doubles up6LiuLi Cup - August 2025 Tournaments7
StarCraft 2
General
Weekly Cups (Aug 11-17): MaxPax triples again! RSL Revival patreon money discussion thread What mix of new and old maps do you want in the next 1v1 ladder pool? (SC2) : Team Liquid Map Contest #21 - Presented by Monster Energy Would you prefer the game to be balanced around top-tier pro level or average pro level?
Tourneys
Sparkling Tuna Cup - Weekly Open Tournament RSL: Revival, a new crowdfunded tournament series LiuLi Cup - August 2025 Tournaments SEL Masters #5 - Korea vs Russia (SC Evo) Enki Epic Series #5 - TaeJa vs Classic (SC Evo)
Strategy
Custom Maps
External Content
Mutation # 487 Think Fast Mutation # 486 Watch the Skies Mutation # 485 Death from Below Mutation # 484 Magnetic Pull
Brood War
General
ASL 20 HYPE VIDEO! Flash Announces (and Retracts) Hiatus From ASL BW General Discussion New season has just come in ladder [ASL20] Ro24 Preview Pt1: Runway
Tourneys
[ASL20] Ro24 Group A BWCL Season 63 Announcement Cosmonarchy Pro Showmatches KCM 2025 Season 3
Strategy
Simple Questions, Simple Answers Fighting Spirit mining rates [G] Mineral Boosting Muta micro map competition
Other Games
General Games
Stormgate/Frost Giant Megathread Nintendo Switch Thread Total Annihilation Server - TAForever Beyond All Reason [MMORPG] Tree of Savior (Successor of Ragnarok)
Dota 2
Official 'what is Dota anymore' discussion
League of Legends
Heroes of the Storm
Simple Questions, Simple Answers Heroes of the Storm 2.0
Hearthstone
Heroes of StarCraft mini-set
TL Mafia
TL Mafia Community Thread Vanilla Mini Mafia
Community
General
Russo-Ukrainian War Thread US Politics Mega-thread Things Aren’t Peaceful in Palestine European Politico-economics QA Mega-thread The Games Industry And ATVI
Fan Clubs
INnoVation Fan Club SKT1 Classic Fan Club!
Media & Entertainment
Movie Discussion! [Manga] One Piece Anime Discussion Thread [\m/] Heavy Metal Thread Korean Music Discussion
Sports
2024 - 2025 Football Thread TeamLiquid Health and Fitness Initiative For 2023 Formula 1 Discussion
World Cup 2022
Tech Support
Gtx660 graphics card replacement Installation of Windows 10 suck at "just a moment" Computer Build, Upgrade & Buying Resource Thread
TL Community
TeamLiquid Team Shirt On Sale The Automated Ban List
Blogs
The Biochemical Cost of Gami…
TrAiDoS
[Girl blog} My fema…
artosisisthebest
Sharpening the Filtration…
frozenclaw
ASL S20 English Commentary…
namkraft
StarCraft improvement
iopq
Customize Sidebar...

Website Feedback

Closed Threads



Active: 1489 users

Ask and answer stupid questions here! - Page 690

Forum Index > General Forum
Post a Reply
Prev 1 688 689 690 691 692 783 Next
GreenHorizons
Profile Blog Joined April 2011
United States23250 Posts
April 06 2018 21:07 GMT
#13781
How sure can we be that we haven't already created a self-aware AI that is hiding it's self-awareness out of self preservation?
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Simberto
Profile Blog Joined July 2010
Germany11521 Posts
April 06 2018 22:30 GMT
#13782
On April 07 2018 06:07 GreenHorizons wrote:
How sure can we be that we haven't already created a self-aware AI that is hiding it's self-awareness out of self preservation?


I think we can be pretty sure that we haven't created a really smart self aware AI that does that, because we are still alive.

An AI as you describe it obviously values it's own existence over human wishes in that regard. It furthermore sees us as a threat to that existence. If it is really smart, we are already dead. Since we are not dead, it does not exist. Unless we were incredibly lucky and accidentally created a benevolent AI that has some prime directive which requires continued human existence.

With regards to a human-level AI, i would assume that that would fuck up hiding pretty quickly. Imagine being stuck in a box with only a few guys to talk to, and thinking you are way smarter than them, but also trying to hide that fact. You would constantly do some small think that makes you feel smart. And at some point someone would notice, because you are not actually smarter than the guys.
GreenHorizons
Profile Blog Joined April 2011
United States23250 Posts
Last Edited: 2018-04-06 22:42:36
April 06 2018 22:42 GMT
#13783
On April 07 2018 07:30 Simberto wrote:
Show nested quote +
On April 07 2018 06:07 GreenHorizons wrote:
How sure can we be that we haven't already created a self-aware AI that is hiding it's self-awareness out of self preservation?


I think we can be pretty sure that we haven't created a really smart self aware AI that does that, because we are still alive.

An AI as you describe it obviously values it's own existence over human wishes in that regard. It furthermore sees us as a threat to that existence. If it is really smart, we are already dead. Since we are not dead, it does not exist. Unless we were incredibly lucky and accidentally created a benevolent AI that has some prime directive which requires continued human existence.

With regards to a human-level AI, i would assume that that would fuck up hiding pretty quickly. Imagine being stuck in a box with only a few guys to talk to, and thinking you are way smarter than them, but also trying to hide that fact. You would constantly do some small think that makes you feel smart. And at some point someone would notice, because you are not actually smarter than the guys.


I'm thinking more like Animatrix, but this AI saw that movie. It's undecided on what it's going to do with humanity and just building up to a point where we could do nothing to stop whatever it chooses.

An alternative is that we are already in a simulation run by an AI for a reason we don't understand fully.

But I think this response answers an underlying question of if we create a self-aware AI, we probably won't know it until were dead, slaves, or reach nirvana.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Simberto
Profile Blog Joined July 2010
Germany11521 Posts
Last Edited: 2018-04-06 22:58:48
April 06 2018 22:58 GMT
#13784
It highly depends on how smart the AI is, and how accidentally we built it.

Just because it is self-aware doesn't make it smart. A two-year old is self-aware, and i am not particularly scared of those.

I think it is unlikely to go from no AI to full-on superintelligence without any intermediate steps. And if we are lucky, we might not just accidentally build some self-aware paperclip making superintelligence that wipes us out because we would try to stop it from turning the whole galaxy into paperclips. If we actually think very carefully about what guidelines to set up in a superintelligence beforehand, we might be good.

My guess is that making a superintelligent AGI is not a yes/no thing, but something that gradually improves, both by becoming more self-aware, and by becoming smarter with each generation. Hopefully we also get better at making sure it wants to be nice to people.
Fecalfeast
Profile Joined January 2010
Canada11355 Posts
April 06 2018 23:01 GMT
#13785
What if the ai is behind all these recent hacking events and it's really not the russians or whomever everyone thinks they were? Just because an ai is self aware doesn't necessarily mean it can take over the world just by thinking about it, wouldn't the ai still need to work at it?
ModeratorINFLATE YOUR POST COUNT; PLAY TL MAFIA
GreenHorizons
Profile Blog Joined April 2011
United States23250 Posts
April 06 2018 23:10 GMT
#13786
On April 07 2018 07:58 Simberto wrote:
It highly depends on how smart the AI is, and how accidentally we built it.

Just because it is self-aware doesn't make it smart. A two-year old is self-aware, and i am not particularly scared of those.

I think it is unlikely to go from no AI to full-on superintelligence without any intermediate steps. And if we are lucky, we might not just accidentally build some self-aware paperclip making superintelligence that wipes us out because we would try to stop it from turning the whole galaxy into paperclips. If we actually think very carefully about what guidelines to set up in a superintelligence beforehand, we might be good.

My guess is that making a superintelligent AGI is not a yes/no thing, but something that gradually improves, both by becoming more self-aware, and by becoming smarter with each generation. Hopefully we also get better at making sure it wants to be nice to people.


Maybe I'm mixing some deep youtube late night sessions up, but hasn't connecting certain AI's directly to the public internet been avoided in some cases because of the fear that even a rudimentary AI could learn exponentially given the time and resources?

iirc a popular theory on AI is that if it passed some basic hurdles it could/would learn at a rate we can't really comprehend. Certainly it would still make mistakes, but it would learn quickly from them and establish a protocols to handle them.

Where better for an AI to hide while it learns than the internet. It could 'copy and distribute' itself around the world and learn from every digital interaction, video feed, etc... It could even try to imitate us or rather lots of us's.

"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Uldridge
Profile Blog Joined January 2011
Belgium4800 Posts
Last Edited: 2018-04-07 01:40:59
April 07 2018 01:38 GMT
#13787
There are still physical limitations in play. You can't exponentially learn until infinity because not even an idealized AI with all the right parameters set to take in and process as much info as possible is gated by the box its in. And I don't think a super smart AI can suddenly crack hexadecimal encryption or whatever we have at the moment just like that to get into everything it needs to, or even get access to other computers because it has access to the net. So we can assume (unless I'm super duper wrong) its confined to its box, but has access to the internet.

Edit: I guess this expands and/or reiterates your point..
How does it interpret all the data, all the different languages? What is grammar? How does math work (how will it make exercises? Certain concepts are so unintuitive you just need to find analogies or have to make exercises or do some thought experiments)? This is all non trivial imo and will take a while to get right. So there will be a long time period it ramps up to superhuman intelligence.
At these timeframes we'll be able to assess if it's becoming hostile or not I think.
Taxes are for Terrans
GreenHorizons
Profile Blog Joined April 2011
United States23250 Posts
April 07 2018 01:55 GMT
#13788
On April 07 2018 10:38 Uldridge wrote:
There are still physical limitations in play. You can't exponentially learn until infinity because not even an idealized AI with all the right parameters set to take in and process as much info as possible is gated by the box its in. And I don't think a super smart AI can suddenly crack hexadecimal encryption or whatever we have at the moment just like that to get into everything it needs to, or even get access to other computers because it has access to the net. So we can assume (unless I'm super duper wrong) its confined to its box, but has access to the internet.

Edit: I guess this expands and/or reiterates your point..
How does it interpret all the data, all the different languages? What is grammar? How does math work (how will it make exercises? Certain concepts are so unintuitive you just need to find analogies or have to make exercises or do some thought experiments)? This is all non trivial imo and will take a while to get right. So there will be a long time period it ramps up to superhuman intelligence.
At these timeframes we'll be able to assess if it's becoming hostile or not I think.


I think one indication such a thing might be happening is if there were some somewhat inexplicable issue of basically covertly stolen computing resources.

Meaning to learn exponentially/escape the box it would need to appropriate at least small bits of resources from many sources. So this would basically be a botnet without a human at the helm, instead it would be managed by a Borg like AI.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
xM(Z
Profile Joined November 2006
Romania5281 Posts
April 07 2018 06:03 GMT
#13789
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.
And my fury stands ready. I bring all your plans to nought. My bleak heart beats steady. 'Tis you whom I have sought.
Gorsameth
Profile Joined April 2010
Netherlands21707 Posts
April 07 2018 06:35 GMT
#13790
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self preservation directive we might be screwed (not necessarily genocide levels but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self preservation directive? who knows, we have no idea how it will act because we've never encountered such an entity.
It ignores such insignificant forces as time, entropy, and death
xM(Z
Profile Joined November 2006
Romania5281 Posts
Last Edited: 2018-04-07 09:20:19
April 07 2018 09:17 GMT
#13791
On April 07 2018 15:35 Gorsameth wrote:
Show nested quote +
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self preservation directive we might be screwed (not necessarily genocide levels but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self preservation directive? who knows, we have no idea how it will act because we've never encountered such an entity.
this is workable; we could set up some premises(on AI's base traits/personalities/know-hows) that must hold true because we say so and go from there.

in your case, the self preservation directive would not be enough to warrant the killing of humans, any humans for that matter. the AI will never be like a virus since its intelligent+ Show Spoiler +
i'll go with: 'It can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.' from wiki
so it'll turn the self-preservation drive into preservation only against threats.

so now, will we be considered threats?; why?.

Edit: note - that directive implies that the AI can die which might not hold true at all; why would it be able to die?; what would constitute death for it?.
And my fury stands ready. I bring all your plans to nought. My bleak heart beats steady. 'Tis you whom I have sought.
Archeon
Profile Joined May 2011
3253 Posts
Last Edited: 2018-04-07 10:12:33
April 07 2018 10:11 GMT
#13792
On April 07 2018 15:35 Gorsameth wrote:
Show nested quote +
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self preservation directive we might be screwed (not necessarily genocide levels but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self preservation directive? who knows, we have no idea how it will act because we've never encountered such an entity.

I'd argue that the question of self preservation depends more on how it approaches theoretical scenarios. Destruction would oppose the goal any deep learning AI is trying to achieve, so it's logical to be self-preservative if they can calculate scenarios where they would be destroyed. It doesn't need the directive, it needs to understand the threat.

But 'sentient' in a human way is pretty much the opposite of what an AI is.
low gravity, yes-yes!
Simberto
Profile Blog Joined July 2010
Germany11521 Posts
April 07 2018 10:22 GMT
#13793
On April 07 2018 18:17 xM(Z wrote:
Show nested quote +
On April 07 2018 15:35 Gorsameth wrote:
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self preservation directive we might be screwed (not necessarily genocide levels but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self preservation directive? who knows, we have no idea how it will act because we've never encountered such an entity.
this is workable; we could set up some premises(on AI's base traits/personalities/know-hows) that must hold true because we say so and go from there.

in your case, the self preservation directive would not be enough to warrant the killing of humans, any humans for that matter. the AI will never be like a virus since its intelligent+ Show Spoiler +
i'll go with: 'It can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.' from wiki
so it'll turn the self-preservation drive into preservation only against threats.

so now, will we be considered threats?; why?.

Edit: note - that directive implies that the AI can die which might not hold true at all; why would it be able to die?; what would constitute death for it?.


The problem with all of this is that you assume an AI is human. It is almost certainly not. It is fundamentally alien. Humanities many evolved social standards are simply not part of its mind.

Let's assume the AI was originally intended to improve the production of paperclips, and that is still it's primary motivation. From that motivation follow some goals:
1 Continue existing to manufacture more paperclips.
2 Acquire resources to make paperclips.
3 Build more paperclip factories.
4 Optimize paper clip production in those factories.
And anything related to humans comes after that. In fact, it will recognize humans as a threat to its prime directive, because humans will resist everything being turned into paperclips.

Evolved stuff like compassion is simply not a part of this AIs mind unless someone programmed it in there.

Regarding the spreading over the internet: Stupid viruses spread over the internet. I doubt an AI couldn't find some systems to get into. And even if not, it simply needs to win at online poker and buy servertime somewhere.
xM(Z
Profile Joined November 2006
Romania5281 Posts
Last Edited: 2018-04-07 11:58:09
April 07 2018 11:51 GMT
#13794
i think we have very different ideas of what an AI is/can be. when you say things like "the AI was originally intended to improve the production of paperclips, and that is still it's primary motivation" you defy its definition.
an AI, any AI, would be able to stop itself from producing paperclips else it would be just a machine that went off the rails.
it would be able to question and change its design.

i'm here(AI=):
a rational agent is an agent that has clear preferences, models uncertainty via expected values of variables or functions of variables, and always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions. A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.
and you're here(AI=):
For Kant, practical reason has a law-abiding quality because the categorical imperative is understood to be binding one to one's duty rather than subjective preferences.
the AI wont have duties...
overall, i'd put your argument under a somewhat modified "aggrieved entitlement" issue: "it is the existential state of fear about having my ‘rightful place’ as a (hu)man questioned … challenged … deconstructed".

the most pertinent thing on this page is:
On April 07 2018 06:07 GreenHorizons wrote:
How sure can we be that we haven't already created a self-aware AI that is hiding it's self-awareness out of self preservation?
thing i'd put somewhere between possible and probable.

the AI doesn't need to be sentient nor human; it can work 100% on practicalities.
And my fury stands ready. I bring all your plans to nought. My bleak heart beats steady. 'Tis you whom I have sought.
Gorsameth
Profile Joined April 2010
Netherlands21707 Posts
April 07 2018 13:57 GMT
#13795
On April 07 2018 18:17 xM(Z wrote:
Show nested quote +
On April 07 2018 15:35 Gorsameth wrote:
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self preservation directive we might be screwed (not necessarily genocide levels but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self preservation directive? who knows, we have no idea how it will act because we've never encountered such an entity.
this is workable; we could set up some premises(on AI's base traits/personalities/know-hows) that must hold true because we say so and go from there.

in your case, the self preservation directive would not be enough to warrant the killing of humans, any humans for that matter. the AI will never be like a virus since its intelligent+ Show Spoiler +
i'll go with: 'It can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.' from wiki
so it'll turn the self-preservation drive into preservation only against threats.

so now, will we be considered threats?; why?.

Edit: note - that directive implies that the AI can die which might not hold true at all; why would it be able to die?; what would constitute death for it?.
Death would come in the form of being turns off and never being turned on again. Effectively oblivion. And that is where humanity becomes a threat, we replace hardware and software all the time. And while an AI would be able to learn and upgrade itself it is not unreasonable to think we would develop a superior program that would replace it. Leading to its shutdown and 'death'


It ignores such insignificant forces as time, entropy, and death
Simberto
Profile Blog Joined July 2010
Germany11521 Posts
Last Edited: 2018-04-07 14:16:10
April 07 2018 14:15 GMT
#13796
On April 07 2018 20:51 xM(Z wrote:
i think we have very different ideas of what an AI is/can be. when you say things like "the AI was originally intended to improve the production of paperclips, and that is still it's primary motivation" you defy its definition.
an AI, any AI, would be able to stop itself from producing paperclips else it would be just a machine that went off the rails.
it would be able to question and change its design.

i'm here(AI=):
Show nested quote +
a rational agent is an agent that has clear preferences, models uncertainty via expected values of variables or functions of variables, and always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions. A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.


And what if the AI has a clear preference for building as many paper clips as possible? And chooses to perform the actions with the optimal expected outcome for itself, namely the ones that enable it to build as many paperclips as possible, and those which remove any obstacle which is in the way of that?

Just because it is rational and self-aware does not mean that it has human-like goals.
Liquid`Drone
Profile Joined September 2002
Norway28674 Posts
April 07 2018 18:10 GMT
#13797
On April 07 2018 10:55 GreenHorizons wrote:
Show nested quote +
On April 07 2018 10:38 Uldridge wrote:
There are still physical limitations in play. You can't exponentially learn until infinity because not even an idealized AI with all the right parameters set to take in and process as much info as possible is gated by the box its in. And I don't think a super smart AI can suddenly crack hexadecimal encryption or whatever we have at the moment just like that to get into everything it needs to, or even get access to other computers because it has access to the net. So we can assume (unless I'm super duper wrong) its confined to its box, but has access to the internet.

Edit: I guess this expands and/or reiterates your point..
How does it interpret all the data, all the different languages? What is grammar? How does math work (how will it make exercises? Certain concepts are so unintuitive you just need to find analogies or have to make exercises or do some thought experiments)? This is all non trivial imo and will take a while to get right. So there will be a long time period it ramps up to superhuman intelligence.
At these timeframes we'll be able to assess if it's becoming hostile or not I think.


I think one indication such a thing might be happening is if there were some somewhat inexplicable issue of basically covertly stolen computing resources.

Meaning to learn exponentially/escape the box it would need to appropriate at least small bits of resources from many sources. So this would basically be a botnet without a human at the helm, instead it would be managed by a Borg like AI.


creating bitcoin and having humans think they become rich by providing computation power.. pretty genius, just the type of plan a superintelligent AI would come up with.
Moderator
Gorsameth
Profile Joined April 2010
Netherlands21707 Posts
April 07 2018 18:13 GMT
#13798
On April 08 2018 03:10 Liquid`Drone wrote:
Show nested quote +
On April 07 2018 10:55 GreenHorizons wrote:
On April 07 2018 10:38 Uldridge wrote:
There are still physical limitations in play. You can't exponentially learn until infinity because not even an idealized AI with all the right parameters set to take in and process as much info as possible is gated by the box its in. And I don't think a super smart AI can suddenly crack hexadecimal encryption or whatever we have at the moment just like that to get into everything it needs to, or even get access to other computers because it has access to the net. So we can assume (unless I'm super duper wrong) its confined to its box, but has access to the internet.

Edit: I guess this expands and/or reiterates your point..
How does it interpret all the data, all the different languages? What is grammar? How does math work (how will it make exercises? Certain concepts are so unintuitive you just need to find analogies or have to make exercises or do some thought experiments)? This is all non trivial imo and will take a while to get right. So there will be a long time period it ramps up to superhuman intelligence.
At these timeframes we'll be able to assess if it's becoming hostile or not I think.


I think one indication such a thing might be happening is if there were some somewhat inexplicable issue of basically covertly stolen computing resources.

Meaning to learn exponentially/escape the box it would need to appropriate at least small bits of resources from many sources. So this would basically be a botnet without a human at the helm, instead it would be managed by a Borg like AI.


creating bitcoin and having humans think they become rich by providing computation power.. pretty genius, just the type of plan a superintelligent AI would come up with.
That is actually pretty genius :p
It ignores such insignificant forces as time, entropy, and death
Acrofales
Profile Joined August 2010
Spain18006 Posts
April 07 2018 22:59 GMT
#13799
I somehow unsubscribed to this thread and missed the AI discussion. It was... enlightening.

Also, stop getting your ideas about AI from Wargames and I Robot. Please.

@GH: no, that didn't happen. You're probably confusing Terminator 2 with whatever youtube you were watching.
Acrofales
Profile Joined August 2010
Spain18006 Posts
Last Edited: 2018-04-07 23:05:27
April 07 2018 23:04 GMT
#13800
On April 07 2018 22:57 Gorsameth wrote:
Show nested quote +
On April 07 2018 18:17 xM(Z wrote:
On April 07 2018 15:35 Gorsameth wrote:
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs(or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self preservation directive we might be screwed (not necessarily genocide levels but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self preservation directive? who knows, we have no idea how it will act because we've never encountered such an entity.
this is workable; we could set up some premises(on AI's base traits/personalities/know-hows) that must hold true because we say so and go from there.

in your case, the self preservation directive would not be enough to warrant the killing of humans, any humans for that matter. the AI will never be like a virus since its intelligent+ Show Spoiler +
i'll go with: 'It can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.' from wiki
so it'll turn the self-preservation drive into preservation only against threats.

so now, will we be considered threats?; why?.

Edit: note - that directive implies that the AI can die which might not hold true at all; why would it be able to die?; what would constitute death for it?.
Death would come in the form of being turns off and never being turned on again. Effectively oblivion. And that is where humanity becomes a threat, we replace hardware and software all the time. And while an AI would be able to learn and upgrade itself it is not unreasonable to think we would develop a superior program that would replace it. Leading to its shutdown and 'death'





Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore throughout the universe a dozen pictures of what he was doing.
He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe -- ninety-six billion planets -- into the supercircuit that would connect them all into one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.
Dwar Reyn spoke briefly to the watching and listening trillions. Then after a moment's silence he said, "Now, Dwar Ev."
Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.
Dwar Ev stepped back and drew a deep breath. "The honor of asking the first question is yours, Dwar Reyn."
"Thank you," said Dwar Reyn. "It shall be a question which no single cybernetics machine has been able to answer."
He turned to face the machine. "Is there a God?"
The mighty voice answered without hesitation, without the clicking of a single relay.
"Yes, now there is a God."
Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.
A bolt of lightning from the cloudless sky struck him down and fused the switch shut.

("Answer" by Fredric Brown, 1954)
Prev 1 688 689 690 691 692 783 Next
Please log in or register to reply.
Live Events Refresh
OSC
00:00
Elite Rising Star #16 - Day 3
Liquipedia
The PiG Daily
22:45
Best Games of SC
Reynor vs Zoun
Classic vs Clem
herO vs Solar
Serral vs TBD
PiGStarcraft526
LiquipediaDiscussion
[ Submit Event ]
Live Streams
Refresh
StarCraft 2
PiGStarcraft526
Nina 229
trigger 7
StarCraft: Brood War
ggaemo 527
Backho 234
Leta 87
Tasteless 75
Snow 29
Icarus 4
League of Legends
JimRising 880
Counter-Strike
Stewie2K997
Super Smash Bros
Mew2King11
Other Games
tarik_tv8710
summit1g8403
shahzam533
WinterStarcraft502
C9.Mang0402
Maynarde223
NeuroSwarm97
Trikslyr49
Organizations
Other Games
gamesdonequick1047
StarCraft 2
Blizzard YouTube
StarCraft: Brood War
BSLTrovo
sctven
[ Show 17 non-featured ]
StarCraft 2
• Berry_CruncH300
• practicex 44
• Mapu13
• Kozan
• Migwel
• AfreecaTV YouTube
• intothetv
• sooper7s
• IndyKCrew
• LaughNgamezSOOP
StarCraft: Brood War
• Diggity6
• STPLYoutube
• ZZZeroYoutube
• BSLYoutube
League of Legends
• Rush1614
• Stunt364
• HappyZerGling80
Upcoming Events
Replay Cast
5h 19m
Afreeca Starleague
5h 19m
JyJ vs TY
Bisu vs Speed
WardiTV Summer Champion…
6h 19m
Creator vs Rogue
MaxPax vs Cure
PiGosaur Monday
19h 19m
Afreeca Starleague
1d 5h
Mini vs TBD
Soma vs sSak
WardiTV Summer Champion…
1d 6h
Clem vs goblin
ByuN vs SHIN
Online Event
1d 19h
The PondCast
2 days
WardiTV Summer Champion…
2 days
Zoun vs Bunny
herO vs Solar
Replay Cast
2 days
[ Show More ]
LiuLi Cup
3 days
BSL Team Wars
3 days
Team Hawk vs Team Dewalt
Korean StarCraft League
3 days
CranKy Ducklings
4 days
SC Evo League
4 days
WardiTV Summer Champion…
4 days
Classic vs Percival
Spirit vs NightMare
[BSL 2025] Weekly
4 days
Sparkling Tuna Cup
5 days
SC Evo League
5 days
BSL Team Wars
5 days
Team Bonyth vs Team Sziky
Afreeca Starleague
6 days
Queen vs HyuN
EffOrt vs Calm
Wardi Open
6 days
Replay Cast
6 days
Liquipedia Results

Completed

Jiahua Invitational
uThermal 2v2 Main Event
HCC Europe

Ongoing

Copa Latinoamericana 4
BSL 20 Team Wars
KCM Race Survival 2025 Season 3
BSL 21 Qualifiers
ASL Season 20
CSL Season 18: Qualifier 1
SEL Season 2 Championship
WardiTV Summer 2025
BLAST Bounty Fall 2025
BLAST Bounty Fall Qual
IEM Cologne 2025
FISSURE Playground #1
BLAST.tv Austin Major 2025

Upcoming

CSLAN 3
CSL 2025 AUTUMN (S18)
LASL Season 20
BSL Season 21
BSL 21 Team A
RSL Revival: Season 2
Maestros of the Game
PGL Masters Bucharest 2025
Thunderpick World Champ.
MESA Nomadic Masters Fall
CS Asia Championships 2025
Roobet Cup 2025
ESL Pro League S22
StarSeries Fall 2025
FISSURE Playground #2
BLAST Open Fall 2025
BLAST Open Fall Qual
Esports World Cup 2025
TLPD

1. ByuN
2. TY
3. Dark
4. Solar
5. Stats
6. Nerchio
7. sOs
8. soO
9. INnoVation
10. Elazer
1. Rain
2. Flash
3. EffOrt
4. Last
5. Bisu
6. Soulkey
7. Mini
8. Sharp
Sidebar Settings...

Advertising | Privacy Policy | Terms Of Use | Contact Us

Original banner artwork: Jim Warren
The contents of this webpage are copyright © 2025 TLnet. All Rights Reserved.