Ask and answer stupid questions here! - Page 693

Uldridge
Joined January 2011
Belgium, 4800 Posts
April 24 2018 11:02 GMT
#13841
We're still far from the general AI I guess, but I've found this via a YouTube channel I follow, and it shows how AI comes up with solutions that we wouldn't necessarily consider, given a set of instructions (out-of-the-box thinking, creativity, finding loopholes, whatever you want to name it).
The paper in question: The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities
It's certainly interesting how quickly a neural net can learn specific problems, and if certain areas are connected, I'm pretty sure a neural net more general than what we already have (even if it's only a small subset of what it could possibly work on) could give good, or even innovative, solutions to already existing problems.
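To make the "digital evolution" idea concrete, here is a toy sketch (my own illustration, not code from the paper; the function names and the objective are made up): a bare-bones mutate-and-select loop over bit strings. Whatever the fitness function rewards, including loopholes the designer never intended, is exactly what this kind of search amplifies.

import random

# Toy (1+1) evolutionary loop: mutate one candidate, keep the mutant if the
# fitness function scores it at least as well. All names here are illustrative.

def fitness(genome):
    # Stand-in objective: count of 1-bits in a fixed-length bit string.
    return sum(genome)

def evolve(length=20, generations=500, mutation_rate=0.05):
    parent = [random.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        child = [1 - bit if random.random() < mutation_rate else bit
                 for bit in parent]
        if fitness(child) >= fitness(parent):  # selection step
            parent = child
    return parent

best = evolve()
print(best, fitness(best))  # typically converges to all 1s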
Taxes are for Terrans
xM(Z
Joined November 2006
Romania, 5281 Posts
April 24 2018 12:01 GMT
#13842
On April 24 2018 13:57 Myrddraal wrote:
On April 08 2018 15:43 xM(Z wrote:
On April 07 2018 23:15 Simberto wrote:
On April 07 2018 20:51 xM(Z wrote:
i think we have very different ideas of what an AI is/can be. when you say things like "the AI was originally intended to improve the production of paperclips, and that is still its primary motivation" you defy its definition.
an AI, any AI, would be able to stop itself from producing paperclips, else it would just be a machine that went off the rails.
it would be able to question and change its design.

i'm here(AI=):
a rational agent is an agent that has clear preferences, models uncertainty via expected values of variables or functions of variables, and always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions. A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.


And what if the AI has a clear preference for building as many paper clips as possible? And chooses to perform the actions with the optimal expected outcome for itself, namely the ones that enable it to build as many paperclips as possible, and those which remove any obstacle which is in the way of that?

Just because it is rational and self-aware does not mean that it has human-like goals.
then we're still on the definition. you're describing an obsessive-compulsive (human) disorder.
even if i take it as true and paperclips are its new black, there's no way that's the only value/variable/action it can weigh.
1)i'm here:
pref·er·ence (prĕf′ər-əns, prĕf′rəns) n.
a. The selecting of someone or something over another or others
b. The right or chance to make a choice
meaning it (the AI) can and does fathom other alternatives, but in your example you chose to ignore that alternatives exist, so:
- when presented with alternatives and paperclips is chosen, there needs to be a reason (the machine needs to answer the why?; if the reason and the why don't exist, then paperclips are hard-coded into its program by you, which makes your so-called AI not an AI at all);
- when presented with alternatives and paperclips becomes an obsession, then the AI would do what people do: try and fix it.

i see the AI as continuing from 'the best' humans forward, not cycling through the failures of the flesh (obsessive, possessive, depressive, plus other vanity-esque features).

(see:
"The AI effect" tries to redefine AI to mean: AI is anything that has not been done yet

A view taken by some people trying to promulgate the AI effect is: As soon as AI successfully solves a problem, the problem is no longer a part of AI.
and you're cycling through every (solved) human flaw you know.
rise above the clouds: you are the worm and AI is the new human. do you think of the worms and how much of an obstacle they are to you? come on ... at best, i'll give you collateral damage here (which is another can of worms in and of itself, mostly because it implies that the AI is stupid on some levels).

Edit: forgot about Uldridge - i'd argue that memory is not required for AI existence, but for its survival; then i'd argue that it's not the memory (storage) that would best facilitate that, but the speed and the ability with which one can access the actual/immediate/physical information about <things one wants to learn>.
memory is a flaw even in human construction since it enables mistakes based on 'wrong' readings; or rather, a memory is only as good/objective as the sensors reading the soon-to-be-stored information are.


It sounds like Simberto has read or listened to some of Eliezer Yudkowsky's work, because the paperclip maximiser is his example of how he thinks "artificial general intelligence, even one designed competently and without malice, could ultimately destroy humanity".

The artificial intelligence that you are referring to would be further in the future I think, whereas the paperclip maximizer is an example that would be more likely to happen in the nearer future. Eliezer's caution is that the danger lies in the alignment problem (how can we make sure the AI's goals will align with ours, or be to our benefit) not being solved; he thinks solving it could take 2-3 years longer than creating a general AI that is not properly aligned.

I think the most important distinction that people need to be aware of when discussing AI is the difference between general and specific intelligence. From what I have heard we are still quite far away from achieving powerful general AI, and we don't have a lot to fear from specific AI (such as those that mastered Go etc). What I'd be worried about is a powerful general AI that has access to, or is able to create, specific AI, and is not correctly aligned with our (human) goals.
for some reason i can't see humans and the AI being contemporaries, sharing the same physical space/resources.

i'm on the side of punctuated equilibria vis-à-vis phyletic gradualism.
Phyletic gradualism is a model of evolution which theorizes that most speciation is slow, uniform and gradual. When evolution occurs in this mode, it is usually by the steady transformation of a whole species into a new one (through a process called anagenesis). In this view no clear line of demarcation exists between an ancestral species and a descendant species, unless splitting occurs.
vs
... contrasts with the theory of punctuated equilibrium, which proposes that most evolution occurs isolated in rare episodes of rapid evolution, when a single species splits into two distinct species, followed by a long period of stasis or non-change. These models both contrast with variable-speed evolution ("variable speedism"), which maintains that different species evolve at different rates, and that there is no reason to stress one rate of change over another.
(the quantum evolution model looks to be in its infancy; plus, i put too much value on that 'period of stasis' to give q.ev. too much credit for now)

so, when the AI splits(whatever that will mean), it'll be gone; zoom, zoom through universes using fungal networks.

i can't see a higher being having 'human goals'; it's like you having/being limited to ant (random) goals: take care of the colony, die ... the end? (i don't know what else they have going on there).
And my fury stands ready. I bring all your plans to nought. My bleak heart beats steady. 'Tis you whom I have sought.
Acrofales
Joined August 2010
Spain, 18006 Posts
April 24 2018 12:44 GMT
#13843
On April 24 2018 21:01 xM(Z wrote:
[…]


I don't really know why evolution comes into this at all. We are the ones designing the AI, so we are the ones who decide what goals to give it. It would be exceptionally stupid of us to create an AI that doesn't have "human goals". That is not to say any kind of "general AI" would be safe.

Even with human goals plenty can go wrong. Right now, it is Assad's very human goal to completely dominate the rebels in his country (and similarly the rebels have the human goal to topple Assad's government). Human goals and altruism are not the same thing, and a competent AI tasked with eradicating some mad dictator's enemies could be a very dangerous thing, even if it is completely under control and designed to be entirely obedient to "human goals". Moreover, Asimov has written books and books and books on how even the most basic "altruistic" rules can break down and cause catastrophe. If you ask me, our own ethical framework is not developed enough to even know what rules we should want to have govern a general AI. That is kinda ok, because we're also quite far away from the capability of creating a general AI, but it is something we need to be thinking about (and luckily, we are).

But... back to "evolution" of AI, even as an accident, it seems unlikely. Evolution happens when things reproduce (with error). Now we cannot possibly stop errors from happening in reproduction, but it should be fairly trivial to not have them reproduce in the first place. Of course, this would be a legal framework, and not a technological one: if we are capable of creating general AI, we are capable of giving that same AI the means of backing itself up, making copies of itself, or what have you. It would require some serious police work to ensure nobody does that. Probably something similar to the IAEA, but for AIs. Because I do agree with you that if a general AI can reproduce and evolve, it will no doubt, at some point, consider us as competitors in some way or another, and act accordingly.
Epishade
Joined November 2011
United States, 2267 Posts
April 26 2018 06:38 GMT
#13844
Assume you have to make a decision between two choices. Both choices are equally appetizing to you so you can't decide between them no matter what based on opinion alone. You need a random number generator to tell you which choice to take. However, you are alone and have no coins or other items to flip or random number generators to run or anything else to help you make this decision. How do you create your own random number generator (aka random decision maker) without using any items?
Pinhead Larry in the streets, Dirty Dan in the sheets.
Simberto
Joined July 2010
Germany, 11521 Posts
April 26 2018 06:49 GMT
#13845
If you can set it up before knowing the choices, you can choose whichever option comes first in the alphabet. It is not very random, obviously, but it should solve that single-case problem.
Acrofales
Joined August 2010
Spain, 18006 Posts
April 26 2018 07:07 GMT
#13846
On April 26 2018 15:38 Epishade wrote:
Assume you have to make a decision between two choices. Both choices are equally appetizing to you so you can't decide between them no matter what based on opinion alone. You need a random number generator to tell you which choice to take. However, you are alone and have no coins or other items to flip or random number generators to run or anything else to help you make this decision. How do you create your own random number generator (aka random decision maker) without using any items?

Come up with any system that is sufficiently complicated that you can't "unconsciously" calculate the outcome, and has the same chance of picking one item or the other. E.g. pick a number n that is greater than 10. If the nth digit of pi is even, pick the left item. Otherwise pick the right item. If you suspect you are familiar enough with pi to be able to cheat, pick a number > 100. Or use the nth digit of e instead.

If you don't have any way of looking up the digits of pi, you can calculate them through Taylor expansion of a Machin-like formula. Have fun!
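For anyone who wants to try that without a table of digits, here is a small sketch of the approach suggested above (my own illustration; the helper names and the example choices are made up): Machin's formula pi = 16*arctan(1/5) - 4*arctan(1/239), each arctangent summed as its Taylor series in scaled integer arithmetic, with the parity of the n-th digit deciding the coin flip.

def arctan_inv(x, digits):
    # arctan(1/x) scaled by 10**digits, via the alternating Taylor series
    # 1/x - 1/(3x^3) + 1/(5x^5) - ...
    scale = 10 ** digits
    power = scale // x          # (1/x)^(2k+1), kept as a scaled integer
    total, k, sign = 0, 0, 1
    while power:
        total += sign * (power // (2 * k + 1))
        power //= x * x
        k += 1
        sign = -sign
    return total

def pi_digits(n):
    # First n decimal digits of pi as a string, using Machin's formula
    # pi = 16*arctan(1/5) - 4*arctan(1/239); 10 guard digits absorb rounding.
    guard = 10
    scaled = 16 * arctan_inv(5, n + guard) - 4 * arctan_inv(239, n + guard)
    return str(scaled)[:n]

def pick(n, left, right):
    # Even n-th digit of pi -> left option, odd -> right option.
    return left if int(pi_digits(n)[n - 1]) % 2 == 0 else right

print(pi_digits(15))               # 314159265358979
print(pick(12, "pizza", "sushi"))  # 12th digit of pi is 8 (even) -> pizza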
Simberto
Joined July 2010
Germany, 11521 Posts
April 26 2018 07:16 GMT
#13847
Very cool solution!

I think e works better than pi since the series is easier to calculate in your head.

Also, I just realized: is this question basically trying to figure out how to set up a roleplaying group in Plato's cave?
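And a matching sketch for e (again just an illustration, assuming integer arithmetic with a few guard digits): e = sum of 1/k!, where each new term is simply the previous one divided by the next k.

def e_digits(n):
    # First n decimal digits of e via e = sum(1/k!), k = 0, 1, 2, ...
    scale = 10 ** (n + 5)         # 5 guard digits against truncation error
    term, total, k = scale, 0, 0  # term holds scale // k!
    while term:
        total += term
        k += 1
        term //= k
    return str(total)[:n]

print(e_digits(10))  # 2718281828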
JimmiC
Joined May 2011
Canada, 22817 Posts
April 27 2018 19:33 GMT
#13848
--- Nuked ---
farvacola
Joined January 2011
United States, 18828 Posts
April 27 2018 19:38 GMT
#13849
"I hadn't noticed."
"when the Dead Kennedys found out they had skinhead fans, they literally wrote a song titled 'Nazi Punks Fuck Off'"
JimmiC
Joined May 2011
Canada, 22817 Posts
April 27 2018 19:38 GMT
#13850
--- Nuked ---
Dark_Chill
Joined May 2011
Canada, 3353 Posts
April 27 2018 19:48 GMT
#13851
Thanks, I grew me myself
CUTE MAKES RIGHT
Fecalfeast
Joined January 2010
Canada, 11355 Posts
April 27 2018 20:05 GMT
#13852
"No I'm not"
Moderator | INFLATE YOUR POST COUNT; PLAY TL MAFIA
Uldridge
Joined January 2011
Belgium, 4800 Posts
April 28 2018 02:53 GMT
#13853
"Tall boys are called men"
Taxes are for Terrans
xM(Z
Joined November 2006
Romania, 5281 Posts
Last Edited: 2018-04-28 06:12:26
April 28 2018 06:09 GMT
#13854
On April 24 2018 21:44 Acrofales wrote:
[…]
i'm short on time these days so i don't have time to ramble on this, but you seem stuck on the notion of a subservient AI, one which you control either by its make-up/design or by guilt (human goals/ethics/emotions). all i can say here is - ditch your white man issues/complexes, you (as a human) are not the end-all, be-all.

other than that, regardless of how you design your AI and how many fail-safes you add to it, there will be a point at which the AI will birth itself into being and be separate from your building constraints. before that, we're talking about a machine we control (it may look smart but it'll still be a machine), and after that point, we'll be talking about a being.
(from my pov, you only talk about the former, which is not interesting)
To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.
And my fury stands ready. I bring all your plans to nought. My bleak heart beats steady. 'Tis you whom I have sought.
The_Templar
Joined January 2011
your Country, 52797 Posts
Last Edited: 2018-04-28 13:23:10
April 28 2018 13:22 GMT
#13855
On April 28 2018 04:33 JimmiC wrote:
What is the correct response to "boy, you're tall" or something similar?

I've used "thank you" and "it's true" but neither feels right.

"No, I'm not."

The taller you are, the better this response is.

Alternatively, since that has already been said, "thanks, you too" is also acceptable.
Moderator | she/her | TL+ Member
GreenHorizons
Joined April 2011
United States, 23250 Posts
Last Edited: 2018-04-28 13:32:34
April 28 2018 13:30 GMT
#13856
The best one I got was when I was just a wee little tyke and he bent over and told me he was actually two shorter people stacked on top of each other, and then "shhh"d me and told me not to tell anyone.

I think that would be exponentially better when told to adults.

The more deadpan the better.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Dark_Chill
Joined May 2011
Canada, 3353 Posts
April 28 2018 13:52 GMT
#13857
On April 28 2018 22:30 GreenHorizons wrote:
The best one I got was when I was just a wee little tyke and he bent over and told me he was actually two shorter people stacked on top of each other, and then "shhh"d me and told me not to tell anyone.

I think that would be exponentially better when told to adults.

The more deadpan the better.

"No, no, not at all. I'm just the one on top. William, let's go".
Completely straight-faced, walk away after.
CUTE MAKES RIGHT
Wrath
Joined July 2014
3174 Posts
April 30 2018 20:50 GMT
#13858
Hi, what good computer headset/headphones do you recommend for a $15-$20 budget?
Simberto
Joined July 2010
Germany, 11521 Posts
April 30 2018 21:32 GMT
#13859
Whatever has some kind of decent reviews on Amazon. You are not going to get a good headset at that price.
GreenHorizons
Joined April 2011
United States, 23250 Posts
April 30 2018 21:39 GMT
#13860
On May 01 2018 05:50 Wrath wrote:
Hi, what good computer headset/headphones do you recommend for a $15-$20 budget?

Sades are okay, and Sentey has some decent ones at that price. Nothing's going to be too great at that price, but both those brands have models that should have ~4-star reviews.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."