Ask and answer stupid questions here! - Page 693

Uldridge
Profile Blog Joined January 2011
Belgium5121 Posts
April 24 2018 11:02 GMT
#13841
We are far from the general topic, I guess, but I've found this via a YouTube channel I follow, and it shows how AI comes up with solutions that we wouldn't necessarily consider, given a set of instructions (out-of-the-box thinking, creativity, finding loopholes, whatever you want to name it)
The paper in question: The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities
It's certainly interesting how fast the learning curve for a neural net can be for specific problems, and if certain areas are connected, I'm pretty sure a more general neural net than what we already have (even if it's only a small subset of what it could possibly work on) could give good, or even innovative, solutions to already existing problems.
Taxes are for Terrans
xM(Z
Profile Joined November 2006
Romania5299 Posts
April 24 2018 12:01 GMT
#13842
On April 24 2018 13:57 Myrddraal wrote:
On April 08 2018 15:43 xM(Z wrote:
On April 07 2018 23:15 Simberto wrote:
On April 07 2018 20:51 xM(Z wrote:
i think we have very different ideas of what an AI is/can be. when you say things like "the AI was originally intended to improve the production of paperclips, and that is still it's primary motivation" you defy its definition.
an AI, any AI, would be able to stop itself from producing paperclips else it would be just a machine that went off the rails.
it would be able to question and change its design.

i'm here(AI=):
a rational agent is an agent that has clear preferences, models uncertainty via expected values of variables or functions of variables, and always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions. A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.


And what if the AI has a clear preference for building as many paper clips as possible? And chooses to perform the actions with the optimal expected outcome for itself, namely the ones that enable it to build as many paperclips as possible, and those which remove any obstacle which is in the way of that?

Just because it is rational and self-aware does not mean that it has human-like goals.
then we're still on the definition: you're describing an obsessive-compulsive (human) disorder.
even if i take it as true and paperclips are its new black, there's no way it's the only value/variable/action it can weigh.
1)i'm here:
pref·er·ence (prĕf′ər-əns, prĕf′rəns) n.
a. The selecting of someone or something over another or others
b. The right or chance to make a choice
meaning it (the AI) can and does fathom other alternatives, but in your example you chose to ignore that alternatives exist, so:
- when presented with alternatives and paperclips is chosen, there needs to be a reason (the machine needs to answer the why?; if the reason and the why don't exist, then paperclips is hard-coded into its program by you, which makes your so-called AI not an AI at all);
- when presented with alternatives and paperclips becomes an obsession, then the AI would do what people do: try and fix it.

i see the AI as continuing from 'the best' humans forward, not cycling through the failures of the flesh (obsessive, possessive, depressive, plus other vanity-esque features).

(see:
"The AI effect" tries to redefine AI to mean: AI is anything that has not been done yet

A view taken by some people trying to promulgate the AI effect is: As soon as AI successfully solves a problem, the problem is no longer a part of AI.
and you're cycling through every (solved)human flaw you know.
rise above the clouds: you are the worm and AI is the new human. do you think of the worms and how much of an obstacle they are to you? come on ... at best, i'll give you collateral damage here (which is another can of worms in and of itself, mostly because it implies that the AI is stupid on some levels).

Edit: forgot about Uldridge - i'd argue that memory is not required for AI existence, but for its survival; then i'd argue that it is not the memory (storage) that would best facilitate that, but the speed and the ability with which one can access the actual/immediate/physical information about <things one wants to learn>.
memory is a flaw even in human construction, since it enables mistakes based on 'wrong' readings; or rather, a memory is only as good/objective as the sensors reading the soon-to-be-stored information.


It sounds like Simberto has read or listened to some of Eliezer Yudkowsky's work, because the paperclip maximiser is his example of how he thinks "artificial general intelligence, even one designed competently and without malice, could ultimately destroy humanity".

The artificial intelligence that you are referring to would be further in the future, I think, whereas the paperclip maximizer is an example that would be more likely to happen in the nearer future. Eliezer's caution is that the danger lies in failing to solve the alignment problem (how can we make sure the AI's goals will align with ours, or be to our benefit), which he thinks could take an extra 2-3 years beyond creating a general AI that is not properly aligned.

I think the most important definition that people need to be aware of when discussing AI is the difference between general and specific intelligence. From what I have heard we are still quite far away from achieving powerful general AI, and we don't have a lot to fear from specific AI (such as those that mastered Go etc.). What I'd be worried about is a powerful general AI that has access to, or is able to create, specific AI, and is not correctly aligned with our (human) goals.
for some reason i can't see humans and the AI being contemporary, sharing the same physical space/resources.

i'm on the side of punctuated equilibria vis-à-vis phyletic gradualism.
Phyletic gradualism is a model of evolution which theorizes that most speciation is slow, uniform and gradual.[1] When evolution occurs in this mode, it is usually by the steady transformation of a whole species into a new one (through a process called anagenesis). In this view no clear line of demarcation exists between an ancestral species and a descendant species, unless splitting occurs.
vs
... contrasts with the theory of punctuated equilibrium, which proposes that most evolution occurs isolated in rare episodes of rapid evolution, when a single species splits into two distinct species, followed by a long period of stasis or non-change. These models both contrast with variable-speed evolution ("variable speedism"), which maintains that different species evolve at different rates, and that there is no reason to stress one rate of change over another.
(the quantum evolution model looks to be in its infancy; plus, i put too much value on that 'period of stasis' to give q.ev. too much credit for now)

so, when the AI splits(whatever that will mean), it'll be gone; zoom, zoom through universes using fungal networks.

i can't see a higher being having 'human goals'; it's like you having/being limited to ant (random) goals: take care of the colony, die ... the end? (i don't know what else they have going on there).
And my fury stands ready. I bring all your plans to nought. My bleak heart beats steady. 'Tis you whom I have sought.
Acrofales
Profile Joined August 2010
Spain18285 Posts
April 24 2018 12:44 GMT
#13843
On April 24 2018 21:01 xM(Z wrote:


I don't really know why evolution comes into this at all. We are the ones designing the AI, so we are the ones who decide what goals to give it. It would be exceptionally stupid of us to create an AI that doesn't have "human goals". That is not to say any kind of "general AI" would be safe.

Even with human goals plenty can go wrong. Right now, it is Assad's very human goal to completely dominate the rebels in his country (and similarly the rebels have the human goal to topple Assad's government). Human goals and altruism are not the same thing, and a competent AI tasked with eradicating some mad dictator's enemies could be a very dangerous thing, even if it is completely under control and designed to be entirely obedient to "human goals". Moreover, Asimov has written books and books and books on how even the most basic "altruistic" rules can break down and cause catastrophe. If you ask me, our own ethical framework is not developed enough to even know what rules we should want to have govern a general AI. That is kinda ok, because we're also quite far away from the capability of creating a general AI, but it is something we need to be thinking about (and luckily, we are).

But... back to "evolution" of AI, even as an accident, it seems unlikely. Evolution happens when things reproduce (with error). Now we cannot possibly stop errors from happening in reproduction, but it should be fairly trivial to not have them reproduce in the first place. Of course, this would be a legal framework, and not a technological one: if we are capable of creating general AI, we are capable of giving that same AI the means of backing itself up, making copies of itself, or what have you. It would require some serious police work to ensure nobody does that. Probably something similar to the IAEA, but for AIs. Because I do agree with you that if a general AI can reproduce and evolve, it will no doubt, at some point, consider us as competitors in some way or another, and act accordingly.
Epishade
Profile Blog Joined November 2011
United States2267 Posts
April 26 2018 06:38 GMT
#13844
Assume you have to make a decision between two choices. Both choices are equally appetizing to you so you can't decide between them no matter what based on opinion alone. You need a random number generator to tell you which choice to take. However, you are alone and have no coins or other items to flip or random number generators to run or anything else to help you make this decision. How do you create your own random number generator (aka random decision maker) without using any items?
Pinhead Larry in the streets, Dirty Dan in the sheets.
Simberto
Profile Blog Joined July 2010
Germany11824 Posts
April 26 2018 06:49 GMT
#13845
If you can set it up before knowing the choices, you can choose whichever option comes first in the alphabet. It is not very random, obviously, but it should solve that single-case problem.
Acrofales
Profile Joined August 2010
Spain18285 Posts
April 26 2018 07:07 GMT
#13846
On April 26 2018 15:38 Epishade wrote:
Assume you have to make a decision between two choices. Both choices are equally appetizing to you so you can't decide between them no matter what based on opinion alone. You need a random number generator to tell you which choice to take. However, you are alone and have no coins or other items to flip or random number generators to run or anything else to help you make this decision. How do you create your own random number generator (aka random decision maker) without using any items?

Come up with any system that is sufficiently complicated that you can't "unconsciously" calculate the outcome, and has the same chance of picking one item or the other. E.g. pick a number n that is greater than 10. If the nth digit of pi is even, pick the left item. Otherwise pick the right item. If you suspect you are familiar enough with pi to be able to cheat, pick a number > 100. Or use the nth digit of e instead.

If you don't have any way of looking up the digits of pi, you can calculate them through Taylor expansion of a Machin-like formula. Have fun!
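A quick sketch of the e-digit variant of this scheme in Python (the function names and the ten guard digits are my own choices, not from anyone's post): the nth decimal digit of e is computed with pure integer arithmetic by summing scaled inverse factorials, so no lookup table and no floating point are needed.

```python
# Sketch of the "nth digit of e" coin flip described above.
# The guard digits absorb the truncation error of the integer
# divisions for any reasonable n.
def nth_digit_of_e(n: int, extra: int = 10) -> int:
    scale = 10 ** (n + extra)      # fixed-point scale
    total, term, k = 0, scale, 0   # term == scale // k!, starting at k = 0
    while term:
        total += term              # total converges toward e * scale
        k += 1
        term //= k                 # next inverse-factorial term, truncated
    return (total // 10 ** extra) % 10  # nth digit after the decimal point

def coin_flip(n: int) -> str:
    # Even digit -> left option, odd digit -> right option.
    return "left" if nth_digit_of_e(n) % 2 == 0 else "right"
```

For n = 1 this inspects the first decimal of e = 2.718..., namely 7, which is odd, so the right-hand option wins; picking a larger n makes the outcome harder to "know" in advance.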
Simberto
Profile Blog Joined July 2010
Germany11824 Posts
April 26 2018 07:16 GMT
#13847
Very cool solution!

I think e works better than pi, since its series is easier to calculate in your head.

Also, I just realized: is this question basically trying to figure out how to set up a roleplaying group in Plato's cave?
JimmiC
Profile Blog Joined May 2011
Canada22817 Posts
April 27 2018 19:33 GMT
#13848
--- Nuked ---
farvacola
Profile Blog Joined January 2011
United States18857 Posts
April 27 2018 19:38 GMT
#13849
"I hadn't noticed."
"when the Dead Kennedys found out they had skinhead fans, they literally wrote a song titled 'Nazi Punks Fuck Off'"
JimmiC
Profile Blog Joined May 2011
Canada22817 Posts
April 27 2018 19:38 GMT
#13850
--- Nuked ---
Dark_Chill
Profile Joined May 2011
Canada3353 Posts
April 27 2018 19:48 GMT
#13851
Thanks, I grew me myself
CUTE MAKES RIGHT
Fecalfeast
Profile Joined January 2010
Canada11355 Posts
April 27 2018 20:05 GMT
#13852
"No I'm not"
Moderator | INFLATE YOUR POST COUNT; PLAY TL MAFIA
Uldridge
Profile Blog Joined January 2011
Belgium5121 Posts
April 28 2018 02:53 GMT
#13853
"Tall boys are called men"
xM(Z
Profile Joined November 2006
Romania5299 Posts
Last Edited: 2018-04-28 06:12:26
April 28 2018 06:09 GMT
#13854
On April 24 2018 21:44 Acrofales wrote:
i'm short on time these days, so i can't ramble on this, but you seem stuck on the notion of a subservient AI, one which you control either by its make-up/design or by guilt (human goals/ethics/emotions). all i can say here is: ditch your white-man issues/complexes; you (as a human) are not the end-all, be-all.

other than that, regardless of how you design your AI and how many fail-safes you add to it, there will be a point at which the AI will birth itself into being and be separate from your building constraints. before that, we're talking about a machine we control (it may look smart, but it'll still be a machine), and after that point, we'll be talking about a being.
(from my pov, you only talk about the former which is not interesting)
To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.
The_Templar
Profile Blog Joined January 2011
your Country52797 Posts
Last Edited: 2018-04-28 13:23:10
April 28 2018 13:22 GMT
#13855
On April 28 2018 04:33 JimmiC wrote:
What is the correct response to "boy your tall" or something similar.

I've used "thank you" and "it's true" but neither feels right.

"No, I'm not."

The taller you are, the better this response is.

Alternatively, since that has already been said, "thanks, you too" is also acceptable.
Moderator | she/her
TL+ Member
GreenHorizons
Profile Blog Joined April 2011
United States23930 Posts
Last Edited: 2018-04-28 13:32:34
April 28 2018 13:30 GMT
#13856
The best one I got was when I was just a wee little tyke and he bent over and told me he was actually two shorter people stacked on top of each other, and then "shhh"d me and told me not to tell anyone.

I think that would be exponentially better when told to adults.

The more deadpan the better.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Dark_Chill
Profile Joined May 2011
Canada3353 Posts
April 28 2018 13:52 GMT
#13857
On April 28 2018 22:30 GreenHorizons wrote:
The best one I got was when I was just a wee little tyke and he bent over and told me he was actually two shorter people stacked on top of each other, and then "shhh"d me and told me not to tell anyone.

I think that would be exponentially better when told to adults.

The more deadpan the better.

"No, no, not at all. I'm just the one on top. William, let's go".
Completely straight-faced, walk away after.
CUTE MAKES RIGHT
Wrath
Profile Blog Joined July 2014
3174 Posts
April 30 2018 20:50 GMT
#13858
Hi, what good computer headset/headphone do you recommend for $15-$20 budget?
Simberto
Profile Blog Joined July 2010
Germany11824 Posts
April 30 2018 21:32 GMT
#13859
Whatever has some kind of decent reviews on Amazon. You are not going to get a good headset at that price.
GreenHorizons
Profile Blog Joined April 2011
United States23930 Posts
April 30 2018 21:39 GMT
#13860
On May 01 2018 05:50 Wrath wrote:
Hi, what good computer headset/headphone do you recommend for $15-$20 budget?

Sades are okay, and Sentey has some decent ones in that range. Nothing's going to be too great at that price, but both brands have models that should have ~4-star reviews.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."