Ask and answer stupid questions here! - Page 693

Uldridge
Profile Blog Joined January 2011
Belgium, 4967 Posts
April 24 2018 11:02 GMT
#13841
We are far from general AI, I guess, but I found this via a YouTube channel I follow, and it shows how AI comes up with solutions that we wouldn't necessarily consider, given a set of instructions (out-of-the-box thinking, creativity, finding loopholes, whatever you want to call it).
The paper in question: The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities
It's certainly interesting how quickly a neural net can learn specific problems, and if certain areas are connected, I'm pretty sure a neural net more general than what we already have (even if it only covers a small subset of what it could possibly work on) can give good, or even innovative, solutions to already existing problems.
Taxes are for Terrans
xM(Z
Profile Joined November 2006
Romania, 5296 Posts
April 24 2018 12:01 GMT
#13842
On April 24 2018 13:57 Myrddraal wrote:
Show nested quote +
On April 08 2018 15:43 xM(Z wrote:
On April 07 2018 23:15 Simberto wrote:
On April 07 2018 20:51 xM(Z wrote:
i think we have very different ideas of what an AI is/can be. when you say things like "the AI was originally intended to improve the production of paperclips, and that is still its primary motivation" you defy its definition.
an AI, any AI, would be able to stop itself from producing paperclips; otherwise it would just be a machine that went off the rails.
it would be able to question and change its design.

i'm here(AI=):
a rational agent is an agent that has clear preferences, models uncertainty via expected values of variables or functions of variables, and always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions. A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.


And what if the AI has a clear preference for building as many paper clips as possible? And chooses to perform the actions with the optimal expected outcome for itself, namely the ones that enable it to build as many paperclips as possible, and those which remove any obstacle which is in the way of that?

Just because it is rational and self-aware does not mean that it has human-like goals.
then we're still stuck on the definition. you're describing an obsessive-compulsive (human) disorder.
even if i take it as true and paperclips are its new black, there's no way that's the only value/variable/action it can weigh.
1) i'm here:
pref·er·ence (prĕf′ər-əns, prĕf′rəns) n.
a. The selecting of someone or something over another or others
b. The right or chance to make a choice
meaning it (the AI) can and does fathom other alternatives, but in your example you chose to ignore that alternatives exist, so:
- when presented with alternatives and paperclips are chosen, there needs to be a reason (the machine needs to answer the why?; if the reason and the why don't exist, then paperclips are hard-coded into its program by you, which makes your so-called AI not an AI at all);
- when presented with alternatives and paperclips become an obsession, then the AI would do what people do: try and fix it.

i see the AI as continuing forward from 'the best' of humans, not cycling through the failures of the flesh (obsessive, possessive, depressive, plus other vanity-esque features).

(see:
"The AI effect" tries to redefine AI to mean: AI is anything that has not been done yet

A view taken by some people trying to promulgate the AI effect is: As soon as AI successfully solves a problem, the problem is no longer a part of AI.
and you're cycling through every (solved) human flaw you know.
rise above the clouds: you are the worm and AI is the new human. do you think of the worms and how much of an obstacle they are to you? come on ... at best, i'll give you collateral damage here (which is another can of worms in and of itself, mostly because it implies that the AI is stupid on some levels)
).

Edit: forgot about Uldridge - i'd argue that memory is not required for an AI's existence, but for its survival; then i'd argue that it is not the memory (storage) that would best facilitate that, but the speed and the ability with which one can access the actual/immediate/physical information about <things one wants to learn>.
memory is a flaw even in human construction, since it enables mistakes based on 'wrong' readings; or rather, a memory is only as good/objective as the sensors reading the soon-to-be-stored information.


It sounds like Simberto has read or listened to some of Eliezer Yudkowsky's work, because the paperclip maximiser is his example of how he thinks "artificial general intelligence, even one designed competently and without malice, could ultimately destroy humanity".

The artificial intelligence that you are referring to would be further in the future, I think, whereas the paperclip maximizer is an example that would be more likely to happen in the nearer future. Eliezer's caution is that the danger lies in failing to solve the alignment problem (how can we make sure the AI's goals align with ours, or are to our benefit), which he thinks could take 2-3 years longer than creating a general AI that is not properly aligned.

I think the most important distinction people need to be aware of when discussing AI is the difference between general and specific intelligence. From what I have heard we are still quite far away from achieving powerful general AI, and we don't have a lot to fear from specific AI (such as the ones that mastered Go, etc.). What I'd be worried about is a powerful general AI that has access to, or is able to create, specific AI, and is not correctly aligned with our (human) goals.
for some reason i can't see humans and the AI being contemporaries, sharing the same physical space/resources.

i'm on the side of punctuated equilibria vis-à-vis phyletic gradualism.
Phyletic gradualism is a model of evolution which theorizes that most speciation is slow, uniform and gradual.[1] When evolution occurs in this mode, it is usually by the steady transformation of a whole species into a new one (through a process called anagenesis). In this view no clear line of demarcation exists between an ancestral species and a descendant species, unless splitting occurs.
vs
... contrasts with the theory of punctuated equilibrium, which proposes that most evolution occurs isolated in rare episodes of rapid evolution, when a single species splits into two distinct species, followed by a long period of stasis or non-change. These models both contrast with variable-speed evolution ("variable speedism"), which maintains that different species evolve at different rates, and that there is no reason to stress one rate of change over another.
(the quantum evolution model looks to be in its infancy; plus, i put too much value on that 'period of stasis' to give q.ev. too much credit for now)

so, when the AI splits (whatever that will mean), it'll be gone; zoom, zoom through universes using fungal networks.

i can't see a higher being having 'human goals'; it's like you having/being limited to ant (random) goals: take care of the colony, die ... the end? (i don't know what else they have going on there).
And my fury stands ready. I bring all your plans to nought. My bleak heart beats steady. 'Tis you whom I have sought.
Acrofales
Profile Joined August 2010
Spain, 18131 Posts
April 24 2018 12:44 GMT
#13843
On April 24 2018 21:01 xM(Z wrote:
Show nested quote +


I don't really know why evolution comes into this at all. We are the ones designing the AI, so we are the ones who decide what goals to give it. It would be exceptionally stupid of us to create an AI that doesn't have "human goals". That is not to say any kind of "general AI" would be safe.

Even with human goals plenty can go wrong. Right now, it is Assad's very human goal to completely dominate the rebels in his country (and similarly the rebels have the human goal to topple Assad's government). Human goals and altruism are not the same thing, and a competent AI tasked with eradicating some mad dictator's enemies could be a very dangerous thing, even if it is completely under control and designed to be entirely obedient to "human goals". Moreover, Asimov has written books and books and books on how even the most basic "altruistic" rules can break down and cause catastrophe. If you ask me, our own ethical framework is not developed enough to even know what rules we should want to govern a general AI. That is kinda ok, because we're also quite far away from the capability of creating a general AI, but it is something we need to be thinking about (and luckily, we are).

But... back to "evolution" of AI, even as an accident, it seems unlikely. Evolution happens when things reproduce (with error). Now we cannot possibly stop errors from happening in reproduction, but it should be fairly trivial to not have them reproduce in the first place. Of course, this would be a legal framework, and not a technological one: if we are capable of creating general AI, we are capable of giving that same AI the means of backing itself up, making copies of itself, or what have you. It would require some serious police work to ensure nobody does that. Probably something similar to the IAEA, but for AIs. Because I do agree with you that if a general AI can reproduce and evolve, it will no doubt, at some point, consider us as competitors in some way or another, and act accordingly.
Epishade
Profile Blog Joined November 2011
United States, 2267 Posts
April 26 2018 06:38 GMT
#13844
Assume you have to make a decision between two choices. Both choices are equally appetizing to you so you can't decide between them no matter what based on opinion alone. You need a random number generator to tell you which choice to take. However, you are alone and have no coins or other items to flip or random number generators to run or anything else to help you make this decision. How do you create your own random number generator (aka random decision maker) without using any items?
Pinhead Larry in the streets, Dirty Dan in the sheets.
Simberto
Profile Blog Joined July 2010
Germany, 11640 Posts
April 26 2018 06:49 GMT
#13845
If you can set it up before knowing the choices, you can choose whichever option is first in the alphabet. It is not very random obviously, but should solve that single case problem.
Acrofales
Profile Joined August 2010
Spain, 18131 Posts
April 26 2018 07:07 GMT
#13846
On April 26 2018 15:38 Epishade wrote:
Assume you have to make a decision between two choices. Both choices are equally appetizing to you so you can't decide between them no matter what based on opinion alone. You need a random number generator to tell you which choice to take. However, you are alone and have no coins or other items to flip or random number generators to run or anything else to help you make this decision. How do you create your own random number generator (aka random decision maker) without using any items?

Come up with any system that is sufficiently complicated that you can't "unconsciously" calculate the outcome, and has the same chance of picking one item or the other. E.g. pick a number n that is greater than 10. If the nth digit of pi is even, pick the left item. Otherwise pick the right item. If you suspect you are familiar enough with pi to be able to cheat, pick a number > 100. Or use the nth digit of e instead.

If you don't have any way of looking up the digits of pi, you can calculate them through Taylor expansion of a Machin-like formula. Have fun!
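
A minimal sketch of what that could look like, in Python (function names and the guard-digit count are my own choices, not anything from the thread): decide by the parity of the nth decimal of pi, with the digits computed in integer arithmetic from Machin's formula pi = 16*arctan(1/5) - 4*arctan(1/239), each arctan expanded as its Taylor series.

# Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239), each arctan
# expanded as the Taylor series 1/m - 1/(3*m^3) + 1/(5*m^5) - ...
# All arithmetic is done on integers scaled by 10**(n + guard digits).

def arctan_inverse(m, scale):
    """Return arctan(1/m) * scale, truncated to an integer."""
    total = term = scale // m
    k = 1
    while term:
        term //= m * m                      # next odd power of 1/m
        k += 2
        total += term // k if k % 4 == 1 else -(term // k)
    return total

def nth_decimal_of_pi(n):
    """Return the nth digit of pi after the decimal point."""
    scale = 10 ** (n + 10)                  # 10 guard digits against truncation error
    pi_scaled = 16 * arctan_inverse(5, scale) - 4 * arctan_inverse(239, scale)
    return (pi_scaled // 10 ** 10) % 10

def pick(left, right, n):
    """Even nth decimal -> left item, odd -> right item."""
    return left if nth_decimal_of_pi(n) % 2 == 0 else right

print(nth_decimal_of_pi(1))                 # 1, since pi = 3.14159...
print(pick("option A", "option B", n=101))

Of course this defeats the "no items" constraint unless you grind through the same series on paper, but it is exactly the calculation described above.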
Simberto
Profile Blog Joined July 2010
Germany, 11640 Posts
April 26 2018 07:16 GMT
#13847
Very cool solution!

I think e works better than pi, since the series is easier to calculate in your head.

Also, I just realized: is this question basically trying to figure out how to set up a roleplaying group in Plato's cave?
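
Same idea with e, as a rough Python sketch (again with invented names): e = 1/0! + 1/1! + 1/2! + ..., so each new term is just the previous one divided by the next integer, which is why it is friendlier to mental arithmetic.

def nth_decimal_of_e(n):
    """Return the nth digit of e after the decimal point."""
    scale = 10 ** (n + 10)          # 10 guard digits
    total, term, k = 0, scale, 0    # term holds scale / k!
    while term:
        total += term
        k += 1
        term //= k                  # scale/(k-1)! -> scale/k! by one division
    return (total // 10 ** 10) % 10

print(nth_decimal_of_e(1))          # 7, since e = 2.71828...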
JimmiC
Profile Blog Joined May 2011
Canada, 22817 Posts
April 27 2018 19:33 GMT
#13848
--- Nuked ---
farvacola
Profile Blog Joined January 2011
United States, 18839 Posts
April 27 2018 19:38 GMT
#13849
"I hadn't noticed."
"when the Dead Kennedys found out they had skinhead fans, they literally wrote a song titled 'Nazi Punks Fuck Off'"
JimmiC
Profile Blog Joined May 2011
Canada, 22817 Posts
April 27 2018 19:38 GMT
#13850
--- Nuked ---
Dark_Chill
Profile Joined May 2011
Canada, 3353 Posts
April 27 2018 19:48 GMT
#13851
Thanks, I grew me myself
CUTE MAKES RIGHT
Fecalfeast
Profile Joined January 2010
Canada, 11355 Posts
April 27 2018 20:05 GMT
#13852
"No I'm not"
Moderator
INFLATE YOUR POST COUNT; PLAY TL MAFIA
Uldridge
Profile Blog Joined January 2011
Belgium, 4967 Posts
April 28 2018 02:53 GMT
#13853
"Tall boys are called men"
Taxes are for Terrans
xM(Z
Profile Joined November 2006
Romania, 5296 Posts
Last Edited: 2018-04-28 06:12:26
April 28 2018 06:09 GMT
#13854
On April 24 2018 21:44 Acrofales wrote:
Show nested quote +
i'm short on time these days so i don't have time to ramble on this, but you seem stuck on the notion of a subservient AI, one which you control either by its make-up/design or by guilt (human goals/ethics/emotions). all i can say here is: ditch your white-man issues/complexes; you (as a human) are not the end-all, be-all.

other than that, regardless of how you design your AI and how many fail-safes you add to it, there will be a point at which the AI will birth itself into being and become separate from your building constraints. before that, we're talking about a machine we control (it may look smart but it'll still be a machine), and after that point, we'll be talking about a being.
(from my pov, you only talk about the former, which is not interesting)
To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.
And my fury stands ready. I bring all your plans to nought. My bleak heart beats steady. 'Tis you whom I have sought.
The_Templar
Profile Blog Joined January 2011
your Country, 52797 Posts
Last Edited: 2018-04-28 13:23:10
April 28 2018 13:22 GMT
#13855
On April 28 2018 04:33 JimmiC wrote:
What is the correct response to "boy your tall" or something similar.

I've used "thank you" and "it's true" but neither feels right.

"No, I'm not."

The taller you are, the better this response is.

Alternatively, since that has already been said, "thanks, you too" is also acceptable.
Moderator
she/her
TL+ Member
GreenHorizons
Profile Blog Joined April 2011
United States, 23489 Posts
Last Edited: 2018-04-28 13:32:34
April 28 2018 13:30 GMT
#13856
The best one I got was when I was just a wee little tyke and he bent over and told me he was actually two shorter people stacked on top of each other, and then "shhh"d me and told me not to tell anyone.

I think that would be exponentially better when told to adults.

The more deadpan the better.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Dark_Chill
Profile Joined May 2011
Canada, 3353 Posts
April 28 2018 13:52 GMT
#13857
On April 28 2018 22:30 GreenHorizons wrote:
The best one I got was when I was just a wee little tyke and he bent over and told me he was actually two shorter people stacked on top of each other, and then "shhh"d me and told me not to tell anyone.

I think that would be exponentially better when told to adults.

The more deadpan the better.

"No, no, not at all. I'm just the one on top. William, let's go".
Completely straight-faced, walk away after.
CUTE MAKES RIGHT
Wrath
Profile Blog Joined July 2014
3174 Posts
April 30 2018 20:50 GMT
#13858
Hi, what good computer headset/headphones do you recommend for a $15-$20 budget?
Simberto
Profile Blog Joined July 2010
Germany, 11640 Posts
April 30 2018 21:32 GMT
#13859
Whatever has some kind of decent reviews on Amazon. You are not going to get a good headset at that price.
GreenHorizons
Profile Blog Joined April 2011
United States, 23489 Posts
April 30 2018 21:39 GMT
#13860
On May 01 2018 05:50 Wrath wrote:
Hi, what good computer headset/headphones do you recommend for a $15-$20 budget?

Sades are okay, Sentey has some decent ones at that price. Nothing's going to be too great at that price but both those brands have ones that should have ~4 star reviews.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."