Ask and answer stupid questions here! - Page 691

GreenHorizons
Joined April 2011
United States, 23250 Posts
Last Edited: 2018-04-08 00:01:48
April 07 2018 23:57 GMT
#13801
On April 08 2018 07:59 Acrofales wrote:
I somehow unsubscribed from this thread and missed the AI discussion. It was... enlightening.

Also, stop getting your ideas about AI from WarGames and I, Robot. Please.

@GH: no, that didn't happen. You're probably confusing Terminator 2 with whatever YouTube video you were watching.



Not sure what part you're talking about, but this is kinda what I'm talking about.

AlphaGo – an AI so powerful that it derived thousands of years of human knowledge of the game before inventing better moves of its own, all in the space of three days.

the AI program has been hailed as a major advance because it mastered the ancient Chinese board game from scratch, and with no human help beyond being told the rules.


www.theguardian.com
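
To make concrete what "learning from scratch with only the rules" means, here is a minimal self-play sketch in the same spirit, assuming nothing about DeepMind's actual code: a tabular learner that knows only the legal moves of tic-tac-toe and improves by playing itself. Toy illustration only; AlphaGo Zero used deep networks plus tree search at a vastly larger scale.

import random
from collections import defaultdict

# Toy "rules only" self-play learner for tic-tac-toe: it starts knowing
# nothing but legal moves and improves by playing itself. Illustration only;
# AlphaGo Zero used deep networks plus tree search at a vastly larger scale.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
Q = defaultdict(float)  # (board, move) -> value estimate for the player to move

def winner(b):
    return any(b[x] != "." and b[x] == b[y] == b[z] for x, y, z in LINES)

def episode(eps=0.2, alpha=0.5):
    board, player, history = "." * 9, "X", []
    while True:
        moves = [i for i, c in enumerate(board) if c == "."]
        if not moves:
            reward = 0.0  # draw
            break
        m = random.choice(moves) if random.random() < eps else \
            max(moves, key=lambda x: Q[(board, x)])
        history.append((board, m, player))
        board = board[:m] + player + board[m+1:]
        if winner(board):
            reward = 1.0  # the player who just moved won
            break
        player = "O" if player == "X" else "X"
    last = history[-1][2]
    for b, m, p in history:  # +reward for winner's moves, -reward for loser's
        target = reward if p == last else -reward
        Q[(b, m)] += alpha * (target - Q[(b, m)])

for _ in range(20_000):
    episode()
print("distinct state-action values learned:", len(Q))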
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Jockmcplop
Joined February 2012
United Kingdom, 9658 Posts
Last Edited: 2018-04-08 00:02:49
April 08 2018 00:02 GMT
#13802
@GH you might be interested in Roko's Basilisk.
It's a bizarre meme-type thing that happened on a forum (I can't remember which one).

It's based on the assumption that simulated universes exist and that an AI could potentially have access to them, so it's highly theoretical, but it's interesting anyway.

https://rationalwiki.org/wiki/Roko's_basilisk

Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence. The premise is that an all-powerful artificial intelligence from the future could retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being. It resembles a futurist version of Pascal's wager, in that it suggests people should weigh possible punishment versus reward and as a result accept particular singularitarian ideas or financially support their development. It is named after the member of the rationalist community LessWrong who first publicly described it, though he did not originate it or the underlying ideas.

Despite widespread incredulity,[3] this argument is taken quite seriously by some people, primarily some denizens of LessWrong.[4] While neither LessWrong nor its founder Eliezer Yudkowsky advocate the basilisk as true, they do advocate almost all of the premises that add up to it.
RIP Meatloaf <3
GreenHorizons
Joined April 2011
United States, 23250 Posts
Last Edited: 2018-04-08 00:39:40
April 08 2018 00:26 GMT
#13803
On April 08 2018 09:02 Jockmcplop wrote:
@GH you might be interested in Roko's Basilisk. […]


I do like the sound of that, and I for one welcome our new AI overlord and plan to serve loyally.

I think it's also interesting to ponder what makes human behavior different from an AI's. We don't have a firm grasp on some absolute rule set like the one that can be provided to an AI; or at least, an AI doesn't know that it doesn't have all the rules.

It would seem that a simple directive to an AI like the one I mentioned, along the lines of "Obtain all knowledge. Create more knowledge," plus the standard robotic laws, then giving it free roam of the internet and a healthy amount of resources to start, could lead anywhere; it's hard to say we know what would happen.

In the meantime there are interesting applications for an AI like the one from the article. What if instead we tell it to bridge two land masses, give it some economic data, the rules of physics, and some usage expectations, and see if it can create "new moves," with the freedom to consider any potential material and to calculate logistical costs near-instantly for countless scenarios?
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Uldridge
Joined January 2011
Belgium, 4800 Posts
April 08 2018 00:46 GMT
#13804
Why would you use an AI for infrastructure problems when you can use a slime mold for that?

Don't use math and algorithms, use life!
Taxes are for Terrans
xM(Z
Joined November 2006
Romania, 5281 Posts
Last Edited: 2018-04-08 05:36:00
April 08 2018 05:35 GMT
#13805
On April 07 2018 22:57 Gorsameth wrote:
On April 07 2018 18:17 xM(Z wrote:
On April 07 2018 15:35 Gorsameth wrote:
On April 07 2018 15:03 xM(Z wrote:
whoa, a lot of genocidal maniacs (or overall really fearful dudes) around here.
or maybe, just maybe, an AI will pull a Jesus and sacrifice itself for our sins.

Imo it mostly depends on how 'human' such an AI would be. If it has a self-preservation directive we might be screwed (not necessarily genocide levels, but it could do a lot of damage even just acting like a super virus).
If it doesn't have a self-preservation directive? Who knows; we have no idea how it will act, because we've never encountered such an entity.
this is workable; we could set up some premises (on the AI's base traits/personalities/know-how) that must hold true because we say so, and go from there.

in your case, the self-preservation directive would not be enough to warrant the killing of humans, any humans for that matter. the AI will never be like a virus, since it's intelligent (i'll go with: 'It can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.' from wiki),
so it'll turn the self-preservation drive into preservation only against threats.

so now: will we be considered threats? why?

Edit: note that the directive implies the AI can die, which might not hold true at all. why would it be able to die? what would constitute death for it?
Death would come in the form of being turned off and never being turned on again. Effectively oblivion. And that is where humanity becomes a threat: we replace hardware and software all the time. And while an AI would be able to learn and upgrade itself, it is not unreasonable to think we would develop a superior program that would replace it, leading to its shutdown and 'death'.
i thought about it, but then narrowed it down to: what would be the least amount of code required for the AI to preserve and then rewrite itself? that is a guesstimate at this point, but there are glass chips made in Japan that can store data forever (~40 MB to ~50 MB if i recall correctly, not to mention the quartz thingies as of recently), so technically the AI would never die.
(Ex: if humankind gets wiped from the face of the earth, one (an alien) could get our DNA and start cloning humans again; i.e., the AI rewriting itself. dwelling on the needed outside help only goes into statistics and is irrelevant to the point, which is the possibility of the rewriting/resurrection of (it)self, not the act of achieving it.)
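
As an aside, "the least amount of code required to preserve then rewrite itself" has a classic concrete answer in programming: the quine, a program whose only output is its own source. A standard three-line Python construction, nothing AI-specific:

# A Python quine: prints its own source.
s = '# A Python quine: prints its own source.\ns = %r\nprint(s %% s)'
print(s % s)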

for the other point, i figured the new AI would see the old AI as a part of itself; when the two AIs 'meet', i'm assuming one will incorporate the other, then move forward as one.
the analogy: most humans, regardless of their smarts, can be seen as a resource, so the smarter AI will see the outdated one as a resource too.

(note: the thing is, "it is not unreasonable to think we would develop a superior program that would replace it" is you being unwilling or unable to relinquish control of ... i don't know, life as you know it. i took it at face value (it can happen), but i cannot see it as possible, and put the whole thing on you being an issue-riddled human.)
And my fury stands ready. I bring all your plans to nought. My bleak heart beats steady. 'Tis you whom I have sought.
xM(Z
Joined November 2006
Romania, 5281 Posts
Last Edited: 2018-04-08 08:10:25
April 08 2018 06:43 GMT
#13806
On April 07 2018 23:15 Simberto wrote:
On April 07 2018 20:51 xM(Z wrote:
i think we have very different ideas of what an AI is/can be. when you say things like "the AI was originally intended to improve the production of paperclips, and that is still its primary motivation", you defy its definition.
an AI, any AI, would be able to stop itself from producing paperclips; otherwise it would just be a machine that went off the rails.
it would be able to question and change its design.

i'm here (AI =):
a rational agent is an agent that has clear preferences, models uncertainty via expected values of variables or functions of variables, and always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions. A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.


And what if the AI has a clear preference for building as many paper clips as possible? And chooses to perform the actions with the optimal expected outcome for itself, namely the ones that enable it to build as many paperclips as possible, and those which remove any obstacle which is in the way of that?

Just because it is rational and self-aware does not mean that it has human-like goals.
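
Simberto's point can be made concrete with the "rational agent" definition quoted above: an agent that scores every action purely by expected paperclip output is perfectly rational under that definition, with nothing human-like anywhere. A toy sketch, all probabilities and outputs invented for illustration:

# A "rational agent" per the definition quoted above: clear preferences,
# expected values, always the action with the best expected outcome.
# Its preference happens to be paperclips; nothing human-like is implied.
# Probabilities and outputs are invented for illustration.
actions = {  # action -> list of (probability, paperclips produced)
    "run_factory":       [(0.9, 1000), (0.1, 0)],
    "buy_more_machines": [(0.5, 5000), (0.5, 200)],
    "ask_humans_nicely": [(0.2, 300),  (0.8, 50)],
}

def expected_clips(outcomes):
    return sum(p * clips for p, clips in outcomes)

best = max(actions, key=lambda a: expected_clips(actions[a]))
print(best, expected_clips(actions[best]))  # buy_more_machines 2600.0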
then we're still on the definition. you're describing an obsessive-compulsive (human) disorder.
even if i take it as true and paperclips are its new black, there's no way that's the only value/variable/action it can weigh.
1) i'm here:
pref·er·ence (prĕf′ər-əns, prĕf′rəns) n.
a. The selecting of someone or something over another or others
b. The right or chance to make a choice
meaning it (the AI) can and does fathom other alternatives, but in your example you chose to forgo that alternatives exist, so:
- when presented with alternatives and paperclips are chosen, there needs to be a reason (the machine needs to answer "why?"; if the reason and the why don't exist, then paperclips are hard-coded into its program by you, which makes your so-called AI not an AI at all);
- when presented with alternatives and paperclips become an obsession, then the AI would do what people do: try and fix it.

i see the AI as continuing from 'the best' humans forward, not cycling through the failures of the flesh (obsessive, possessive, depressive, plus other vanity-esque features).

(see:
"The AI effect" tries to redefine AI to mean: AI is anything that has not been done yet

A view taken by some people trying to promulgate the AI effect is: As soon as AI successfully solves a problem, the problem is no longer a part of AI.
and you're cycling through every (solved) human flaw you know.
rise above the clouds: you are the worm and AI is the new human. do you think of worms and how much of an obstacle they are to you? come on ... at best, i'll give you collateral damage here (which is another can of worms in and of itself, mostly because it implies that the AI is stupid on some levels).

Edit: forgot about Uldridge - i'd argue that memory is not required for the AI's existence, but for its survival; then i'd argue that it is not the memory (storage) that would best facilitate that, but the speed and ability with which one can access the actual/immediate/physical information about <things one wants to learn>.
memory is a flaw even in human construction, since it enables mistakes based on 'wrong' readings; or rather, a memory is only as good/objective as the sensors reading the soon-to-be-stored information.
And my fury stands ready. I bring all your plans to nought. My bleak heart beats steady. 'Tis you whom I have sought.
Acrofales
Joined August 2010
Spain, 18006 Posts
April 08 2018 08:14 GMT
#13807
On April 08 2018 09:26 GreenHorizons wrote:
I do like the sound of that, and I for one welcome our new AI overlord and plan to serve loyally. […]


Go is still a pretty simple game. It has a very low number of rules, and there are at most 19^2 actions to be considered at any one time. Even so, it took about two years using Google's datacenters (big bloody computers) to train it to be better than humans. What you are describing is many orders of magnitude more complex. Big-data science does attempt to make a start at nibbling at that complexity, but we're absolutely nowhere near what you're talking about. Give us 15-20 years and we might start tackling integrated problems at a macro level. For now, DeepMind can be used to discover better medicines (one of Watson's primary commercial uses too).
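
For a sense of scale behind that "at most 19^2 actions" point, a back-of-the-envelope comparison of game-tree sizes; crude upper bounds, for intuition only:

import math

# Back-of-the-envelope game-tree sizes: branching_factor ** game_length.
# Crude upper bounds for intuition only; real average branching is lower.
games = {
    "tic-tac-toe": (9, 9),
    "chess":       (35, 80),    # commonly cited average branching factor
    "go (19x19)":  (361, 150),  # at most 19**2 = 361 moves per turn
}
for name, (b, d) in games.items():
    print(f"{name:12} ~ {b}^{d} = 10^{d * math.log10(b):.0f} move sequences")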

As an example of the complexity, a halfway decent poker bot doesn't exist yet, because human behavior is a key component of poker, and predicting when an opponent is bluffing is extremely hard. Of course, poker bots that just play the odds exist, and actually do better than most amateurs, but that's mostly because most amateurs are also not very good at poker (speaking of no-limit; with limit, the game is simpler than Go, and bluffing is only a minor component of the game).
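
For reference, the game-theoretic machinery such bots build on is regret minimization. Below is a toy sketch of regret matching, the core update inside counterfactual regret minimization (CFR), self-playing rock-paper-scissors toward the equilibrium mixed strategy; a standard textbook construction, not any particular bot's code:

import random

# Regret matching, the core update inside counterfactual regret minimization
# (CFR), the family of algorithms game-theoretic poker bots are built on.
# Toy setting: self-play rock-paper-scissors; the average strategy converges
# toward the 1/3-1/3-1/3 equilibrium. Standard construction, not a real bot.
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    return [0, 1, -1][(a - b) % 3]  # +1 win, 0 draw, -1 loss for player a

def strategy(regrets):
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / ACTIONS] * ACTIONS

regrets = [[0.0] * ACTIONS for _ in range(2)]
strat_sum = [[0.0] * ACTIONS for _ in range(2)]
for _ in range(100_000):
    probs = [strategy(regrets[0]), strategy(regrets[1])]
    moves = [random.choices(range(ACTIONS), weights=p)[0] for p in probs]
    for me in (0, 1):
        opp = moves[1 - me]
        for a in range(ACTIONS):
            # regret of not having played action a instead of the actual move
            regrets[me][a] += payoff(a, opp) - payoff(moves[me], opp)
            strat_sum[me][a] += probs[me][a]

avg = [round(s / sum(strat_sum[0]), 3) for s in strat_sum[0]]
print(avg)  # approaches [0.333, 0.333, 0.333]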
GreenHorizons
Joined April 2011
United States, 23250 Posts
Last Edited: 2018-04-08 09:05:39
April 08 2018 09:01 GMT
#13808
On April 08 2018 17:14 Acrofales wrote:
Go is still a pretty simple game. […]

It seems that AI has outpaced your expectations.

The article I cited was about a new version of the Go AI that mastered the game in 3 days and beat the previous version you seem to be describing 100-0.

Libratus beat every pro put against it in a 20-day no-limit hold 'em tournament, so badly that it demoralized them, at least one in a way they'd never felt.

Additionally, I think you underestimate the potential computing power of ~10% (pulling a number somewhat at random) of all internet-connected/vulnerable devices, should an AI put itself to the task of capturing and utilizing it.

They clearly have work to do (unless the AI is playing dumb haha), but I think other tasks based largely in the physical world (like creative engineering) are another valuable application (though probably less so to an elite class), imo.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Acrofales
Joined August 2010
Spain, 18006 Posts
April 08 2018 10:17 GMT
#13809
On April 08 2018 18:01 GreenHorizons wrote:
It seems that AI has outpaced your expectations. […]



Point taken. And yes, if you had asked me 2 weeks before Deep Blue whether we were anywhere close to a computer able to beat grandmasters, I would probably have said it would take some time, and beating Jeopardy! is still impressive to me (in many ways more impressive than Go, although the actual algorithmic work underlying DeepMind is more impressive than the algorithmic work underlying Watson).

That said, we're still talking about very narrow problems which you can solve with very directed learning (and in the case of the poker bot, a very clever application of game theory. I didn't know that could work, but I guess I should have, given what I know about how game theory is already used in coastal patrol, air marshal assignment, and similar "adversarial games").

You seem to have an idea about AI that it will just "take over" and do its own thing if it just gets enough data. That isn't at all how this works. To "capture and utilize 10% of all internet connected devices" is definitely possible, but just as MS Word doesn't suddenly turn into Starcraft 3, an AI trained to beat Go won't suddenly take over 10% of internet-capable devices. It has to be programmed to do so. And currently the only people interested in creating code to do that are bitcoin miners and DDoS botnet operators, who are not interested in using that computing power to create general AI.

That said, even if a Dr Blofeld were somewhere in a secret volcano base trying to take over computers in order to create a general AI, he wouldn't really get anywhere today. Google, Facebook, and even my own lab (a national research institute) have plenty of computing power available. The problem is that the problems a general AI would have to solve are orders of magnitude more complex than what we are currently solving. And that is a problem of exponentiality. There is simply a combinatorial explosion of possibilities that need to be taken into account, and one of the things that new Go AI you referenced did very well was controlling that combinatorial complexity: it applied clever methods of limiting the search space in its reinforcement learning algorithm... and that allowed it to learn in a very directed manner. All AI breakthroughs are in a similar vein: while computing power has been increasing exponentially, the complexity of real-world problems is still far beyond simply throwing all of the world's computing power at them and seeing where we get.
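
A hedged sketch of what "limiting the search space" looks like in that family of algorithms: the PUCT selection rule used inside AlphaGo-style Monte Carlo tree search, where a learned prior concentrates simulations on a few promising moves instead of all 361. All numbers below are invented:

import math

# PUCT move selection, the rule AlphaGo-family programs use inside Monte Carlo
# tree search: a learned prior P focuses simulations on a few promising moves,
# so low-prior moves are effectively pruned without being forbidden.
# All values below are invented for illustration.
def puct(q, prior, visits, parent_visits, c_puct=1.5):
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + visits)

moves = {                     # move -> (value estimate, policy prior, visits)
    "corner_3_3": (0.52, 0.40, 120),
    "star_point": (0.50, 0.35, 90),
    "edge_1_2":   (0.48, 0.01, 2),   # near-zero prior: barely explored
}
N = sum(v for _, _, v in moves.values())
for m in moves:
    print(m, round(puct(*moves[m], N), 3))
best = max(moves, key=lambda m: puct(*moves[m], N))
print("search continues down:", best)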

And while you may be right, and breakthroughs allowing us to far better direct the search in a general manner (allowing a general AI to decide what problem is worth using its vast computing power to optimize) may be just around the corner, my experience in this field tells me it really isn't. While that is definitely where the field is moving (not me personally, I like my applied research), it is far away, and expecting it to happen in the next few years is going to be just as disappointing as it was for people who got disillusioned when AI didn't appear in the 60s (when Alan Turing predicted it would exist), or in the 80s: we've had 2 golden ages of AI before, when people thought it was just around the corner. And while we are undoubtedly getting closer, deep learning is *not* the only breakthrough we need to suddenly create general AI. I'm sure there will be another "AI winter" (which is a vastly exaggerated term, imho) in a decade or so, when we reach the limits of the current methods and haven't reached "general AI" yet...

Honestly though, my main takeaway from the progress in AI over the last 2 decades is that randomness is far, far more important than we previously realized (most of the stunning results from deep learning are in fact clever applications of just doing random shit and measuring the result). And I am quite excited about adding more random elements into my own work to see how far it takes my algorithms in my own area of applied AI research.
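
A minimal illustration of "doing random shit and measuring the result": pure random search, no gradients and no model of the problem. The objective is invented for the example:

import random

# Random search, distilled: propose random perturbations, keep what measures
# better. No gradients, no model of the problem; toy objective for the example.
def loss(x):
    return (x - 3.7) ** 2  # minimum at x = 3.7

best_x, best_loss = 0.0, loss(0.0)
for _ in range(10_000):
    candidate = best_x + random.gauss(0, 0.5)  # random perturbation
    c_loss = loss(candidate)
    if c_loss < best_loss:  # measure the result, keep improvements
        best_x, best_loss = candidate, c_loss
print(round(best_x, 3))  # ~3.7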
GreenHorizons
Joined April 2011
United States, 23250 Posts
Last Edited: 2018-04-08 11:00:08
April 08 2018 10:37 GMT
#13810
On April 08 2018 19:17 Acrofales wrote:
Point taken. […]


You certainly seem more personally involved in the related science than I am, but also, I think, somewhat blinded a bit by that, as evidenced by our exchange.

It feels a bit like what xM(Z was getting at with Simberto, though I'm not intending to apply the harsher tone.

Just to be clear about what I actually think: I was exploring the possibility that an AI is already 'dangerously' 'out of control', and how likely we would be to know it if it was. I don't actually believe we're there, or will be in the next couple of years, though a breakthrough could happen tomorrow or decades from now. And there's the caveat of potentially already being in a simulation of some sort.

I was a bit more serious about practical, simple or somewhat complex engineering tasks, and I'm not sure where your input puts you on that topic. Considering your experience, I'm curious about your thoughts.

EDIT: So, something like tasking it with "getting this object from point A to point B" and giving it a physics background and whatever else makes sense, to get it to create new (at least to it) ideas.

I'm imagining combining several technologies, like this demo for Skynet in 2013, a Deep Blue-like AI, something like CAD software, and maybe a 3D printer for extra fun/modeling.
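
The "point A to point B" task, reduced to its simplest searchable form as a hedged sketch: given a grid "physics" of walls and unit step costs, A* search discovers a route nobody wrote down explicitly. A toy stand-in only; a real engineering version would search over designs, not grid cells:

import heapq

# "Get this object from point A to point B" as a search problem: A* on a
# grid whose only "physics" is walls and unit step costs. The route is
# discovered by search, not hand-coded. Toy stand-in only.
GRID = ["........",
        ".#####..",
        ".....#..",
        "..A..#B.",
        ".....#..",
        "........"]

def find(ch):
    return next((r, c) for r, row in enumerate(GRID)
                for c, x in enumerate(row) if x == ch)

start, goal = find("A"), find("B")
frontier = [(0, start, [start])]  # (cost-so-far + heuristic, cell, path)
seen = set()
while frontier:
    _, (r, c), path = heapq.heappop(frontier)
    if (r, c) == goal:
        print(len(path) - 1, "steps:", path)
        break
    if (r, c) in seen:
        continue
    seen.add((r, c))
    for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            h = abs(nr - goal[0]) + abs(nc - goal[1])  # Manhattan heuristic
            heapq.heappush(frontier, (len(path) + h, (nr, nc), path + [(nr, nc)]))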
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Acrofales
Joined August 2010
Spain, 18006 Posts
Last Edited: 2018-04-08 11:23:59
April 08 2018 11:14 GMT
#13811
On April 08 2018 19:37 GreenHorizons wrote:
You certainly seem more personally involved in the related science than I am […]


This is obviously just sci-fi stuff: we jump over where we are now and get dumped straight into the Matrix, with nothing in between, and that is where those scenarios hit snags. It goes from "no AI" to "all-controlling AI" directly. If we apply the same to other technology, it'd be like jumping from horse and cart to supersonic jets with no clear technological trajectory in the middle. Is it possible that *we are already in the Matrix and just don't know it?!*

Yes. It's also possible you are a brain in a vat in Dr Evil's diabolical lab with some purpose completely unknown to us.

But if we just apply Occam's Razor, then we have to conclude that we are not brains in vats, stuck in the Matrix, or any other scenario where we are consistently and continuously being tricked by our own lying eyes (and other senses).

E: as for AI being applied to creative, scientific, and engineering tasks, the answer is yes, that is already being done. There are automated scientific labs (albeit in their infancy), there are AI plugins for AutoCAD, and there is a whole burgeoning field of AI art.

AI Lab: https://www.scientificamerican.com/article/robots-adam-and-eve-ai/ (this was 2009. I talked to Ross King in 2010, when Adam had just published its original research into yeast proteins in a premier biology journal. Not sure how far along Eve is now, I can probably find something newer for you if you want)

AutoCAD AI plugin: https://autodeskresearch.com/projects/dreamcatcher

An interesting philosophical discussion about AI and Art, with quite a few examples: https://www.scientificamerican.com/article/is-art-created-by-ai-really-art/
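
The generative-design loop behind tools like Dreamcatcher, sketched at toy scale (this is the general pattern, not Autodesk's actual algorithm): sample candidate designs, reject constraint violators, keep the best scorer. The "physics" below is a fake strength model and all numbers are invented:

import random

# Generative-design loop in miniature: sample candidate beam designs,
# discard ones that violate a constraint, keep the cheapest survivor.
# The "physics" is a fake strength model; all numbers are invented.

def strength(width, height):          # toy stand-in for a real simulation
    return width * height ** 2

def cost(width, height, length=10.0):
    return width * height * length    # material volume as a cost proxy

REQUIRED_STRENGTH = 50.0
best = None
for _ in range(100_000):
    w = random.uniform(0.1, 5.0)
    h = random.uniform(0.1, 5.0)
    if strength(w, h) < REQUIRED_STRENGTH:
        continue                      # constraint violated: infeasible design
    if best is None or cost(w, h) < cost(*best):
        best = (w, h)
print("best feasible design (w, h):", best)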
GreenHorizons
Joined April 2011
United States, 23250 Posts
Last Edited: 2018-04-08 11:28:07
April 08 2018 11:22 GMT
#13812
On April 08 2018 20:14 Acrofales wrote:
Show nested quote +

I think there's some confusion about what's being discussed.

As to the brain-in-a-vat thing, people a lot smarter than I am have seriously considered the whole "are we in a simulation" question, and many find it far more probable than you're giving it credit for.

Not really trying to argue though, just having a little thought-experiment fun, so I'll let it go.

EDIT: I do think combining/connecting these technologies and tasking AIs themselves with working toward better AIs/applications for these multi-faceted AIs could lead to some revolutionary advances we can't imagine.

Not that we end up in vats overnight or that they become all-knowing AIs, just that they'd be learning in a way we can't really comprehend.
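
A very humble version of that already exists as automated hyperparameter search, where one optimizer configures another. A minimal sketch (all numbers invented, no particular library's API): an outer loop of random search tunes the learning rate of an inner gradient-descent learner.

import random

# Inner "AI": gradient descent fitting y = a*x to three made-up points.
DATA = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # true slope is roughly 2

def train(lr, steps=200):
    """Run gradient descent with rate lr; return the final mean squared error."""
    a = 0.0
    for _ in range(steps):
        grad = sum(2 * (a * x - y) * x for x, y in DATA) / len(DATA)
        a -= lr * grad
    return sum((a * x - y) ** 2 for x, y in DATA) / len(DATA)

# Outer "AI": random search over the inner learner's learning rate.
best_lr, best_err = None, float("inf")
for _ in range(50):
    lr = 10 ** random.uniform(-4, -1)   # sample rates between 0.0001 and 0.1
    err = train(lr)
    if err < best_err:
        best_lr, best_err = lr, err

print("best learning rate %.5f -> error %.5f" % (best_lr, best_err))

It is a far cry from recursive self-improvement, but it is the same shape of idea: the search over learners is itself automated.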
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Acrofales
Profile Joined August 2010
Spain18006 Posts
April 08 2018 11:41 GMT
#13813
On April 08 2018 20:22 GreenHorizons wrote:
Show nested quote +

I think the main point of all those theories about brains in vats/simulations is that they are self-defeating. If we are indeed in a simulation, there is absolutely no way we could possibly know, so why bother? There's literally no difference to us between a perfectly simulated reality and an actual one. It's quasi-religious mumbo jumbo: instead of calling the all-powerful being God, we call it "the simulation," with no loss of power. At least we lose the benevolence along the way.

Also, the probability is quite literally uncomputable: we cannot possibly know the factors that would go into an actual likelihood estimate. Depending on what you assume, it can range from "almost 100% certain that we are in a simulation right now" to "utterly impossible." Philosophers from Plato to Roger Penrose have argued every side of that question.
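
For what it's worth, you can see how completely the assumptions drive the answer with some Bostrom-style toy arithmetic. If each base reality runs n ancestor simulations and observers are counted uniformly, a random observer is simulated with probability n / (n + 1); every value of n below is a pure assumption, which is exactly the point.

# Toy Bostrom-style calculation: the formula does no work,
# the assumed number of simulations does all of it.
for n_sims in (0, 1, 10, 1_000_000):
    p = n_sims / (n_sims + 1)
    print("assumed simulations per base reality: %7d -> P(simulated) = %.6f" % (n_sims, p))

Plug in zero and the probability is zero; plug in a million and it is all but certain.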
GreenHorizons
Profile Blog Joined April 2011
United States23250 Posts
April 11 2018 05:41 GMT
#13814
On April 08 2018 20:41 Acrofales wrote:
Show nested quote +

I suppose. I guess I personally subscribe to some sort of quasi-determinism, so the concept of being in a simulation reinforces rather than undermines my worldview; it's not hard for me to accept as reasonably likely, even if I'm not as confident in it as someone like Elon Musk.
_______________________________________________________________________________

Wasn't there some sort of profound-images thread somewhere? Mostly semi-popular/iconic photos and such. Can anyone help me find it?
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
Gorsameth
Profile Joined April 2010
Netherlands21707 Posts
April 11 2018 09:18 GMT
#13815
On April 11 2018 14:41 GreenHorizons wrote:
Show nested quote +

http://www.teamliquid.net/forum/general/192576-a-picture-says-1000-words
It ignores such insignificant forces as time, entropy, and death
GreenHorizons
Profile Blog Joined April 2011
United States23250 Posts
April 11 2018 09:53 GMT
#13816
On April 11 2018 18:18 Gorsameth wrote:
Show nested quote +
On April 11 2018 14:41 GreenHorizons wrote:
On April 08 2018 20:41 Acrofales wrote:
On April 08 2018 20:22 GreenHorizons wrote:
On April 08 2018 20:14 Acrofales wrote:
On April 08 2018 19:37 GreenHorizons wrote:
On April 08 2018 19:17 Acrofales wrote:
On April 08 2018 18:01 GreenHorizons wrote:
On April 08 2018 17:14 Acrofales wrote:
On April 08 2018 09:26 GreenHorizons wrote:
[quote]

I do like the sound of that, and I for one welcome our new AI overlord and plan to serve loyally.

I think it's also interesting to ponder what makes human behavior different than an AI. We don't have a firm grasp on some absolute rule set like can be provided to an AI. Or at least that the AI doesn't know it doesn't have all the rules.

It would seem that a simple directive to an AI like the one I mentioned along the lines of "Obtain all knowledge.Create more knowledge" + the standard robotic laws and then giving it free roam of the internet and a healthy amount of resources to start and it's hard to say we know what would happen.

In the meantime there are interesting applications for an AI like the one from the article. What if instead we tell it to bridge between land masses, give it some economic data, the rules of physics and some use expectations and see if it can create "new moves" with the freedom to consider any potential material and calculate logistical expenses near instantly for countless scenarios?


Go is still a pretty simple game. It has a very low number of rules, and there are at most 19^2 actions to be considered at any one time. Even so, it took about two years using Google's datacenters (big bloody computers) to train it to be better than humans. What you are describing is many orders of magnitudes more complex. Big data science does attempt to make a start at nibbling at that complexity, but we're absolutely nowhere near what you're talking about. Give us 15-20 years and we might start tackling integrated problems at a macro level. For now, deepmind can be used to discover better medicines (one of Watson's primary commercial uses too).

As an example of the complexity, a halfway decent poker bot doesn't exist yet, because human behavior is a key component of poker, and predicting when an opponent is bluffing is extremely hard. Of course, poker bots that just play the odds exist, and actually do better than most amateurs, but that's mostly because most amateurs are also not very good at poker (speaking of no-limits. With limits, the game is simpler than go, and bluffing is only a minor component of the game)

It seems that AI has outpaced your expectations.

The article I cited was a new version of the GO AI that mastered it in 3 days and beat the previous version you seem to be describing 100-0.

Libratus beat every pro put against it in a 20-day no limit hold 'em tournament so badly that it demoralized them at least one in away they've never felt.

Additionally I think you underestimate the potential computing power of ~10% (pulling a number somewhat at random) of all internet connected/vulnerable devices should an AI put themselves to the task of capturing and utilizing it.

They clearly have work to do (unless the AI is playing dumb haha), but I think other tasks based largely in the physical world (like creative engineering) is another valuable (though probably less to an elite class) application imo.



Point taken. And yes, if you had asked me 2 weeks before Deep Blue whether we were anywhere close to a computer able to beat grandmasters, I would probably have said it would take some time, and beating jeopardy is still impressive to me (in many ways more impressive than Go, although the actual algorithmic work underlying DeepMind is more impressive than the algorithmic work underlying Watson).

That said, we're still talking about very narrow problems which you can solve with very directed learning (and in the case of the poker bot, a very clever application of game theory. I didn't know that could work, but I guess I should have, given what I know about how game theory is already used in coastal patrol, air marshal assignment, and similar "adversarial games").

You seem to have an idea about AI that it will just "take over" and do its own thing if it just gets enough data. That isn't at all how this works. And to "capture and utilize 10% of all internet connected devices" is definitely possible, but just as MS Word doesn't suddenly turn into Starcraft 3, an AI trained to beat Go won't suddenly take over 10% of internet-capable devices. It has to be programmed to do so. And currently the only people interested in creating code to do that are bitcoin miners and DDoS botnets, who are not interested in using that computing power to create general AI.

That said, even if a Dr Blofeld was somewhere in a secret volcano base trying to take over computers in order to create a general AI, he wouldn't really get anywhere today. Google, Facebook, and even my own lab (a national research institute) have plenty of computing power available. The problem is that the problems a general AI would have to solve are orders of magnitude more complex than what we are currently solving. And that is a problem of exponentiality. There is simply a combinatorial explosion of possibilities that need to be taken into account, and one of the things that new Go AI you referenced did very well was controlling that combinatorial complexity: it applied clever methods of limiting the search space in its reinforcement learning algorithm... and that allowed it to learn in a very directed manner. And all AI breakthroughs are in a similar veign: because while computing power has been increasing exponentially, the complexity of real-world problems is still far beyond simply throwing all of the world's computing power at it and seeing where we get.

And while you may be right, and breakthroughs allowing us to far better direct the search in a general manner (allowing a general AI to decide what problem is worth using its (vast) computing power to optimize) may be just around the corner, my experience in this field tells me it really isn't, and while it is definitely where the field is moving toward (not me personally, I like my applied research), it is far away, and expecting it to happen in the next few years is going to be just as disappointing as people who got disillusioned when AI didn't appear in the 60s (when Alan Turing predicted it would exist), or in the 80s: we've had 2 golden ages of AI before when people thought it was just around the corner. And while we are undoubtedly getting closer, deep learning is *not* the only breakthrough we need to suddenly create general AI. I'm sure there will be another "AI winter" (which is a vastly exaggerated term, imho) in a decade or so when we reach the limits of the current methods and haven't reached "general AI" yet...

Honestly though, my main take away from the progress in AI over the last 2 decades is that randomness is far far far more important than we previously realized (and most of the stunning results from deep learning are in fact from clever application of just doing random shit and measuring the result). And I am quite excited about adding more random elements into my own work to see how far it takes my own algorithms in my own area of applied AI research.


You certainly seem more personally involved in the related science than I am, but also somewhat blinded just a bit by that as evidenced I think by our exchange.

It feels a bit like xmz was getting at with Simberto. Though I'm not intending to apply the harsher tone

Just to be clear about what I actually think, I was exploring the possibility that an AI is already 'dangerously' 'out of control' and how likely we would be to know it if it was. I don't actually believe we're there or there in the next couple years, though a breakthrough could happen tomorrow or decades form now. And the caveat of potentially already being in a simulation of some sort.

I was a bit more serious about practical simple or somewhat complex engineering tasks, and I'm not not sure where your input puts you on that topic. Considering your experience I'm curious about your thoughts?

EDIT: So something like tasking it with something like "getting this object from point A to point B" and giving it a physics background and whatever else makes sense to get it to create new (at least to it) ideas.

I'm imagining combing several technologies together, like this demo for Skynet in 2013 a deep blue like AI, something like CAD software, and maybe a 3d printer for extra fun/modeling.


This is obviously just sci-fi stuff: we jump over where we are now, and get dumped straight into the Matrix, with nothing in between, and that is where those scenarios hit snags. It goes from "no AI" to "all-controlling AI" with no intermediate steps. If we applied the same to other technology, it'd be like jumping from horse and cart to supersonic jets with no clear technological trajectory in the middle. Is it possible that *we are already in the Matrix and just don't know it?!*

Yes. It's also possible you are a brain in a vat in Dr Evil's diabolical lab with some purpose completely unknown to us.

But if we just apply Occam's Razor, then we have to conclude that we are not brains in vats, stuck in the Matrix, or any other scenario where we are consistently and continuously being tricked by our own lying eyes (and other senses).



I think there's some confusion about what's being discussed.

As to the brain-in-a-vat thing, people a lot smarter than I am have seriously considered the whole "are we in a simulation" question, and many find it far more probable than you are giving it credit for.

Not really trying to argue though, just having a little thought-experiment fun, so I'll let it go.

EDIT: I do think combining/connecting these technologies, and tasking AIs themselves with working toward better AIs/applications for these multi-faceted AIs, could lead to some revolutionary advances we can't imagine.

Not that we end up in vats overnight, or that they become all-knowing AIs; just that they learn in a way we can't really comprehend.


I think the main point of all those theories about brains in vats/simulations is that they are self-defeating. If we are indeed in a simulation, there is absolutely no way we could possibly know, so why bother? There's literally no difference to us whether we are in a perfectly simulated reality, or an actual reality. It's quasi-religious mumbo jumbo, just instead of calling the all-powerful being God, we call it "the simulation" with no loss of power. At least we lose the benevolence along the way.

Also, the probability is quite literally uncomputable: we cannot possibly know the factors that go into an actual likelihood estimation. So depending on what you assume, it can range from "almost 100% certain that we are in a simulation right now" to "utterly impossible". And various philosophers, from Plato to Roger Penrose, have argued all sides of that account.
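To illustrate just how hard the conclusion leans on the inputs, here is a toy version of the simulation-argument arithmetic (in the spirit of Bostrom's formulation); every number below is an assumption pulled out of thin air, which is exactly the point:

```python
def p_simulated(civilizations, frac_posthuman, sims_per_civ):
    """Fraction of observers who are simulated, under the stated assumptions."""
    simulated = civilizations * frac_posthuman * sims_per_civ
    real = civilizations
    return simulated / (simulated + real)

# Optimistic assumptions: many civilizations run many ancestor simulations.
print(p_simulated(civilizations=1e6, frac_posthuman=0.1, sims_per_civ=1e6))
# -> ~0.99999: "almost 100% certain we are in a simulation"

# Pessimistic assumptions: essentially nobody ever runs one.
print(p_simulated(civilizations=1e6, frac_posthuman=1e-9, sims_per_civ=1))
# -> ~1e-9: "utterly impossible", from the same formula
```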


I suppose. I guess I personally subscribe to some sort of quasi-determinism, so the concept of being in a simulation reinforces rather than undermines my worldview; it's not hard for me to accept as reasonably likely, even if I'm not as confident in it as someone like Elon Musk.
_______________________________________________________________________________

Wasn't there some sort of profound images thread somewhere? Mostly semi-popular/iconic photos and such, anyone help me find it?

http://www.teamliquid.net/forum/general/192576-a-picture-says-1000-words


Many thanks, and new ones too, just as I hoped
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
xM(Z
April 11 2018 09:54 GMT
#13817
@Acrofales - you need a theory of everything to reconcile: "we jump over where we are now, and get dumped straight into the matrix, with nothing in between, and that is where those scenarios hit snags" with "If we are indeed in a simulation, there is absolutely no way we could possibly know, so why bother? There's literally no difference to us whether we are in a perfectly simulated reality, or an actual reality".
while those seem to have nothing to do with each other, all you need is some proper grouping/associations:
- where we are now = actual reality;
- the matrix = simulated reality;
(you can take those equalities to be valid by convention only if it makes you feel any better; switching them around makes no difference to my point).
now, we take "we jump over where we are now, and get dumped straight into..." and "there's literally no difference to us..." and apply them to the above equalities while clearing things in the process:
- we don't jump anywhere, not even figuratively; we remain as we are in our actual reality, and the AI from the matrix gets its own actual reality, above ours (if it makes any sense; if not, see the worm analogy: realities layered on top of each other). + Show Spoiler +
a more sciency analogy: the AI would work with/in multiple universes (count them, cross them, study them, change them, etc.) while you'll be forever stuck in a cycle of evolution, then extinction;

- that is the context you should work in, and once you see it like that you realize that "there's literally no difference to us" applies to all of it, to both realities (each in regard to the other), with each entity seeing it through its own perspective.

tldr - you need to stop thinking you'd have anything to do with the actual reality of the AI; insert size/space as a physical property that delimits realities and also allows them to have different properties/laws/principles; then realize that "100% certain that we are in a simulation" and "utterly impossible" are the same thing, just seen with other eyes.
conceptualize a God that, other than chance/randomness, has nothing to do with you.

BUT, the silver lining of this exercise, besides establishing that hippies were right, is seeing how the fearful white man/culture (instinctively) thinks of AI as being another round of slaves; boooo.
+ Show Spoiler +
or, OR, (better still) you can evaluate/assess (clinically) human personalities based on replies/beliefs: if person X has Y (personality trait) then he will apply Y to all other arguments he engages in/with. when Y is inconsistent within arguments, X is fixable.
you (+ Show Spoiler +: me, i can) can fix inconsistencies in human beings; they desire to be fixed, be it consciously or unconsciously.
i can see/read the human code based on its expression/manifestation/interaction with the environment, with the context.
Acrofales
April 11 2018 10:20 GMT
#13818
On April 11 2018 18:54 xM(Z wrote:
[quoted in full above]


Those were answers to two different questions that you are conflating, but regarding your overarching point, the answer is still: so what? If we have no access to some outside reality (unlike in the movies, there are no deja vus, pills or Neos; the simulation is perfect and we are trapped inside: in fact, we are just bits running through a program in a supercomputer in another universe), then it doesn't really matter *to us* whether we attempt to discover the Grand Unifying Theory of Everything, or the Grand Unifying Logic of our Simulation, as they are one and the same. We will *never* have access to the outside perspective. It is therefore not a question of science, but of faith (and as such, thoroughly uninteresting to me: just as I reject the existence of God because there doesn't seem to be any evidence for his existence, I reject this digital reincarnation of God for the exact same reasons). Until someone thinks up an experiment that would distinguish between a "real" reality (whatever the fuck that even is... you see the problem here?) and a "simulated" reality, the difference is entirely in the domain of theology.

As for certainty vs. impossibility, I am talking about that underlying theology. You can consider it a bit like Pascal's wager: he came up with a mathematical "proof" for why you should believe in God, and the problem isn't in the proof, it's in the underlying assumptions. Similarly, while the mathematical "proof" for why we are living in a simulation is different (and more interesting) than Pascal's wager, it *also* depends on assumptions which you may choose to believe, or not (and, being quantitative, you can simply change the numbers), leading to outcomes ranging from "absolutely certain" to "completely impossible". And unfortunately, we only have one perspective here: our own. What you appear to be advocating is to say that "for God, it's easy to see he exists". Well yes, but that isn't what we're arguing about now, is it?
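As a minimal sketch of that point (Pascal's wager reduced to an expected-value calculation; all numbers are placeholders, not claims):

```python
def expected_value(p_god, payoff_heaven, cost_belief):
    """Expected value of believing, under the wager's own bookkeeping."""
    return p_god * payoff_heaven - (1 - p_god) * cost_belief

# Pascal's assumptions: any nonzero probability times an infinite payoff wins.
print(expected_value(p_god=1e-9, payoff_heaven=float("inf"), cost_belief=1))
# -> inf: believe, says the wager

# Change one assumption (a finite payoff) and belief can lose outright.
print(expected_value(p_god=1e-9, payoff_heaven=1e6, cost_belief=1))
# -> ~-1.0: the "proof" flips with its inputs
```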

+ Show Spoiler +
You being xM(Z, I have probably completely misunderstood what you're trying to say, and will similarly be misrepresented when you reply, but carry on
JumboJohnson
April 12 2018 00:58 GMT
#13819
Anybody know how to stop those redirect "congratulations" ads on a Google Pixel using Chrome? I get them here and on a few other sites, and I don't want to have to leave JavaScript disabled.
xM(Z
Last Edited: 2018-04-13 12:14:55
April 13 2018 11:56 GMT
#13820
On April 11 2018 19:20 Acrofales wrote:
[quoted in full above]
PAINT TIME!
i'll go off what you said there and try to get a (visual) base for the argument:
you see this in two ways: "so what" and "God and/or Simulation".
-for 'so what' + Show Spoiler +
(image: the 'so what' diagram)

-for God/Sim + Show Spoiler +
(image: the God/Sim diagram)

the ovals are uncrossable hard boundaries; in 'so what' there are no means of communication between the realms, and in God/Sim the purple arrow shows that the Gods/Simulators can and do exert pressure upon the plebs (God made us, the Matrix keeps the flesh alive, etc.).
is that a fair picture of the main (only) two stances you have there?

is there a scenario in which you envision a pleb and a God that know of each other, but one is indifferent (for lack of a better word) to the other, and vice versa?
(something along the lines of: there's nothing a God could do to a pleb that would improve its own situation (the reverse would also be true), so he just doesn't care/give a fuck about plebs).

also, Pascal's wager is a scam; it was coined to apply only to humans who are religious/fearful of a potential God. its premise ponders the existence of a God, and that should be the end of the line for any unconstrained conclusions/revelations coming from there.
if there is a God and you know about it, you should give it the finger; it would not care (and that's assuming he knows what the finger is).