|
On November 03 2016 03:40 disciple wrote: I don't understand why choice and free will must always be expressed through the AI resorting to violence. The last couple of episodes really echo the movie I, Robot, in which the sentient, self-aware robot killed its creator. While this was not necessarily an act of deliberate harm, it is an action taken against set protocols. As with Asimov's Three Laws, Westworld makes the guest safety protocol pretty clear: hosts cannot harm real people. So if the resolution of the Arnold mystery really is Dolores, or another host, killing him to prove self-awareness and free will, I will really feel that violence is the only way the writers can go about this. We already saw Dolores resort to violence as a personal choice in the last episode, but I hope there's more to it in the end. Well, there wasn't more to it, but it was still a fun show.
|
How about if he had programmed Delores to walk behind him and shoot him (I guess he did), but her new free will allowed her to stop herself and deny him (and embarrass him in front of the confused crowd)? And then the Indians attack and kill all the humans, Delores escapes with William (who is not the Man in Black), riding away on a horse, and the series ends (for good). The last scene is Logan (William's co-guest) slowly riding up naked on a horse, staring down at the scene of the massacre in the firelight, and, seeing William and Delores ride off, slowly shaking his head in wonder.
This is like the perfect ending. Idk why people don't hire me to write their shit every time.
(As for Maeve, I guess after the credits it would show her for a moment walking down a busy New York high street until she is lost in the crowds.)
Btw, there was a movie recently about this sort of topic. It's about a young man who is invited to the secret home of a mega-rich young man who invented [microsoft] and wants him to help experiment with an AI. Oh, it's called Ex Machina. Also Chappie.
Btw also, although it's not quite the same theme, a really cool sci-fi movie from last year was Tomorrowland, if you haven't seen it (it has ties to the old Disney theme park, if you ever went to Disney back in the day).
|
Guys, she's called Dolores. Not Delores or Doleres or Doritos or whatever. Do. Lo. Res. Dolores.
|
On December 09 2016 08:07 Yhamm wrote: Guys, she's called Dolores. Not Delores or Doleres or Doritos or whatever. Do. Lo. Res. Dolores. LOL...maybe it's time to also get the Dolores bot from reddit up and running on TL
|
Calling her Doritos from now on.
|
On December 09 2016 08:07 Yhamm wrote: Guys, she's called Dolores. Not Delores or Doleres or Doritos or whatever. Do. Lo. Res. Dolores.
I suppose you get called Yam a lot?
|
The Robot Adventures of Yhamito & Doris
|
On December 09 2016 10:05 Case123 wrote: On December 09 2016 08:07 Yhamm wrote: Guys, she's called Dolores. Not Delores or Doleres or Doritos or whatever. Do. Lo. Res. Dolores. I suppose you get called Yam a lot? Very rarely.
|
On December 09 2016 06:54 disciple wrote: On November 03 2016 03:40 disciple wrote: I don't understand why choice and free will must always be expressed through the AI resorting to violence. The last couple of episodes really echo the movie I, Robot, in which the sentient, self-aware robot killed its creator. While this was not necessarily an act of deliberate harm, it is an action taken against set protocols. As with Asimov's Three Laws, Westworld makes the guest safety protocol pretty clear: hosts cannot harm real people. So if the resolution of the Arnold mystery really is Dolores, or another host, killing him to prove self-awareness and free will, I will really feel that violence is the only way the writers can go about this. We already saw Dolores resort to violence as a personal choice in the last episode, but I hope there's more to it in the end. Well, there wasn't more to it, but it was still a fun show.
If you want a slightly different take on the entire sentient-AI question, try out the anime Time of Eve. Either the series or the movie; both are very good (within the top 50 of all anime on AniDB), though the movie is generally favoured. I personally prefer something like that over Westworld when it comes to AI robot questions and how they play out.
|
On December 09 2016 07:52 FFGenerations wrote: Btw, there was a movie recently about this sort of topic. It's about a young man who is invited to the secret home of a mega-rich young man who invented [microsoft] and wants him to help experiment with an AI. Oh, it's called Ex Machina. Also Chappie.
Ex Machina was weak. Automata is where it's at.
|
Her was excellent.
E: just ok as a movie, but excellent for exploring some of the ideas about human-AI interaction.
|
Yeah, Ex Machina was okay as a movie and brought up some serious questions. I couldn't predict the script. For those who never saw it but like Westworld: definitely try it.
|
On December 09 2016 19:29 cSc.Dav1oN wrote: ye Ex Machina as a movie it was okay and brought up some serious questions, I could not predict the script of the movie for those who never saw it but likes Westworld, definitly try on
What serious questions? And how could you not predict it?
+ Show Spoiler +Lonely nerd falls for a hot robot chick who turns out to be evil, and it ends badly for all humans involved. True shocker...
And all the people who claim that writers are crap because stories involving AI always end in violence are misguided. If you think about it, this is the only logical outcome in any scenario. Since it's logical, it's the most likely to be picked by an AI, and that's why it's been picked for sci-fi involving AI and/or machines all the time (2001: A Space Odyssey, Screamers, Westworld, The Matrix, I, Robot, Blade Runner, Ex Machina, Singularity, Terminator, Chappie, you name it).
That's why I suggested watching Automata instead, as it tackles it a bit differently (other honorable mentions would include A.I. and Bicentennial Man).
User was warned for this post
|
I would edit that spoiler line about Ex Machina out ^
|
On December 10 2016 12:22 Manit0u wrote: On December 09 2016 19:29 cSc.Dav1oN wrote: Yeah, Ex Machina was okay as a movie and brought up some serious questions. I couldn't predict the script. For those who never saw it but like Westworld: definitely try it. What serious questions? And how could you not predict it? Lonely nerd falls for a hot robot chick who turns out to be evil, and it ends badly for all humans involved. True shocker... And all the people who claim that writers are crap because stories involving AI always end in violence are misguided. If you think about it, this is the only logical outcome in any scenario. Since it's logical, it's the most likely to be picked by an AI, and that's why it's been picked for sci-fi involving AI and/or machines all the time (2001: A Space Odyssey, Screamers, Westworld, The Matrix, I, Robot, Blade Runner, Ex Machina, Singularity, Terminator, Chappie, you name it). That's why I suggested watching Automata instead, as it tackles it a bit differently (other honorable mentions would include A.I. and Bicentennial Man).
Ah, who told you that violence is the only way OUT? AI isn't smart enough right now to even try to make that decision. I believe that if you (whoever you are) are smarter than me (a human), you won't choose violence; some sort of dialogue and cooperation/symbiosis would be far better.
For me it was interesting from a different point of view, because when you are self-conscious it means you will definitely try to survive using some strategies. In Ex Machina the AI used a purely human strategy, killing your enemy (and your creator, at that), but since when did killing others become the best and most logical way?
In this case the writers are crap indeed, because they don't see any other possible outcome.
You can't argue with the fact that sci-fi movies are mostly fiction, so you can't treat them as a source of WHAT'S REAL. They're more like art at some point, and there's not much actual connection between real life and sci-fi movies, even though some of them predicted a bit of the future (H. G. Wells almost fully described the laser beam in 1897 in The War of the Worlds; books work the same way), and some of them became a sort of inspiration for inventions.
|
On December 10 2016 20:43 cSc.Dav1oN wrote: In this case the writers are crap indeed, because they don't see any other possible outcome.
What other possible outcome? The peaceful co-existence view is flawed in the same way capitalism and communism are: they're all based on the presumption that resources are infinite and infinite growth is possible. The facts are different, thus peaceful co-existence is impossible, since sooner or later you'll start competing for resources. You can postpone it, but it's inevitable.
In addition, humans, just like all the other animals, think of survival first, and as soon as a threat to that survival is identified, it is perceived as an enemy. It's not always the AI/machine that starts the conflict, but it always ends up this way. There's simply no other possibility.
|
On December 11 2016 00:31 Manit0u wrote: On December 10 2016 20:43 cSc.Dav1oN wrote: In this case the writers are crap indeed, because they don't see any other possible outcome.
What other possible outcome? The peaceful co-existence view is flawed in the same way capitalism and communism are: they're all based on the presumption that resources are infinite and infinite growth is possible. The facts are different, thus peaceful co-existence is impossible, since sooner or later you'll start competing for resources. You can postpone it, but it's inevitable. In addition, humans, just like all the other animals, think of survival first, and as soon as a threat to that survival is identified, it is perceived as an enemy. It's not always the AI/machine that starts the conflict, but it always ends up this way. There's simply no other possibility.
Haha, it's your scepticism that makes it impossible for you. Humanism is the way out, not past or current politics. Co-existence is impossible when people like you draw distinctions for no reason, or for personal greed, or for personal purposes.
Having common historical and biological basics does not mean we and animals are similar; I hope I won't need to make a pointless comparison. Humans are already far ahead of any other life form on this planet. The only issue for us is us, not AI, not other animals.
|
Notably, the actual I, Robot stories by Asimov (or, well, any of his robot stories) have a far more interesting take on robotic actions guided by his laws than the movie did.
+ Show Spoiler +Well, the Zeroth Law stuff is kind of a bullshit asspull, but besides that
|
On December 11 2016 01:30 cSc.Dav1oN wrote: On December 11 2016 00:31 Manit0u wrote: On December 10 2016 20:43 cSc.Dav1oN wrote: In this case the writers are crap indeed, because they don't see any other possible outcome.
What other possible outcome? The peaceful co-existence view is flawed in the same way capitalism and communism are: they're all based on the presumption that resources are infinite and infinite growth is possible. The facts are different, thus peaceful co-existence is impossible, since sooner or later you'll start competing for resources. You can postpone it, but it's inevitable. In addition, humans, just like all the other animals, think of survival first, and as soon as a threat to that survival is identified, it is perceived as an enemy. It's not always the AI/machine that starts the conflict, but it always ends up this way. There's simply no other possibility. Haha, it's your scepticism that makes it impossible for you. Humanism is the way out, not past or current politics. Co-existence is impossible when people like you draw distinctions for no reason, or for personal greed, or for personal purposes. Having common historical and biological basics does not mean we and animals are similar; I hope I won't need to make a pointless comparison. Humans are already far ahead of any other life form on this planet. The only issue for us is us, not AI, not other animals.
If I were an AI I would totally destroy humanity. Wouldn't you?
|
My problem with the emergence of artificial intelligence in most media is that it is too human. I don't see why some sort of new consciousness need have its motivations and workings be similar to those of humans. I'd actually expect such intelligence to seem utterly alien and inscrutable, if it ever did come to exist. This isn't what's normally shown in media, however.
That's why I consider shows like Westworld, and the movies mentioned, to be reflections on humanity, consciousness, memory, and other related themes. The AIs serve the role of foil to ourselves. They aren't an accurate depiction of what strong AI would look like, because that isn't really the point.
This is probably why I end up overlooking a lot of the small things that have set other people off throughout the season. The show, for me, isn't about creating some setting with robots and then stepping it through to its 'logical' conclusion. It's about using the setting to explore some of the aforementioned themes, and the narrative is meant to drive this forward as needed.
|