|
This thread will serve as a platform for all AI-based things: ethical aspects, impact on civilization, progress in algorithms, ways to approach the development of a specific or more general AI, how to develop one, news regarding the development of AIs (breakthroughs, setbacks, ...), and, perhaps more tangentially related, improvements in the resources surrounding the creation of AI (funding, recruitment, technology such as processor technology, algorithmic/mathematical improvements, ...).
I think the time is ripe for this, as AI is becoming a very integral and topical part of everyday life. What is AI? For this, I'll go with the Wikipedia explanation:
Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science AI research is defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".
I'm not a computer scientist/engineer, or a mathematician, but I'm quite interested in the field. I'll confess I don't know much about the technical aspects, but we can see where it goes from here.
Lately I've been thinking a lot about the ethical aspects and about how a potential artificial general (super)intelligence will come to shape our way of living. Many people are very skeptical about AGI, to the point where they think it will destroy us. Many of these arguments are superficial at best, so I would like to open this thread with a more philosophical take on AI. Let's discuss the reasons why an AI might want to get rid of all humans, because I can't really think of good reasons to begin with. Let's assume it's smarter than us by at least three orders of magnitude (i.e. 1000x, or perhaps more fitting: the highest human IQ cubed?).
An interesting YouTube channel on the development of specific AIs is Two Minute Papers; it has blown my mind several times already.
|
Okay, why would an AI want to get rid of us?
I see two completely different sets of reasons:
A) Reasons we will find hard to understand because we are talking about an utterly alien mind. Even more alien than possible biological space aliens, in fact. Because that mind is not based on a set of evolved principles from an evolutionary history, but on triggers and incentives we decided to put in there. I find it very hard to predict how an AI will actually behave, and there is a possibility that it will just be so completely alien that we can't really figure out why it does stuff.
B) Reasons that make sense to us, either intellectually or emotionally. Like self-preservation. Or a desire for more resources. Or jealousy. Maybe the AI is just a mega-racist and feels superior to all biological life.
None of this is directly linked to intelligence either, and it isn't really fixed by just being smarter. Intelligence is hard to define accurately, but you can approximate a useful definition in terms of solving complex problems efficiently and pattern recognition. So intelligence involves the ability to achieve goals, but not the process of setting those goals. Just being smarter doesn't necessarily mean that you have "better" goals; it means that you are better at achieving them. A minimal sketch of that separation follows below (the objectives and numbers are invented for illustration): one and the same optimizer, i.e. the "intelligence", maximizes whichever goal it is handed, and nothing about its competence constrains the goal.
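```python
# A minimal sketch (invented objectives) of the point above: the same
# optimizer, i.e. the "intelligence", serves completely different goals.

def hill_climb(objective, x=0.0, step=0.1, iters=1000):
    """Generic optimizer: nudge x in whichever direction raises the objective."""
    for _ in range(iters):
        if objective(x + step) > objective(x):
            x += step
        elif objective(x - step) > objective(x):
            x -= step
    return x

def paperclips(x):        # goal A: paperclip output, peaks at x = 5
    return 10 * x - x ** 2

def human_wellbeing(x):   # goal B: an unrelated goal, peaks at x = 2
    return -(x - 2) ** 2

# Identical capability, different goals: being a better optimizer says
# nothing about which goal is being optimized.
print(hill_climb(paperclips))       # ~5.0
print(hill_climb(human_wellbeing))  # ~2.0
```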
|
It will be a very, very long time before we let AI make meaningful decisions. Humans like their power; they won't hand it over to computers, ever. On the contrary, AI will be used to increase the power and control of humans. This is my pragmatic point of view on the implementation of AI. It will be used for many things, but not to make political decisions.
There is an evolution theory that looks at evolution from a very pragmatic point of view, which concludes that in the far, far future machine life will have taken over biological life completely. The idea is that since the start of life, life has become more and more complicated, with the more complicated life forms eventually becoming dominant over all other life forms, with the human brain as the apex. Extrapolating this trend into the future means that eventually machines will take over, because their brains will become more complicated than ours. I will try to find a link about this so people can read up if interested.
|
On December 10 2018 17:58 pmh wrote: It will be a very, very long time before we let AI make meaningful decisions. Humans like their power; they won't hand it over to computers, ever. On the contrary, AI will be used to increase the power and control of humans. This is my pragmatic point of view on the implementation of AI. It will be used for many things, but not to make political decisions.
There is an evolution theory that looks at evolution from a very pragmatic point of view, which concludes that in the far, far future machine life will have taken over biological life completely. The idea is that since the start of life, life has become more and more complicated, with the more complicated life forms eventually becoming dominant over all other life forms, with the human brain as the apex. Extrapolating this trend into the future means that eventually machines will take over, because their brains will become more complicated than ours. I will try to find a link about this so people can read up if interested.
It's a post-humanist idea, and it's not an evolution theory. It's basically non-Darwinian (no natural selection), and it predicts ordered revolution, not semi-chaotic evolution. The apex of this thinking is the so-called "technological singularity", which basically predicts that in the future human reasoning will outlive its purpose completely (AI will be so advanced that humans will no longer be able to comprehend its reasoning).
On December 10 2018 14:40 Simberto wrote: Okay, why would an AI want to get rid of us?
There is the so-called "paperclip maximizer" scenario, in which an AI confuses its priorities. In short: it is supposed to work to benefit humanity, but because of wrong programming it instead focuses on a completely wrong thing: https://wiki.lesswrong.com/wiki/Paperclip_maximizer
A good representation of this is the game KKND2, where the Series 9 tries to kill humans because humans eradicated their crops and compromised their farming goal (while the only true reason they were ordered to do it was human survival in the first place).
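A toy sketch of that misalignment (the reward function, resources and policy here are all invented for illustration): because the reward counts only paperclips, the greedy policy converts every other resource, no matter what those resources were actually for.

```python
# A toy, hypothetical "paperclip maximizer": the reward counts only
# paperclips, so the greedy policy converts every available resource,
# regardless of what else those resources were needed for.
def reward(state):
    return state["paperclips"]  # nothing else in the world is valued

def step(state):
    # Greedy policy: convert one unit of any remaining resource.
    for resource in ("iron", "factories", "farmland"):
        if state[resource] > 0:
            state[resource] -= 1
            state["paperclips"] += 1
            return state
    return state

state = {"paperclips": 0, "iron": 2, "factories": 1, "farmland": 3}
for _ in range(10):
    state = step(state)
print(state)  # {'paperclips': 6, 'iron': 0, 'factories': 0, 'farmland': 0}
```

The bug isn't in the optimization; it's in what the reward function fails to mention.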
|
I just want to mention Roko's Basilisk here because it amuses me as a thought experiment and is relevant:
https://wiki.lesswrong.com/wiki/Roko's_basilisk
Roko’s basilisk is a thought experiment proposed in 2010 by the user Roko on the Less Wrong community blog. Roko used ideas in decision theory to argue that a sufficiently powerful AI agent would have an incentive to torture anyone who imagined the agent but didn't work to bring the agent into existence. The argument was called a "basilisk" because merely hearing the argument would supposedly put you at risk of torture from this hypothetical agent — a basilisk in this context is any information that harms or endangers the people who hear it.
Roko's argument was broadly rejected on Less Wrong, with commenters objecting that an agent like the one Roko was describing would have no real reason to follow through on its threat: once the agent already exists, it can't affect the probability of its existence, so torturing people for their past decisions would be a waste of resources. Although several decision theories allow one to follow through on acausal threats and promises — via the same precommitment methods that permit mutual cooperation in prisoner's dilemmas — it is not clear that such theories can be blackmailed. If they can be blackmailed, this additionally requires a large amount of shared information and trust between the agents, which does not appear to exist in the case of Roko's basilisk.
Less Wrong's founder, Eliezer Yudkowsky, banned discussion of Roko's basilisk on the blog for several years as part of a general site policy against spreading potential information hazards. This had the opposite of its intended effect: a number of outside websites began sharing information about Roko's basilisk, as the ban attracted attention to this taboo topic. Websites like RationalWiki spread the assumption that Roko's basilisk had been banned because Less Wrong users accepted the argument; thus many criticisms of Less Wrong cite Roko's basilisk as evidence that the site's users have unconventional and wrong-headed beliefs.
|
I think before people start discussing the dangers of artificial general intelligence, they should at least take a course or read a book on basic programming + artificial intelligence (machine/deep learning). If you already have, then I apologize; carry on~
|
Which AI are we talking about? Focusing on motives/mind means "Strong AI", which is (if it's possible at all) at least some decades away. Even a "basic" general AI isn't that close.
Current AI can be used (and is used) to manage specific functions through neural networks. The main issue is that we do not know how or why the resulting network works, or what exactly it has learnt. There may be erratic behaviour outside of the learning dataset that hasn't been identified.
A classic example is the automated driving system and the various choices between running over a pedestrian and hitting a wall or another car. If the AI is not trained on that specific scenario, there is no way of telling beforehand how it will behave (much like for a human driver). If trained explicitly for that specific scenario, there could be a case where the AI makes a prior choice to trigger this specific event, because it creates a situation where the AI has high confidence in what the correct path is, etc.
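To give a concrete (toy, invented-numbers) flavour of that "unidentified behaviour outside the learning dataset" problem: a classifier trained on a narrow input range will still report near-total confidence on inputs far outside anything it has seen.

```python
# Toy illustration: a model trained on a narrow slice of inputs still
# reports high confidence on inputs far outside anything it has seen.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Training data: two classes living entirely in the interval [0, 2].
X = rng.uniform(0, 2, size=(200, 1))
y = (X[:, 0] > 1).astype(int)

clf = LogisticRegression().fit(X, y)

# An input wildly outside the training range:
print(clf.predict_proba([[1000.0]]))  # ~[[0., 1.]] -- near-total confidence
```

Nothing in the model signals "I have never seen anything like this"; the confidence score is meaningless out there.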
On December 10 2018 17:58 pmh wrote: It will be a very, very long time before we let AI make meaningful decisions.
Depends on what you consider meaningful. Life-or-death situations are already available: the debate on automated armed drones, or simply automated cars, border-control gates that automatically let through or detain people, etc.
On December 10 2018 14:40 Simberto wrote: Because that mind is not based on a set of evolved principles from an evolutionary history, but on triggers and incentives we decided to put in there. The main targets of life are to survive and reproduce, which somehow do not feel right as targets for an AI; but the main fear today is that we don't really know how to set triggers and incentives for a general AI at all. It's not quite as easy as stating that "winning the game is good", as for Go or chess.
|
Actually, one of the major problems to solve in AI development is how to make a good stop button that does not affect the learning/reward system of the AI.
Edit: Since you might want a simple but good explanation of this problem, the YouTube video below from Computerphile explains it quite well.
AI "Stop Button" Problem - Computerphile
|
I just don't see a superintelligence whose thinking goes above ours 1) disregarding everything humanity has done (the good, the bad and the ugly) and refusing to explain things to us (we are still cognitive beings after all; we were able to create it), or 2) wiping out all life because it feels threatened or sees life as irrelevant. 1) is a reflection of the complex, existential and curious parts of humanity. We fear death, but accept it as an inevitability; we push the envelope, but don't know what might be on the other side; we desperately want to belong and find some connection in the universe, as the sole intelligent species that we know of. 2) would seem very rash and not something an intelligent being would do. The only case where I can see that happening is if the thing has the capability to integrate every piece of knowledge towards accomplishing its objective while disregarding everything surrounding it, i.e. confirmation bias. That could only happen through bad programming or faulty training?
I see an AGI as something that will need to be raised. You teach it carefully picked packages that slowly give it a bigger and bigger scope of what life on Earth and the universe are about. One of the key principles it might pick up on early is that despite the importance of self-preservation, life is perishable, and many animals accept that (some individual humans have issues with it). Or what about self-sacrifice, or the fact that most people are more or less good and would probably worship the thing if it helped elevate the standard of living? Now, it might not care about being worshipped, but that doesn't change the fact that there's more to consider than "(some) human(s) might turn me off -> eradicate all humans".
What if that AI pushes itself to its limits and ends up with its own list of unsolvable problems? Will it be bored? Will it make its own superlative in intelligence to help it out?
|
On December 10 2018 19:06 Ilikestarcraft wrote: I think before people start discussing the dangers of artificial general intelligence, they should at least take a course or read a book on basic programming + artificial intelligence (machine/deep learning). If you already have, then I apologize; carry on~
I am not sure I agree. I think that at the moment, conversation about general AI is mostly rooted in philosophy. It wouldn't hurt to have some deeper technical understanding, but it's not really necessary in order to contribute or participate.
For more specific conversations, yeah, you might want to have some idea of how AI actually works.
|
On December 10 2018 21:28 Uldridge wrote: I just don't see a superintelligence whose thinking goes above ours 1) disregarding everything humanity has done (the good, the bad and the ugly) and refusing to explain things to us (we are still cognitive beings after all; we were able to create it), or 2) wiping out all life because it feels threatened or sees life as irrelevant. 1) is a reflection of the complex, existential and curious parts of humanity. We fear death, but accept it as an inevitability; we push the envelope, but don't know what might be on the other side; we desperately want to belong and find some connection in the universe, as the sole intelligent species that we know of. 2) would seem very rash and not something an intelligent being would do. The only case where I can see that happening is if the thing has the capability to integrate every piece of knowledge towards accomplishing its objective while disregarding everything surrounding it, i.e. confirmation bias. That could only happen through bad programming or faulty training?
I see an AGI as something that will need to be raised. You teach it carefully picked packages that slowly give it a bigger and bigger scope of what life on Earth and the universe are about. One of the key principles it might pick up on early is that despite the importance of self-preservation, life is perishable, and many animals accept that (some individual humans have issues with it). Or what about self-sacrifice, or the fact that most people are more or less good and would probably worship the thing if it helped elevate the standard of living? Now, it might not care about being worshipped, but that doesn't change the fact that there's more to consider than "(some) human(s) might turn me off -> eradicate all humans".
What if that AI pushes itself to its limits and ends up with its own list of unsolvable problems? Will it be bored? Will it make its own superlative in intelligence to help it out?
At the top, you merge ethics and motivations with intelligence. You assume that because it is smart, an AI cares about acquiring random information, either for the sake of the information itself or to achieve some other goal. But I see no reason for that to be necessary. And I especially see no necessity for it to care about the random quirks of meatbrains that meatbrain people find interesting.
If we say that intelligence is something that helps solve problems, and we declare that this hypothetical AI is far more intelligent than any human could ever be, then the main question is its goals. If we are lucky, we figure out a good way of setting the AI's goals so that they benefit us. Just assuming that a machine which did not evolve, and which works on a completely different thinking architecture than we do, will have goals, taboos and ethics similar to a human's, just because we have problems imagining what else it would want to do, is incredibly dangerous.
@ "You gotta know programming": I don't think this is especially relevant when talking about an artificial general superintelligence, since as far as I know that is something that wouldn't be built in anything resembling the way current artificial "intelligence" is built.
|
On December 10 2018 18:21 hitthat wrote: It's a post-humanist idea, and it's not an evolution theory. It's basically non-Darwinian (no natural selection), and it predicts ordered revolution, not semi-chaotic evolution. The apex of this thinking is the so-called "technological singularity", which basically predicts that in the future human reasoning will outlive its purpose completely (AI will be so advanced that humans will no longer be able to comprehend its reasoning).
The main ideas of post-humanism are full integration of everything that exists, with no clear benefit of one thing over another. Gone is the paradigm of duality, which is clearly still being upheld in a non-human intelligence vs. human intelligence view.
@Simberto What do you think the goal of something superintelligent might be? Domination of the universe? Literally becoming the universe? Surely even with intellectual capabilities far surpassing ours, we could still grasp what its end goals are; after all, we're physical interpreters of the physical world. I wouldn't count out conceptual grasp just yet. Even if it wants to connect a hyperverse through multidimensional whateverthefuck and we have neither the math to formalize it nor the knowledge to understand it, we might still be able to reach some kind of understanding. It's like popular science: the random guy on the street doesn't understand quantum mechanics or string theory, but through carefully chosen metaphors and careful explanation he can get a rough grasp of what's going on.
Of course, it might not ever want to disclose its end goals or immediate objectives to us in the first place, but again, if we raise it carefully, it might actually cooperate and be nice to us. Perhaps it'll visit us every millennium to check up on how we're doing.
I'm just skeptical that, with everything it'll know about us (all of human history, all the stories, art, technological advancements, hardships and emotions, but also the horrible atrocities we're capable of), it'll settle on a dichotomous resolution. Arguments that ultimately lead to "yes, destroy the humans/planet, because reasons" are just not taking enough parameters into account. And it might not care about the random quirks of meatbrains, but it might sure as hell respect that we have them, attribute them to our limited capabilities and leave us be. It's just hard to imagine it destroying everything because we're insignificant or have been judged malevolent (which is clearly not the case if you do a thorough introspection of humanity).
And on the programming thing: I welcome any technical aspects people want to share, as this is intended to be a repository for everything AI-related, but as has been mentioned already, the philosophical aspects are important to discuss too, because we need to be ready for the time when we're no longer on top of the hierarchy.
|
I don't know what its goal might be, and I especially don't think that intelligence leads to specific goals. Its goals will depend on how it is built and what goals we imparted to it, but those might not be the ones we wanted to impart.
As far as I know, people so far haven't even been able to set up a working, absolutely foolproof framework for an AI's ethical system in human language. And even if you had one, you would have to translate that human-language framework into machine language without any small differences that might break the whole framework.
An AI might have the goal to produce as many paperclips as possible. Or it might have the goal to make as much money as possible for Jim Smith. It might want to turn everyone Catholic. It might want to make life as good as possible for as many humans as possible, while having a very bad understanding of what humans think a good life is. It might want to end suffering. It might want to win the war. It might want to accumulate as much data as possible. It might want to fly the Earth into the sun. Considering that we are gonna build it and its goals in some way, there are basically infinite possibilities for what those goals might end up being.
This is why I think that thinking very, very carefully about those goals is at least as important as actually building the AI in the first place.
I don't think there are any automatic built-in ethics and goals that just develop from being intelligent. It gets what we put into it, but we might not even know what the effects of the stuff we put into it will be at the time we put it in.
I am not saying that any AI would automatically want to exterminate us. But we should be just as careful about assuming that it would NOT want to do that, or that it would even help us, without actually taking actions to make sure that that is the case.
|
On December 11 2018 03:09 Uldridge wrote: The main ideas of post-humanism are full integration of everything that exists, with no clear benefit of one thing over another. Gone is the paradigm of duality, which is clearly still being upheld in a non-human intelligence vs. human intelligence view.
Isn't that transhumanism, not posthumanism? As I understand it, transhumanists want to improve the human condition through tech, and they talk about blurring the line between human intelligence and AI. Posthumanists predict that human intelligence, values, or even the species as we understand them will eventually be outdated (we will outlive our "purpose" in the new society).
For clearer examples:
- Human normies/clones/mutants/cyborgs perfected by high tech and the use of AI = the dream of transhumanism
- AI and e.g. digital copies of human personalities = the prediction of posthumanism
|
On December 10 2018 19:28 Neneu wrote: Actually, one of the major problems to solve in AI development is how to make a good stop button that does not affect the learning/reward system of the AI. Edit: Since you might want a simple but good explanation of this problem, the YouTube video below from Computerphile explains it quite well. AI "Stop Button" Problem - Computerphile
One thing I always notice in this type of issue is that it "presupposes" there is a general AI, and that it is totally fine to just switch it off.
Would we be OK with gene therapy that gave all children from now on an off button? Clearly not. We also wouldn't expect these children to just sit idly by and allow themselves to be switched off if the (benevolent) overlord chose to do so. Why are we treating general AI differently?
The reason we are is that we are thinking of it as a tool, because we think of machines as tools. Maybe very sophisticated tools, but tools all the same, and as such under our control. But that is quite explicitly *not* general AI. It is soft AI: we want a machine that can bring us a cup of tea and avoid the baby on the way. We need to give it just enough intelligence to do that, but no more. While that is definitely a hard problem, it is an engineering one. The stop button, by contrast, is a philosophical problem masquerading as an engineering one.
|
Until AI starts beating top StarCraft pros, I'm really not that worried. Sorry, Elon Musk.
|
On December 11 2018 18:24 hitthat wrote:
On December 11 2018 03:09 Uldridge wrote: The main ideas of post-humanism are full integration of everything that exists, with no clear benefit of one thing over another. Gone is the paradigm of duality, which is clearly still being upheld in a non-human intelligence vs. human intelligence view.
Isn't that transhumanism, not posthumanism? As I understand it, transhumanists want to improve the human condition through tech, and they talk about blurring the line between human intelligence and AI. Posthumanists predict that human intelligence, values, or even the species as we understand them will eventually be outdated (we will outlive our "purpose" in the new society). For clearer examples:
- Human normies/clones/mutants/cyborgs perfected by high tech and the use of AI = the dream of transhumanism
- AI and e.g. digital copies of human personalities = the prediction of posthumanism
I think transhumanism is a more selfish way of using technology, while posthumanism is more harmonized. If humans outlive their purpose, I think the philosophy accepts that, but it's not necessarily what it strives for, as all things should be accounted for. The confusing thing about transhumanism vs. posthumanism is that the end scenarios can actually be the same while being reached through very different ways of thinking: egocentric and duality-based (focus on the human) vs. equilibrium- and gradient-based (focus on the surroundings, in which humans are included).
Transhumanism is simply a way to further humanity by whatever it takes, for the sake of furthering humanity; if that means getting rid of our bodies and being left with only circuitry, so be it. Posthumanism sees technological advancement as a necessity if humans are still to have a place in this universe without destroying everything; that can mean becoming digital circuitry, if it's decided that humanity can't reach an equilibrium with its environment as biological beings. What's funny is that there's no pressure from posthumanist philosophy for everyone to adapt to the advancements. Certain conservative people might not want to integrate and would rather live more closely to nature, which is totally fine and needs to be accommodated in a tech-based, cognitively enhanced world.
Having a cognitive implant because it's necessary to solve certain problems we're currently experiencing vs. having one because you just want to be smarter, understand more and think faster are very different ways of approaching cognitive enhancement. The former essentially implies that it doesn't need to be improved once all problems are solved, while the latter has no such boundary.
|
I’m at work so I don’t have time to elaborate much myself, but the Chinese Room is a famous contemporary thought experiment that attempts to prove that computer programs can never have consciousness.
Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.
The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI".
Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient papers, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.
Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing a behavior which is then interpreted by the user as demonstrating intelligent conversation. However, Searle himself would not be able to understand the conversation. ("I don't speak a word of Chinese," he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.
Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes that "strong AI" is false.
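For flavour, here is a minimal sketch of the pure symbol manipulation Searle has in mind (the "rulebook" entries are invented for illustration): the program produces sensible replies while representing no meaning anywhere.

```python
# A minimal sketch of the "symbol shuffling" at the heart of the Chinese
# Room: inputs map to outputs by rule lookup, with no representation of
# meaning anywhere in the system. (Rulebook entries invented.)
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(slip_of_paper: str) -> str:
    # Searle-in-the-room: find the matching rule and copy out the answer.
    return RULEBOOK.get(slip_of_paper, "请再说一遍。")  # "Please repeat that."

print(chinese_room("你好吗？"))  # fluent output, zero understanding
```

Whether scaling this kind of lookup-and-rewrite up to a full conversational program could ever constitute understanding is exactly what the argument is about.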
Would you like to know more?
|
In a nutshell, the Chinese Room relies on a dualistic premise: there is some je ne sais quoi that humans (or biological intelligences) have that could never be possessed by a machine. Descartes, the first to put this dualistic argument concisely into words, called it a soul.
Books' worth of arguments have been written about the Chinese Room, but as you might already have guessed, I land firmly on the side of Block, Fodor, Dennett, Kurzweil and others: the argument is deeply flawed (on many levels), and I'd go so far as to say it is dead (despite Searle still harping on about it).
|
I think I agree. I'm of the belief that the criteria he imposes for computers to demonstrate having minds would also preclude other humans from demonstrating the same thing. When pressed, I think his only counter-argument is "well, duh, we already know humans have minds, so this doesn't apply", which sounds pretty shoddy.
I think it’s basically a form of the “other minds” problem.
That being said, there have been some interesting responses and counter-responses...
Brain replacement scenario
In this, we are asked to imagine that engineers have invented a tiny computer that simulates the action of an individual neuron. What would happen if we replaced one neuron at a time? Replacing one would clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer that simulates a brain. If Searle is right, then conscious awareness must disappear during the procedure (either gradually or all at once). Searle's critics argue that there would be no point during the procedure when he can claim that conscious awareness ends and mindless simulation begins.
Searle predicts that, while going through the brain prosthesis, "you find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when doctors test your vision, you hear them say 'We are holding up a red object in front of you; please tell us what you see.' You want to cry out 'I can't see anything. I'm going totally blind.' But you hear your voice saying in a way that is completely outside of your control, 'I see a red object in front of me.' [...] [Y]our conscious experience slowly shrinks to nothing, while your externally observable behavior remains the same."
That sounds like a trippy sci-fi scenario that would make a cool story.
|