|
Read the rules in the OP before posting, please. In order to ensure that this thread continues to meet TL standards and follows the proper guidelines, we will be enforcing the rules in the OP more strictly. Be sure to give them a re-read to refresh your memory! The vast majority of you are contributing in a healthy way, keep it up! NOTE: When providing a source, explain why you feel it is relevant and what purpose it adds to the discussion if it's not obvious. Also take note that unsubstantiated tweets/posts meant only to rekindle old arguments can result in a mod action.
On August 16 2016 05:40 Evotroid wrote:Show nested quote +On August 16 2016 05:35 RoomOfMush wrote:On August 16 2016 05:29 Evotroid wrote:On August 16 2016 05:11 RoomOfMush wrote:On August 16 2016 04:40 Gorsameth wrote:On August 16 2016 04:38 RoomOfMush wrote: Its pretty much impossible to do (good) news with artificial intelligence. For a computer this is a task which can only be managed with the help of statistics and heuristics. Both of which can easily be manipulated by anybody who knows the algorithms behind it. And how would it not be able to check that proper procedure was followed? Or discard those that do not provide adequate documentation of said procedure? Because thats how computers work. The computer can only do the things you teach it to do. If the human programmer does not know how to check pieces of news for proper procedure then its impossible for the AI to do it too. And the AI can not use gut feeling. You have to define a step-by-step guide of atomic actions to take to determine whether a piece of news is trustworthy, correct, relevant, etc to teach your news-AI how to work. I dont know how you could evaluate all news in such a way but if you know it then please explain it to me. (...) That is not actually how computers work. There is no inherent limitation of that kind, and in fact, there are ways to show a program the desired outcome, a bunch of problems, and let it figure out how to solve the problems even if the programmer himself does not know how to solve them. (though it is not necessarily true in this case). No. How do you come up with an idea like that? Computers are very very simple tools, they perform one action at a time and the actions are incredibly simple. They can not come up with anything and they can not expand themselfs. Every piece of code in software has to come from a programmer. The software can never do anything that was not put there by a programmer. That is the fundamental idea behind all computers of today. 
Of course there is theoretical stuff of computers that actually use true randomness to produce results which are not deterministic (at least to our current understanding of physics) but these are not usable in real life at this point in time. All computers we have are fully deterministic and work sequentially. I don't "come up" with stuff like that, unlike you. Being fully deterministic and working sequentially has nothing to do with the ability of self-programming or the like. And to show that I am not talking out of my ass: Link to pdf from caltech, second on google after a paywalled article Did you actually read that article? I just did and I can tell you that it says exactly what I have been saying. The AI described in that paper uses an incredibly simple algorithm with a very well defined set of rules and deterministic behavior coupled with statistical analysis and heuristics to calculate good matches. It's very much not able to "think" or come up with anything outside of its expected behavior.
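To make the disputed point concrete, here is a minimal sketch in Python of the kind of program Evotroid describes: fully deterministic, yet it arrives at a solution the programmer never wrote down. The perceptron below is invented purely for illustration and has nothing to do with the linked Caltech paper; the programmer specifies only the learning rule, and the weights that solve the AND task emerge from the examples.

```python
# A fully deterministic program that learns the AND function from
# labeled examples alone. The programmer writes the *learning rule*,
# not the weights that solve the task.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = y - pred
            # Nudge the weights in the direction that reduces the error.
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0
```

The same loop, fed different examples, would learn OR instead; nothing in the code names either function.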
|
On August 16 2016 06:10 LegalLord wrote:Show nested quote +On August 16 2016 06:06 KwarK wrote:On August 16 2016 06:01 LegalLord wrote:On August 16 2016 05:56 KwarK wrote:On August 16 2016 05:49 Plansix wrote: Humans can’t even agree on what objectivity is. We cannot obtain objectivity on our own within the complexity of our own mind. There is no way we can create an AI do to it for us. We will just create a thing that believes it is devoid of bias and therefore objective.
And frankly we have enough of those on the internet already, commenting on video game subreddits.
We don't need every single person to agree to understand what would make news good news. "A car bomb killed 64 people in a market in Baghdad at 07:14 local time this morning" is a fairly simple statement of what happened, right? Whereas "Cowardly terrorists murdered dozens of innocent civilians in the latest attack in this spree of similar atrocities" includes slanted and subjective language, misuses legal terminology, sacrifices accuracy for vague hyperbole and expands upon the basic facts of the event to paint a broader picture. I'm sure a journalism or communications major here can tell us how language can be constructed to evoke a specific response. Language cannot be defined in a way specific enough to computers that you can use it that way. And what about stories that are lies? Is the statement "US bombers destroy terror cell in Aleppo" a biased statement? What if it's debated whether a group is a terrorist organization in the first place, such as the Kurds? I'd argue yes. You could go with US bombers destroy a group defined as a terror cell by the US government, in Aleppo. As for stories that are lies, check for multiple sources. If all the sources are lying already you'd have no way of knowing, I don't know why you'd hold an AI to a higher standard than could be conceivably applied elsewhere. But given the way that an AI can cluster metadata it could probably verify shit pretty fucking well. A cluster of phone calls, a regional market shift etc could be physical direct feedback that something has happened in a location. I'm just making it up as I go here though but with enough data to work with an AI could tell the difference between events that happened and events that you made up. Congratulations, by your first sentence you have already introduced political bias into your AI. If it follows government directives then the Armenian Genocide never happened as well. Multiple sources can repeat the same lie, easily. Especially real time sources like Twatter. 
Not to mention how you could abuse that system to make up lies if you figure out how the system crawls. Look at "search engine optimization" being an actual field to see that AIs can be manipulated to do what a third party wants it to do. Show nested quote +On August 16 2016 06:07 Gorsameth wrote:On August 16 2016 06:01 LegalLord wrote:On August 16 2016 05:56 KwarK wrote:On August 16 2016 05:49 Plansix wrote: Humans can’t even agree on what objectivity is. We cannot obtain objectivity on our own within the complexity of our own mind. There is no way we can create an AI do to it for us. We will just create a thing that believes it is devoid of bias and therefore objective.
And frankly we have enough of those on the internet already, commenting on video game subreddits.
We don't need every single person to agree to understand what would make news good news. "A car bomb killed 64 people in a market in Baghdad at 07:14 local time this morning" is a fairly simple statement of what happened, right? Whereas "Cowardly terrorists murdered dozens of innocent civilians in the latest attack in this spree of similar atrocities" includes slanted and subjective language, misuses legal terminology, sacrifices accuracy for vague hyperbole and expands upon the basic facts of the event to paint a broader picture. I'm sure a journalism or communications major here can tell us how language can be constructed to evoke a specific response. Language cannot be defined in a way specific enough to computers that you can use it that way. And what about stories that are lies? Is the statement "US bombers destroy terror cell in Aleppo" a biased statement? What if it's debated whether a group is a terrorist organization in the first place, such as the Kurds? What if it didn't actually happen? Yes, because a terrorist is not an objective description (one mans freedom fighter is another mans terrorist and all that). "US bombers kill 13 in Aleppo bombing, US official claims terrorist connections, ISIS denies" is a much less biased statement. Then you will have a hell of a hard time assigning any label to anything. "US claims it is a nation, ISIS denies." You're assuming that with time, effort, money and continual improvement all the flaws would remain unfixed. I think otherwise. With enough time, enough examples and enough access to data a computer would be able to find verifiable primary evidence that makes it believe a given thing is true.
As for the Armenian Genocide. It is factually true that the Turkish government denies the Armenian Genocide. Nothing wrong with a headline reading "Today a memorial service was held for the victims of the event the UN recognizes as the Armenian Genocide, a name Turkey disputes".
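KwarK's two example headlines can illustrate how even a crude, hand-coded check separates the neutral phrasing from the slanted one. The word list below is entirely made up for this sketch; it is exactly the kind of brittle heuristic the thread is arguing about, not a workable bias detector.

```python
# Illustrative only: flag emotionally loaded terms in a headline.
# The word list is invented for this example.

LOADED_TERMS = {"cowardly", "murdered", "innocent", "atrocities",
                "terrorists", "spree"}

def loaded_words(headline):
    """Return the loaded terms found in a headline, sorted."""
    words = {w.strip(".,").lower() for w in headline.split()}
    return sorted(words & LOADED_TERMS)

neutral = "A car bomb killed 64 people in a market in Baghdad"
slanted = ("Cowardly terrorists murdered dozens of innocent civilians "
           "in the latest spree of atrocities")
```

A writer who knows the list can, of course, convey the same slant with words that are not on it, which is Gorsameth's and LegalLord's objection in miniature.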
|
On August 16 2016 06:15 Evotroid wrote:Show nested quote +On August 16 2016 06:09 RoomOfMush wrote:On August 16 2016 06:00 Evotroid wrote:On August 16 2016 05:45 RoomOfMush wrote:(...) On August 16 2016 05:40 Evotroid wrote: Also, easiest example: completely simulate a human brain with computer, do you accept that a human brain can learn on it's own? bam then a computer can as well. But nobody ever managed to simulate a human brain. We dont even know how human brains work. We dont know if human brains are deterministic or not. Our computers are. If brains are not then our computer can not simulate brains. Again, that we haven't did it yet, does not mean we can not do it at all. Secondly, we know how brains work on the small scale, we just don't understand all the intricaties arising from the complex whole, but that does not mean we can't simulate it. Again, it may be true what you say, but it does not follow from that, that we can't simulate one, or that we can't make programs, that learn on their own. But feel free to produce a counter source/paper, that shows to that effect. Finally, our computers are not deterministic. Seriously. We would like them to be, we build them to be, and we use them as they were, but they are not. In fact, on average something like 1% of the clients failed a simple test built into Guild Wars (iirc) that tested the cpu for deterministic errors. They just fail. There is nothing stopping us to make intentionally indeterministic hardware or even just pseudo random generators, that in practice produce the desired effect. No. Our computers are fully deterministic. If you say otherwise then you either dont understand what determinism means or you dont understand how computers work. There are "some aspects" of computers which we dont know whether they are deterministic or not because of limited knowledge of physics. For example: What effects does radiation or magnetism have on computer hardware? And are these effects deterministic? 
We dont know that. But apart from that computers are completely deterministic. They are just very very complex and difficult to understand. This is not because they are not deterministic, it is because we are too stupid to comprehend all the information that is needed to predict their behavior. And I never said its impossible to simulate the human brain. I say we can not simulate a non-deterministic system with deterministic devices. We know computers are deterministic but we dont know whether brains are. On August 16 2016 05:56 KwarK wrote:On August 16 2016 05:49 Plansix wrote: Humans can’t even agree on what objectivity is. We cannot obtain objectivity on our own within the complexity of our own mind. There is no way we can create an AI do to it for us. We will just create a thing that believes it is devoid of bias and therefore objective.
And frankly we have enough of those on the internet already, commenting on video game subreddits.
We don't need every single person to agree to understand what would make news good news. "A car bomb killed 64 people in a market in Baghdad at 07:14 local time this morning" is a fairly simple statement of what happened, right? Whereas "Cowardly terrorists murdered dozens of innocent civilians in the latest attack in this spree of similar atrocities" includes slanted and subjective language, misuses legal terminology, sacrifices accuracy for vague hyperbole and expands upon the basic facts of the event to paint a broader picture. I'm sure a journalism or communications major here can tell us how language can be constructed to evoke a specific response far better than I can with my shitty examples. But that is really really hard and needs to be known to the programmer in order to teach it to the computer. And then, in the future, somebody will find ways to trick the computer because they reverse-engineered the algorithm behind it and abuse cases the programmer did not think of. Look at what google scholar does with scientific research papers. Thats more or less a simplified version of a news AI. It grades scientific papers for their "value" based on certain easily identifiable characteristics. It works for the most part but there are people who abuse its weaknesses. Okay, here is a test: you give a simple problem to your subjects multiple times, like: "1+1=?". Subject A gives you the answer, without fail, 10 000 times in a row as "2" Subject B however a few times out of 10 000 gave you other answers, like "3" to the same problem, with no apparent pattern, utterly unpredictable. Which of the subjects would you classify as deterministic, and which would you not? Neither because that has nothing to do with determinism. Both could be completely undeterministic and both could be deterministic. You can not know determinism by observing something. Its a meta-attribute which never manifests itself in the real world. 
In both cases we cannot know whether each of them will be correct the 10,001st time because we don't know if they are deterministic.
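The practical half of this disagreement is easy to demonstrate: on ordinary hardware, even "random" behaviour replays exactly. A sketch, assuming CPython's `random` module (a deterministic pseudo-random generator):

```python
# On commodity hardware, "random" output seeded identically replays
# the exact same sequence on every run.
import random

def sample(seed, n=5):
    rng = random.Random(seed)  # independent generator, fixed seed
    return [rng.randint(0, 99) for _ in range(n)]

run_a = sample(42)
run_b = sample(42)   # identical starting state, identical output
run_c = sample(43)   # a different seed gives a different sequence
```

Whether the brain admits an analogous replay is exactly the open question the two posters are circling.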
|
On August 16 2016 06:17 RoomOfMush wrote:Show nested quote +On August 16 2016 05:40 Evotroid wrote:On August 16 2016 05:35 RoomOfMush wrote:On August 16 2016 05:29 Evotroid wrote:On August 16 2016 05:11 RoomOfMush wrote:On August 16 2016 04:40 Gorsameth wrote:On August 16 2016 04:38 RoomOfMush wrote: Its pretty much impossible to do (good) news with artificial intelligence. For a computer this is a task which can only be managed with the help of statistics and heuristics. Both of which can easily be manipulated by anybody who knows the algorithms behind it. And how would it not be able to check that proper procedure was followed? Or discard those that do not provide adequate documentation of said procedure? Because thats how computers work. The computer can only do the things you teach it to do. If the human programmer does not know how to check pieces of news for proper procedure then its impossible for the AI to do it too. And the AI can not use gut feeling. You have to define a step-by-step guide of atomic actions to take to determine whether a piece of news is trustworthy, correct, relevant, etc to teach your news-AI how to work. I dont know how you could evaluate all news in such a way but if you know it then please explain it to me. (...) That is not actually how computers work. There is no inherent limitation of that kind, and in fact, there are ways to show a program the desired outcome, a bunch of problems, and let it figure out how to solve the problems even if the programmer himself does not know how to solve them. (though it is not necessarily true in this case). No. How do you come up with an idea like that? Computers are very very simple tools, they perform one action at a time and the actions are incredibly simple. They can not come up with anything and they can not expand themselfs. Every piece of code in software has to come from a programmer. The software can never do anything that was not put there by a programmer. 
That is the fundamental idea behind all computers of today. Of course there is theoretical stuff of computers that actually use true randomness to produce results which are not deterministic (at least to our current understanding of physics) but these are not useable in real life at this point in time. All computers we have a fully deterministic and work sequentially. I don't "come up" with stuff like that, unlike you. Being fully deterministic and working sequentially has nothing to do with the ability of self programming or the like. And to show that I am not talking out of my ass: Link to pdf from caltech, second on google after a paywalled article Did you actually read that article? I just did and I can tell you that it says exactly what I have been saying. The AI described in that paper uses an incredibly simple algorithm with a very well defined set of rules and deterministic behavior coupled with statistical analysis and heuristics to calculate good matches. Its very much not able to "think" or come up with anything outside of its expected behavior.
Surely it can, if the expected behaviour is to solve the problem. I do not argue that it could rise against its programmers, just that it can solve problems without the programmer directly telling it how to solve them.
|
The legal status of some nations is very much a matter of perspective. Sometimes the names of cities hold deep political importance. There is no software that can determine whether something is biased, because the very act of programming it requires us to decide what bias is.
And once the system was in place, the people writing the articles would attempt to game the system.
|
I think people take for granted all the basic instinct and knowledge that humans cannot simply turn off, and that a computer cannot replicate without an inordinate amount of effort.
If I showed you a photo and asked you to name three things in it, you could do that easily.
If I showed a photo to a computer, it would take hours of programming just to teach it that individual pixels of colour combine into a collective object, then thousands more to teach it where one object begins and another ends.
As a fun aside, I think there was a learning AI that was tested to see if it could tell the difference between a photo of a dog and a wolf. After training on the initial photos, it got up to a decent level of accuracy... by checking how much white was in the photo. Turns out wolves have a much higher chance of being in photos with snow.
You can get an AI to compare results and try to find common factors. But either it's so specific that it can be gamed or entirely reflects the developer's own biases, or it's so abstract that its comparison points may have nothing to do with the content you want.
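The dog-vs-wolf anecdote can be reconstructed as a toy: the "classifier" below looks only at how white each image is. All numbers are fabricated for illustration; each "image" is reduced to its fraction of white pixels.

```python
# Toy reconstruction of the dog-vs-wolf anecdote: the model only ever
# sees background whiteness, so it learns "snow", not "animal".

def train_threshold(samples):
    """samples: (whiteness, is_wolf) pairs. Split at the midpoint."""
    wolf = [w for w, y in samples if y]
    dog = [w for w, y in samples if not y]
    return (min(wolf) + max(dog)) / 2  # assumes the classes separate

# Wolves photographed in snow: high whiteness. Dogs on lawns: low.
training = [(0.8, True), (0.7, True), (0.9, True),
            (0.2, False), (0.1, False), (0.3, False)]
cutoff = train_threshold(training)
is_wolf = lambda whiteness: whiteness > cutoff

# A husky photographed in snow is confidently "a wolf": the spurious
# feature generalizes exactly as far as the photographers' habits do.
```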
|
On August 16 2016 06:09 Gorsameth wrote:Show nested quote +On August 16 2016 06:07 oBlade wrote:On August 16 2016 05:53 Mohdoo wrote:On August 16 2016 05:52 zlefin wrote: So, from what I read of trump's speech today; it seemed rather bland and unremarkable. not as crazy as his usual stuff, but nothing notably helpful either. The question no one seems able to answer is: What is something Trump can say that will help him? What can he do that pulls him out of the ditch? He seems totally trapped and doesn't appear to really have anything he can do. There are people who in no universe will ever conceive of voting for him, but that's not who he has to go after to win. Could someone who was going to vote for Trump but considered the Khan comments to be unfit of a president be convinced to switch back to Trump? I would find it hard to believe anything Trump says could appease those sort of people. And they are the people he needs to win back. Yes, time heals all wounds. The issue is we know the media isn't going to change in 3 months, so it falls to him to avoid these traps that then get printed in every newspaper and run and commented on every cable news station for days. Once the debates get going, the race takes a different tone.
|
On August 16 2016 06:18 KwarK wrote:Show nested quote +On August 16 2016 06:10 LegalLord wrote:On August 16 2016 06:06 KwarK wrote:On August 16 2016 06:01 LegalLord wrote:On August 16 2016 05:56 KwarK wrote:On August 16 2016 05:49 Plansix wrote: Humans can’t even agree on what objectivity is. We cannot obtain objectivity on our own within the complexity of our own mind. There is no way we can create an AI do to it for us. We will just create a thing that believes it is devoid of bias and therefore objective.
And frankly we have enough of those on the internet already, commenting on video game subreddits.
We don't need every single person to agree to understand what would make news good news. "A car bomb killed 64 people in a market in Baghdad at 07:14 local time this morning" is a fairly simple statement of what happened, right? Whereas "Cowardly terrorists murdered dozens of innocent civilians in the latest attack in this spree of similar atrocities" includes slanted and subjective language, misuses legal terminology, sacrifices accuracy for vague hyperbole and expands upon the basic facts of the event to paint a broader picture. I'm sure a journalism or communications major here can tell us how language can be constructed to evoke a specific response. Language cannot be defined in a way specific enough to computers that you can use it that way. And what about stories that are lies? Is the statement "US bombers destroy terror cell in Aleppo" a biased statement? What if it's debated whether a group is a terrorist organization in the first place, such as the Kurds? I'd argue yes. You could go with US bombers destroy a group defined as a terror cell by the US government, in Aleppo. As for stories that are lies, check for multiple sources. If all the sources are lying already you'd have no way of knowing, I don't know why you'd hold an AI to a higher standard than could be conceivably applied elsewhere. But given the way that an AI can cluster metadata it could probably verify shit pretty fucking well. A cluster of phone calls, a regional market shift etc could be physical direct feedback that something has happened in a location. I'm just making it up as I go here though but with enough data to work with an AI could tell the difference between events that happened and events that you made up. Congratulations, by your first sentence you have already introduced political bias into your AI. If it follows government directives then the Armenian Genocide never happened as well. Multiple sources can repeat the same lie, easily. Especially real time sources like Twatter. 
Not to mention how you could abuse that system to make up lies if you figure out how the system crawls. Look at "search engine optimization" being an actual field to see that AIs can be manipulated to do what a third party wants it to do. On August 16 2016 06:07 Gorsameth wrote:On August 16 2016 06:01 LegalLord wrote:On August 16 2016 05:56 KwarK wrote:On August 16 2016 05:49 Plansix wrote: Humans can’t even agree on what objectivity is. We cannot obtain objectivity on our own within the complexity of our own mind. There is no way we can create an AI do to it for us. We will just create a thing that believes it is devoid of bias and therefore objective.
And frankly we have enough of those on the internet already, commenting on video game subreddits.
We don't need every single person to agree to understand what would make news good news. "A car bomb killed 64 people in a market in Baghdad at 07:14 local time this morning" is a fairly simple statement of what happened, right? Whereas "Cowardly terrorists murdered dozens of innocent civilians in the latest attack in this spree of similar atrocities" includes slanted and subjective language, misuses legal terminology, sacrifices accuracy for vague hyperbole and expands upon the basic facts of the event to paint a broader picture. I'm sure a journalism or communications major here can tell us how language can be constructed to evoke a specific response. Language cannot be defined in a way specific enough to computers that you can use it that way. And what about stories that are lies? Is the statement "US bombers destroy terror cell in Aleppo" a biased statement? What if it's debated whether a group is a terrorist organization in the first place, such as the Kurds? What if it didn't actually happen? Yes, because a terrorist is not an objective description (one mans freedom fighter is another mans terrorist and all that). "US bombers kill 13 in Aleppo bombing, US official claims terrorist connections, ISIS denies" is a much less biased statement. Then you will have a hell of a hard time assigning any label to anything. "US claims it is a nation, ISIS denies." You're assuming that with time, effort, money and continual improvement all the flaws would remain unfixed. I think otherwise. With enough time, enough examples and enough access to data a computer would be able to find verifiable primary evidence that makes it believe a given thing is true. As for the Armenian Genocide. It is factually true that the Turkish government denies the Armenian Genocide. Nothing wrong with a headline reading "Today a memorial service was held for the victims of the event the UN recognizes as the Armenian Genocide, a name Turkey disputes". 
Eventually perhaps, but it's not something within the ability of current AI to do. Unless you can define some form of logic that describes truth, news-worthiness, and lack of bias, computers can't make it happen. And that is well beyond what current computers can do, which basically amounts to clever solutions to well-defined but difficult problems.
You would have to qualify yourself so many times over if you kept that up. Would "the US government says" be accurate if a few people in the government disagree? At some point you need some very high-level judgment to know what you can actually omit.
|
On August 16 2016 06:20 Evotroid wrote:Show nested quote +On August 16 2016 06:17 RoomOfMush wrote:On August 16 2016 05:40 Evotroid wrote:On August 16 2016 05:35 RoomOfMush wrote:On August 16 2016 05:29 Evotroid wrote:On August 16 2016 05:11 RoomOfMush wrote:On August 16 2016 04:40 Gorsameth wrote:On August 16 2016 04:38 RoomOfMush wrote: Its pretty much impossible to do (good) news with artificial intelligence. For a computer this is a task which can only be managed with the help of statistics and heuristics. Both of which can easily be manipulated by anybody who knows the algorithms behind it. And how would it not be able to check that proper procedure was followed? Or discard those that do not provide adequate documentation of said procedure? Because thats how computers work. The computer can only do the things you teach it to do. If the human programmer does not know how to check pieces of news for proper procedure then its impossible for the AI to do it too. And the AI can not use gut feeling. You have to define a step-by-step guide of atomic actions to take to determine whether a piece of news is trustworthy, correct, relevant, etc to teach your news-AI how to work. I dont know how you could evaluate all news in such a way but if you know it then please explain it to me. (...) That is not actually how computers work. There is no inherent limitation of that kind, and in fact, there are ways to show a program the desired outcome, a bunch of problems, and let it figure out how to solve the problems even if the programmer himself does not know how to solve them. (though it is not necessarily true in this case). No. How do you come up with an idea like that? Computers are very very simple tools, they perform one action at a time and the actions are incredibly simple. They can not come up with anything and they can not expand themselfs. Every piece of code in software has to come from a programmer. 
The software can never do anything that was not put there by a programmer. That is the fundamental idea behind all computers of today. Of course there is theoretical stuff of computers that actually use true randomness to produce results which are not deterministic (at least to our current understanding of physics) but these are not useable in real life at this point in time. All computers we have a fully deterministic and work sequentially. I don't "come up" with stuff like that, unlike you. Being fully deterministic and working sequentially has nothing to do with the ability of self programming or the like. And to show that I am not talking out of my ass: Link to pdf from caltech, second on google after a paywalled article Did you actually read that article? I just did and I can tell you that it says exactly what I have been saying. The AI described in that paper uses an incredibly simple algorithm with a very well defined set of rules and deterministic behavior coupled with statistical analysis and heuristics to calculate good matches. Its very much not able to "think" or come up with anything outside of its expected behavior. Surely, if the expected behaviour is to solve the problem, I mean, I do not argue that it could rise against it's programmers, just that it can solve problems, without the programmer directly telling it how to solve them. But the paper does not say that. The paper says exactly what the algorithm does and the algorithm does exactly what the programmers told it to do.
|
On August 16 2016 06:06 Plansix wrote:Show nested quote +On August 16 2016 05:58 GoTuNk! wrote:On August 16 2016 03:52 Plansix wrote:On August 16 2016 03:43 Godwrath wrote:On August 16 2016 03:30 LegalLord wrote: I wouldn't mind a government news channel being made, and in general I think a direct government mouthpiece is a good thing. Most Americans would lose their shit if one was to be proposed though. Hmm, probably is my country bias talking here, but i don't think news channels tied to the goverment are a good idea. They will end up being as propaganda tool for the ruling party. To be perfectly honest, if the nation that sent people to the moon can’t create a publicly run news network that can survive more than one administration, we might as well just quit right now and go back to being ruled by the UK. The BBC is fine. But they also have 50 years of public trust built up behind them. Seriously, think about that. If we can’t trust our government to build an independent entity that sole purpose is to keep the public informed, why do we trust them with anything? We entrust them with the power of lethal force, but not the power to provide information to the public. I find your belief in "government" as some magnanimous omnipowerful entity mind blowing. In the total opposite with myself, as I believe it is an organized entity with the monopoly of force to simply serve itself and it's members, trough cohersion. Shows why I'm close to libertarian and you belong to the left, as I honestly believe government entities are all incompetent (relative to their private counter parts) or just flat out evil and thirsty of power. Just wanted to point that out. Well cynicism is the refuge of those afraid to put their faith in something because they could be let down. The libertarian is simply a cynic who only believes in themselves and claims to believe in others. And a Republican you block on Facebook because they keep commenting on your posts. 
Really, libertarians are the vegans of the political world. I don't mind that they exist, but holy fuck I don't care.
You really bleed blue more than anyone I've ever met
|
Okay, I am way out of my depth with this philosophic bs. I speak of determinism in the practical sense, e.g. a machine that in practice always gives the same answer to the same input, versus one that does not. And remember, we are speaking of AI in the practical world, not the soul in philosophy class. We develop an AI and want it to have indeterministic properties. After we have built the AI we test it to see how well it works, and find out it does not work, because it is deterministic. We get new hardware on warranty, or file a bug report; no one is gonna sit around with an open mouth babbling about "ohh but you can never know, maybe if we just test it for 5 more years!"
Please elaborate on what you mean; of course it did what the programmers told it to do, otherwise how would it be useful?
For example: "Roboticists, for example, may not know the best way to make a two-legged robot walk. In that case, they could design an algorithm that experiments with a number of different gaits. If a particular gait makes the robot fall down, the algorithm learns to not do that any more."
That is what I am talking about. Of course it does what the programmers told it to do: it tried out enough behavioural variations until it learned the best method to achieve the desired result (being able to walk), without the programmers needing to know how it should behave to get that result.
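The quoted gait experiment can be sketched as a simple trial-and-error loop. Everything here is invented for illustration: `simulate` stands in for a physics simulator, and a gait is reduced to a single number.

```python
import random

# Hypothetical stand-in for a physics simulator: scores a gait parameter
# by distance walked; 0.0 means the robot fell down.
def simulate(gait):
    if abs(gait - 7.0) > 2.0:
        return 0.0                      # robot fell down
    return 10.0 - abs(gait - 7.0)       # gaits near 7.0 walk farthest

def learn_gait(trials=1000, seed=0):
    """Try random gait variations, remember the best one that didn't fall."""
    rng = random.Random(seed)
    best_gait, best_score = None, -1.0
    for _ in range(trials):
        gait = rng.uniform(0.0, 10.0)   # try a behavioural variation
        score = simulate(gait)
        if score == 0.0:
            continue                    # the robot fell: learn not to do that
        if score > best_score:
            best_gait, best_score = gait, score
    return best_gait

print(learn_gait())  # lands near 7.0 without anyone coding the gait by hand
```

Nothing here escapes the programmers' instructions; the loop itself is the instruction, and the gait is what it discovers.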
|
On August 16 2016 06:22 oBlade wrote:Show nested quote +On August 16 2016 06:09 Gorsameth wrote:On August 16 2016 06:07 oBlade wrote:On August 16 2016 05:53 Mohdoo wrote:On August 16 2016 05:52 zlefin wrote: So, from what I read of trump's speech today; it seemed rather bland and unremarkable. not as crazy as his usual stuff, but nothing notably helpful either. The question no one seems able to answer is: What is something Trump can say that will help him? What can he do that pulls him out of the ditch? He seems totally trapped and doesn't appear to really have anything he can do. There are people who in no universe will ever conceive of voting for him, but that's not who he has to go after to win. Could someone who was going to vote for Trump but considered the Khan comments to be unfit of a president be convinced to switch back to Trump? I would find it hard to believe anything Trump says could appease those sort of people. And they are the people he needs to win back. Yes, time heals all wounds. The issue is we know the media isn't going to change in 3 months, so it falls to him to avoid these traps that then get printed in every newspaper and run and commented on every cable news station for days. Once the debates get going, the race takes a different tone. So your answer to the question "what can Trump do to win back voters" is 'wait until they forget all the terrible things he said'? Hardly a convincing strategy.
As for your belief that the debates will change the game: you might want to tell Trump that, since he hardly seems confident in them himself.
|
On August 16 2016 06:30 Evotroid wrote: Okay, I am way out of my depth with this philosophic bs. I speak of determinism in the practical sense. Eg.: a machine that in practice always gives the same answer, and one that does not to the same input. And remember, we are speaking AI in the practical world, not soul in philosophy class. We develop an AI, and want it to have indeterministic properties. After we have built the AI we test it to see how well it works, and find out it does not, because it is deterministic. We get new hardware on warranty, or file a bugreport, no one is gonna sit around with open mouth babbling about "ohh but you can never know, maybe if we just test it for 5 more years!"
What do you mean by programming it to be 'indeterministic'?
|
On August 16 2016 06:26 GGTeMpLaR wrote:Show nested quote +On August 16 2016 06:06 Plansix wrote:On August 16 2016 05:58 GoTuNk! wrote:On August 16 2016 03:52 Plansix wrote:On August 16 2016 03:43 Godwrath wrote:On August 16 2016 03:30 LegalLord wrote: I wouldn't mind a government news channel being made, and in general I think a direct government mouthpiece is a good thing. Most Americans would lose their shit if one was to be proposed though. Hmm, it's probably my country bias talking here, but I don't think news channels tied to the government are a good idea. They will end up being a propaganda tool for the ruling party. To be perfectly honest, if the nation that sent people to the moon can’t create a publicly run news network that can survive more than one administration, we might as well just quit right now and go back to being ruled by the UK. The BBC is fine. But they also have 50 years of public trust built up behind them. Seriously, think about that. If we can’t trust our government to build an independent entity whose sole purpose is to keep the public informed, why do we trust them with anything? We entrust them with the power of lethal force, but not the power to provide information to the public. I find your belief in "government" as some magnanimous, omnipotent entity mind-blowing. The total opposite of myself, as I believe it is an organized entity with the monopoly of force that simply serves itself and its members through coercion. Shows why I'm close to libertarian and you belong to the left, as I honestly believe government entities are all incompetent (relative to their private counterparts) or just flat-out evil and thirsty for power. Just wanted to point that out. Well, cynicism is the refuge of those afraid to put their faith in something because they could be let down. The libertarian is simply a cynic who only believes in themselves and claims to believe in others. 
And a Republican you block on Facebook because they keep commenting on your posts. Really, libertarians are the vegans of the political world. I don't mind that they exist, but holy fuck I don't care. You really bleed blue more than anyone I've ever met. In the world of Harvard yuppie liberals, I am considered very middle of the road. I understand the distrust of government. I just distrust public multinational media companies, which are required by law to profit in any way possible, way more than government.
|
On August 16 2016 06:30 Evotroid wrote: Okay, I am way out of my depth with this philosophic bs. I speak of determinism in the practical sense. Eg.: a machine that in practice always gives the same answer, and one that does not to the same input. And remember, we are speaking AI in the practical world, not soul in philosophy class. We develop an AI, and want it to have indeterministic properties. After we have built the AI we test it to see how well it works, and find out it does not, because it is deterministic. We get new hardware on warranty, or file a bugreport, no one is gonna sit around with open mouth babbling about "ohh but you can never know, maybe if we just test it for 5 more years!" Well then you use the word "determinism" incorrectly. Because that is not what determinism means. You mean "randomness" but that is not non-determinism. If you really want to continue this discussion you should first read up on the definitions of the words you are using.
|
On August 16 2016 06:34 RoomOfMush wrote:Show nested quote +On August 16 2016 06:30 Evotroid wrote: Okay, I am way out of my depth with this philosophic bs. I speak of determinism in the practical sense. Eg.: a machine that in practice always gives the same answer, and one that does not to the same input. And remember, we are speaking AI in the practical world, not soul in philosophy class. We develop an AI, and want it to have indeterministic properties. After we have built the AI we test it to see how well it works, and find out it does not, because it is deterministic. We get new hardware on warranty, or file a bugreport, no one is gonna sit around with open mouth babbling about "ohh but you can never know, maybe if we just test it for 5 more years!" Well then you use the word "determinism" incorrectly. Because that is not what determinism means. You mean "randomness" but that is not non-determinism. If you really want to continue this discussion you should first read up on the definitions of the words you are using.
Is wiki allowed in this discussion? en.wikipedia.org. Also, see the edit on my prev post.
|
Yeah, there is no such thing as determinism in the practical sense. It is a term rooted in philosophical discussions of free will.
Edit: That is not how you are using the word. That is a description of a program that produces the same results when provided with the same input. The reason they use "deterministic" is as a label. It's not even a very good label, since an algorithm, by nature, cannot believe it has free will.
Edit: Ok, I just read the reasons why an algorithm would stop being deterministic, and they can all be summed up with "Something fucked up."
|
On August 16 2016 06:30 Evotroid wrote: Okay, I am way out of my depth with this philosophic bs. I speak of determinism in the practical sense. Eg.: a machine that in practice always gives the same answer, and one that does not to the same input. And remember, we are speaking AI in the practical world, not soul in philosophy class. We develop an AI, and want it to have indeterministic properties. After we have built the AI we test it to see how well it works, and find out it does not, because it is deterministic. We get new hardware on warranty, or file a bugreport, no one is gonna sit around with open mouth babbling about "ohh but you can never know, maybe if we just test it for 5 more years!"
Please elaborate on what you mean, of course it did what the programmers told it to do, otherwise how would it be useful?
For example: "Roboticists, for example, may not know the best way to make a two-legged robot walk. In that case, they could design an algorithm that experiments with a number of different gaits. If a particular gait makes the robot fall down, the algorithm learns to not do that any more."
That is what I am talking about. Of course it does what the programmers told it to do: it tried out enough variations until it learned the best method to achieve the desired result (being able to walk), without the programmers needing to know how it should behave to get that result.
This is a learning algorithm (or, I guess, a neural network?), and it is still performing completely within expected results.
Only in this case, the expected results are not "do step A, B, C", but "Here is the failure state. Here is the success state. Here are the five thousand things you can do; try them randomly until you reach the success state."
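That last framing can be sketched directly. The action count, the success state, and the loop below are all hypothetical, invented only to make the "try things randomly until you reach the success state" idea concrete:

```python
import random

SUCCESS_STATE = 4242  # hypothetical "success state"

def random_search(rng, n_actions=5000):
    """Try the five thousand things at random until the success state is hit."""
    attempts = 0
    while True:
        attempts += 1
        action = rng.randrange(n_actions)  # pick one of the allowed actions
        if action == SUCCESS_STATE:        # success state reached: stop
            return action, attempts

action, attempts = random_search(random.Random(1))
print(action)  # always 4242; the number of attempts varies with the seed
```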
|
On August 16 2016 06:37 Evotroid wrote:Show nested quote +On August 16 2016 06:34 RoomOfMush wrote:On August 16 2016 06:30 Evotroid wrote: Okay, I am way out of my depth with this philosophic bs. I speak of determinism in the practical sense. Eg.: a machine that in practice always gives the same answer, and one that does not to the same input. And remember, we are speaking AI in the practical world, not soul in philosophy class. We develop an AI, and want it to have indeterministic properties. After we have built the AI we test it to see how well it works, and find out it does not, because it is deterministic. We get new hardware on warranty, or file a bugreport, no one is gonna sit around with open mouth babbling about "ohh but you can never know, maybe if we just test it for 5 more years!" Well then you use the word "determinism" incorrectly. Because that is not what determinism means. You mean "randomness" but that is not non-determinism. If you really want to continue this discussion you should first read up on the definitions of the words you are using. Is wiki allowed in this discussion? en.wikipedia.orgAlso, see edit on my prev post. It seems wikipedia has it right:
"a deterministic algorithm is an algorithm which, given a particular input, will always produce the same output." Please show me a computer program (in code) which is not deterministic. If you can do that, you will probably get a million $$$ for that.
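The quoted definition can be checked mechanically: run the same program on the same input several times and compare the outputs. The `model` function below is a placeholder, not anyone's actual AI:

```python
# Placeholder for the program under test; any pure function will do.
def model(x):
    return x * 2 + 1

def is_deterministic(fn, test_input, trials=10):
    """True if fn produces the same output every time for the same input."""
    first = fn(test_input)
    return all(fn(test_input) == first for _ in range(trials - 1))

print(is_deterministic(model, 21))  # True: a pure function is deterministic
```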
|
Yeah, let's just limit definitions until you are right. Sorry, but the AI this discussion originated from could easily use a hardware RNG. And if you consider this part of the AI, it is certainly not deterministic anymore.
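For what it's worth, a program that reads the operating system's entropy pool (which can be fed by hardware noise) is non-deterministic in the practical sense used above: identical input, different output on every run. Whether you count the entropy source as part of the program or as hidden extra input is exactly the definitional disagreement here.

```python
import os

def roll():
    # os.urandom draws from the OS entropy pool, which may be seeded by
    # hardware noise; repeated calls return different bytes.
    return os.urandom(8).hex()

a, b = roll(), roll()
print(a == b)  # virtually always False: same "input", different output
```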
|