|
Read the rules in the OP before posting, please. In order to ensure that this thread continues to meet TL standards and follows the proper guidelines, we will be enforcing the rules in the OP more strictly. Be sure to give them a re-read to refresh your memory! The vast majority of you are contributing in a healthy way, keep it up! NOTE: When providing a source, explain why you feel it is relevant and what purpose it adds to the discussion if it's not obvious. Also take note that unsubstantiated tweets/posts meant only to rekindle old arguments can result in a mod action.
|
United States42419 Posts
On August 21 2017 02:37 Falling wrote: Well alright, so the 'real' communists are passive deterministic observers - which still means anyone who is actively trying to cause communism is doing something fundamentally dangerous to society.
As for the second - as far as I'm aware they started with the abolition of property and ended with the state-run farm. That speaks more to the untenability of abolishing property rights than any lack of true belief. (But maybe I'm wrong on that front.) I mean pretty much anyone who says "things would be better if I was in charge and could forcibly remake the world to conform to my vision of how it should be" is pretty fucking dangerous. Revolutionary communists are definitely in that camp. I think the same arguments against Nazis can be leveled against revolutionary communists. Ultimately in both cases their participation in democratic institutions is only to subvert them. While there is an argument to be made for absolute rights to participation, I think there is also an argument to be made that if a party is attempting to gain power and influence within a democratic society by taking advantage of the rights afforded by that society in order to overthrow that society, their participation should be limited.
Nazis don't believe in democratic society; they'll happily take advantage of the rights it gives them as a way to gain momentum, but their end goal is revolutionary: they intend to overthrow it and establish a totalitarian regime that can purge non-whites. Same for Bolsheviks, obviously: they'll participate in the Duma, but only inasmuch as it'll help them abolish the Duma and replace it. There's certainly an argument to make against affording those movements the same democratic privileges as others. My main problem with that is the potential for abuse. As I said a few days ago, I'm not sure that there's any restriction I could come up with that wouldn't have been illegally and incorrectly used as a weapon against the civil rights movement.
|
Canada11340 Posts
Oh hey! We agree on something
|
Is hiring the biggest marketing bureau to subliminally indoctrinate everyone into your ideology the way to do it, then?
It's the only way man, I'm tellin ya!
Government won't ever do anything other than maintain the status quo (swaying a bit), and the general public is unaware/uncaring/incapable anyway.
On August 21 2017 02:42 KwarK wrote: Nazis don't believe in democratic society; they'll happily take advantage of the rights it gives them as a way to gain momentum, but their end goal is revolutionary: they intend to overthrow it and establish a totalitarian regime that can purge non-whites. Same for Bolsheviks, obviously: they'll participate in the Duma, but only inasmuch as it'll help them abolish the Duma and replace it. There's certainly an argument to make against affording those movements the same democratic privileges as others. My main problem with that is the potential for abuse. As I said a few days ago, I'm not sure that there's any restriction I could come up with that wouldn't have been illegally and incorrectly used as a weapon against the civil rights movement. In Belgium we have a far-right party called "The Flemish Interest" (loosely translated). The other parties have an accord in which they rule out ever forming a coalition with that party because of its dangerous and inflammatory ideas. While the party has changed its policies in the meantime, it's still pretty much a certainty that the others won't ever work with it. https://en.wikipedia.org/wiki/Cordon_sanitaire#In_politics
|
President Donald Trump’s job approval ratings are sagging in three crucial states that helped him secure the presidency last year, according to NBC News/Marist polls released Sunday.
Trump’s approval is below 40 percent in Michigan (36 percent), Pennsylvania (35 percent) and Wisconsin (34 percent), according to the surveys, conducted in the four days after a violent white supremacist march last weekend in Charlottesville, Virginia. That time span includes Trump’s alternating responses to the episode, which drew widespread criticism from elected officials in Washington.
In all three states, Trump's disapproval ratings exceed 50 percent. The three states provided Trump 46 electoral votes in the 2016 election, arguably his margin of victory.
Voters in the three states — by margins ranging from 8 to 13 percentage points — told pollsters they would prefer that Democrats control Congress after the 2018 elections. Nearly two-thirds of those polled in each state say they’ve been embarrassed by Trump’s performance as president. Though they rate him about even on his handling of the economy, more than half say he’s weakened the country’s standing in the world.
In Michigan, about the same number of voters say they have a favorable view of Trump (34 percent) as they do about Kid Rock, a prospective Republican Senate candidate (36 percent) in the 2018 midterms.
Each of the three polls relied on a sample of about 800 registered voters, and each carries a margin of error of plus or minus 3.5 percentage points.
Source
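A side note on that last figure: the plus-or-minus 3.5 points quoted for a sample of about 800 matches the textbook worst-case margin of error for a proportion at 95% confidence. A minimal sketch of that calculation, assuming simple random sampling (real polls like NBC/Marist also apply weighting and design effects not modeled here):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case 95% margin of error for a sampled proportion.

    p = 0.5 maximizes p * (1 - p), which is why pollsters can report a
    single 'plus or minus' figure for the whole survey.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(800) * 100:.1f} points")  # -> 3.5 points
```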
|
United States24641 Posts
What would it take for his approval rating to go below, I don't know, ten percent? I feel like if he walked around Washington stabbing random people nonstop for a month, killing thousands of innocents in the process, he would take a 1-2% dip in job approval ratings. Either it's not an effective measuring tool for anything useful or there are no actual standards for doing a non-terrible job in Washington.
|
United Kingdom13775 Posts
On August 21 2017 03:17 micronesia wrote: What would it take for his approval rating to go below, I don't know, ten percent? I feel like if he walked around Washington stabbing random people nonstop for a month, killing thousands of innocents in the process, he would take a 1-2% dip in job approval ratings. Either it's not an effective measuring tool for anything useful or there are no actual standards for doing a non-terrible job in Washington. If he presided over a complete and utter collapse of the US economy, on top of everything that has happened so far, I think that would do it.
|
Some of the world’s leading robotics and artificial intelligence pioneers are calling on the United Nations to ban the development and use of killer robots.
Tesla’s Elon Musk and Google’s Mustafa Suleyman are leading a group of 116 specialists from across 26 countries who are calling for the ban on autonomous weapons.
The UN recently voted to begin formal discussions on such weapons which include drones, tanks and automated machine guns. Ahead of this, the group of founders of AI and robotics companies have sent an open letter to the UN calling for it to prevent the arms race that is currently under way for killer robots.
In their letter, the founders warn the review conference of the convention on conventional weapons that this arms race threatens to usher in the “third revolution in warfare” after gunpowder and nuclear arms.
The founders wrote: “Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.
“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
Experts have previously warned that AI technology has reached a point where the deployment of autonomous weapons is feasible within years, rather than decades. While AI can be used to make the battlefield a safer place for military personnel, experts fear that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.
The letter, launching at the opening of the International Joint Conference on Artificial Intelligence (IJCAI) in Melbourne on Monday, has the backing of high-profile figures in the robotics field and strongly stresses the need for urgent action, after the UN was forced to delay a meeting that was due to start Monday to review the issue.
The founders call for “morally wrong” lethal autonomous weapons systems to be added to the list of weapons banned under the UN’s convention on certain conventional weapons (CCW) brought into force in 1983, which includes chemical and intentionally blinding laser weapons.
Toby Walsh, Scientia professor of artificial intelligence at the University of New South Wales in Sydney, said: “Nearly every technology can be used for good and bad, and artificial intelligence is no different. It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis.
“However, the same technology can also be used in autonomous weapons to industrialise war. We need to make decisions today choosing which of these futures we want.”
Musk, one of the signatories of the open letter, has repeatedly warned of the need for proactive regulation of AI, calling it humanity's biggest existential threat, but while AI's destructive potential is considered by some to be vast, it is also thought to be distant.
Ryan Gariepy, the founder of Clearpath Robotics, said: "Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability."
This is not the first time the IJCAI, one of the world’s leading AI conferences, has been used as a platform to discuss lethal autonomous weapons systems. Two years ago the conference was used to launch an open letter signed by thousands of AI and robotics researchers including Musk and Stephen Hawking similarly calling for a ban, which helped push the UN into formal talks on the technologies.
The UK government opposed such a ban on lethal autonomous weapons in 2015, with the Foreign Office stating that “international humanitarian law already provides sufficient regulation for this area”. It said that the UK was not developing lethal autonomous weapons and that all weapons employed by UK armed forces would be “under human oversight and control”.
While the suggestion of killer robots conjures images from science fiction such as the Terminator's T-800 or Robocop's ED-209, lethal autonomous weapons are already in use. Samsung's SGR-A1 sentry gun, which is reportedly technically capable of firing autonomously, though it is disputed whether it is deployed as such, is in use on the South Korean side of the 2.5-mile-wide Korean Demilitarized Zone.
The fixed-place sentry gun, developed on behalf of the South Korean government, was the first of its kind with an autonomous system capable of performing surveillance, voice recognition, tracking and firing with a mounted machine gun or grenade launcher. But it is not the only autonomous weapon system in development, with prototypes available for land, air and sea combat.
The UK’s Taranis drone, in development by BAE Systems, is intended to be capable of carrying air-to-air and air-to-ground ordnance intercontinentally and incorporating full autonomy. The unmanned combat aerial vehicle, about the size of a BAE Hawk, the plane used by the Red Arrows, saw its first test flight in 2013 and is expected to be operational sometime after 2030 as part of the Royal Air Force’s Future Offensive Air System, destined to replace the human-piloted Tornado GR4 warplanes.
Russia, the US and other countries are currently developing robotic tanks that can either be remote controlled or operate autonomously. These projects range from autonomous versions of the Russian Uran-9 unmanned combat ground vehicle, to conventional tanks retrofitted with autonomous systems.
The US's autonomous warship, the Sea Hunter, built by Vigor Industrial, was launched in 2016 and, while still in development, is intended to have offensive capabilities including anti-submarine ordnance. Under the surface, Boeing's autonomous submarine systems, built on the Echo Voyager platform, are also being considered for long-range deep-sea military use.
Source
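The "human oversight and control" requirement the article mentions comes down to where a human sits in the decision loop: the disputed point about systems like the SGR-A1 is whether the loop contains a mandatory human gate. A deliberately abstract sketch of that single gate, with every name hypothetical and no relation to any real system:

```python
from enum import Enum

class Mode(Enum):
    SUPERVISED = "supervised"  # a human must authorize every action
    AUTONOMOUS = "autonomous"  # the system acts on its own decision

def engagement_authorized(track: dict, mode: Mode, operator_approves: bool) -> bool:
    """Return True only if action against `track` is authorized.

    `track` is a hypothetical detection record; `operator_approves`
    stands in for a human decision arriving out of band. In SUPERVISED
    mode the system can only ever recommend.
    """
    recommended = track.get("classified_hostile", False)
    if mode is Mode.SUPERVISED:
        return recommended and operator_approves  # the human is the gate
    return recommended  # the contested case: no human in the loop
```

Much of the CCW debate described above is, in effect, about whether that second branch should ever be built at all.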
|
On August 21 2017 03:17 micronesia wrote: What would it take for his approval rating to go below, I don't know, ten percent? I feel like if he walked around Washington stabbing random people nonstop for a month, killing thousands of innocents in the process, he would take a 1-2% dip in job approval ratings. Either it's not an effective measuring tool for anything useful or there are no actual standards for doing a non-terrible job in Washington. there are very few actual standards. humans' cognitive biases largely prevent them from accepting that the people on their side, whom they support, are bad, especially in terms of approval ratings. or if necessary, they assert the "other" side is worse, again in order to maintain their viewpoint; or assert that their guy is bad as a person, but acceptable/good enough because they're accomplishing some desired goal in the job. even when nixon resigned in disgrace he had quite a few supporters and was polling around 25% approval. (iirc he had >50% amongst republicans)
it has substantial issues as a measuring tool, but it isn't entirely worthless if you know how to read and interpret the numbers. it also of course depends on what you're trying to measure.
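The Nixon figures illustrate a general point: overall approval is a weighted average of subgroup approval rates, so a president can poll terribly overall while keeping a majority of his own party. A toy sketch; the shares and rates below are made-up illustrative numbers, not actual 1974 polling:

```python
# Overall approval as a weighted average over party-ID subgroups.
# All shares and rates are hypothetical, chosen only to mimic the
# "~25% overall, >50% among republicans" pattern described above.
subgroups = {
    # name: (share of electorate, approval rate within the group)
    "republicans":  (0.30, 0.55),
    "democrats":    (0.45, 0.08),
    "independents": (0.25, 0.20),
}

overall = sum(share * rate for share, rate in subgroups.values())
print(f"overall approval: {overall:.0%}")  # -> 25%
```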
do you want any more info on it?
|
I'm relatively sure that terrorists don't have the resources (knowledge, materials and funds) to make an actual killer robot, or we'd have seen one by now. It is a smart appeal though; it'd be horrific to see a robot dog kill off the citizens/soldiers/rebels of the country the US decides to interfere with next. However, on the other side of the coin, we're getting closer to a Robot Wars scenario where countries could duke it out in an impartially decided arena. I would definitely watch nations throw away billions of dollars to gain some sort of supremacy over one another.
|
Given the political difficulties created by drone strikes, I agree that developing further autonomous weapons is a risky idea. There are endless ethical problems, and those weapons will be deployed in mostly one-sided conflicts. Short-range remote-control weapons have a place in our modern military, but I never want to get to an era where the US is deploying AI-driven tanks overseas to fight terrorism.
Also, the arms race to create weapons that disrupt control of these AI tanks might mean they're only useful against poor nations that cannot afford the R&D for such countermeasures.
|
On August 21 2017 03:26 Plansix wrote: Given the political difficulties created by drone strikes, I agree that developing further autonomous weapons is a risky idea. There are endless ethical problems, and those weapons will be deployed in mostly one-sided conflicts. Short-range remote-control weapons have a place in our modern military, but I never want to get to an era where the US is deploying AI-driven tanks overseas to fight terrorism. Imagine the uproar when one accidentally blows up a school in session, or a hospital. Somehow I believe the people responsible will still get away with just an apology.
|
On August 21 2017 03:28 Uldridge wrote: On August 21 2017 03:26 Plansix wrote: Given the political difficulties created by drone strikes, I agree that developing further autonomous weapons is a risky idea. There are endless ethical problems, and those weapons will be deployed in mostly one-sided conflicts. Short-range remote-control weapons have a place in our modern military, but I never want to get to an era where the US is deploying AI-driven tanks overseas to fight terrorism. Imagine the uproar when one accidentally blows up a school in session, or a hospital. Somehow I believe the people responsible will still get away with just an apology. The optics of the whole thing are terrible. The richest country in the world builds AI tanks to ensure its citizens are not at risk, while using them to attack poor nations that harbor terrorists. Just look how much shit we get for drone strikes now. Now we build Skynet lite, but deploy it to the Middle East only.
|
On August 21 2017 03:24 Uldridge wrote: I'm relatively sure that terrorists don't have the resources (knowledge, materials and funds) to make an actual killer robot, or we'd have seen one by now. It is a smart appeal though; it'd be horrific to see a robot dog kill off the citizens/soldiers/rebels of the country the US decides to interfere with next. However, on the other side of the coin, we're getting closer to a Robot Wars scenario where countries could duke it out in an impartially decided arena. I would definitely watch nations throw away billions of dollars to gain some sort of supremacy over one another. Uranium and nuclear weapons are hard to get. Electronics are cheap and software is free to transfer. It isn't a stretch to imagine an enemy/rogue state or a criminal organization in a failed state developing autonomous weapons and feeding them to terrorist organizations to mess with Israel or the U.S.
|
What could go wrong!?
On August 21 2017 03:32 warding wrote: Uranium and nuclear weapons are hard to get. Electronics are cheap and software is free to transfer. It isn't a stretch to imagine an enemy/rogue state or a criminal organization in a failed state developing autonomous weapons and feeding them to terrorist organizations to mess with Israel or the U.S. I doubt you understand what goes into making an AI. You don't just implement AI by pressing install, fusing some electronics together and putting it in a husk. You need mechanical, software and electrical engineers to fix that stuff. It's pretty difficult to make capable autonomous killing machines de novo.
|
On August 21 2017 03:32 Uldridge wrote: What could go wrong!? Suicide bombers are bad because you can only use them once. What if we developed software and a sensor package that could remove the human from the equation? And then we started a bidding war to see who could make it the cheapest. How could this go wrong?
On August 21 2017 03:32 Uldridge wrote: What could go wrong!? On August 21 2017 03:32 warding wrote: Uranium and nuclear weapons are hard to get. Electronics are cheap and software is free to transfer. It isn't a stretch to imagine an enemy/rogue state or a criminal organization in a failed state developing autonomous weapons and feeding them to terrorist organizations to mess with Israel or the U.S. I doubt you understand what goes into making an AI. You don't just implement AI by pressing install, fusing some electronics together and putting it in a husk. You need mechanical, software and electrical engineers to fix that stuff. It's pretty difficult to make capable autonomous killing machines de novo.
My grandfather used to run a company that did military R&D for optics (sights for guns on boats, or periscopes). They made those things to be used by people with a 6th grade reading level. If this is developed for the military, there is a good chance it will be brain-dead easy to use. And they will make it to be mass-produced in some way.
|
Yeah exactly, what could go wrong?! We still want those resources, right? Isn't a faceless purge once in a while good for everyone? Ok I'll stop being so moronically cynical now.
I'm not arguing about a preset AI killing machine; I was responding to actually making a killer robot from scratch. Of course, one that's hand-delivered (with an instruction manual attached) will be pretty easy to utilize.
|
On August 21 2017 03:36 Plansix wrote: Suicide bombers are bad because you can only use them once. What if we developed software and a sensor package that could remove the human from the equation? And then we started a bidding war to see who could make it the cheapest. How could this go wrong? On August 21 2017 03:32 Uldridge wrote: What could go wrong!? On August 21 2017 03:32 warding wrote: Uranium and nuclear weapons are hard to get. Electronics are cheap and software is free to transfer. It isn't a stretch to imagine an enemy/rogue state or a criminal organization in a failed state developing autonomous weapons and feeding them to terrorist organizations to mess with Israel or the U.S. I doubt you understand what goes into making an AI. You don't just implement AI by pressing install, fusing some electronics together and putting it in a husk. You need mechanical, software and electrical engineers to fix that stuff. It's pretty difficult to make capable autonomous killing machines de novo. My grandfather used to run a company that did military R&D for optics (sights for guns on boats, or periscopes). They made those things to be used by people with a 6th grade reading level. If this is developed for the military, there is a good chance it will be brain-dead easy to use. And they will make it to be mass-produced in some way. Couldn't you already just strap a triggered bomb onto a drone from someplace like Best Buy and pilot it into position?
|
United Kingdom13775 Posts
On August 21 2017 03:32 Uldridge wrote: What could go wrong!? On August 21 2017 03:32 warding wrote: Uranium and nuclear weapons are hard to get. Electronics are cheap and software is free to transfer. It isn't a stretch to imagine an enemy/rogue state or a criminal organization in a failed state developing autonomous weapons and feeding them to terrorist organizations to mess with Israel or the U.S. I doubt you understand what goes into making an AI. You don't just implement AI by pressing install, fusing some electronics together and putting it in a husk. You need mechanical, software and electrical engineers to fix that stuff. It's pretty difficult to make capable autonomous killing machines de novo. The software is really the most important part of an AI system. The hardware is fairly straightforward: a few standard robotic parts put together in a generally, often universally, useful configuration. How smart the system is comes down to the software.
AI labs tend to have some standardized robots that they can use for a wide range of purposes, for example. The difference is always what software they load into them.
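That "same robot, different software" setup is essentially the strategy pattern: a fixed hardware interface with a swappable decision policy. A minimal sketch, all names hypothetical:

```python
from typing import Callable

# A standardized lab robot: fixed hardware interface, with behavior
# supplied entirely by whatever policy ("software") is loaded into it.
class RobotPlatform:
    def __init__(self, policy: Callable[[dict], str]):
        self.policy = policy  # swap this to repurpose the robot

    def step(self, sensors: dict) -> str:
        return self.policy(sensors)

# Two very different "brains" running on identical hardware.
def vacuum_policy(sensors: dict) -> str:
    return "clean" if sensors.get("dirt", 0) > 0 else "wander"

def courier_policy(sensors: dict) -> str:
    return "hand_over" if sensors.get("at_destination") else "navigate"

print(RobotPlatform(vacuum_policy).step({"dirt": 3}))                # clean
print(RobotPlatform(courier_policy).step({"at_destination": True}))  # hand_over
```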
|
Norway28621 Posts
On August 21 2017 03:32 Uldridge wrote: What could go wrong!? On August 21 2017 03:32 warding wrote: Uranium and nuclear weapons are hard to get. Electronics are cheap and software is free to transfer. It isn't a stretch to imagine an enemy/rogue state or a criminal organization in a failed state developing autonomous weapons and feeding them to terrorist organizations to mess with Israel or the U.S. I doubt you understand what goes into making an AI. You don't just implement AI by pressing install, fusing some electronics together and putting it in a husk. You need mechanical, software and electrical engineers to fix that stuff. It's pretty difficult to make capable autonomous killing machines de novo.
Being the first to develop a killer robot does indeed sound like an amazingly difficult task. But isn't the thing with electronics and programming that it's kinda easy to replicate? I mean, missile technology does seem like a pretty damn well-guarded secret, but missiles also blow up, so you can't reverse-engineer one.
|
On August 21 2017 03:42 Gahlo wrote: On August 21 2017 03:36 Plansix wrote: Suicide bombers are bad because you can only use them once. What if we developed software and a sensor package that could remove the human from the equation? And then we started a bidding war to see who could make it the cheapest. How could this go wrong? On August 21 2017 03:32 Uldridge wrote: What could go wrong!? On August 21 2017 03:32 warding wrote: Uranium and nuclear weapons are hard to get. Electronics are cheap and software is free to transfer. It isn't a stretch to imagine an enemy/rogue state or a criminal organization in a failed state developing autonomous weapons and feeding them to terrorist organizations to mess with Israel or the U.S. I doubt you understand what goes into making an AI. You don't just implement AI by pressing install, fusing some electronics together and putting it in a husk. You need mechanical, software and electrical engineers to fix that stuff. It's pretty difficult to make capable autonomous killing machines de novo. My grandfather used to run a company that did military R&D for optics (sights for guns on boats, or periscopes). They made those things to be used by people with a 6th grade reading level. If this is developed for the military, there is a good chance it will be brain-dead easy to use. And they will make it to be mass-produced in some way. Couldn't you already just strap a triggered bomb onto a drone from someplace like Best Buy and pilot it into position? It is only a matter of time before someone tries that. I am sort of surprised it hasn't happened yet.
|