24 Feb, 2020 22:46

At least they'll have an off switch: Pentagon adopts ‘AI Ethical Principles’ for its killer robots

As the United States looks to develop artificial intelligence weapons to keep up with Russia and China, the Pentagon has adopted a set of guidelines that it says will keep its killer androids under human control.

The Department of Defense adopted a set of “Ethical Principles for Artificial Intelligence” on Monday, following recommendations the Defense Innovation Board made last October.

"AI technology will change much about the battlefield of the future, but nothing will change America's steadfast commitment to responsible and lawful behavior,” Defense Secretary Mark Esper said in a statement.

According to the Pentagon, its future AI projects will be “Responsible,” “Equitable,” “Traceable,” “Reliable,” and “Governable,” as the Defense Innovation Board recommended. Some of these five principles are straightforward, once you decipher the Pentagon-speak: “Governable,” for example, means that humans must be able to flip the off switch on “systems that demonstrate unintended behavior.” But others are more ambiguous.

What exactly the department will do to “minimize unintended bias in AI capabilities,” its stated route to keeping these systems “equitable,” remains vague, and could cause problems down the line if left undefined. Trusting a machine to scan aerial imagery in search of targets is a legal and ethical minefield; Google pulled out of a Pentagon project in 2018 that would have used machine learning to improve the targeting of drone strikes.

Similarly, the Pentagon’s promise that its staff will “exercise appropriate levels of judgment and care” when developing and fielding these new weapons is a lofty but ultimately meaningless pledge.

The adoption of a loose set of ethical principles instead of an outright ban will leave some campaigners unsatisfied. Many leading pioneers of AI – such as Demis Hassabis at Google DeepMind and Elon Musk at SpaceX – are among more than 2,400 signatories to a pledge that outright opposes the development of autonomous weapons, and numerous other open letters and petitions against military AI have circulated worldwide in recent years.

Resistance from the tech industry presents the Pentagon with a practical dilemma, as well as an ethical one. Despite pumping increasing sums of money into developing AI systems, the US believes Russia and China are ahead and will extend their lead in this domain if the Defense Department can’t recruit the talent needed to compete.

To counter the brain drain, the Trump administration’s proposed $4.8 trillion budget for 2021 would hike the Defense Advanced Research Projects Agency’s (DARPA) funding for AI-related research from $50 million to $249 million, and boost the National Science Foundation’s funding from $500 million to $850 million, with $50 million of that set aside specifically for AI.

Whatever devices DARPA comes up with, if this set of guidelines is followed, at least they’ll have an ‘off’ switch.
