Secret Pentagon program will use AI to predict and detect enemy missile launches
While the sensitive nature of the research means it is still shrouded in secrecy, multiple Department of Defense sources told Reuters that several programs are currently underway, all aimed at using artificial intelligence to anticipate and warn of enemy missile launches.
Computer systems would scour vast amounts of data, such as drone footage and satellite imagery, far faster and more accurately than humans could. In one pilot program focused on North Korea, AI is used to locate and track mobile missiles that can be hidden in tunnels, forests, and caves. The system then assesses whether the activity constitutes an immediate threat and alerts commanders.
Once signs of a missile launch are detected, the US government would then have time to either pursue diplomatic options or move in and destroy the missiles, ideally before they even leave the ground.
The Trump administration has proposed tripling funding for one AI-driven missile program next year to $83 million. While $83 million may seem like a modest sum, it funds just one of many hush-hush programs, and represents Washington’s growing interest in military AI technology.
However, not everyone is as gung-ho about developing military AI. Earlier this week, Google canceled a controversial AI contract with the Pentagon after receiving backlash from its employees. In a letter to management, 3,000 Google staff said that the company “should not be in the business of war,” adding that working with the military goes against the tech giant’s “Don’t be evil” ethos.
Under the contract, Google and the Department of Defense worked together on ‘Project Maven,’ an AI program intended to improve the targeting of drone strikes. The program would use machine learning techniques to analyze video footage from drones, track objects on the ground, and study their movement. Anti-drone campaigners and human rights activists argue that Maven would pave the way for AIs to select targets on their own, removing humans from the ‘kill chain’ entirely.
There are other risks too. Developing AI technology could provoke an arms race of sorts with Russia or China. The technology is also still in its infancy, and could make mistakes. US Air Force General John Hyten, the top commander of US nuclear forces, said that once such systems are operational, human safeguards will still be needed to control the ‘escalation ladder’ – the process through which a nuclear missile is launched.
“[Artificial intelligence] could force you onto that ladder if you don’t put the safeguards in,” Hyten said in an interview. “Once you’re on it, then everything starts moving.”
The dangers inherent in allowing AI to make life-or-death decisions were highlighted by an MIT study that found an AI neural network could be easily fooled into thinking a plastic turtle was actually a rifle. Hackers could theoretically exploit this vulnerability, and force an AI-driven missile system to attack the wrong target.
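The kind of fooling described above is known as an adversarial example: a small, deliberate perturbation to an input that flips a model's prediction. As a minimal sketch of the mechanism only, the toy below applies the fast gradient sign method to a made-up linear classifier with hypothetical weights; the actual MIT study attacked a deep image network with a physical 3D-printed object, which is far more involved than this illustration.

```python
import numpy as np

# Hypothetical toy linear classifier: score = w . x, class 1 if score > 0.
# (Illustrative only; not the network or attack from the MIT study.)
w = np.array([1.0, -2.0, 0.5])

def predict(x):
    return int(w @ x > 0)

# A "clean" input the model correctly labels as class 1.
x = np.array([2.0, 0.5, 1.0])

# Fast gradient sign method: nudge every feature by eps in the
# direction that most decreases the class-1 score. For a linear
# model, that direction is simply -sign(w).
eps = 0.5
x_adv = x - eps * np.sign(w)

# The perturbed input stays close to the original (each feature moved
# by at most 0.5) yet the predicted class flips from 1 to 0.
print(predict(x), predict(x_adv))
```

The same principle scales up: in a deep network the gradient is computed by backpropagation rather than read off the weights, but a bounded per-pixel perturbation can still flip the label.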
Regardless of the potential human cost of error, the Pentagon is pressing ahead with its research. Some officials interviewed by Reuters believe that elements of the AI missile program could become operational by the early 2020s.
Others believe that the government is not investing enough.
“The Russians and the Chinese are definitely pursuing these sorts of things,” Rep. Mac Thornberry (R-Texas), the House Armed Services Committee’s chairman, told Reuters. “Probably with greater effort in some ways than we have.”