‘Summoning the devil’: Elon Musk warns against artificial intelligence
Musk, who was speaking at the Massachusetts Institute of Technology (MIT) Aeronautics and Astronautics department’s Centennial Symposium, said that in developing artificial intelligence (AI) “we are summoning the demon.”
Fiction, in films such as The Terminator and The Matrix, has for many years dramatized the perils of AI, in which technology comes to dominate and manipulate the human minds that created it.
“In all those stories where there’s a guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out,” he said.
Asked whether AI was anywhere close to being a reality, Musk replied that he thought we were already at the stage where there should be some regulatory oversight.
“I’m increasingly inclined to think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish,” he said.
The technology magnate, inventor and investor, who is CEO of Tesla, SolarCity and SpaceX, warned in August that AI could be more dangerous than nuclear weapons.
Musk is no stranger to the power of technology. When he launched SpaceX in 2002, some doubted his ability to make it a success; ten years on, it became the first private company to launch a vehicle into space and bring it back to Earth, and it now has a major contract with NASA.
But Musk does not appear to believe that a single mission to Mars will change the future of humanity.
“It’s cool to send one mission to Mars, but that’s not what will change the future for humanity. What matters is being able to establish a self-sustaining civilization on Mars, and I don’t see anything being done but SpaceX. I don’t see anyone else even trying,” he said.
But Musk himself has invested in companies developing AI, he says, “to keep an eye on them.”
“I wanted to see how artificial intelligence was developing. Are companies taking the right safety precautions?” he told CNN.
Musk is not the only one worried about AI. A group of scholars from Oxford University wrote in a blog post last year that “when a machine is ‘wrong’, it can be wrong in a far more dramatic way, with more unpredictable outcomes, than a human could. Simple algorithms should be extremely predictable, but can make bizarre decisions in ‘unusual’ circumstances.”
Dr. Stuart Armstrong, from the Future of Humanity Institute at Oxford University, also warned that AI may have other damaging implications such as uncontrolled mass surveillance and mass unemployment as machines and computers replace humans.
To a certain extent the AI train has already left the station: it is already at work in financial trading, as depicted in Robert Harris’s novel The Fear Index, and in video gaming. Darktrace, for example, is an AI program that uses advanced mathematics to manage the risk of cyber-attacks by detecting abnormal behavior within organizations.