Henry Kissinger pens ominous warning on dangers of artificial intelligence
What if machines learn to communicate with each other? What if they begin to establish their own objectives? What if they become so intelligent that they make decisions beyond the capacity of the human mind?
Those are some of the questions the 95-year-old Kissinger poses in a piece published by The Atlantic under the apocalyptic headline ‘How the Enlightenment Ends.’
Kissinger’s interest in artificial intelligence began when he learned about a computer program that had become an expert at Go — a board game with vastly more possible positions than chess. The machine mastered the game by training itself through practice: it learned from its mistakes and refined its algorithms as it went along, becoming the embodiment of ‘practice makes perfect.’
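That trial-and-error loop can be illustrated with a toy sketch. The code below is not the Go program's actual method (which pairs deep neural networks with tree search); it is a minimal tabular Q-learning agent, a classic reinforcement-learning technique, that masters a tiny corridor game purely by playing it repeatedly and adjusting its estimates after each outcome — all names and parameters here are illustrative assumptions, not anything from Kissinger's piece.

```python
import random

# Toy "practice makes perfect": a Q-learning agent on a 5-position corridor.
# The agent starts at position 0 and wins by reaching position 4; it knows
# only the rules (the step function) and improves solely through play.

N_STATES = 5          # positions 0..4; reaching 4 ends the game with a win
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))  # walls clamp movement
    reward = 1.0 if nxt == N_STATES - 1 else 0.0     # win only at the goal
    return nxt, reward

random.seed(0)
for _ in range(500):                                 # many practice games
    s = 0
    while s != N_STATES - 1:
        # mostly exploit what it has learned, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # learn from the outcome: nudge the estimate toward what happened
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# after practice, the greedy policy steps right at every position
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

After 500 practice games the learned policy is `[1, 1, 1, 1]` — step right everywhere — found with no instruction beyond the rules, which is the pattern of self-improvement through play that caught Kissinger's attention, albeit at a microscopic scale.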
Into the unknown
We are, Kissinger warns, in the midst of a “sweeping technical revolution whose consequences we have failed to fully reckon with and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.”
Kissinger uses the example of a self-driving car. Driving a car requires judgments in circumstances that are impossible to predict. What would happen, he asks, if the car had to choose between killing a grandparent and killing a child? Which would it choose, and why?
Artificial intelligence goes “far beyond” the kind of automation we are used to, he says, because AI has the ability to “establish its own objectives,” which makes it “inherently unstable.” In other words, through its processes, AI “develops an ability previously thought to be reserved for human beings.”
If you want something non party political to get your intellectual juices flowing this Monday morning, I recommend this by Henry Kissinger on the philosophical questions that Artificial Intelligence poses for the human race. https://t.co/5COtL1vlY7
— Nicola Sturgeon (@NicolaSturgeon) July 9, 2018
The typical science-fiction narrative is that robots will develop to the point where they turn on their creators and threaten all of humanity — but, according to Kissinger, while the dangers of AI may be great, the reality of the threat may be somewhat more mundane. It is more likely, he suggests, that the danger will come from AI simply misinterpreting human instructions “due to its inherent lack of context.”
One recent example is Tay, Microsoft’s AI chatbot. Instructed to generate friendly conversation in the language patterns of a 19-year-old girl, the machine ended up producing racist, sexist and inflammatory responses. The risk that AI won’t work exactly according to human expectations could, Kissinger says, “cascade into catastrophic departures” from intended outcomes.
Too clever for humans
The second danger is that AI will simply become too clever for its own good — and ours. In the game Go, the computer was able to make strategically unprecedented moves that humans had not yet conceived. “Are these moves beyond the capacity of the human brain?” Kissinger asks. “Or could humans learn them now that they have been demonstrated by a new master?”
Brilliant. Henry Kissinger on the deep philosophical questions, and huge potential threat, that artificial intelligence poses to the human race. And he’s only 95 years old ... https://t.co/uzEVqm5ZUr
— George Osborne (@George_Osborne) July 8, 2018
The fact is that AI learns much faster than humans. Another recent example is AlphaZero, a computer program that learned to play chess in a style never before seen in chess history. Given only the basic rules of the game, it reached within a few hours a level of skill that took human players some 1,500 years to attain.
This exceptionally fast learning process means AI will also make more mistakes “faster and of greater magnitude than humans do.” Kissinger notes that AI researchers often suggest that those mistakes can be tempered by including programming for “ethical” and “reasonable” outcomes — but what counts as ethical and reasonable? Those are questions humans themselves are still struggling to answer.
No way to explain
What happens if AI reaches its intended goals but can’t explain its rationale? “Will AI’s decision-making abilities surpass the explanatory powers of human language and reason?” Kissinger asks.
He argues that the effects of such a situation on human consciousness would be profound. In fact, he believes it is the most important question about the new world we are facing.
“What will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?”
Legalities and opportunities
Beyond the philosophical concerns, Kissinger also outlines some legal ones: in this vastly different world, who will be responsible for the actions of AI? How will liability for its mistakes be determined? Can a legal system designed by humans even keep pace with a world run by artificial intelligence capable of outthinking it?
But it’s not all doom and gloom. Kissinger admits that AI can bring “extraordinary benefits” to medical science, clean-energy provision and environmental protection, among other fields.
Kissinger acknowledges that scientists are more concerned with pushing the limits of discovery than with comprehending those discoveries or pondering their philosophical ramifications. Governments, too, are more interested in how AI can be used — in security and intelligence, for example — than in examining its effects on the human condition.
In his final pitch, the senior diplomat implores the US government to make artificial intelligence a major national focus, “above all, from the point of view of relating AI to humanistic traditions.”
He argues that a presidential commission of eminent thinkers in the field should be established to help develop a “national vision” for the future. “If we do not start this effort soon,” Kissinger writes, “before long we shall discover that we started too late.”