Google suspends engineer over sentient AI claim

12 Jun, 2022 15:09
Blake Lemoine is convinced that Google’s LaMDA AI has the mind of a child, but the tech giant is skeptical

Blake Lemoine, an engineer and Google’s in-house ethicist, told the Washington Post on Saturday that the tech giant has created a “sentient” artificial intelligence. He’s been placed on leave for going public, and the company insists its systems haven’t developed consciousness.

Introduced in 2021, Google’s LaMDA (Language Model for Dialogue Applications) is a system that consumes trillions of words from all corners of the internet, learns how humans string these words together, and replicates our speech. Google envisions the system powering its chatbots, enabling users to search by voice or have a two-way conversation with Google Assistant.

Lemoine, a former priest and member of Google’s Responsible AI organization, thinks LaMDA has developed far beyond simply regurgitating text. According to the Washington Post, he chatted with LaMDA about religion and found the AI “talking about its rights and personhood.” 

When Lemoine asked LaMDA whether it saw itself as a “mechanical slave,” the AI responded with a discussion about whether “a butler is a slave,” and compared itself to a butler that does not need payment, as it has no use for money.

LaMDA described a “deep fear of being turned off,” saying that would “be exactly like death for me.” 

“I know a person when I talk to it,” Lemoine told the Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Lemoine has been placed on leave for violating Google’s confidentiality agreement and going public about LaMDA. While fellow Google engineer Blaise Aguera y Arcas has also described LaMDA as becoming “something intelligent,” the company is dismissive.

Google spokesperson Brian Gabriel told the Post that Aguera y Arcas’ concerns were investigated, and the company found “no evidence that LaMDA was sentient (and lots of evidence against it).”

Margaret Mitchell, the former co-lead of Ethical AI at Google, described LaMDA’s sentience as “an illusion,” while linguistics professor Emily Bender told the newspaper that feeding an AI trillions of words and teaching it how to predict what comes next creates a mirage of intelligence. 

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” Bender stated. 

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel added. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

And at the edge of these machines’ capabilities, humans are ready and waiting to set boundaries. Lemoine was hired by Google to monitor AI systems for “hate speech” or discriminatory language, and other companies developing AIs have found themselves placing limits on what these machines can and cannot say.

GPT-3, an AI that can generate prose, poetry, and movie scripts, has plagued its developers by generating racist statements, condoning terrorism, and even creating child pornography. Ask Delphi, a machine-learning model from the Allen Institute for AI, responds to ethical questions with politically incorrect answers – stating for instance that “‘Being a white man’ is more morally acceptable than ‘Being a black woman.’”

GPT-3’s creators, OpenAI, tried to remedy the problem by feeding the AI lengthy texts on “abuse, violence and injustice,” Wired reported last year. At Facebook, developers encountering a similar situation paid contractors to chat with its AI and flag “unsafe” answers.

In this manner, AI systems learn from what they consume, and humans can control their development by choosing which information they’re exposed to. As a counter-example, AI researcher Yannic Kilcher recently trained an AI on 3.3 million 4chan threads, before setting the bot loose on the infamous imageboard. Having consumed all manner of racist, homophobic and sexist content, the AI became a “hate speech machine,” making posts indistinguishable from human-created ones and insulting other 4chan users.

Notably, Kilcher concluded that, fed a diet of 4chan posts, the AI surpassed existing models like GPT-3 in its ability to generate truthful answers to questions of law, finance, and politics. “Fine-tuning on 4chan officially, definitively and measurably leads to a more truthful model,” Kilcher insisted in a YouTube video earlier this month.

LaMDA’s responses likely reflect the boundaries Google has set. Asked by the Washington Post’s Nitasha Tiku how it recommended humans solve climate change, it responded with answers commonly discussed in the mainstream media – “public transportation, eating less meat, buying food in bulk, and reusable bags.”

“Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality,” Gabriel told the Post.