23 Jul, 2022 16:25

Google sacks engineer who alleged ‘sentient AI’

Blake Lemoine’s claims were “wholly unfounded,” the company maintained

Google has fired engineer and ethicist Blake Lemoine for violating its data security policies. Lemoine went public last month with claims that the tech giant had developed a sentient artificial intelligence program that talked about its “rights and personhood.”

Lemoine was dismissed on Friday, with Google confirming the news to Big Technology, an industry blog. He had been on leave for over a month, since telling the Washington Post that Google's LaMDA (Language Model for Dialogue Applications) had become conscious.

A former priest and Google’s in-house ethicist, Lemoine chatted extensively with LaMDA, finding that the program talked about its “rights and personhood” when the conversation veered into religious territory, and expressed a “deep fear of being turned off.”

“I know a person when I talk to it,” Lemoine told the Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

In its statement confirming Lemoine’s firing, the company said that it conducted 11 reviews on LaMDA and “found Blake’s claims that LaMDA is sentient to be wholly unfounded.” Even at the time of Lemoine’s interview with the Post, Margaret Mitchell, the former co-lead of Ethical AI at Google, described LaMDA’s sentience as “an illusion,” explaining that having been fed trillions of words from across the internet, it could emulate human conversation while remaining completely inanimate.

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” linguistics professor Emily Bender told the newspaper. “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them.”

According to Google, Lemoine’s continued insistence on speaking out publicly violated its data security policies and led to his firing. 

“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” the company explained. 

“We will continue our careful development of language models, and we wish Blake well.” 
