28 Feb, 2024 17:44

Google responds to ‘racist’ AI controversy

Problems with Gemini are “completely unacceptable,” CEO Sundar Pichai has said

Some of the responses generated by Google’s Gemini artificial intelligence were “problematic” and have “shown bias,” CEO Sundar Pichai said in a company-wide email on Wednesday, vowing to address the issue.

Pichai’s email made its way to multiple media outlets, including Semafor and Pirate Wires, which published it in full.

“I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard),” Pichai wrote. “I know that some of its responses have offended our users and shown bias — to be clear, that’s completely unacceptable and we got it wrong.”

The company is “working around the clock to address these issues,” he added.

“Our mission to organize the world’s information and make it universally accessible and useful is sacrosanct,” Pichai continued. “We’ve always sought to give users helpful, accurate, and unbiased information in our products. That’s why people trust them.” This needs to be the case with the AI as well, he added.

Gemini launched earlier this month but quickly ran into problems with restrictive “safety” and “diversity” programming. The AI was widely panned for “inaccuracies” in depicting a range of historical figures, from the US founding fathers and Russian emperors to Catholic popes and even Nazi German soldiers.

Gemini responded to any requests for images of white people by saying this “reinforces harmful stereotypes and generalizations about people based on their race.” Such images embodied “a stereotyped view of whiteness” that can be “damaging” to society as a whole and people who aren’t white, the AI added, according to an investigation by Fox Business.

As for Pichai’s claims about the sanctity of accuracy, Google has long censored and suppressed content its executives disapproved of, ostensibly in the name of combating “misinformation” and improving society.

Some Silicon Valley critics have already suggested that Pichai’s email was not for internal consumption but intended to be made public. Substack columnist Lulu Cheng Meservey noted that it contained “marketing taglines sprinkled throughout” as well as “word soup presumably designed to make the reader too tired and confused to be angry anymore.”

Meservey also blasted Pichai’s use of the term “problematic” and accused the Google CEO of fundamentally misunderstanding the problem with Gemini. 

“Google focusing on not offending people instead of factual accuracy was what CAUSED the problem in the first place,” she wrote.

Citing insider sources at Google, Pirate Wires’ Mike Solana suggested that marketing and AI product executives believe the controversy over Gemini’s apparent racism was “largely invented by right-wing trolls” on X (formerly Twitter).