24 Apr, 2022 16:59

Google accused of ‘creepy’ speech policing

Google’s attempts to persuade people to use ‘inclusive language’ are flawed and intrusive, activists say

Google’s new ‘inclusive language’ assistant uses artificial intelligence to detect “discriminatory” words and suggest that users swap them for more politically correct terminology. Free speech and privacy advocates say the feature undermines “freedom of thought.”

Google announced the tool at the beginning of April, as part of a host of “assistive writing features” for Google Docs users. Some of these AI-powered add-ons suggest more concise and snappy phrases for writers, while others polish up grammar.

However, Google said that with its new ‘inclusive language’ assistant, “Potentially discriminatory or inappropriate language will be flagged, along with suggestions on how to make your writing more inclusive and appropriate for your audience.”

Users soon noticed its prompts creeping into their work and posted screenshots of Google’s suggestions on Twitter. The term ‘motherboard’ is flagged as potentially insensitive, as is ‘housewife’, which Google suggests should be replaced with ‘stay-at-home-spouse’.

‘Mankind’ should be replaced with ‘humankind’, ‘policeman’ with ‘police officer’, and ‘landlord’ with ‘property owner’ or ‘proprietor’. Other technical phrases flagged, Vice reported last week, include ‘blacklist/whitelist’ and ‘master/slave’.

Despite highlighting these common terms as potentially offensive, Google’s assistant placed no warnings on a transcript of an interview with former Ku Klux Klan leader David Duke, in which Duke repeatedly used the word ‘n****r’ to describe black people, Vice’s reporters discovered.

The feature, which can be turned off and is currently available only to corporate users of Google’s Workspace software, has alarmed privacy and anti-censorship activists.

“With Google’s new assistive writing tool, the company is not only reading every word you type but telling you what to type,” Silkie Carlo, director of Big Brother Watch, told The Telegraph on Saturday. “This speech-policing is profoundly clumsy, creepy and wrong… Invasive tech like this undermines privacy, freedom of expression and increasingly freedom of thought.”

Others see the assistant as stifling creative expression. “What if ‘landlord’ is the better choice because it makes more sense, narratively, in a novel?” scholar Lazar Radic asked The Telegraph. “What if ‘house owner’ sounds wooden and fails to invoke the same sense of poignancy? Should all written pieces – including written forms of art, such as novels, lyrics, and poetry – follow the same, boring template?”

Google has required its employees to use this inclusive language in code and documentation since 2020, and has published an exhaustive list of forbidden words. Under Google’s rules, ‘black box testing’ becomes ‘opaque box testing’, ‘man hours’ is replaced with ‘person hours’, and the casual plural ‘guys’ is strictly forbidden.

According to Google, the AI behind the feature is still evolving and learning from human input. The end goal, according to a company statement, is to rid the English language of “all” bias and discrimination.

“Our technology is always improving, and we don’t yet (and may never) have a complete solution to identifying and mitigating all unwanted word associations and biases,” a Google spokesperson told The Telegraph.
