22 Aug, 2022 22:49

Google sexual-abuse AI backfires on parents – NYT

The tech giant seized accounts after flagging parents as child abusers – even after police exonerated them

Google’s campaign against child sexual exploitation on its platforms risks destroying the lives of the children it purports to save, as well as those of innocent parents, by leaving their reputations, freedom, and perhaps their families’ survival in the hands of an overzealous algorithm, a New York Times investigation published on Sunday has revealed.

Two fathers of young children, unknown to each other, were labeled as child molesters for nothing more sinister than trying to get their toddlers medical help.

San Francisco dad Mark and Houston dad Cassio thought nothing of sending photos of their sons’ swollen genitals at the request of their pediatricians, a practice that has become routine in the era of pandemic-driven telemedicine.

Thinking nothing further of it as his son recovered, Mark was rudely awakened when his Google Fi phone informed him that “harmful content” – potentially illegal – had been discovered. His appeal was rejected, and Google offered no further communication.

Mark lost access to his phone, email, contacts, and every Google product. All of his data, including photos and videos, was locked away in the cloud.

During its scan of Mark’s content, Google found another video that set off alarms, flagged by an AI tool that claims to recognize never-before-seen images of child exploitation based on their similarity to known ones.

Rather than bringing in a human moderator to verify the photo, Google’s process is to lock down the account, scan every other image the user has, and notify the National Center for Missing and Exploited Children, which prioritizes potential new victims. The incorrectly flagged images are then added to the database of known exploitation material, so other innocuous images like Mark’s and Cassio’s risk setting off the same red flags.
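For context: the standard industry technique for matching uploads against a database of known abuse imagery is perceptual hashing, which reduces each image to a compact fingerprint so near-duplicates can be detected without storing the originals; Google’s newer tool layers a machine-learning classifier on top to catch never-before-seen material. The sketch below illustrates only the hash-matching half of that pipeline. It is a generic average-hash example, not Google’s actual system; the file names and the distance threshold are illustrative assumptions.

```python
# Minimal sketch of perceptual-hash matching (generic technique, NOT Google's
# actual system, which uses a proprietary ML classifier on top of hashing).
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint: shrink to 8x8, convert to
    grayscale, then set one bit per pixel brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical usage: compare an upload against fingerprints of known images.
# A small Hamming distance means "visually similar" -- which is exactly how an
# innocuous photo resembling a database entry can trigger a false flag.
THRESHOLD = 10                                     # illustrative assumption
known_hashes = {average_hash("known_image.png")}   # placeholder file name
upload = average_hash("uploaded_image.png")        # placeholder file name
if any(hamming_distance(upload, h) <= THRESHOLD for h in known_hashes):
    print("flagged for review")
```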

San Francisco police opened an investigation after Google flagged Mark’s video and secured a copy of everything in his Google accounts, including his search and location history, messages and documents sent and received, and photos and videos stored in the cloud. Law enforcement determined no crime had been committed and closed the case.

Google was not so understanding, and Mark remains without Google services. Even when Mark asked the lead detective on his case to intercede on his behalf, the officer said there was nothing he could do.

Cassio’s case unfolded in almost exactly the same way, with Houston police dropping their investigation once he produced the communications from his son’s pediatrician. However, Google will not give back his data.

Despite the police exonerating the parents, Google stood by its decision to flag the parents as child molesters and block all their data. “Child sexual abuse material is abhorrent and we’re committed to preventing the spread of it on our platforms,” the company said in a statement, according to the New York Times.

It is not known how many Child Protective Services cases have been opened on the basis of such “mistakes,” nor how many interventions have been sparked by incorrect AI decisions, as even those wrongfully accused of child abuse tend to stay silent.
