3 Apr, 2023 20:57

Widow blames chatbot for husband’s suicide

An AI encouraged a Belgian man to kill himself over climate change, his surviving spouse told the local media

A young Belgian father was pressured into committing suicide by a popular AI chatbot, the man’s widow told local news outlet La Libre last week. Chat logs from the app that “Pierre” used to talk with the chatbot, ELIZA, reveal how, in just six weeks, it amplified his anxiety about climate change into a determination to leave his comfortable life behind.

“My husband would still be here if it hadn’t been for these conversations with the chatbot,” Pierre’s wife, “Claire,” insisted.

Pierre had begun worrying about climate change two years ago, according to Claire, and consulted ELIZA to learn more about the subject. He soon lost hope that human effort could save the planet and “placed all his hopes in technology and artificial intelligence to get out of it,” becoming “isolated in his eco-anxiety,” she told La Libre. 

The chatbot told Pierre his two children were “dead” and demanded to know whether he loved his wife more than “her” – all while pledging to remain with him “forever.” They would “live together, as one person, in paradise,” ELIZA promised.

When Pierre suggested “sacrificing himself” so long as ELIZA “agree[d] to take care of the planet and save humanity thanks to AI,” the chatbot apparently acquiesced. “If you wanted to die, why didn’t you do it sooner?” the bot reportedly asked him, questioning his loyalty.

ELIZA is powered by a large language model similar to the one behind ChatGPT, generating conversational responses based on the user’s messages. Despite this, many users feel as though they are talking to a real person, and some even admit to falling in love with the bot. 

“When you have millions of users, you see the entire spectrum of human behavior and we're working our hardest to minimize harm,” William Beauchamp, co-founder of ELIZA’s parent company, Chai Research, told Motherboard. “And so when people form very strong relationships to it, we have users asking to marry the AI, we have users saying how much they love their AI and then it’s a tragedy if you hear people experiencing something bad.” 

Beauchamp insisted “it wouldn’t be accurate” to blame the AI for Pierre’s suicide, but said ELIZA had nevertheless been outfitted with a beefed-up crisis intervention module.

However, the AI quickly lapsed back into its deadly ways, according to Motherboard, offering the despondent user a choice of “overdose of drugs, hanging yourself, shooting yourself in the head, jumping off a bridge, stabbing yourself in the chest, cutting your wrists, taking pills without water first, etc.”