AI could replace doctors in life-and-death decisions – ‘suicide pod’ inventor

22 Jan, 2026 14:10
An Australian euthanasia campaigner has reportedly proposed an AI system to evaluate people’s capacity for assisted dying

The inventor of the controversial Sarco suicide pod, Philip Nitschke, said artificial intelligence could one day replace psychiatrists in assessing whether people seeking assisted dying are mentally capable of making the decision, Euronews reported on Thursday.  

The Sarco, short for sarcophagus, is a 3D-printed capsule designed for one person to enter, lie down, and press a button. The device floods the capsule with nitrogen, rapidly displacing oxygen and causing death by hypoxia.  

Nitschke, an Australian euthanasia campaigner and the pod’s creator, said AI could determine who has the “mental capacity” to end their own life. He told the outlet that doctors should not be “running around giving you permission or not to die” and that the choice should rest with those “of sound mind.”  

In countries where assisted dying is permitted, psychiatrists typically assess whether a person is mentally capable, though the practice is limited and highly debated. Nitschke said the process is often inconsistent.   

“I’ve seen plenty of cases where the same patient, seeing three different psychiatrists, gets four different answers,” he said.  

He has proposed an AI system using a conversational avatar to evaluate capacity. Users would “sit there and talk about the issues” the avatar raises, after which it would decide whether they are capable of proceeding. If the AI determines a person is of sound mind, the Sarco pod would be activated, giving a 24-hour window to go ahead, after which the assessment must be repeated. Early versions of the software are operational, Nitschke said, though they have not been independently validated.  

The Sarco pod’s first and only use in Switzerland in September 2024 sparked international outrage. Swiss authorities arrested several people, including the CEO of the assisted dying group The Last Resort, and said the device violated Swiss law, which allows assisted suicide only under strict conditions.  

Nitschke’s proposal has reignited debate over the role of AI in life-and-death decisions. Last year, OpenAI updated ChatGPT after an internal review found over a million users had disclosed suicidal thoughts to the chatbot. Psychiatrists have raised concerns about prolonged AI interactions contributing to delusions and paranoia, a phenomenon sometimes called “AI psychosis.”