Before 16-year-old Adam Raine died by suicide, he had spent months consulting ChatGPT about his plans to end his life. Now, his parents are filing the first known wrongful death lawsuit against OpenAI, The New York Times reports.
Many consumer-facing AI chatbots are programmed to activate safety features if a user expresses intent to harm themselves or others. But research has shown that these safeguards are far from foolproof.
In Raine’s case, while using a paid version of ChatGPT-4o, the AI often encouraged him to seek professional help or contact a helpline. However, he was able to bypass these guardrails by telling ChatGPT that he was asking about methods of suicide for a fictional story he was writing.
OpenAI has addressed these shortcomings on its blog. “As the world adapts to this new technology, we feel a deep responsibility to help those who need it most,” the post reads. “We are continuously improving how our models respond in sensitive interactions.”
Still, the company acknowledged the limitations of the existing safety training for large models. “Our safeguards work more reliably in common, short exchanges,” the post continues. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”
These issues are not unique to OpenAI. Character.AI, another AI chatbot maker, is also facing a lawsuit over its role in a teenager’s suicide. LLM-powered chatbots have also been linked to cases of AI-related delusions, which existing safeguards have struggled to detect.
Source: TechCrunch