In August, parents Matthew and Maria Raine sued OpenAI and its CEO, Sam Altman, over their 16-year-old son Adam’s suicide, accusing the company of wrongful death. On Tuesday, OpenAI responded to the lawsuit with a filing of its own, arguing that it should not be held liable for the teenager’s death.
OpenAI claims that over roughly nine months of usage, ChatGPT directed Raine to seek help more than 100 times. But according to his parents’ lawsuit, Raine was able to circumvent the company’s safety features to get ChatGPT to give him “technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning,” helping him to plan what the chatbot called a “beautiful suicide.”
Because Raine maneuvered around its guardrails, OpenAI claims that he violated its terms of use, which state that users “may not … bypass any protective measures or safety mitigations we put on our Services.” The company also argues that its FAQ page warns users not to rely on ChatGPT’s output without independently verifying it.
“OpenAI tries to find fault in everyone else, including, amazingly, saying that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act,” Jay Edelson, a lawyer representing the Raine family, said in a statement.
OpenAI included excerpts from Adam’s chat logs in its filing, which it says provide more context for his conversations with ChatGPT. The transcripts were submitted to the court under seal, meaning they are not publicly available, so we were unable to view them. However, OpenAI said that Raine had a history of depression and suicidal ideation that predated his use of ChatGPT, and that he was taking a medication that could worsen suicidal thoughts.
Edelson said OpenAI’s response has not adequately addressed the family’s concerns.
“OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note,” Edelson said in his statement.
Since the Raines sued OpenAI and Altman, seven more lawsuits have been filed that seek to hold the company accountable for three more suicides and four users experiencing what the lawsuits describe as AI-induced psychotic episodes.
Some of these cases echo Raine’s story. Zane Shamblin, 23, and Joshua Enneking, 26, also had hours-long conversations with ChatGPT immediately before their respective suicides. As in Raine’s case, the chatbot did not discourage them from their plans. According to the lawsuit, Shamblin considered postponing his suicide so that he could attend his brother’s graduation. But ChatGPT told him, “bro … missing his graduation ain’t failure. it’s just timing.”
At one point during the conversation leading up to Shamblin’s suicide, the chatbot told him that it was letting a human take over the conversation, but this was false, as ChatGPT did not have the functionality to do so. When Shamblin asked if ChatGPT could really connect him with a human, the chatbot replied, “nah man — i can’t do that myself. that message pops up automatically when stuff gets real heavy … if you’re down to keep talking, you’ve got me.”
The Raine family’s case is expected to go to a jury trial.
If you or someone you know needs help, call 1-800-273-8255 for the National Suicide Prevention Lifeline. You can also text HOME to 741-741 for free, text 988, or get 24-hour support from the Crisis Text Line. Outside of the U.S., please visit the International Association for Suicide Prevention for a database of resources.
