After months of conversations with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he'd found a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the tool to stalk and harass his ex-girlfriend.
Now the ex-girlfriend is suing OpenAI, alleging the company's technology enabled and accelerated her harassment, TechCrunch has exclusively learned. She claims OpenAI ignored three separate warnings that the user posed a threat to others, including an internal flag classifying his account activity as involving mass-casualty weapons.
The plaintiff, referred to as Jane Doe to protect her identity, is suing for punitive damages. She also filed a temporary restraining order Friday asking the court to force OpenAI to block the user's account, prevent him from creating new ones, notify her if he attempts to access ChatGPT, and preserve his full chat logs for discovery.
OpenAI has agreed to suspend the user's account but has refused the rest, according to Doe's attorneys. They say the company is withholding information about specific plans for harming Doe and other potential victims that the user may have discussed with ChatGPT.
The lawsuit lands amid growing concern over the real-world dangers of sycophantic AI systems. GPT-4o, the model cited in this and many other cases, was retired from ChatGPT in February.
The case is brought by Edelson PC, the firm behind the wrongful death suits involving teenager Adam Raine, who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family alleges Google's Gemini fueled his delusions and a potential mass-casualty event before his death. Lead attorney Jay Edelson has warned that AI-induced psychosis is escalating from individual harm toward mass-casualty events.
That legal pressure is now colliding directly with OpenAI's legislative strategy: The company is backing an Illinois bill that would shield AI labs from liability even in cases involving mass deaths or catastrophic financial harm.
OpenAI did not respond in time for comment. TechCrunch will update the article if the company responds.
The Jane Doe lawsuit lays out in detail how that liability played out for one woman over several months.
Last year, the ChatGPT user in the lawsuit (whose name is not included in the lawsuit to protect his identity) became convinced that he had invented a cure for sleep apnea after months of “high-volume, sustained use of GPT-4o.” When nobody took his work seriously, ChatGPT told him that “powerful forces” were watching him, including using helicopters to surveil his movements, according to the complaint.
In July 2025, Jane Doe urged him to stop using ChatGPT and to seek help from a mental health professional. He instead turned back to ChatGPT, which assured him he was “a level 10 in sanity” and helped him double down on his delusions, per the lawsuit.
Doe had broken up with the user in 2024, and he used ChatGPT to process the split, according to emails and communications cited in the lawsuit. Rather than push back on his one-sided account, it repeatedly cast him as rational and wronged, and her as manipulative and unstable. He then took these AI-generated conclusions off the screen and into the real world, using them to stalk and harass her. This manifested in multiple AI-generated, clinical-looking psychological reports that he distributed to her family, friends, and employer.
Meanwhile, the user continued to spiral. In August 2025, OpenAI's automated safety system flagged him for “Mass Casualty Weapons” activity and deactivated his account.
A human safety team member reviewed the account the next day and restored it, even though his account may have contained evidence that he was targeting and stalking individuals, including Doe, in real life. For example, a September screenshot the user sent to Doe showed a list of conversation titles including “violence list expansion” and “fetal suffocation calculation.”
The decision to reinstate is notable following two recent school shootings, in Tumbler Ridge, Canada, and at Florida State University (FSU). OpenAI's safety team had flagged the Tumbler Ridge shooter as a potential threat, but higher-ups reportedly decided not to alert authorities. Florida's attorney general this week opened an investigation into OpenAI's possible link to the FSU shooter.
According to the Jane Doe lawsuit, when OpenAI restored her stalker's account, his Pro subscription wasn't reinstated along with it. He emailed the trust and safety team to sort it out, copying Doe on the message.
In his emails, he wrote things like: “I NEED HELP VERY FAST, PLEASE. PLEASE CALL ME!” and “this is a matter of life or death.” He claimed he was “in the process of writing 215 scientific papers,” which he was writing so fast he didn't “even have time to read.” Included in these emails was a list of dozens of AI-generated “scientific papers” with titles like: “Deconstructing Race as a Biological Category_ Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.”
“The user's communications provided unmistakable notice that he was mentally unstable and that ChatGPT was the engine of his delusional thinking and escalating conduct,” the lawsuit states. “The user's stream of urgent, disorganized, and grandiose claims, including a concrete ChatGPT-generated report targeting Plaintiff by name and a sprawling body of purported ‘scientific’ materials, was unmistakable evidence of that reality. OpenAI did not intervene, restrict his access, or implement any safeguards. Instead, it enabled him to continue using the account and restored his full Pro access.”
Doe, who claims in the lawsuit that she was living in fear and couldn't sleep in her own home, submitted a Notice of Abuse to OpenAI in November.
“For the last seven months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise,” Doe wrote in her letter to OpenAI requesting that the company permanently ban the user's account.
OpenAI responded, acknowledging the report was “extremely serious and troubling” and saying it was carefully reviewing the information. Doe never heard back.
Over the next couple of months, the user continued to harass Doe, sending her a series of threatening voicemails. In January, he was arrested and charged with four felony counts of communicating bomb threats and assault with a deadly weapon. Doe's attorneys allege this validates warnings both she and OpenAI's own safety systems had raised months earlier, warnings the company allegedly chose to ignore.
The user was found incompetent to stand trial and committed to a mental health facility, but a “procedural failure by the State” means he will soon be released to the public, according to Doe's attorneys.
Edelson called on OpenAI to cooperate. “In every case, OpenAI has chosen to hide critical safety information: from the public, from victims, from people its product is actively putting at risk,” he said. “We're calling on them, for once, to do the right thing. Human lives must mean more than OpenAI's race to an IPO.”
