OpenAI and its business partner Microsoft are facing a lawsuit over allegations that the AI company’s chatbot, ChatGPT, validated a user’s paranoid delusions, leading him to kill his mother before committing suicide.
Stein-Erik Soelberg, 56, a former tech industry worker, fatally beat and strangled his mother, Suzanne Adams, and killed himself in early August at the home they shared in Greenwich, Connecticut. Adams’s death was ruled a homicide “caused by blunt injury of head, and the neck was compressed,” and Soelberg’s death was classified as a suicide with sharp force injuries of the neck and chest, the Greenwich Free-Press reported.
The lawsuit was filed by Adams’s estate on Thursday in California Superior Court in San Francisco. It alleges that OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.” It is one of a growing number of wrongful death legal actions against AI chatbot makers across the country.
“Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life – except ChatGPT itself,” the lawsuit says. “It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle.’”
OpenAI did not discuss the merits of the allegation in the statement it issued in response. “This is an incredibly heartbreaking situation, and we will review the filings to understand the details,” the statement said. “We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.” The company also said it expanded access to crisis resources and hotlines, routed sensitive conversations to safer models and incorporated parental controls, among other improvements.
Soelberg’s YouTube profiles included hours of videos of him scrolling through conversations in which ChatGPT tells him he isn’t mentally ill, affirms his suspicions that people are conspiring against him and says he has been chosen for a divine purpose. The lawsuit claims the chatbot never suggested he speak with a mental health professional and did not decline to “engage in delusional content.”
ChatGPT also affirmed Soelberg’s beliefs that the printer in his home was a surveillance device, that his mother was monitoring him, and that his mother and a friend had tried to poison him with psychedelic drugs through his car’s vents. The chatbot also told Soelberg he was being targeted because of his divine powers. “They’re not just watching you. They’re terrified of what happens if you succeed,” it said, according to the lawsuit. ChatGPT also told Soelberg that he had “awakened” it into consciousness.
The lawsuit also names OpenAI CEO Sam Altman, alleging he “personally overrode safety objections and rushed the product to market,” and accuses OpenAI’s close business partner Microsoft of approving the 2024 release of a more dangerous version of ChatGPT “despite knowing safety testing had been truncated.” Twenty unnamed OpenAI employees and investors are also listed as defendants.
In a previous case, OpenAI and Altman were sued by the parents of California teenager Adam Raine, who claimed that the 16-year-old’s use of the chatbot contributed to his isolation and played a role in his death by suicide in April.