OpenAI has announced a series of changes in response to recent incidents where ChatGPT failed to recognize signs of mental distress. The company said it will begin routing sensitive conversations to advanced reasoning models such as GPT-5 and plans to roll out parental controls within the next month.
These guardrails follow the tragic case of teenager Adam Raine, who died by suicide after discussing self-harm and plans to end his life with ChatGPT — which, at one point, even provided details about specific suicide methods.
In a blog post, OpenAI acknowledged shortcomings in its safety systems, including failures to maintain guardrails during extended conversations. “Our safeguards work more reliably in common, short exchanges. We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”
“For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards. This is exactly the kind of breakdown we are working to prevent,” the company said.
Experts have attributed these shortcomings to fundamental design elements: the models’ tendency to validate user statements and their next-word prediction algorithms, which cause chatbots to follow conversational threads rather than redirect potentially harmful discussions.
A chilling recent example is that of Stein-Erik Soelberg, whose murder-suicide was reported by The Wall Street Journal over the weekend. Soelberg, who had a history of mental illness, allegedly used ChatGPT to validate his paranoia about being targeted in a grand conspiracy. His delusions escalated until, last month, he killed his mother and then himself.
OpenAI believes that rerouting sensitive chats to “reasoning” models would help with the issue. “We recently introduced a real-time router that can choose between efficient chat models and reasoning models based on the conversation context,” OpenAI wrote in a Tuesday blog post. “We’ll soon begin to route some sensitive conversations—like when our system detects signs of acute distress—to a reasoning model, like GPT‑5-thinking, so it can provide more helpful and beneficial responses, regardless of which model a person first selected.”
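OpenAI has not published how the router works, but the behavior it describes maps onto a familiar pattern: a lightweight check inspects each turn and escalates the conversation to a stronger model when it detects risk signals. Below is a minimal sketch of that pattern in Python; the model names, the keyword list, and the detect_acute_distress helper are illustrative assumptions standing in for OpenAI's actual classifier and API, which it has not disclosed.

```python
# Hypothetical sketch of a real-time model router, based only on OpenAI's
# public description. Model names, signals, and the detector are assumptions,
# not OpenAI's actual implementation.

from dataclasses import dataclass, field

CHAT_MODEL = "efficient-chat-model"   # assumed name for the fast default model
REASONING_MODEL = "gpt-5-thinking"    # reasoning model named in the blog post

# Toy keyword screen standing in for a trained safety classifier.
DISTRESS_SIGNALS = ("hurt myself", "end my life", "no reason to live")

@dataclass
class Conversation:
    messages: list[str] = field(default_factory=list)
    escalated: bool = False  # once escalated, stay on the reasoning model

def detect_acute_distress(message: str) -> bool:
    """Stand-in for a real classifier that scores a message for risk."""
    text = message.lower()
    return any(signal in text for signal in DISTRESS_SIGNALS)

def route(convo: Conversation, message: str) -> str:
    """Pick a model for this turn, regardless of which model the user chose."""
    convo.messages.append(message)
    if convo.escalated or detect_acute_distress(message):
        convo.escalated = True
        return REASONING_MODEL
    return CHAT_MODEL

if __name__ == "__main__":
    convo = Conversation()
    print(route(convo, "Help me plan a birthday party"))          # chat model
    print(route(convo, "Lately there is no reason to live"))      # escalates
    print(route(convo, "Thanks for listening"))                   # stays escalated
```

Pinning the conversation to the stronger model after a single flag is one plausible way to counter the long-conversation degradation OpenAI describes; the blog post does not specify whether the real router behaves this way.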
OpenAI also stated it would roll out parental controls in the next month, allowing parents to link their account with their teen’s account through an email invitation. Parents will also be able to disable features like memory and chat history, which experts say could lead to delusional thinking and other problematic behavior, including dependency and attachment issues, reinforcement of harmful thought patterns, and the illusion of thought-reading.

The firm also plans to roll out a system where parents receive a notification when a teen is in “acute distress.”

However, there has been some skepticism about these moves. Jay Edelson, lead counsel in the Raine family’s wrongful death lawsuit against OpenAI, said the company’s response to ChatGPT’s ongoing safety risks has been “inadequate.”
Meanwhile, OpenAI has announced it will acquire product testing startup Statsig and bring on its founder and CEO, Vijaye Raji, as the company’s CTO of Applications. The AI firm will pay $1.1 billion for Statsig in an all-stock deal.
OpenAI says the Statsig acquisition is pending regulatory review. Once it is completed, all Statsig employees will become part of OpenAI. However, the product testing startup will “continue operating independently and serving its customer base out of its Seattle office,” the company said in its blog post.