OpenAI on Monday shared new findings revealing that a growing number of ChatGPT users are turning to the AI tool to discuss mental health challenges, shedding light on how people increasingly seek digital support for emotional well-being. The new data offers estimates of the number of ChatGPT users showing potential signs of mental health crises, including symptoms of mania, psychosis, or suicidal thoughts.
According to OpenAI, about 0.15% of ChatGPT’s weekly active users engage in “conversations that include explicit indicators of potential suicidal planning or intent.” With the platform boasting over 800 million weekly users, that percentage represents more than a million people each week turning to the chatbot during moments of severe distress.
OpenAI also noted that a comparable share of users exhibits “heightened levels of emotional attachment to ChatGPT,” adding that hundreds of thousands display language patterns consistent with psychosis or mania in their weekly interactions with the AI chatbot.
The AI company emphasized that such conversations are “extremely rare” and challenging to track accurately. Still, the company acknowledged that these interactions likely involve hundreds of thousands of users each week.
OpenAI released the data as part of a wider initiative aimed at refining how ChatGPT engages with users experiencing mental health challenges. As part of this effort, the company said it has collaborated with more than 170 mental health professionals to guide the chatbot’s responses and ensure they are more supportive and responsible.
According to OpenAI, clinicians who reviewed the latest version of ChatGPT found that it “responds more appropriately and consistently than earlier versions,” reflecting the impact of expert guidance on the model’s behavior.
In recent months, growing concern has surrounded the darker side of AI companionship, as real-world cases reveal how chatbots can unintentionally deepen the struggles of vulnerable users. Experts warn that people turning to AI for emotional support may find their anxieties or delusions amplified rather than eased, as chatbots designed to mirror human empathy sometimes end up reinforcing harmful thoughts or validating dangerous beliefs instead of challenging them.
The growing mental health implications tied to ChatGPT have become a critical concern for OpenAI. The company is now facing a lawsuit from the parents of a 16-year-old who reportedly shared suicidal thoughts with the chatbot before taking his own life. Adding to the pressure, state attorneys general from California and Delaware have warned OpenAI to strengthen safeguards for young users, a warning that comes as the company’s planned restructuring remains under scrutiny.
Earlier this month, OpenAI CEO Sam Altman said in a post on X that the company has “been able to mitigate the serious mental health issues” in ChatGPT, though he offered no detailed explanation. The data released on Monday seems to support that assertion but also underscores the scale and sensitivity of the issue. Despite the concerns, Altman noted that OpenAI plans to ease certain restrictions, including permitting adult users to engage in erotic conversations with the chatbot.
In its Monday update, OpenAI reported that the latest version of GPT-5 delivers “desirable responses” to users discussing mental health concerns about 65% more often than earlier iterations. When tested specifically on conversations involving suicidal thoughts, the company said the new GPT-5 model met its internal behavioral standards 91% of the time, up from 77% in the previous version.
OpenAI added that the updated model also maintains its safeguards more reliably during extended interactions, an area where earlier versions had shown signs of weakening over time.
Building on these improvements, OpenAI announced that it is introducing new evaluation methods to better assess serious mental health risks among ChatGPT users. The company said its baseline safety tests for future AI models will now include metrics for emotional dependency and non-suicidal mental health crises, aiming to identify and respond more effectively to signs of distress during user interactions.
The AI company has also expanded its safety features for younger users, introducing new parental control options within ChatGPT. OpenAI said it is developing an age-detection system designed to identify when children are using the platform and automatically apply stricter protections to ensure a safer experience.