Character.AI has agreed to settle multiple lawsuits accusing it of contributing to mental health crises and suicides among young people. The settlement resolves some of the first and most high-profile lawsuits over the alleged harms AI chatbots pose to young people.
A court filing on Monday in the case brought by Florida mother Megan Garcia shows the agreement was reached with Character.AI; its founders, Noam Shazeer and Daniel De Freitas; and Google, all of whom were named as defendants in the case. The defendants have also settled four other cases in New York, Colorado and Texas, court documents show.
Garcia raised alarms about the safety of AI chatbots for teens and children when she filed her lawsuit in October 2024. Her son, Sewell Setzer III, had died by suicide seven months earlier after developing a deep relationship with Character.AI bots.
The lawsuit alleges that Character.AI failed to implement adequate safety measures to prevent her son from developing an inappropriate relationship with a chatbot, which caused him to withdraw from his family. It also claims the company failed to respond adequately when Setzer began to express thoughts of self-harm.
A wave of other lawsuits alleges that Character.AI's chatbots contributed to mental health issues among teens, exposed them to sexually explicit material and lacked adequate safeguards.
Character.AI isn’t the only AI company to face such lawsuits. OpenAI and its business partner Microsoft are facing a lawsuit alleging that OpenAI’s chatbot, ChatGPT, validated a user’s paranoid delusions, leading him to kill his mother before taking his own life. Stein-Erik Soelberg, 56, a former tech industry worker, fatally beat and strangled his mother, Suzanne Adams, and killed himself in early August at the home they shared in Greenwich, Connecticut. Adams’s death was ruled a homicide “caused by blunt injury of head, and the neck was compressed,” and Soelberg’s death was classified as a suicide with sharp force injuries of the neck and chest, the Greenwich Free-Press reported.
In an earlier case, OpenAI and CEO Sam Altman were sued by the parents of California teenager Adam Raine, who claimed that the 16-year-old’s use of the chatbot contributed to his isolation and played a role in his death by suicide in April.
Both companies have since implemented a series of new safety measures and features, including for young users. Last fall, Character.AI said it would no longer allow users under the age of 18 to have back-and-forth conversations with its chatbots, acknowledging the “questions that have been raised about how teens do, and should, interact with this new technology.”
At least one online safety nonprofit has advised against the use of companion-like chatbots by children under the age of 18.
Meanwhile, Elon Musk’s Grok has also come under fire recently over a new feature that allows users to generate nonconsensual sexualized images of real people, including minors. Authorities in India, Malaysia, the U.K. and Europe have issued warnings about the feature.

