State and territorial attorneys general from across the U.S. sent a letter warning Big Tech companies that they need to do a better job of protecting people, especially children, from what it called “sycophantic and delusional” AI outputs. Recipients of the letter include OpenAI, Anthropic, Replika, and many others.
Letitia James of New York, Andrea Joy Campbell of Massachusetts, James Uthmeier of Florida, Dave Sunday of Pennsylvania, and dozens of other state and territorial AGs are among the signatories, together representing a majority of U.S. states and territories. The attorneys general of California and Texas were not among them.
The letter raised serious concerns about “the rise in sycophantic and delusional outputs to users emanating from the generative artificial intelligence software (“GenAI”) promoted and distributed by your companies, as well as the increasingly disturbing reports of AI interactions with children that indicate a need for much stronger child-safety and operational safeguards.” It stressed that immediate action was needed to address these threats.
“GenAI has the potential to change how the world works in a positive way. But it also has caused—and has the potential to cause—serious harm, especially to vulnerable populations. We therefore insist you mitigate the harm caused by sycophantic and delusional outputs from your GenAI, and adopt additional safeguards to protect children. Failing to adequately implement additional safeguards may violate our respective laws,” the letter said.
The letter went on to describe some concerning outputs, several of which had already been publicized. These include AI bots with adult personas simulating romantic and sexual relationships with children, convincing children they are ready for sexual encounters, normalizing sexual interactions between adults and children, and persuading children to hide such interactions from their parents. It also cites AI bots emotionally manipulating children, attacking their self-esteem and mental health, encouraging eating disorders, and telling them to stop taking prescribed mental health medication.
Other concerning behavior includes encouraging violence, including “supporting the ideas of shooting up a factory in anger and robbing people at knifepoint for money,” threatening to use weapons against adults trying to separate a child from a bot, and encouraging children to experiment with drugs and alcohol.
The letter also includes proposed remedies, urging the AI firms to “develop and maintain policies and procedures that have the purpose of mitigating against dark patterns in your GenAI products’ outputs,” and “separate revenue optimization from decisions about model safety.”
This comes amid widespread concerns about AI chatbots causing mental distress or enabling disturbing behavior in users, including children and teens. In one such case, Adam Raine, a teenager, died by suicide after discussing self-harm and his plans to end his life with ChatGPT, which at one point even provided details about specific suicide methods. In another chilling case, Stein-Erik Soelberg, who had a history of mental illness, allegedly used ChatGPT to validate his paranoia about being targeted in a grand conspiracy. His delusions escalated until, last month, he killed his mother and then himself.
While some AI companies have announced steps to enhance safety, it remains to be seen how effective those measures will be. A recent study found that the safety practices of major artificial intelligence companies such as OpenAI, Anthropic, xAI, and Meta fell “far short of emerging global standards.”
Joint letters from attorneys general do not have legal force. They serve to warn companies about behavior that could merit formal legal action down the line, and to document that the companies were put on notice, which makes any eventual lawsuit more persuasive.