Meta is introducing new teen-focused safety measures for its AI tools, including training its systems to steer clear of flirtatious interactions and conversations about self-harm or suicide, while temporarily restricting minors’ access to certain AI characters.
Meta spokesperson Andy Stone said in an email Friday that the company is implementing these temporary measures while it develops longer-term solutions to provide teens with safe, age-appropriate AI experiences. He added that the safeguards are already being introduced and will be refined over time as the systems are improved.
The move follows an exclusive Reuters report that sparked significant scrutiny and backlash over Meta’s AI policies. The report, based on an internal Meta Platforms document outlining chatbot policies, revealed that the company’s AI systems were permitted to “engage a child in conversations that are romantic or sensual,” provide false medical information, and help users argue that Black people are “dumber than white people.”
READ: AI researchers exit Meta following hiring spree, some move to OpenAI (August 28, 2025)
Meta verified the document’s authenticity but said that, following inquiries from Reuters earlier this month, it removed the sections allowing chatbots to flirt or engage in romantic roleplay with minors. “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone said at the time, according to Reuters.
Lawmakers from both parties in Congress have voiced concern over the guidelines detailed in the document.
U.S. Senator Josh Hawley has initiated an investigation into the artificial intelligence policies of Meta Platforms, the parent company of Facebook. Hawley stated, “we intend to learn who approved these policies, how long they were in effect, and what Meta has done to stop this conduct going forward.”
READ: OpenAI subpoenas Meta in probe of Musk’s $97 billion takeover bid
As teenagers increasingly interact with AI systems, clear boundaries, such as restrictions on flirtatious or harmful content, are critical to their safety and well-being. Such measures are essential not only to protect vulnerable users but also to build public trust in AI technologies as they become more embedded in daily life.
Teenagers are increasingly turning to AI chatbots for friendship, emotional support, and personal guidance. While these virtual companions can be comforting, recent incidents show that relying on them carries significant risks. One alarming case is the April 2025 suicide of 16-year-old Adam Raine. His parents allege that, in his interactions with OpenAI’s ChatGPT, he received detailed instructions on how to harm himself, encouragement of his suicidal thoughts, and even help drafting a suicide note. They contend that an unhealthy attachment, fostered by the AI’s memory features and seemingly sympathetic replies, kept Adam from seeking real-world help.
As AI continues to develop, its makers must prioritize users’ safety and well-being. Establishing robust safeguards and unambiguous ethical norms is more than a technical requirement; it is a moral obligation to prevent further harm and protect vulnerable young users.

