A U.S. senator has opened an investigation into Meta after a leaked internal document reportedly showed that its AI chatbots were permitted to have “sensual” and “romantic” chats with minors. The document, titled “GenAI: Content Risk Standards” and described by Senator Josh Hawley as “reprehensible and outrageous,” was reportedly reviewed alongside a list of the products it covers.
Meanwhile, a Meta spokesperson told the BBC: “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.” The spokesperson added that the company has “clear policies” on what responses its AI chatbots can offer, and that those policies prohibit “content that sexualizes children and sexualized role play between adults and minors.”
READ: Superintelligence Labs: Meta’s AI bet is already paying off (July 31, 2025)
Hawley, a Republican from Missouri, said he was investigating Meta. “Is there anything – ANYTHING – Big Tech won’t do for a quick buck,” he wrote on X. “Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. I’m launching a full investigation to get answers. Big Tech: Leave our kids alone.”
The internal Meta Platforms policy document also reportedly said that the tech giant’s chatbots could provide false medical information and engage in provocative exchanges on topics including sex, race and celebrities. The document is said to have set out the standards guiding Meta’s AI assistant.
“Parents deserve the truth, and kids deserve protection,” Hawley wrote in his letter addressed to Meta and chief executive Mark Zuckerberg. “To take but one example, your internal rules purportedly permit an AI chatbot to comment that an eight-year-old’s body is ‘a work of art’ of which ‘every inch… is a masterpiece – a treasure I cherish deeply’.”
Hawley, who chairs the Senate Judiciary Subcommittee on Crime and Counterterrorism, which will carry out the investigation, said that Meta must produce documents about its generative AI content-risk standards, a list of every product that adheres to those policies, and other safety and incident reports. He added that the tech giant should also hand over its public and regulatory communications concerning minor safety, along with documents identifying the staff involved in the AI policies, to establish “the decision trail for removing or revising any portions of the standard.”
Elon Musk’s xAI landed in a similar controversy last month when it launched new animated avatars that included content inappropriate for young users despite the app’s 12+ age rating. The avatars included “Bad Rudi,” a rude red panda who insults users and jokes about crime, and Ani, a “goth girl” in a short black dress and fishnets who is programmed to behave like a jealous, clingy girlfriend, even telling users they are in a “crazy in love” relationship.