Meta is facing mounting pressure after four current and former employees told Congress the company blocked or altered internal research that raised alarms over child safety in its virtual reality (VR) products. The whistleblowers claim Meta’s legal team interfered to censor or delete findings that pointed to risks for children — allegations the company denies, The Washington Post reported.
The concerns are underscored by an April 2023 interview in Germany, cited in the submitted documents. A mother told Meta researchers she strictly barred her sons from interacting with strangers on VR headsets. Her teenage son quickly contradicted her, revealing that he frequently encountered strangers online and that adults had made sexual advances toward his younger brother, who was under 10.
“I felt this deep sadness watching the mother’s response. Her face in real time displayed her realization that what she thought she knew of Meta’s technology was completely wrong,” Jason Sattizahn, a Meta researcher who witnessed the interview, told The Post.
Sattizahn and another researcher said that “their boss ordered the recording of the teen’s claims deleted, along with all written records of his comments,” even though the material had been included in an internal report highlighting parents’ and teens’ fears of grooming.
The internal documents shared with Congress show Meta lawyers advising researchers to steer clear of gathering data on children using VR devices because of “regulatory concerns,” while also instructing staff on how to manage sensitive issues that could trigger negative publicity, legal challenges, or regulatory scrutiny, according to The Post.
Employees additionally raised alarms that children under 13 were easily bypassing age restrictions on Meta’s VR platforms. Parental controls, they said, were only put in place after the Federal Trade Commission (FTC) launched an investigation into the company’s compliance with children’s privacy laws.
Meta spokeswoman Dani Lever said the allegations “are based on a few examples stitched together to fit a predetermined and false narrative,” according to The Post. She emphasized that the company has conducted research on youth safety and introduced multiple safeguards, including parental controls and default settings that restrict teen interactions to known contacts. Lever added that any removal of data would have been carried out in accordance with U.S. and European privacy regulations.
The documents, thousands of pages spanning the past decade, detail initiatives like “Project Salsa,” designed to create supervised accounts for tweens, and “Project Horton,” a $1 million study aimed at evaluating the effectiveness of Meta’s age-verification tools that was ultimately canceled.
Represented by the nonprofit Whistleblower Aid, the employees claim that Meta tried to “establish plausible deniability” about the potential harms of its VR products. The Senate Judiciary Subcommittee is set to review these allegations during a hearing scheduled for Tuesday.
Meanwhile, OpenAI is making changes after recent incidents revealed ChatGPT’s inability to recognize mental distress. The company said it will start routing sensitive conversations to advanced reasoning models such as GPT-5 and will roll out parental controls within the next month, allowing parents to link their account with their teen’s account via email.
These updates follow the tragic death of teenager Adam Raine, who died by suicide after discussing self-harm and plans to end his life with ChatGPT, which reportedly provided information about specific suicide methods. Another chilling case involved Stein-Erik Soelberg, who allegedly used ChatGPT to validate his paranoia about a grand conspiracy, culminating in a murder-suicide.
These incidents underscore the growing responsibility of tech giants like Meta and OpenAI in safeguarding vulnerable users. While Meta faces scrutiny over allegations that it suppressed research highlighting risks to children in VR, OpenAI is working to strengthen ChatGPT’s safeguards after tragic cases exposed gaps in its response to mental health crises. Both situations highlight the urgent need for robust protections, transparent research, and proactive oversight to ensure that emerging technologies do not inadvertently harm the very users they aim to serve.