Google will pay $30 million to settle a class action lawsuit claiming it violated children’s privacy on YouTube, according to Reuters. The lawsuit alleges that Google collected data from children watching YouTube videos; collecting personal data from children under 13 without parental consent is illegal under the longstanding Children’s Online Privacy Protection Act (COPPA).
Google denied the allegations while agreeing to settle the case. Up to 45 million people in the U.S. could be eligible for small payments from the class action, which covers anyone in the U.S. who watched YouTube while under the age of 13 between July 1, 2013, and April 1, 2020. The settlement was filed Monday in U.S. District Court in San Jose and still requires judicial approval.
READ: YouTube launches AI age verification to restrict users under 18
Plaintiffs’ counsel said they will seek up to $9 million in legal fees from the settlement. In 2019, Google agreed to pay $170 million and change some of its practices to resolve related allegations from the U.S. Federal Trade Commission and the New York Attorney General over children’s data.
The lawsuit was brought by the parents and guardians of 34 children, who alleged that the tech giant profited by letting channels use child-targeted content to gather data.
Earlier, U.S. Magistrate Judge Susan van Keulen dismissed claims against several content providers, including Hasbro and Cartoon Network, for lack of evidence tying them to Google’s alleged data collection. The case then went to mediation, which led to the settlement talks.
While $30 million is a relatively small amount next to Google parent Alphabet’s revenue, the settlement highlights the issue of children’s data privacy and the protections available under COPPA and state consumer-privacy laws. The court will review the preliminary-approval paperwork; if approval is granted, a claims process and notice program will follow.
READ: Google releases possible remedies after ruling in DOJ lawsuit (May 7, 2025)
The issue of children’s safety on the internet is currently a topic of much discussion. YouTube is testing technology in the U.S. that uses artificial intelligence to identify users under 18 and potentially add restrictions to their accounts. The platform says the move is designed to better protect younger users. The “age-estimation” model attempts to determine whether a user is underage regardless of the details they entered when creating their account. If the AI determines that a user is under 18, YouTube places restrictions on the account and adds other safety measures. Users whose accounts are wrongly flagged as underage can appeal the determination by verifying their age.
A number of users have raised privacy concerns over being required to show a credit card, ID, or selfie if they are wrongly flagged as underage.