Social media company Snap has reached a settlement in a lawsuit accusing its platform of contributing to social media addiction, according to reports. Lawyers revealed the agreement Tuesday during a hearing in the California Superior Court in Los Angeles County.
Terms of the deal were not disclosed. After the hearing, Snap told the BBC that the parties were “pleased to have been able to resolve this matter in an amicable manner”. Other defendants in the case include Meta, TikTok, and YouTube.
The plaintiff, a 19-year-old woman identified by the initials K.G.M., alleged that the algorithmic design of the platforms left her addicted and harmed her mental health.
According to documents shown during the trial, Snap employees raised concerns about risks to teens’ mental health dating back at least nine years. Snap has said the cited examples were “cherry-picked” and taken out of context.
Plaintiffs drew parallels to Big Tobacco—a reference to lawsuits in the 1990s against cigarette companies accused of concealing health risks—arguing that social media platforms have obscured information about potential harms to users. They alleged that features such as infinite scroll, auto video play, and algorithmic recommendations push users into prolonged engagement, contributing to depression, eating disorders, and self-harm, according to the New York Times.
Snap CEO Evan Spiegel had been scheduled to testify in what would have been the first jury trial of a social media company in an addiction lawsuit.
While Snap has settled this case, it remains a defendant in other social media addiction lawsuits that have been consolidated in the same court. According to the BBC, the broader litigation could challenge a legal theory the industry has repeatedly relied on to protect itself.
Social media companies have long argued that Section 230 of the Communications Decency Act of 1996 shields them from liability for content posted by third parties. Plaintiffs in these cases, however, contend that the alleged harms stem not from user-generated content alone, but from product design choices—particularly algorithms and notification systems—engineered to keep users engaged. The companies have argued that the evidence presented by plaintiffs is insufficient to show they are responsible for claimed harms such as depression and eating disorders.
The settlement comes amid growing concern about technology addiction, particularly among young people. Similar scrutiny has also been directed at AI companies and chatbots. Earlier this month, Character.ai agreed to a settlement in lawsuits accusing it of contributing to mental health crises and suicides among young people.

