Juries have just confirmed what many of us have been warning about for more than a decade: social media is not just shaping our children’s behavior—it is harming them. In 2015, long before congressional hearings, whistleblower disclosures, or courtroom battles, I argued that social media platforms were not neutral tools but carefully engineered environments designed to maximize attention, often at the expense of well-being.
At the time, that argument was easy to dismiss as speculative or alarmist. Today, it is much harder to ignore. Two separate jury verdicts—one in New Mexico against Meta Platforms and another in California against both Meta and YouTube—have now transformed that early warning into legal reality.
In New Mexico, jurors concluded that Meta knowingly harmed children’s mental health, engaged in misleading practices, and exploited the vulnerabilities of young users in ways that violated state law. They found that the company prioritized engagement and profit over safety and failed to fully disclose known risks, ultimately imposing hundreds of millions of dollars in penalties. What is most striking is not just the financial consequence, but the language of the verdict itself: “unconscionable” practices targeting children. That is not the language of a regulatory disagreement—it is the language of accountability.
At the same time, a separate jury in California reached a similarly consequential conclusion in a case involving both Meta and YouTube, finding that the design of their platforms contributed directly to a young user’s mental health harms. The companies were deemed negligent in how their products were engineered, with the jury awarding millions in damages after determining that features like algorithmic amplification and compulsive engagement loops played a causal role. This second verdict is even more important in some ways because it moves beyond consumer protection into product liability. It reframes social media platforms not as passive conduits of content, but as products whose design can cause harm.
This is precisely the argument that many of us began making years ago. The danger was never just what appeared on these platforms—it was how the platforms themselves were built. Infinite scroll removes natural stopping cues. Algorithmic feeds learn and exploit individual psychological vulnerabilities. Intermittent rewards—likes, shares, notifications—mirror the same behavioral conditioning mechanisms used in gambling. These features are not accidental; they are the result of deliberate design choices optimized for engagement. When applied to adults, they are powerful. When applied to children, they are something else entirely.
What the courts are now recognizing is what behavioral science has long understood: children are uniquely susceptible to these systems. Their brains are still developing, particularly in areas governing impulse control and reward processing. The result is a pattern of use that increasingly resembles addiction—not in the traditional chemical sense, but in behavioral terms characterized by compulsion, loss of control, and continued use despite harm. Evidence presented in these cases suggested that companies were aware of these patterns internally, even as public messaging downplayed or reframed the risks.
The consequences are now visible everywhere. Parents see it in the erosion of attention and sleep. Educators see it in classrooms where sustained focus has become harder to maintain. Clinicians see rising rates of anxiety, depression, and self-harm linked to excessive and unhealthy digital engagement. What was once anecdotal has become systemic. And what was once debated is now being litigated—and decided.
Perhaps the most significant implication of these verdicts is the shift in legal strategy. For years, technology companies have relied on Section 230 of the Communications Decency Act to shield themselves from liability for user-generated content. But these cases deliberately sidestepped that protection by focusing on product design rather than content. The argument is simple but powerful: if a company designs a system that predictably harms children, it cannot hide behind neutrality. The jury decisions suggest that this argument is beginning to resonate.
This moment should be seen as a turning point, not an endpoint. Transparency can no longer be optional; internal research on mental health impacts must be disclosed, especially when it involves minors. Design choices that promote compulsive use in children should face the same scrutiny we apply to other products that affect health and safety. And parents and educators must be given tools that go beyond warnings, enabling meaningful control in an environment that has become deeply embedded in everyday life.
What is most sobering is not that these verdicts happened—it is how long it took. The concerns raised more than a decade ago were grounded in basic principles of human behavior, attention economics, and developmental psychology. They did not require hindsight, only the willingness to question incentives and outcomes. What these juries have now affirmed is that those early warnings were not only correct—they may have understated the scale of the problem.
We are no longer asking whether social media affects our children. That question has been answered—in research, in lived experience, and now in courtrooms. The real question is whether we will act with the urgency that this moment demands, or whether we will once again wait for the next verdict to tell us what we already know.