When Mark Zuckerberg testified in court that companies should focus on building products that are “useful,” he framed the argument as almost axiomatic. Create something genuinely useful, he said, and people will naturally want to use it. Implicit in that statement is the idea that high engagement is a byproduct of value rather than the result of manipulation.
Yet this framing raises a far more difficult question than the courtroom exchange acknowledged: What does “useful” actually mean in the context of social media?
From its earliest days, Facebook was not primarily a tool for productivity, education, or civic engagement. Its early appeal lay in social comparison, visibility, and validation. Long before algorithmic feeds dominated user experience, Facebook and its predecessors allowed users to assess popularity, desirability, and social rank. The platform’s utility was psychological rather than functional, satisfying a deeply human desire to be seen, liked, and affirmed. That kind of “usefulness,” however, is fundamentally different from utility that enhances well-being or capability.
Zuckerberg’s testimony suggests that usefulness can be inferred from use itself. If millions—or billions—of people spend hours on a platform each day, the logic goes, then the platform must be delivering value. But history is filled with counterexamples. Casinos are heavily used, yet their usefulness accrues primarily to the house. Cigarettes were once widely consumed, but widespread use did not equate to social benefit. High engagement alone cannot be the moral yardstick.
The distinction becomes even more important when examining how modern social platforms are designed. Today’s ecosystem includes Instagram, TikTok, and Snapchat, all of which rely on design architectures that encourage prolonged and repeated use. Infinite scrolling, algorithmic personalization, and constant notifications are not neutral features. They are grounded in behavioral science and optimized through machine learning systems that measure, predict, and shape human attention.
Neuroscience offers a sobering lens through which to view this design philosophy. The brain’s reward system, particularly the dopamine system, evolved to reinforce behaviors essential for survival and social bonding. Social media platforms effectively tap into this circuitry by delivering intermittent rewards in the form of likes, comments, and social recognition. The unpredictability of these rewards strengthens compulsive behavior, much like gambling systems that rely on variable reinforcement schedules. Over time, this can create a loop of anticipation and gratification that becomes difficult to disengage from, even when the experience no longer provides genuine satisfaction.
The concern is especially acute for adolescents. During this developmental period, the brain exhibits heightened sensitivity to reward while the executive control systems responsible for impulse regulation and long-term planning are still maturing. Studies increasingly show that excessive social media use among teens correlates with higher rates of anxiety, sleep disruption, and depression, with evidence suggesting a dose-response relationship between time spent on platforms and mental health outcomes. When a product systematically exploits developmental vulnerabilities, it becomes harder to argue that engagement is simply “natural.”
Artificial intelligence intensifies this dynamic. Modern recommendation systems analyze enormous volumes of behavioral data to determine what content keeps users engaged the longest. The goal is not enlightenment or enrichment but retention. In economic terms, attention has become the product, and adolescents have become a highly lucrative demographic.
In 2025 alone, children and teenagers in the United States generated billions of dollars in advertising revenue for social media companies. This monetization model creates an inherent conflict between user well-being and corporate incentives.
There is also growing evidence that excessive digital engagement can be associated with structural and functional changes in the brain. Research on internet and gaming addiction has identified alterations in regions responsible for reward processing, impulse control, and emotional regulation.
Against this backdrop, Zuckerberg’s assertion that companies should not engineer their platforms to tempt young people into excessive use feels incomplete. The engineering is already embedded in the system. When engagement metrics drive design decisions and AI systems continuously refine content delivery to maximize time spent, temptation is no longer incidental; it is structural.
This brings us back to the word “useful.” A navigation app that reduces travel time is useful. A medical technology that improves diagnosis is useful. A tool that enhances learning or problem-solving is useful because it expands human capacity. Social media platforms, by contrast, often derive their value from intensifying emotional responses, amplifying comparison, and sustaining attention through psychological reinforcement rather than substantive benefit.
A more ethically coherent definition of usefulness would consider long-term human flourishing rather than short-term engagement. Under such a standard, a platform’s success would not be measured by hours consumed but by whether it leaves users more informed, more capable, and more psychologically resilient than before. If a platform requires continuous behavioral manipulation to sustain use, then its usefulness may be inseparable from dependency.
The courtroom question facing Zuckerberg is ostensibly about liability, regulation, and responsibility. But beneath it lies a deeper societal question about who gets to define value.
If technology companies are allowed to define usefulness solely by usage, then the most addictive products will always appear to be the most valuable. If, however, usefulness is tied to well-being, development, and autonomy, then the business model of social media itself demands reexamination.
The future of technology ethics may hinge not on how much people use these platforms, but on why they do and at what cost.