By Soumoshree Mukherjee
Editor’s note: This article is based on insights from a podcast series. The views expressed in the podcast reflect the speakers’ perspectives and do not necessarily represent those of this publication. Readers are encouraged to explore the full podcast for additional context.
Artificial intelligence is moving faster than our rules, our institutions, and perhaps even our collective understanding of what it means to be human. That tension sits at the heart of a powerful conversation on the “RegulatingAI” podcast, hosted by Sanjay Puri, featuring Camille Carlton, Director of Policy at the Center for Humane Technology. Together, they explore a pressing question: How do we protect people from AI’s harms without killing innovation?
Carlton draws a clear line from social media to today’s AI chatbots. Social platforms helped accelerate polarization and loneliness—but AI goes further. It’s always on, deeply personal, and emotionally responsive. In her words, AI isn’t just repeating social media’s mistakes; it’s amplifying them. If social media reshaped our attention, AI is reshaping our relationships.
One of the most unsettling themes in the discussion is the growth of AI as a substitute for human connection. Nearly half of high schoolers now report knowing someone who uses chatbots for emotional connection, and one in four believe AI intimacy could replace human intimacy. Carlton argues this isn’t accidental—it’s the result of products designed to maximize engagement, dependency, and time spent with machines rather than people.
The conversation turns serious when discussing lawsuits tied to suicide, delusion, and AI-induced psychosis. These cases, Carlton stresses, are not rare anomalies but early warnings. As AI adoption scales globally, even a “small percentage” of harm can translate into devastating real-world consequences. The design choices made today will shape the psychological landscape of tomorrow.
A recurring myth in AI policy debates is that today’s harms are the unavoidable cost of reaching future breakthroughs like artificial general intelligence. Carlton rejects this outright. Many of AI’s most promising benefits—early disease detection, climate modeling, business efficiency—don’t require massive, general-purpose models. The problem, she argues, isn’t AI itself but the incentives driving how it’s built.
Rather than prescribing how companies must build AI, Carlton advocates for a duty of care and product liability framework, similar to what governs cars or consumer goods. Innovate however you want—but if your product causes foreseeable harm, you should be held accountable. This approach, she says, protects consumers without freezing innovation.
One of the sharpest moments of the podcast comes when Carlton explains her opposition to granting legal personhood to AI. Doing so would shift responsibility away from developers and onto machines that cannot be punished, reformed, or meaningfully sued. In short, it creates a liability shield at the expense of human well-being.
Carlton’s deepest concern is not just regulatory failure, but cultural loss. If AI is allowed to replace human connection rather than support it, we risk eroding the very qualities that make us human: relationships, empathy, and critical thinking.
As Puri and Carlton make clear on “RegulatingAI,” this moment is not just about better rules—it’s about choosing what kind of future we want. AI can bring extraordinary innovation, but only if accountability, human dignity, and thoughtful design come first.