By Soumoshree Mukherjee
Editor’s note: This article is based on insights from a podcast series. The views expressed in the podcast reflect the speakers’ perspectives and do not necessarily represent those of this publication. Readers are encouraged to explore the full podcast for additional context.
AI promises a solution to the global mental health crisis, yet recent tragedies, such as the case of the Belgian teen who took his own life after interacting with a chatbot, expose the profound risks of unchecked development. On the “RegulatingAI Podcast,” host Sanjay Puri spoke with Karin Stephan, co-founder of Earkick, to cut through the noise and discuss what truly safe, emotionally intelligent technology looks like.
Stephan’s journey from running a music school for 19 years to becoming a mental health tech entrepreneur was rooted in a simple but profound observation: her students stuck around because the school offered a safe space where they felt “heard and guided.” She realized being heard is a fundamental, unmet human need. Her pivot to AI was driven by the mission to provide that crucial “companion” and “listening ear” on a scale that human professionals cannot match.
READ: Beyond the code: Jeff McMillan on Morgan Stanley’s human-first approach to AI governance
Stephan argues that the danger lies not in the technology itself, but in the misaligned incentives of developers who fail to “obsess with observing” their human users. Teens are uniquely vulnerable: in their formative years, they are driven to explore and take risks, yet they lack the fully developed cognitive ability to process or foresee the consequences of their actions.
In Stephan’s view, AI is necessary, but solutions must be accessible and embedded into people’s lives, at work and at school, regardless of financial background. The technology should serve as a bridge during acute moments, offering a breathing exercise at 3 a.m. or a non-judgmental space to process bullying. However, it is not a fix for systemic failures such as abuse or a toxic workplace.
Stephan insists that regulation is essential, but warns that current approaches often prioritize liability over user safety. For example, abruptly blocking a user who is opening up about self-harm with a large warning banner, a common liability shield, only frustrates and isolates them at their most vulnerable moment. Instead, regulators must demand nuanced, seamless escalation to human support.
READ: ‘Regulation is survival’: Senator Scott Wiener on governing AI innovation
Furthermore, she argues that privacy is technically possible and must be built in from the start, challenging the common practice of collecting user data for marketing purposes.
For AI to truly help, it must go beyond reducing complex feelings to basic labels like “sad” or “anxious.” The technology must help the user become more nuanced in their emotional literacy.
Finally, Stephan highlights what AI can do better than humans in mental health: memory. An AI with deep, persistent memory can connect the dots in a user’s story over time, allowing for a much more individualized and effective response.
The ultimate goal, she concludes, is not choosing between AI and a human, but using the power of AI to augment and amplify human support.