Editor’s note: This article is based on insights from a podcast series. The views expressed in the podcast reflect the speakers’ perspectives and do not necessarily represent those of this publication. Readers are encouraged to explore the full podcast for additional context.
Artificial intelligence is moving faster than regulation, faster than institutions, and arguably faster than our collective ability to make sense of it. In a recent episode of the “Regulating AI” podcast, hosted by Sanjay Puri, guest Anne Bouverot, one of Europe’s most influential voices on AI governance and cybersecurity, offered a rare long-view perspective on where we’ve been, where we are, and what truly matters as AI reshapes society.
Bouverot completed her PhD in artificial intelligence in 1991, during what is now known as the second AI winter. Funding was scarce, expectations were low, and AI’s future looked uncertain. Like many researchers at the time, she chose not to remain in academia, pivoting instead to telecoms and later global technology policy.
Yet, as she reflects, that research was never wasted: “I learned a lot of things from that research… what it means to be a researcher, what it means to really work on something for years and try to go really, really deep into a topic, which gave me a lot of self-assurance that I would be able to do that on other topics and understand very complex things.” Careers, she reminds us, are rarely linear. Knowledge has a way of returning when the world catches up.
Having lived through both AI winters and multiple technology revolutions, Bouverot believes today’s AI moment is real but not unprecedented in its ambition. Similar claims about machines rivaling human intelligence have circulated since the 1950s.
What makes today different is not the idea of AI, but its capabilities and reach. Generative AI already changes how people learn, work, and interact. At the same time, she warns against confusing societal transformation with inevitability. Financial markets may still be in bubble territory, even if the underlying shift is genuine.
Compared to the internet or smartphones, generative AI has spread at extraordinary speed. Public-facing tools reached global adoption in just a few years, leaving little time for institutions, educators, and regulators to adjust.
This acceleration is what makes AI feel overwhelming. Yet, Bouverot notes, past revolutions felt just as destabilizing in their moment. We tend to forget yesterday’s disruptions once they become infrastructure: “And we tend to forget the past transformations, because we all live in the current moment, especially given social media and everything else.”
At the heart of Bouverot’s approach is “technology diplomacy.” AI governance, she argues, is fundamentally human. It requires understanding cultural context, political constraints, and stakeholder incentives alongside deep technical credibility.
Progress often starts with measurement rather than regulation: agreeing on how to define, assess, and evaluate systems before setting rigid rules. This balance between people and precision is what enables cooperation across borders.
With more than 200 countries and countless stakeholders, universal agreement is unrealistic. Instead of forcing consensus, Bouverot advocates for coalitions of the willing: voluntary, open initiatives like those launched at the Paris AI Action Summit.
Not everyone has to sign for progress to happen. Participation itself creates momentum.
AI sovereignty, in Bouverot’s view, is about choice, not isolation. The real risk is a future where countries must choose between a U.S.-centric or China-centric digital life.
Building regional AI ecosystems, investing locally, and regulating thoughtfully can preserve cultural identity while maintaining global cooperation.
Drawing on ethical discussions at the Vatican’s Minerva Dialogues, Bouverot emphasizes that the dignity of work cannot be reduced to productivity alone. Contribution, recognition, and economic security matter, especially as AI reshapes labor.
Her closing reflection is simple but powerful: AI governance isn’t about controlling the future. It’s about building human-centered systems resilient enough to adapt, just as people always have.