By Soumoshree Mukherjee
Editor’s note: This article is based on insights from a podcast series. The views expressed in the podcast reflect the speakers’ perspectives and do not necessarily represent those of this publication. Readers are encouraged to explore the full podcast for additional context.
On a recent episode of the “Regulating AI” podcast, host Sanjay Puri sits down with one of the central architects of global AI governance: Brando Benifei, the Italian member of the European Parliament and co-rapporteur of the EU AI Act. Calm, measured, and thoughtful, Benifei speaks with the quiet certainty of someone who has spent years navigating political negotiation and technological upheaval.
A member of Parliament for over a decade, Benifei explains how his earlier work in digital policy naturally evolved into overseeing the AI Act. “I’ve been already there for around 11 years… having been negotiating other legislation and working on issues related to the digital world,” he recalls.
When the EU decided it was time to regulate artificial intelligence, Benifei stepped in. By his second term, he found himself at the helm of an unprecedented regulatory project: one that required studying AI deployments worldwide and negotiating with 27 member states.
Widely regarded as the world’s first comprehensive AI regulation, the Act marks a turning point in how democratic institutions respond to rapidly evolving technologies. The AI Act, adopted in March 2024, is designed to balance innovation with citizen rights.
When the European Parliament adopted the landmark legislation, Benifei declared that Europe finally had the world’s first binding law on artificial intelligence: one designed to reduce risks, create opportunities, combat discrimination, and bring transparency.
For Benifei, the first major hurdle is public readiness. “I think that all over Europe, we still lack that kind of focus from the institutions. And the private sector can do as much, but we need more institutional commitment on that.” Some EU countries like France and the Baltic states are ahead, while others remain slow to adopt national training strategies. Without widespread awareness, he warns, innovation risks being uneven and misunderstood.
On implementation, Benifei points to the first major test: enforcing prohibited use cases. “The emotional recognition in workplaces and in study places, the unlimited use of biometric cameras and subliminal manipulative techniques through AI… one of the first challenges for the start of the implementation.” Equally important is building a competent enforcement structure, anchored by Europe’s new AI Office, to ensure that the bodies responsible for “the AI Act implementation in the member states are properly installed and are given the right means to act,” he stressed.
The conversation also tackles a recurring global question: does regulation slow innovation? Benifei argues the opposite. “Do we really want doctors to use AI systems that are not tested to put forward diagnosis and therapies for patients?” He stresses that trust is the foundation of adoption; without trust, citizens would not accept AI in healthcare, hiring, or creative industries.
He adds, “not every innovation is good. We do not think it’s good to have systems that can manipulate people in a subliminal way.”
Addressing concerns that the AI Act could disadvantage European companies, Benifei is clear: “the AI Act applies to all the AI developers, AI providers that want to put their products into the European market.” Regulatory sandboxes will assist startups, and foreign giants will be held to the same standards as domestic firms.
Ultimately, the Act aims not to hinder progress but to shape it. As Benifei notes, AI’s impact is more comparable to electricity than to the internet. And for that, he argues, Europe must lead with clarity, governance, and responsibility.