Editor’s note: This article is based on insights from a podcast series. The views expressed in the podcast reflect the speakers’ perspectives and do not necessarily represent those of this publication. Readers are encouraged to explore the full podcast for additional context.
The pace at which artificial intelligence is developing is simply breathtaking. What started as traditional machine learning has quickly branched out into generative AI, autonomous agents, robotics, and even embodied and space-based AI systems. In an engaging conversation on the “Regulating AI” podcast, host Sanjay Puri interviewed Frederic Werner, Chief of Strategic Engagement at the International Telecommunication Union (ITU), to discuss what this rapid development means for the world, particularly the Global South.
Werner’s message was clear and striking: “AI is too important to leave to the experts.”
AI, as Werner said, is a moving target. In a few short years, the emphasis has moved from predictive models to generative AI and now to AI agents that can make autonomous decisions. Throw in robotics, brain-computer interfaces, and new uses across industries, and the picture becomes complex.
This is not a path that can be guided by any one government, corporation, or research community.
The ITU, through its AI for Good platform, has brought together governments, corporations, research institutions, UN agencies, and youth leaders. The aim is not innovation for its own sake but responsible innovation.
Because when AI develops at this pace, fragmentation is a danger. Collaboration is a necessity.
One of the key takeaways from the episode is the importance of the Global South. The AI debate is all too often reduced to a U.S.-China-Europe equation. However, Werner stressed that any hope of effective AI regulation and development must involve the emerging economies.
He cited the African mobile payments revolution as a case in point for how the Global South can leapfrog existing infrastructure with its own innovation. AI could follow suit, if countries are enabled to be creators, not just consumers.
However, access is not the same as empowerment. Even if millions of people have AI-enabled devices in their pockets, the question is: Are they using them to build businesses, address community needs, and create value?
Without skills, standards, and support, the promise of sovereign AI cannot be fully realized.
In terms of AI and the workforce, Werner promotes what he calls a two-brain approach.
The positive brain looks at opportunity: AI opens up new industries and allows people to create things that have never existed before. Rather than using AI to do more of the same thing faster, he encourages people to use it to create new possibilities.
The practical brain, on the other hand, looks at disruption. Early labor-market data indicates that new graduates face a tighter job market, and women’s roles in certain industries may be adversely affected, especially in developing countries.
If there is one thing Werner would like to tell world leaders, it is to close the AI skills gap. “By addressing the AI skills gap, you can make sure that AI skills education somehow becomes part of the curriculum when you’re teaching people, whether they’re young people or the elderly or everyone in between, so that they also have a good chance of using the device responsibly,” he said.
With the help of initiatives such as the AI Skills Coalition, there are now hundreds of courses being made available in various languages to ensure that AI knowledge is accessible to everyone.
However, this initiative should not be limited to educational institutions alone. AI literacy needs to be a lifelong process, extending from grade school to government institutions.
Why? Because without literacy, governance becomes brittle, and innovation is not equitable.
Finally, the discussion turned to open-source AI. Werner believes it is critical to democratization and the goal of sovereign AI, but it is not without risk. Openness and accessibility must be balanced with security and proper use.
Ultimately, Werner’s takeaway message was both optimistic and realistic: AI for good is possible but not inevitable. It will depend on who shows up, who is ready, and whether we choose to cooperate rather than compete.
He added, “And back to the question about the future of work and jobs, I think that’s also linked to what we’re going to teach people in the future… I think everyone is going to have to be a student of AI over time. Then addressing that AI skills issue is probably the most urgent thing for the global leaders to think about today.”
The future of AI is not just a technological issue. It is a collective one.