Editor’s note: This article is based on insights from a podcast series. The views expressed in the podcast reflect the speakers’ perspectives and do not necessarily represent those of this publication. Readers are encouraged to explore the full podcast for additional context.
What are the implications when artificial intelligence goes beyond organizing knowledge and actually begins to reshape it?
In an insightful episode of the “Regulating AI” podcast, host Sanjay Puri sits down with veteran journalist and global media leader Raju Narisetti to discuss the real puzzle of regulating AI in the modern era: whether AI will help strengthen open knowledge or silently undermine it.
Narisetti doesn’t mince words. “We are treating AI-made information like it’s free, but the bill will come due in trust.”
The most perilous part, he says, is not the showy hallucinations. It’s something more insidious: people stop fact-checking, institutions stop spending on fact-checking, and confidence starts to trump facts. Before we know it, we’re living in a world where plausibility triumphs over proof.
AI lowers the cost of creating information. But it also lowers the cost of creating misinformation. That double-edged sword defines this moment.
READ: Can the Global South lead in AI? Frederic Werner on addressing AI skills gap (April 23, 2026)
But Narisetti is not entirely pessimistic. The solution to the problem of information decay can also scale: better systems of provenance, better incentives for quality, and a cultural shift that encourages us to “show our work” again. AI, he says, can help restore trust, if we choose to make trust the product, not the byproduct.
One of the most interesting parts of the conversation concerns language equity. Of the roughly 7,000 languages spoken around the world, just 10 account for 82% of the internet’s content. That skew is what AI inherits when it is trained predominantly on majority-language data.
When a language is not represented on the internet, it is effectively invisible to AI.
According to Narisetti, multilingual design cannot be an afterthought; it has to be a foundation. Supporting knowledge ecosystems like Wikipedia in hundreds of other languages is not charity, it is infrastructure. If the AI industry benefits from open knowledge, it has to give back to keep that knowledge strong.
We risk depleting the commons without replenishing it.
Drawing on his experience advising global organizations, Narisetti points to the gap between AI hype and AI reality. Executives talk about models, demos, and pilots. But the real benefit lies in something much less exciting: workflow redesign, data cleansing, governance guardrails, and change management.
“The model is the easy part, the operating model is the hard part. AI is mostly a business transformation problem, not necessarily a technology problem.”
AI is more than a technology upgrade; it is a business transformation challenge. Businesses that treat AI as a plug-in tool end up disappointed. Businesses that redesign decisions, incentives, and people’s roles are seeing the impact.
Narisetti is clear that emerging economies cannot remain just data sources and customer bases.
“It’s not glamorous, but in the AI era, trust is infrastructure and infrastructure is what makes progress durable. The Global South can’t just be training data and customers, it has to be a co-creator.”
READ: ‘You are only limited by your imagination’: Rahul Patni on AI governance framework (April 21, 2026)
India, he says, has a special opportunity. With its scale and its constraint-driven creativity, it can show how to build multilingual, affordable, and practical AI. The aim is not to export AI. It is to develop contextual AI.
Shared compute resources, datasets governed by local rules, and AI literacy are the keys to making inclusion substantive, not just a slogan.
If Narisetti had two minutes with world leaders, his counsel would be this: “I would say, please stop treating AI as a trophy and start treating it as critical infrastructure.”
This means building systems that can say, “I don’t know.” This means building for high-risk edge cases first. This means aligning business incentives so engagement isn’t the price of truth.
By 2030, commodity content will be cheap. What will be expensive is “truth with receipts.”
The future of AI, as Narisetti describes it, will not be determined by the size of the models. It will be determined by whether we can build systems where errors can be traced, corrections can be seen, and trust can last.
Trust is not optional in the age of AI. It’s infrastructure.

