Editor’s note: This article is based on insights from a podcast series. The views expressed in the podcast reflect the speakers’ perspectives and do not necessarily represent those of this publication. Readers are encouraged to explore the full podcast for additional context.
Can a computer truly teach a child to read, or could it slowly teach them to stop thinking?
That was the central question in a thought-provoking conversation between host Sanjay Puri and Thomas Davin, Global Innovation Director at UNICEF, on the “Regulating AI” podcast. Recorded at the India AI Impact Summit, the discussion tackled one of the defining challenges of our time: how to harness generative AI for children’s benefit without compromising their cognitive, emotional, and social development.
With more than a billion children growing up in an AI-enabled world, the stakes could not be higher.
Davin was clear: AI is not just a “shiny new toy.” In the right hands, it could be a powerful instrument of inclusion.
Today, nearly 70% of children worldwide struggle to summarize a simple text they’ve read in class. Meanwhile, 260 million children remain out of school entirely. AI-enabled tools, from adaptive tutoring systems to text-to-speech interfaces for visually impaired learners, could help bridge that gap.
For a child in rural India who never completed primary school, an AI tutor isn’t just software. It can be a pathway to literacy, skills, and eventually, dignified employment. For children with disabilities, personalized and accessible learning tools could finally level a playing field that has long been uneven.
In that sense, AI holds the promise of becoming education’s greatest equalizer.
But the conversation did not shy away from the risks.
Davin voiced a serious concern: the danger of raising a generation that becomes overly dependent on AI systems. Just as social media has altered attention spans and focus, excessive reliance on generative AI could reshape how children think, learn, and solve problems.
The goal, he argued, is not automation of thinking but augmentation of learning.
UNICEF advocates for a “human-in-the-loop” approach. Especially for younger children, “AI should be driven and managed by the human in the loop, a teacher, a facilitator, an adult, not by the children themselves… We need to look at AI literacy for children to understand very early on the power and the risks and the limits of that technology and what that represents.”
Technology can assist. It cannot replace human mentorship.
One of the most sensitive topics in the discussion was AI chatbot companions.
On one hand, they can serve as an early support system. With an estimated 87% of adolescents’ mental health challenges going undetected globally, conversational AI could act as a first line of engagement, identifying distress signals early.
On the other hand, there is a risk of deepening isolation. If children find “sufficient engagement” in digital companionship, the social fabric of childhood, built on friendships, play, and human connection, may erode.
The challenge is balance: using AI to expand support systems without replacing human relationships.
When the conversation turned to regulation, nuance became the central theme.
Some countries have considered sweeping restrictions or blanket bans on children’s access to digital platforms. But Davin argued that such one-size-fits-all approaches may do more harm than good.
In a country as diverse as India, the appropriate age and level of AI interaction may vary dramatically between urban and rural communities, between socioeconomic groups, and between educational contexts.
A policy designed to protect a child in a metropolitan household could unintentionally deprive a teenager in a remote village of their only access to skill development and job training.
Smart regulation must be context-aware, data-driven, and flexible.
As the episode concluded, Davin offered a powerful reminder that children should not merely be passive users of AI. He called for “… having children as not just users but shapers of that technology and actors of that governance.”
While AI development may feel unstoppable, its guardrails are still being built. Safety, inclusion, and equity can and must be designed into these systems from the start.
The conversation highlighted that the question is not whether AI will shape the next generation. It’s whether we will shape AI in a way that ensures no child, especially the most vulnerable, is left behind in the process.

