By Soumoshree Mukherjee
Editor’s note: This article is based on insights from a podcast series. The views expressed in the podcast reflect the speakers’ perspectives and do not necessarily represent those of this publication. Readers are encouraged to explore the full podcast for additional context.
In an era where artificial intelligence is evolving faster than regulation can catch up, Naresh Dulam, Senior Vice President of Software Engineering at JPMorgan Chase, brings a grounded yet forward-thinking voice to the conversation.
In a new episode of the “Regulating AI” podcast, Dulam emphasized that the future of ethical AI doesn’t rest on a single institution—it must be a shared responsibility.
“I feel the accountability is a shared responsibility of everybody. It’s not only the one,” Dulam stressed early in the conversation. He argued that accountability in AI is a collective duty spanning model developers, implementers, service providers, and governments, and that it cannot rest on regulation alone but requires proactive ethical engagement from every stakeholder.
For him, the analogy is clear: just as cities require thoughtful planning, AI development demands structured frameworks that prioritize the public interest. As AI expands rapidly across sectors, he argues, regulation must be approached like city planning: structured, anticipatory, and inclusive. Without clear frameworks, he warns, AI risks becoming a tool of unchecked harm, especially in high-stakes domains like finance, healthcare, and criminal justice.
As the discussion turned to explainability, Dulam described it as essential to building trust. AI systems often function as black boxes, making decisions that even experts find hard to interpret. “The explainability should go to the end user who are not aware of this technology or technology terms,” Dulam noted, pointing to the current gap in transparency. He believes explainability will allow users to understand decisions made by algorithms, particularly in sensitive areas like loan approvals and fraud detection.
Equally pressing is the issue of bias in AI. Since AI systems are trained on human-generated data, they often reflect and amplify existing societal prejudices. Dulam emphasized the importance of diverse data sets and human oversight in mitigating biased outcomes. He stressed that while bias cannot be eliminated entirely, its impact can be reduced through better design and rigorous accountability.
Dulam advocated for a tiered regulatory framework tailored to the risk level of AI applications, suggesting that startups could follow simpler compliance processes while high-risk systems would require stricter oversight. Such a structure, he believes, balances regulatory demands with innovation. He added that ethical responsibility should guide organizations beyond merely meeting legal obligations, encouraging a culture of self-regulation and proactive governance.
One standout idea was the sandbox model—controlled environments where innovators can test AI systems in real-world conditions with regulatory oversight. This, he believes, bridges the gap between experimentation and ethical boundaries.
Dulam highlighted open-source AI as a key driver of innovation. Its transparency and community-driven nature allow for rapid development, but he acknowledged challenges such as inconsistent documentation and data security. In either case, he said, regulation must evolve to support safety without stifling creativity.
Beyond governance, Dulam pointed to a major transformation in the workforce. As AI systems increasingly automate tasks, companies are hiring fewer people for roles once considered essential. His call to action: governments must invest in upskilling and reskilling programs so that workers affected by AI automation are prepared for an AI-driven future.
In a world where AI continues to outpace laws and norms, Dulam’s message is clear: AI governance must be a shared, transparent, and inclusive effort; only then can we ensure its benefits are justly and widely distributed.
