By Soumoshree Mukherjee
Editor’s note: This article is based on insights from a podcast series. The views expressed in the podcast reflect the speakers’ perspectives and do not necessarily represent those of this publication. Readers are encouraged to explore the full podcast for additional context.
In a world where artificial intelligence evolves faster than the laws that govern it, JoAnn Stonier, a fellow at MasterCard, is charting a path toward responsible innovation. As generative AI reshapes industries with its problem-solving prowess, Stonier warns that static regulations are struggling to keep pace. Speaking on a recent episode of the “Regulating AI Podcast,” she highlighted a critical challenge: the technology is dynamic, but governance often isn’t, which makes it harder to ensure AI benefits society without amplifying existing inequities.
On the podcast, Stonier, MasterCard’s Chief Data Officer and AI Ethics Fellow, unpacked the deep complexities behind regulating generative AI, a rapidly evolving branch of artificial intelligence with the power to transform industries. “Generative AI is a real pivot point… Technologically, both for risk and innovation,” she said.
Stonier’s work focuses on creating adaptable frameworks that balance innovation with accountability. She points to data equity as a cornerstone of ethical AI. Biased datasets, often skewed by limited geographic or demographic scope, can perpetuate unfair outcomes. A striking example she shares is a tree-planting initiative that backfired on a village due to inadequate local data. “The data is gonna get better and better the more we’re aware,” she noted. “That’s how bias will get minimized in the algorithmic outcomes.” To counter this, she advocates for organizations to collaborate with data scientists and communities to ensure datasets reflect the people they serve.
The chasm between the pace of the technology and the pace of regulation is growing, and the consequences are tangible. Static regulatory frameworks are struggling to keep up with systems that learn and adapt in real time. The result? A global race to define what responsible AI governance should look like, and who gets to set the terms. For Stonier, the answer lies not in rigid, one-size-fits-all policies but in adaptable, hybrid frameworks that combine broad oversight with sector-specific nuance. We need contextual definitions of harm, she argued, and regulatory models that evolve as the technology does.
READ: ‘The future is now’: Congressman Gabe Amo on AI policy, equity and education (April 2, 2025)
A core concern for Stonier is data equity: the idea that AI systems should reflect the diverse realities of the people they affect. Too often, she explained, algorithms trained on skewed or incomplete datasets perpetuate bias and inequality. Returning to the tree-planting initiative that disrupted a village ecosystem because of poor data inputs, she noted that well-intentioned AI applications can cause harm simply because the data used doesn’t reflect the lived experiences of local communities.
To prevent such missteps, Stonier advocates for a toolkit approach that organizations can use to design equitable systems. This includes evaluating datasets for bias, ensuring representation from marginalized groups, and consulting with communities throughout the development cycle. Synthetic data, she acknowledged, can help fill information gaps, but it must be used with caution.
Transparency is another pillar of Stonier’s vision. If stakeholders can’t understand how AI systems work, she explains, trust erodes. MasterCard’s internal governance model, which integrates privacy, security, and ethical reviews, sets a high bar for accountability. By fostering clear communication about AI processes, organizations can build confidence among users and regulators alike. “Being transparent and explainable and accountable is really important for making sure your partners, your customers, and your employees and your regulators understand what you are doing,” she said.
READ: California lawmaker Ted Lieu discusses AI regulation and legislative efforts (April 7, 2025)
As AI reshapes job roles, particularly in privacy and data governance, demand is skyrocketing for professionals who understand both the technical and ethical dimensions of the technology. Stonier says the field needs leaders who can translate between data science and boardroom strategy, but right now, those people are hard to find.
Despite the challenges, Stonier remains optimistic. For her, AI’s potential to solve complex problems is undeniable, but so is its capacity for unintended consequences. By prioritizing data equity, transparency, and adaptive regulation, she envisions a future where AI serves all of society, not just a privileged few. The key, she believes, lies in collaboration between governments, corporations, academia, and the public.
Her call to action is clear: engage, collaborate, and build AI that reflects the world we want to live in.