By Soumoshree Mukherjee
Editor’s note: This article is based on insights from a podcast series. The views expressed in the podcast reflect the speakers’ perspectives and do not necessarily represent those of this publication. Readers are encouraged to explore the full podcast for additional context.
In a world racing to keep pace with artificial intelligence, Eric Loeb, executive vice president at Salesforce, is calling for something often overlooked in high-tech boardrooms: collaboration.
In a recent episode of the “Regulating AI” podcast, Loeb emphasized the urgent need for a multistakeholder approach to AI policy, one that bridges industry, government, civil society, and academia. According to him, it’s not just a nice-to-have; it’s critical to unlocking AI’s potential for societal benefit. He argues that building walls between sectors is no longer viable; instead, the focus must shift toward fostering trust.
Salesforce, a frontrunner in enterprise AI, has long been investing in responsible technology. Imagine a digital agent scheduling your meetings, analyzing data, or even negotiating deals, all without human intervention. Salesforce is leading the charge in the agentic AI revolution, where machines don’t just assist but act independently.
READ: Regulating AI: Sanjay Puri on policy, challenges, and ethical innovation (November 1, 2024)
But, Loeb warns, with great power comes great responsibility. “From a Salesforce standpoint, it is absolutely humans and agents working together. It is about humans at the centre and augmentation of the human work and the human task with an agent. And we do not foresee a replacement or an elimination,” he said.
Navigating the global AI landscape, however, is no easy feat. Loeb draws attention to the regulatory divergence between regions like the EU and the U.S. While Europe moves ahead with comprehensive frameworks like the EU AI Act, the U.S. remains a patchwork, putting pressure on companies to navigate a maze of rules.
For smaller firms especially, Loeb says, “…having that lattice of harmonization and interoperability, it’s a very good thing.” He advocates harmonized policies to foster interoperability and trust, ensuring AI systems meet stringent standards for accuracy, privacy, and security.
He emphasized: “We always have led with our values of trust, customer success, innovation, sustainability, equality, and we’re going to keep doing that as we work on these rapid changes over the upcoming years.”
Loeb urges governance that goes beyond compliance, pointing to Salesforce’s agentic AI as a case that raises urgent questions about workforce integration, ethical responsibility, and purposeful oversight in AI deployment. His advice? Don’t wait for a crisis. Define internal values now, or risk falling behind both ethically and competitively.
The conversation shifts from code to culture. With global interest in AI skyrocketing from the Gulf to South Asia, Loeb underscores that meaningful progress demands culturally and economically contextual AI models. He stresses the need for flexible policies and strong local engagement, as a uniform approach cannot address diverse regional needs.
What makes Salesforce’s approach stand out isn’t just its innovation; it’s the conscious choice to innovate responsibly. Whether it’s tailoring policy to sector-specific risks in healthcare or shaping dynamic regulatory frameworks through partnerships with bodies like NIST, Loeb believes adaptive governance is the only way forward.
READ: You can’t have smart cities without smart transportation: Sanjay Puri (January 19, 2016)
In this brave new world, Loeb is clear: AI isn’t just a technological revolution; it’s a societal one. And if we want AI to work for everyone, then everyone must have a seat at the table.
In the rush toward tomorrow, Salesforce is betting big on trust, transparency, and togetherness. And in the unpredictable terrain of artificial intelligence, that may be the most powerful innovation of all.
