Sanjay Puri is a business leader and entrepreneur with decades of experience, recognized for his pivotal work at the intersection of AI regulation and U.S.-India relations.
As the founder of Autonebula, an AI-based mobility incubator, and Chairman of the U.S.-India Political Action Committee (USINPAC), Puri has played a key role in shaping policy and strengthening ties between the two nations. His leadership at the Alliance for United States India Business (AUSIB) has facilitated over $300 million in trade transactions, further deepening economic connections.
Puri’s contributions have earned him a place in the U.S. Congressional Record, highlighting his advocacy and influence. With an MBA from George Washington University, his diverse achievements range from securing billion-dollar government contracts to engaging in meaningful philanthropy.
As a speaker, he has shared his expertise on global stages, including Harvard and Wharton. His insights can also be heard on his podcasts – Regulating AI, Indianness, and Chief AI Officer (CAIO) – where he leads critical discussions on culture and technology.
In an exclusive interview with The American Bazaar, Puri shares his expertise on AI regulation and dives into policymakers’ perspectives, global challenges, big tech, data centers, hype cycles, and more. The interview has been edited for clarity.
The American Bazaar: What are some key reasons for the regulation of AI, and why do you think it is essential to have some form of regulation when it comes to AI?
So firstly, this is probably the most transformative technology of our generation, maybe even the future generation, and the impact that it is having and will have on areas like healthcare, education, and national security is going to be immense. For any transformative technology, there is a need for safeguards. For example, a cancer drug goes through a process with the FDA [Food and Drug Administration], which ensures that it’s safe by making it go through clinical trials. Similarly, when you fly a plane, the FAA [Federal Aviation Administration] ensures that it is safe.
The idea is that for a transformative technology like AI, we need some safeguards — not to stop innovation, but to provide a consistent set of rules that AI developers can follow. This allows for responsible innovation without halting progress.
You have a podcast called Regulating AI. Could you share any learnings or discoveries you’ve made through the podcast, and what motivated you to start it?
The motivation was that while there was a lot of dialogue around innovation, technology, and machine learning, there wasn’t much discussion about the broader frameworks of policy, governance, and responsible AI. We wanted to make sure that we had a global discussion on these broader topics as companies race to implement AI.
It’s important to bring different people into this conversation, not just CEOs of large companies who have a vested interest. We also wanted to avoid the risk of AI becoming monopolized by a few big companies, as happened with social media. On the podcast, we bring in lawmakers, think tank experts, civil society leaders, and more to foster a diverse and global dialogue. This includes ensuring that AI respects regional and cultural differences, since much of the training data comes primarily from English-language and Western perspectives. The goal is to create a fair and responsible AI regulation framework.
You also have a nonprofit, Regulating AI, which is a 501(c)(3). Can you talk about the structure of the nonprofit and what it aims to accomplish?
Regulating AI is a nonprofit 501(c)(3) organization in the United States with a primary focus on education and awareness. Lawmakers have to deal with a multitude of issues, from international conflicts to border security, and now they’re expected to understand AI as well. Our role is to help educate them on the implications of AI. We do this through congressional briefings, hearings, policy papers, events, and webinars. Our mission is global, and we’ve conducted activities in the EU and have plans for other regions. The focus is to build awareness among decision-makers and the public.
Do you think there is enough awareness among lawmakers about AI and its potential dangers?
There is a growing awareness. For instance, in the U.S., there’s an AI caucus, and Senator Schumer has held listening sessions with key senators, many of whom have been on our podcast. Globally, this topic comes up frequently at events like the UN General Assembly, G7, G20, and Davos. AI is also seen as a national security issue. However, while awareness is rising, the depth of understanding varies. We also work with congressional staffers since expecting every lawmaker to become an AI expert is unrealistic. Educating their staff is part of how we address this knowledge gap.
How can education play a role in ensuring responsible AI development?
Education is fundamental to responsible AI development. Many people, including lawmakers and the general public, don’t fully understand AI.
You mentioned the U.S. and the EU have different approaches to regulation. How would you compare the two?
There’s an expression that the EU regulates while the U.S. innovates. The EU has already implemented the GDPR [General Data Protection Regulation] for privacy and recently passed the EU AI Act, which categorizes risks and applies restrictions accordingly. In contrast, the U.S. has had over 100 AI-related bills introduced in Congress, but none has passed. We have an executive order, an AI Bill of Rights blueprint, and a safety institute at NIST [National Institute of Standards and Technology], but no comprehensive AI legislation. The U.S. operates in a polarized political environment, which complicates the process. We’re unlikely to see a comprehensive AI bill; instead, we’ll probably see incremental regulations focused on specific areas like deepfakes.
Part of the challenge may also be the influence of big tech. Do you think that affects AI regulation?
Definitely. Many large tech companies have actually called for regulation, but some smaller companies worry that big tech’s push for regulation could be a way to create barriers for smaller players. The major AI companies are still based in the U.S., so there’s a concern about restricting their innovation, especially relative to countries that might not share our values. This national security concern is very real and adds to the complexity of regulation.
What does success look like for Regulating AI, both in the short term and long term?
For us, success is centered on education and awareness. In the U.S., if we can get issues like deepfakes, discrimination, and hiring bias on the legislative table, that would be a win. We’re not lobbying, but we aim to educate decision-makers on these core issues. With the next Congress likely to have many new members, we want to ensure they understand AI’s opportunities and challenges. Internationally, we also hope to create a global framework so that companies don’t face a fragmented regulatory landscape, which would stifle innovation.
Recently, we’ve seen a surge in venture capital investment in AI startups. What do you think are the potential implications of this trend?
AI needs significant investment due to its requirements for data, fast processing chips, and energy, all of which are resource-intensive. While there is a risk of irrational exuberance, like in any innovation cycle, the investments are fueling incredible opportunities. The innovations coming in fields like medicine, education, and mental health are remarkable. The ability to create personalized learning and healthcare solutions is just one example of the immense potential of AI. Although some startups will fail, those that succeed will have a transformative impact.
You mentioned mobility and autonomous vehicles are also part of the AI innovations. Could you elaborate on your accelerator, Autonebula, and its objectives within the AI landscape?
Autonebula focuses on AI applications in mobility, particularly for autonomous transportation. We support companies working on software and technologies that enable autonomy in vehicles, whether for cars, commercial trucks, or other vehicles. The goal is to transform how people and goods move from point A to point B. We’ve worked with around 25 companies, many of which have made significant progress. Mobility is changing dramatically due to AI, and we see a bright future for the applications these companies are developing.
READ: You can’t have smart cities without smart transportation: Sanjay Puri (January 19, 2016)
Do you think the need for AI regulation extends to areas like autonomous transportation?
Absolutely. Autonomous vehicles still need to adhere to traffic rules, and there must be clarity on accountability in case of accidents. The challenge with autonomy is not the technology but the regulatory patchwork across states and countries. Consistent regulation is essential, especially for commercial vehicles, where there’s a growing shortage of drivers. AI can help address this gap, but it needs to be regulated to ensure safety and compliance.
With the massive growth in data centers, especially in places like Loudoun County, do you think there’s a need for innovation in energy solutions for AI?
Yes, AI is highly energy-intensive, and with its growing usage, we need sustainable energy solutions. Data centers consume significant amounts of energy, especially for cooling systems. There’s an opportunity for innovation here, including exploring micro-nuclear options. Additionally, not all tasks need large language models; small language models tailored to specific needs can save energy. Being mindful of energy consumption in AI usage is crucial for sustainability.
What makes AI different from previous tech hype cycles, like the metaverse or crypto?
This isn’t a hype cycle; it’s a true transformation. AI will impact every industry, from healthcare to education to national security. In my lifetime, I believe AI will enable access to healthcare for all, and every child will have access to a personalized tutor. This technology has the potential to change the world in unprecedented ways, which is why regulation and a thoughtful approach are so important.
Considering the exponential growth and influence of AI, do you see any challenges related to cultural or language preservation?
Yes, absolutely. One of the significant risks with AI is that it could unintentionally favor Western, English-speaking cultures due to the data it’s trained on, which is primarily in English. This could lead to a loss of regional languages and cultural nuances, especially for languages with limited online presence. AI systems trained predominantly on Western data might not be sensitive to the histories and cultural specifics of other regions. It’s essential to ensure that AI systems respect and integrate diverse linguistic and cultural contexts. Some countries are already working on developing their own language models to maintain cultural sovereignty and ensure AI reflects their unique heritage.
Is there a risk that AI could become dominated by a few major players, similar to social media?
Yes, and that’s a significant concern. Just like with social media, where a few big companies dictate the landscape, there’s a real danger that AI could follow a similar path. This would concentrate control over such a transformative technology in the hands of a few companies, which could have far-reaching consequences. That’s why it’s essential to encourage a diverse and competitive AI ecosystem. Ideally, smaller companies would also have opportunities to innovate without being overshadowed by regulatory requirements tailored to fit the resources of large corporations. Regulatory frameworks need to be crafted carefully to avoid inadvertently stifling smaller players in the AI field.