By Soumoshree Mukherjee
Editor’s note: This article is based on insights from a podcast series. The views expressed in the podcast reflect the speakers’ perspectives and do not necessarily represent those of this publication. Readers are encouraged to explore the full podcast for additional context.
In a recent episode of the “Regulating AI” podcast, Dr. Cari Miller sat down with host Sanjay Puri to unpack the complexities of the U.S. AI Action Plan, outlining its global implications for AI governance and the delicate balance between innovation and regulation.
Miller, head of AI Governance and Policy at The Center for Inclusive Change and executive director of the women-led nonprofit AI Procurement Lab, highlighted both the urgency of regulation in high-risk areas and the growing role of procurement in shaping responsible AI practices.
While talking about the first major pronouncements of the AI Action Plan, Miller stressed at the outset, “…fundamentally it has three pillars. The first pillar is about innovation. The second pillar is about infrastructure. And the third pillar is about international diplomacy.”
She noted that while the executive order emphasizes infrastructure development, particularly energy supply to support AI, the larger goal is positioning the U.S. as a global leader in AI through collaboration and secure distribution.
“Right out of the gate, the first bullets in Pillar One are about making foundation models have free speech,” Dr. Miller said, highlighting that this may raise risks when compared to stricter frameworks like the EU AI Act. Adopted in 2024, the EU AI Act imposes binding obligations on high-risk systems, including biometric surveillance and AI in employment and education, with fines of up to 7% of global turnover for noncompliance.
Miller explained that the current legislative framework in the United States is still in a developmental phase and lacks mandatory regulations. This creates risks where companies are encouraged to innovate without adequate safeguards. The absence of consistent rules also complicates the legal landscape, with existing anti-discrimination laws potentially undermined by new directives.
A central theme of the discussion was the tension between innovation and regulation. Proposals such as a moratorium on state-level AI legislation, Miller warned, may weaken states’ ability to address unique local challenges.
She argued that balancing innovation and regulation in AI procurement is crucial. “The more irreversible the harm is, the more it’s appropriate to regulate,” Miller said. “The more you can reverse it, the less you should have to regulate it.”
Drawing comparisons with the European Union’s stricter AI framework, Miller explained, “…the EU AI Act is a risk-aware piece of legislation. There are requirements and consequences for not following requirements. This is an action plan. It is not set in law yet. It’s not even a regulation. It is a voluntary kind of a notional [document].”
Procurement, Miller pointed out, offers a vital tool in this balancing act. From defining acceptable levels of AI errors like hallucinations to clarifying data ownership in interdepartmental agreements, procurement practices help embed governance from the ground up. She emphasized, “there’s a lot of security awareness in that. But certainly not for bias mitigation…”
Yet, current procurement frameworks often fall short. “They should not only list questions but also explain their significance and provide benchmarks for evaluating responses,” Miller observed, underscoring the importance of vendor assessments and third-party validation.
Diversity in procurement teams also matters, as engaging diverse voices ensures that procurement decisions consider cultural sensitivities and legal implications effectively. She said, “You shouldn’t be doing this alone. This is a team sport. So when you do an impact assessment… you should have more people at the table than just a procurement person… we need to be culturally sensitive because the system is gonna impact these kinds of people.”
Looking ahead, Miller highlighted both opportunities and risks in emerging areas such as AI agents and synthetic data. With many organizations experimenting without formal evaluation, she cautioned that governance and liability remain unclear. At the same time, synthetic data, especially in sensitive fields like healthcare, demands rigorous cleansing and bias checks to avoid harmful outcomes.
She concluded by saying that the effectiveness of these processes relies heavily on the people involved, and that organizations must ensure their teams are literate in the relevant data and processes if responsible AI is to succeed.
