President Donald Trump wants AI to have a little more breathing room when it comes to oversight. Trump said on Tuesday that the U.S. must have a single federal standard for regulating artificial intelligence, arguing the technology risked being over-regulated if each American state devised its own rules.
“Overregulation by the States is threatening to undermine this Growth Engine,” Trump said on social media.
“We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes,” he posted.
READ: RegulatingAI and Ai4 to host AI Policy Summit in Las Vegas (August 5, 2025)
The United States has taken a cautious, sector-focused approach to AI regulation to balance innovation with risk management. Federal agencies such as the FTC and NIST have issued guidelines to ensure AI systems are transparent, safe, and non-discriminatory. Several states, including California and Colorado, have also introduced laws affecting AI deployment, including transparency requirements and assessments for high-risk systems.
Unlike the EU, the United States has not yet enacted a comprehensive federal AI law comparable to the EU AI Act. While the White House OSTP has published guidance on ethical AI and risk assessment, these standards are not universally mandated across all sectors. Congress has held hearings on AI risks such as deepfakes, bias, and autonomous systems, but no major federal liability or safety legislation has been enacted as of 2025. Overall, the U.S. approach relies heavily on state-level regulations and public-private collaboration to promote AI safety and transparency.
The U.S. approach to AI emphasizes collaboration between federal agencies, private industry, and academic institutions, aiming to foster innovation while mitigating risks associated with advanced technologies. States such as California continue to pioneer regulations that require transparency in AI models, safety incident reporting, and whistleblower protections. However, the scope and timing of future federal legislation remain uncertain, with debates ongoing about whether mandatory federal standards or liability frameworks will be introduced.
In practice, the U.S. regulatory landscape is evolving gradually, balancing technological leadership with public safety and relying on voluntary standards, agency guidance, and state-level innovation to guide responsible AI deployment.
In his social media post on Tuesday, Trump urged lawmakers to put the federal standard in a separate bill or include it in the defense policy bill known as the National Defense Authorization Act, or NDAA.
As AI technologies become more integrated into everyday life, the importance of clear and consistent regulatory frameworks grows. Ensuring that AI systems operate safely, transparently, and without bias is critical for maintaining public trust, especially in high-stakes areas such as healthcare, finance, and national security.
READ: ‘Everybody is responsible’: Prudential Financial’s Gaia Bellone on AI integration in finance (July 30, 2025)
State-level innovations, such as mandatory reporting of AI-related safety incidents and whistleblower protections, offer practical examples of how oversight can be implemented effectively without stifling innovation.
At the same time, the ongoing discussion about a single federal AI standard highlights the tension between uniformity and flexibility. While a national framework could streamline compliance and eliminate conflicting state regulations, the specifics of such legislation and its potential impact on innovation remain uncertain.