By Soumoshree Mukherjee
Editor’s note: This article is based on insights from a podcast series. The views expressed in the podcast reflect the speakers’ perspectives and do not necessarily represent those of this publication. Readers are encouraged to explore the full podcast for additional context.
In the latest episode of the “Regulating AI” podcast, host Sanjay Puri sat down with Rob T. Lee, Chief AI Officer and Chief of Research at the SANS Institute, widely regarded as the “godfather of digital forensics.” Lee, who also serves as the amicus technical advisor to the U.S. Foreign Intelligence Surveillance Court, offered a frank assessment of the White House’s newly released AI Action Plan, which focuses on innovation, infrastructure, and international diplomacy.
While Lee praised the plan for advancing AI governance, he warned that it does not fully address an urgent reality: America’s AI systems are already under attack. “…what ends up happening with adversaries and nation state attackers and organized crime? They’re not restricted by any regulation. They’re not restricted by any guardrails or safety protocols, privacy that are going to hold them back,” he explained. This imbalance, he said, could give attackers the speed and freedom to bypass traditional cyber defenses, which are ill-equipped to detect AI-specific threats like data poisoning and prompt injection.
Drawing on examples such as the “Volt Typhoon” operation, Lee described a shift from classic data theft to stealthy AI-enabled infiltrations designed to prepare for future disruptions. “Especially if they’re starting to build reasoning capabilities that will be able to detect the different capabilities or defenses in your network, subvert them, not within weeks, but within minutes or seconds,” he cautioned.
The conversation also touched on the tension between innovation speed and security. While some argue for strict controls before AI deployment, Lee believes over-regulation risks stifling progress. He said that it’s more dangerous to fall behind, urging leaders to adopt security frameworks from organizations like NIST, SANS, and OWASP, while still leaning forward on AI adoption.
Open source AI, another focal point of the discussion, offers both transparency and risk. Lee acknowledged that open models allow for greater auditing but warned that without rigorous vetting, they could hide malicious code.
A recurring theme was the acute skills gap in AI security. Lee stressed that AI expertise should not be siloed in technical teams but cultivated across all organizational roles, from executives to HR. “It is similar to being in the late nineties and trying to set internet,” he said. “Everyone from executives to the board, to the security personnel need to start working with AI daily to explore, to learn, and therefore come to their own realization of how best to use it in your own function.”
On federal adoption, Lee noted a shortage of AI-literate talent in government and recommended that performance reviews include AI literacy goals. He also called for stronger procurement guardrails, including security culture checks, data provenance verification, and mandatory red-teaming.
Despite highlighting vulnerabilities, Lee’s outlook remains optimistic. He believes that every technological leap brings risk but also unprecedented capability, likening AI’s potential to breakthroughs in aviation, nuclear power, and personal computing. His advice to policymakers: spend 30 minutes a day engaging with AI tools to build personal understanding, which in turn enables sharper, more informed decisions.
In an era when both innovation and security are racing against time, Lee’s message is clear: winning the AI race requires not just building faster, but building safer.

