By Soumoshree Mukherjee
Editor’s note: This article is based on insights from a podcast series. The views expressed in the podcast reflect the speakers’ perspectives and do not necessarily represent those of this publication. Readers are encouraged to explore the full podcast for additional context.
As artificial intelligence races ahead of lawmaking, Nebraska Attorney General Mike Hilgers is sounding an urgent warning: innovation cannot come at the cost of children’s safety. Speaking on the “Regulating AI” podcast, Hilgers laid out how AI has fundamentally altered the landscape of child sexual abuse material (CSAM), demanding faster and smarter legal responses.
“You start to put all those things together, the desire to solve problems, the desire to protect children, protect Nebraskans, seeing this emerging technology and just saying, hey, we’ve got it. We really have to act and we have to act now. There’s almost a moral imperative to do so,” Hilgers said, explaining how his background as an entrepreneur, IP lawyer, legislator, and now chief law enforcement officer converged around this issue.
In a pre-AI world, CSAM was typically tied to an identifiable child. AI has shattered that framework. “In the AI world… the images themselves may not be directly tied but certainly are derived from, because these models are trained on the actual images of real children, but also the volume is so high,” Hilgers noted. That scale overwhelms law enforcement and exposes gaps in laws written for an earlier era.
To address this, Nebraska passed LB 383, a landmark law explicitly criminalizing AI-generated CSAM. Without such updates, Hilgers warned, prosecutions become harder and investigations risk stalling. “As I mentioned these models are certainly trained on child images, the laws need to catch up… unless we modify our criminal statute in order to cover CSAM, AI-generated CSAM, then without that, it might make it more difficult for us to even investigate these things,” he said. The bill has since become a model he urges other states and countries to examine.
Hilgers’ leadership extended beyond Nebraska. He spearheaded a bipartisan coalition of 54 U.S. attorneys general calling on Congress to create an expert commission on AI-enabled child exploitation. The reason such broad consensus was possible, he argued, is simple: “When you take the politics out of it, and you just look at it… let’s actually help protect kids. And, in applying some limited regulation to help really make sure that this particular use of AI is curtailed.”
Yet Hilgers is careful not to frame AI solely as a threat. He repeatedly emphasized restraint in regulation, warning that overburdening the regulatory framework could stifle innovation and leave the U.S. trailing China in the global AI race. Citing past missteps like the Children’s Online Privacy Protection Act, he cautioned that poorly designed rules can lock in harm for decades.
On the question of who should lead AI regulation, Hilgers surprised some listeners by backing federal leadership over a patchwork of state laws. Because commerce flows across state lines, he argued, federal regulation is the right vehicle, warning that one aggressive state could otherwise end up dictating national policy.
Despite his pro-innovation stance, Hilgers made clear that big tech should not operate unchecked. “I do not trust a total free-for-all for a lot of these big tech companies,” he said, pointing to consumer protection, privacy enforcement, and structural safeguards as essential tools.
Ultimately, Hilgers framed AI governance as both a risk and an opportunity. “I think it is a once in a multi-generation opportunity,” he said, one that demands urgency, humility, and bipartisan resolve.
As AI continues to evolve at breakneck speed, his approach offers a blueprint for protecting the vulnerable without slamming the brakes on the future.