California Governor Gavin Newsom signed an executive order on Monday that requires firms seeking contracts with the state to provide safeguards against AI misuse, including the generation of illegal content, harmful bias and violations of civil rights.
Under guidance issued by the state, agencies will have to watermark images or videos generated through AI. Companies hoping to sign contracts with the state of California will also have to show they have policies to keep AI from distributing child sexual abuse material and violent pornography.
They will also have to show how their models avoid incorporating “harmful bias” and detail policies aimed at avoiding “unlawful discrimination, detention, and surveillance.” The order directs the state to come up with best practices for watermarking AI-generated or -manipulated images and videos.
READ: Tesla escapes California suspension by removing ‘autopilot’ term (February 18, 2026)
“California’s always been the birthplace of innovation,” Newsom wrote in a statement. “But we also understand the flip side: in the wrong hands, innovation can be misused in ways that put people at risk.
“California leads in AI, and we’re going to use every tool we have to ensure companies protect people’s rights, not exploit them or put them in harm’s way.”
The order comes despite a national AI policy framework issued by the White House in December 2025 that discouraged states from passing such regulations. “To win, United States AI companies must be free to innovate without cumbersome regulation,” the framework under Trump’s executive order reads. “But excessive state regulation thwarts this imperative.”
Trump’s order directed the Justice Department in December last year to establish an “AI Litigation Task Force” to challenge state AI regulations.
READ: Dancing humanoid robot goes rogue in California restaurant (March 20, 2026)
The California executive order also states that if the federal government labels a company a supply chain risk, California will conduct its own assessment and may allow the company to remain a contractor if the state does not find it to be a risk. The provision follows the White House’s designation of prominent AI company Anthropic as a “supply chain risk.”
Within 120 days, California’s Department of General Services and Department of Technology will submit recommendations for new AI‑related vendor certifications that would allow firms to attest to responsible AI governance and public‑safety protections. The order underscores California’s efforts to maintain an independent stance despite pressure from some Republican lawmakers to defer to federal authorities on AI law and regulation.
In February, California Attorney General Rob Bonta told Reuters his office is developing its in-house expertise through its “AI oversight, accountability and regulation program.”