California State Senator Scott Wiener has introduced major amendments to his AI bill, SB 53. If signed into law, it would be the first law in the U.S. to require major AI companies to be more open about how they operate.
The changes are based on recommendations from a working group set up by Governor Gavin Newsom to build public trust by ensuring powerful AI systems are developed and used responsibly. In its final report, the Working Group did not endorse any specific legislation, but it did push for a “trust, but verify” approach that avoids slowing down innovation.
As amended, the bill would require big AI companies such as OpenAI, Google, Anthropic, and Meta to be more transparent. They would need to disclose how they keep their AI systems safe and secure. And if something serious happens, such as an AI system being used to help build weapons, a major hack, or a company losing control of a system, they must report it to California’s Attorney General within 15 days.
Until now, companies have made these kinds of disclosures only voluntarily. If the bill passes, they become legal obligations, making the rules clear and uniform for everyone in the industry.
Wiener has made clear that the law is aimed only at the biggest, most advanced AI companies, not smaller startups or open-source projects. The Attorney General can adjust the rules as the technology evolves. A company that breaks the law could face civil penalties, but the bill introduces no new rules about who is liable if AI causes harm.
To keep innovation going strong, the bill keeps CalCompute in place, a UC-backed cloud platform that gives startups and researchers free or low-cost access to powerful computing. “SB 53 retains provisions — called ‘CalCompute’ — that advance a bold industrial strategy to boost AI development and democratize access to the most advanced AI models and tools,” the press release stated.
It also includes whistleblower protections for employees inside AI labs who spot a serious risk and speak up.
“As AI continues its remarkable advancement, it’s critical that lawmakers work with our top AI minds to craft policies that support AI’s huge potential benefits while guarding against material risks,” said Senator Wiener. “Building on the Working Group Report’s recommendations, SB 53 strikes the right balance between boosting innovation and establishing guardrails to support trust, fairness, and accountability in the most remarkable new technology in years.”
The bill comes just after the U.S. Senate voted 99–1 to strip a proposed moratorium on state AI regulation from a federal budget bill, leaving states free to make their own AI rules rather than waiting for a single federal standard.
“California’s SB 53 is a thoughtful, well-structured example of state leadership. It provides a blueprint that other states can follow—and that could one day shape national policy. We cannot afford to wait. Responsible state-level legislation is our best immediate option for ensuring that AI is developed in ways that align with public safety, democratic values, and long-term human interests,” said Geoff Ralston, Founder of the Safe Artificial Intelligence Fund (SAIF).

