On Monday, Anthropic announced its official endorsement of SB 53, a California bill from state Senator Scott Wiener that would impose first-in-the-nation transparency requirements on the world's largest AI model developers. Anthropic's endorsement marks a rare and significant win for SB 53, at a time when major tech groups such as the Consumer Technology Association (CTA) and Chamber of Progress are lobbying against the bill.
"While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won't wait for consensus in Washington," Anthropic said in a blog post. "The question isn't whether we need AI governance; it's whether we'll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former."
If passed, SB 53 would require frontier AI model developers such as OpenAI, Anthropic, Google, and xAI to develop safety frameworks, as well as release public safety and security reports before deploying powerful AI models. The bill would also establish whistleblower protections for employees who come forward with safety concerns.
Senator Wiener's bill specifically focuses on limiting AI models from contributing to "catastrophic risks," which the bill defines as the death of at least 50 people or more than a billion dollars in damages. SB 53 targets the extreme end of AI risk, limiting models from providing expert-level assistance in the creation of biological weapons or from being used in cyberattacks, rather than more near-term concerns such as deepfakes or sycophancy.
The California Senate approved an earlier version of SB 53, but still must hold a final vote on the bill before it can advance to the governor's desk. Governor Gavin Newsom has so far stayed silent on the bill, though he vetoed Senator Wiener's last AI safety bill, SB 1047.
Bills regulating frontier AI model developers have faced significant pushback from both Silicon Valley and the Trump administration, which argue that such efforts could limit American innovation in the race against China. Investors such as Andreessen Horowitz and Y Combinator led some of the pushback against SB 1047, and in recent months the Trump administration has repeatedly threatened to block states from passing AI regulation altogether.
One of the most common arguments against AI safety bills is that states should leave the matter to the federal government. Andreessen Horowitz's head of AI policy, Matt Perault, and its chief legal officer, Jai Ramaswamy, published a blog post last week arguing that many of today's state AI bills risk violating the Constitution's Commerce Clause, which limits state governments from passing laws that reach beyond their borders and burden interstate commerce.
However, Anthropic co-founder Jack Clark argued in a post on X that the tech industry will build powerful AI systems in the coming years and cannot wait for the federal government to act.
"We have long said we would prefer a federal standard," said Clark. "But in the absence of that, this creates a solid blueprint for AI governance that cannot be ignored."
OpenAI's chief global affairs officer, Chris Lehane, sent a letter to Governor Newsom in August arguing that he should not pass any AI regulation that would push startups out of California, though the letter did not mention SB 53 by name.
OpenAI's former head of policy research, Miles Brundage, said in a post on X that Lehane's letter was "filled with misleading garbage about SB 53 and AI policy generally." Notably, SB 53 aims to regulate only the world's largest AI companies, specifically those that have generated gross revenue of more than $500 million.
Despite the criticism, policy experts say SB 53 takes a more modest approach than previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and a former White House AI policy adviser, said in an August blog post that he believes SB 53 has a good chance of becoming law. Ball, who criticized SB 1047, said that SB 53's drafters have "shown respect for technical reality," as well as a "measure of legislative restraint."
Senator Wiener previously said that SB 53 was heavily influenced by an expert policy panel that Governor Newsom convened, co-led by leading Stanford researcher and World Labs co-founder Fei-Fei Li, to advise California on how to regulate AI.
Most AI labs already have some version of the internal safety policy that SB 53 would require. OpenAI, Google DeepMind, and Anthropic regularly publish safety reports for their models. However, these companies are bound by nothing but their own word, and they sometimes fall behind their self-imposed safety commitments. SB 53 aims to turn these requirements into state law, with financial repercussions if an AI lab fails to comply.
Earlier in September, California lawmakers amended SB 53 to remove a section of the bill that would have required AI model developers to be audited by third parties. Tech companies have previously fought these kinds of third-party audits in other AI policy battles, arguing they are overly burdensome.