Anthropic endorses California’s AI safety bill, SB 53


On Monday, Anthropic announced an official endorsement of SB 53, a California bill from state Senator Scott Wiener that would impose first-in-the-nation transparency requirements on the world’s largest AI model developers. Anthropic’s endorsement marks a rare and major win for SB 53, at a time when major tech groups like CTA and Chamber for Progress are lobbying against the bill.

“While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington,” said Anthropic in a blog post. “The question isn’t whether we need AI governance—it’s whether we’ll develop it thoughtfully today or reactively tomorrow. SB 53 offers a strong path toward the former.”

If passed, SB 53 would require frontier AI model developers like OpenAI, Anthropic, Google, and xAI to develop safety frameworks, as well as release public safety and security reports before deploying powerful AI models. The bill would also establish whistleblower protections for employees who come forward with safety concerns.

Senator Wiener’s bill specifically focuses on limiting AI models from contributing to “catastrophic risks,” which the bill defines as the death of at least 50 people or more than a billion dollars in damages. SB 53 focuses on the extreme end of AI risk, such as limiting AI models from being used to provide expert-level assistance in the creation of biological weapons or from being used in cyberattacks, rather than more near-term concerns like AI deepfakes or sycophancy.

California’s Senate approved a prior version of SB 53, but it still needs to hold a final vote on the bill before it can advance to the governor’s desk. Governor Gavin Newsom has stayed silent on the bill so far, although he vetoed Senator Wiener’s last AI safety bill, SB 1047.

Bills regulating frontier AI model developers have faced significant pushback from both Silicon Valley and the Trump administration, which both argue that such efforts could limit America’s innovation in the race against China. Investors like Andreessen Horowitz and Y Combinator led some of the pushback against SB 1047, and in recent months, the Trump administration has repeatedly threatened to block states from passing AI regulation altogether.

One of the most common arguments against AI safety bills is that states should leave the matter up to the federal government. Andreessen Horowitz’s head of AI policy, Matt Perault, and chief legal officer, Jai Ramaswamy, published a blog post last week arguing that many of today’s state AI bills risk violating the Constitution’s Commerce Clause, which limits state governments from passing laws that reach beyond their borders and impair interstate commerce.


However, Anthropic co-founder Jack Clark argues in a post on X that the tech industry will build powerful AI systems in the coming years and can’t wait for the federal government to act.

“We have long said we would prefer a federal standard,” said Clark. “But in the absence of that, this creates a solid blueprint for AI governance that cannot be ignored.”

OpenAI’s chief global affairs officer, Chris Lehane, sent a letter to Governor Newsom in August arguing that he should not pass any AI regulation that would push startups out of California, though the letter did not mention SB 53 by name.

OpenAI’s former head of policy research, Miles Brundage, said in a post on X that Lehane’s letter was “filled with misleading garbage about SB 53 and AI policy generally.” Notably, SB 53 aims to regulate only the world’s largest AI companies, specifically those that have generated gross revenue of more than $500 million.

Despite the criticism, policy experts say SB 53 takes a more modest approach than previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and former White House AI policy adviser, said in an August blog post that he believes SB 53 now has a good chance of becoming law. Ball, who criticized SB 1047, said SB 53’s drafters have “shown respect for technical reality,” as well as a “measure of legislative restraint.”

Senator Wiener previously said that SB 53 was heavily influenced by an expert policy panel Governor Newsom convened, co-led by leading Stanford researcher and World Labs co-founder Fei-Fei Li, to advise California on how to regulate AI.

Most AI labs already have some version of the internal safety policy that SB 53 requires. OpenAI, Google DeepMind, and Anthropic regularly publish safety reports for their models. However, these companies are not bound by anyone but themselves to do so, and they sometimes fall behind their self-imposed safety commitments. SB 53 aims to enshrine these requirements in state law, with financial repercussions if an AI lab fails to comply.

Earlier in September, California lawmakers amended SB 53 to remove a section of the bill that would have required AI model developers to be audited by third parties. Tech companies have fought these kinds of third-party audits in other AI policy battles before, arguing that they are overly burdensome.


