SB 1047: AI Safety Regulation Gets Real, Part II
Taking a closer look at potential chilling effects
In the first post in this series, I described why the fundamental objections to California’s SB 1047 bill to regulate frontier AI model developers weren’t convincing. Drilling down further, there are two main criticisms of SB 1047’s regulatory mandates: it will damage innovation by the big frontier AI model developers it targets, and it will have unintended spillover effects on other AI startups and on open-weight models. Neither is compelling.
Regarding the first issue, SB 1047 does leave some uncertainty about what constitutes compliance with its requirement to follow “all covered guidance” in existing standards, frameworks from organizations such as the National Institute of Standards and Technology (NIST), and industry best practices. AI safety is an evolving area and an iterative process. Even when model developers are doing their best, there aren’t bright lines. That’s where the flexibility of a standard like “reasonable care,” which is what SB 1047 requires, comes into play. Model developers don’t have to guarantee safety or take every conceivable precaution; they must take the care one would expect of a reasonable organization in their situation facing identified risks of at least $500 million in harms.
While it’s possible to imagine scenarios where over-aggressive courts or regulators interpret the bill’s language too strictly, that risk must be weighed against the dangers of leaving frontier AI development without any safety legislation at all. On its face, SB 1047 doesn’t impose obligations beyond the capabilities of the major AI labs, which already spend billions of dollars annually on model development and have invested significantly in alignment.
And to the oft-made claim that SB 1047 will put American AI developers at a permanent disadvantage relative to their competitors in China: have you ever looked at the state of Chinese AI regulation? China already has three AI governance laws on the books, and its regulators already require all large language model developers to pass extensive testing before deployment. A decade ago, China strategically allowed its social media platforms to grow absent regulation as a means of catching up to the West. Today, though, the Chinese Communist Party has become more skeptical of unaccountable tech platforms. And generative AI’s potential to overwhelm the state’s controls on content and public opinion is deeply frightening to the Chinese leadership, as is the prospect of AI safety failures undermining “social stability.” Competition with China is a legitimate concern, but SB 1047 is more likely to be a model for Chinese regulators than an invitation to surpass the US.
Regarding spillover effects, the $100 million training threshold and the $10 million threshold for modifying open-weight models are designed to exclude smaller AI firms from all of the bill’s requirements. I’ve seen many comments opposing SB 1047 that ignore this entirely. As with everything else in the bill, one can quibble with the details, but $100 million is real money, even in today’s Silicon Valley.
The more serious concern is that SB 1047 would chill the development of open-source frontier models. There are those who see this as a good thing, because they consider open models too dangerous. I don’t. I think Meta, among others, makes strong arguments that providing access to model weights will be as valuable in the AI context as open source has been for other forms of software. It will limit the concentration of power in a few major AI labs, and it will advance both technological innovation and AI safety by broadening the base of developers. The fact that Meta isn’t doing this out of altruism doesn’t make them wrong. And Meta is far from the only major lab around the world pushing open-weight models. We should look to limit the AI safety risks of open models, but prohibiting them won’t stop the bad guys from gaining access.
SB 1047 requires frontier model developers to allow for a “full shutdown” in catastrophic cases. The original version of that language was incompatible with open models, because developers don’t control what happens when someone else takes the weights and runs the model themselves. The current bill limits the requirement to “all computers and storage devices within custody, control, or possession” of the developer. In other words, Meta will be responsible for safety best practices for a LLAMA 4 model that meets the compute and training-cost thresholds, but not for copies run by others. And those users of the open model will take on obligations of their own under the bill only if they spend $10 million to modify it, effectively creating a new model that may no longer be as safe. Those seem like reasonable conditions.
SB 1047, as it stands today, is very different from the misbegotten bill I envisioned when I first read about it. As the open-source language indicates, it is not simply a vehicle for billionaire-funded “AI doomers” promoting extreme concerns about existential AI risk. While they are out in support, it would be unfair to dismiss everyone endorsing the bill, including its author, as existential-risk scaremongers. SB 1047 is indeed an AI safety law, but it’s considerably more modest than the proposals from the doomer community for mandatory licensure of AI models, comprehensive regulation of AI data centers, or bans on open-source frontier models.
Nor is the bill the work of politicians putting mandates on paper that ignore technical realities. The support of prominent AI experts such as Yoshua Bengio and Geoffrey Hinton should belie that assumption, and the extensive interchange with both supporters and opponents in the AI community is evident in the bill’s evolution. One can take the view that any safety regulation of models is misguided, but SB 1047 doesn’t presume that models can be made perfectly safe, or that regulating models is a substitute for safety regulation of the systems built on them.
The debate about SB 1047 has now transcended the specifics of the bill. Many of the critics are thoughtful and independent voices who raise legitimate fears about excessive or misguided regulation. Yet thanks to the vehemence and widespread publicity of the opposition, SB 1047 may determine whether any significant AI safety mandates can be imposed on the major AI labs. If it fails, it’s hard to see another measure gaining sufficient support. That worries me more than the scary warnings about chilling effects.
The big AI labs are spending tens of billions of dollars a year and throwing everything they have at advancing the state of frontier models, because they see this as an existential technology race for their businesses. They won’t be spooked into abandoning those efforts by regulation, any more than the big social media platforms abandoned their privacy-invading business models in response to aggressive regulatory sanctions in Europe. For the most part, these companies are seriously committed to AI safety, even without regulatory obligations. History has shown, though, that sometimes voluntary efforts are insufficient.