The California State Assembly has passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), Reuters reports. The bill is one of the first significant regulations of artificial intelligence in the US.
The bill, which has been a flashpoint for debate in Silicon Valley and beyond, would obligate AI companies operating in California to implement a number of precautions before they train a sophisticated foundation model. Those include making it possible to quickly and fully shut the model down, ensuring the model is protected against “unsafe post-training modifications,” and maintaining a testing procedure to evaluate whether a model or its derivatives are especially at risk of “causing or enabling a critical harm.”
Senator Scott Wiener, the bill’s main author, said SB 1047 is a highly reasonable bill that asks large AI labs to do what they’ve already committed to doing: test their large models for catastrophic safety risk. “We’ve worked hard all year, with open source advocates, Anthropic, and others, to refine and improve the bill. SB 1047 is well calibrated to what we know about foreseeable AI risks, and it deserves to be enacted.”
SB 1047 — our AI safety bill — just passed off the Assembly floor. I’m proud of the diverse coalition behind this bill — a coalition that deeply believes in both innovation & safety.

AI has so much promise to make the world a better place. It’s exciting.

Thank you, colleagues.

— Senator Scott Wiener (@Scott_Wiener) August 28, 2024
Critics of SB 1047 — including OpenAI and Anthropic, politicians Zoe Lofgren and Nancy Pelosi, and California’s Chamber of Commerce — have argued that it’s overly focused on catastrophic harms and could unduly harm small, open-source AI developers. The bill was amended in response, replacing potential criminal penalties with civil ones, narrowing enforcement powers granted to California’s attorney general, and adjusting requirements to join a “Board of Frontier Models” created by the bill.
After the State Senate votes on the amended bill — a vote that’s expected to pass — the AI safety bill will head to Governor Gavin Newsom, who will have until the end of September to decide its fate, according to The New York Times.
Anthropic declined to comment beyond pointing to a letter sent by Anthropic CEO Dario Amodei to Governor Newsom last week. OpenAI didn’t immediately respond to a request for comment.