Legislation would have required tech firms to test artificial intelligence models before release
California Governor Gavin Newsom has rejected a groundbreaking bill on artificial intelligence that aimed to implement the first safety measures for the industry in the US. The bill, known as California Senate Bill 1047, or SB 1047, was designed to mitigate potential risks associated with AI.
The proposed regulation would have required tech companies developing powerful AI models to conduct safety testing prior to public release and to publicly disclose the models’ safety protocols. This was intended to prevent manipulation of the models that could result in harm, such as hacking into strategically important infrastructure.
In a statement accompanying the veto on Sunday, the governor acknowledged the proposal’s “well-intentioned” nature, but argued that it inappropriately focused on the “most expensive and large-scale” AI models, while “smaller, specialized models” might pose greater risks. Newsom also contended that the bill failed to consider the context of an AI system’s deployment or whether it involved critical decision-making or the use of sensitive data.
“Instead, the bill applies stringent standards to even the most basic functions… I do not believe this is the best approach to protecting the public from real threats posed by the technology,” the governor declared. Newsom emphasized his agreement on the need for industry regulation, but called for more “informed” initiatives based on “empirical trajectory analysis of AI systems and capabilities.”
“Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself… Given the stakes – protecting against actual threats without unnecessarily thwarting the promise of this technology to advance the public good – we must get this right,” he concluded.
As California governor, Newsom is recognized as a significant player in the emerging AI regulation landscape. According to his office’s figures, the state is home to 32 of the world’s “50 leading AI companies.”
The bill’s author, state Senator Scott Wiener, deemed the veto “a setback” for those who “believe in oversight of massive corporations that are making critical decisions” impacting public safety. He vowed to continue working on the legislation.
The bill had elicited mixed responses from tech firms, researchers, and lawmakers. While some saw it as a stepping stone towards nationwide regulation of the industry, others argued that it could hinder AI development. Former US House Speaker Nancy Pelosi labeled the proposal “well-intentioned but ill informed.”
Meanwhile, a substantial number of employees at several leading AI firms, including OpenAI, Anthropic, and Google’s DeepMind, voiced their support for the bill, as it included whistleblower protections for those who expose risks in the AI models their companies are developing.