California Governor Gavin Newsom Vetoes AI Safety Bill Amid Industry Concerns

California Governor Gavin Newsom vetoed the highly debated AI safety bill, SB 1047, on Sunday after facing opposition from the tech industry. The bill, which aimed to increase the regulation of artificial intelligence (AI) models, sparked significant pushback from tech leaders concerned about its impact on innovation.

In his veto statement, Newsom expressed concerns that the bill, which would have required AI companies to stress-test large models before releasing them, might push businesses out of California and stifle progress. He pointed out that California is home to 32 of the world's 50 leading AI companies, and that stringent regulations could hinder their operations.

“The bill applies stringent standards to even the most basic functions – so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology,” Newsom said.

What the AI Bill Proposed

The bill, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), primarily targeted companies developing generative AI systems capable of creating fully formed text, images, or audio in response to prompts. Companies building models that cost more than $100 million to develop would have been required to install “kill switches” in their AI models and publish plans detailing how they would mitigate extreme risks. Those risks include potential misuse of AI for malicious purposes, unintended consequences of AI decisions, and the potential for AI to perpetuate biases and discrimination.

SB 1047, authored by Democratic State Senator Scott Wiener of San Francisco, included several safeguards, such as requiring emergency shut-down capabilities and restricting models to their stated purpose. The bill also offered whistleblower protections for employees who disclose issues with AI systems.

Newsom’s Plan for AI Regulation

Despite the veto, Newsom acknowledged the importance of regulating AI. He noted that while SB 1047 may not be the right path, safety protocols must still be developed. He revealed plans to work with leading U.S. AI Safety Institute experts to develop new, more targeted regulations that balance safety with innovation. He instructed state agencies to expand their assessments of potential catastrophic risks linked to AI use, with the goal of creating a comprehensive regulatory framework for AI in California.

“We cannot afford to wait for a major catastrophe to occur before taking action to protect the public … Safety protocols must be adopted,” Newsom said.

Industry Response and Opposition

Newsom’s veto was met with praise from tech industry leaders. Venture capitalist Marc Andreessen and Meta’s chief AI scientist Yann LeCun supported the decision. Andreessen thanked Newsom for “siding with California dynamism, economic growth, and freedom to compute.”

Tesla CEO Elon Musk, by contrast, had previously tweeted tentative support for the bill, calling it a “tough call” but arguing that some form of AI regulation is ultimately necessary.

Not everyone in the tech community favored the bill’s provisions, however. The Mozilla Foundation, a non-profit supporting open-source technology, opposed SB 1047, arguing that it could harm the open-source community by consolidating power within a few large tech companies. The split in opinion underscored how divisive the issue remains within the industry.

Hollywood and Critics’ Concerns

The bill received support from a group of Hollywood artists who saw it as a crucial step toward regulating an industry with enormous potential for both positive and negative impacts. Actor Mark Ruffalo was among the advocates, stating that while the bill might not be perfect, it would lay the groundwork for responsible AI development.

On the other hand, critics of Newsom’s veto, like Daniel Colson of the AI Policy Institute, argued that the decision leaves the public exposed to the growing risks posed by AI without any significant regulatory oversight.

“Companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers,” Senator Wiener remarked following the veto, warning that the lack of oversight exposes the public to potential dangers posed by advanced AI systems.

The Debate Continues

While Newsom’s veto ends the current iteration of SB 1047, the debate over AI safety and regulation is far from over. As AI technology continues to advance, balancing innovation with public safety will remain a central issue in California and beyond.