Will California pass an AI law that is dividing Silicon Valley?

Governor Gavin Newsom has until September 30th to decide whether to sign legislation whose reach would extend far beyond the state

Governor of California Gavin Newsom: will he sign into law a Bill to regulate artificial intelligence? Photograph: Wang Zhao/AFP via Getty

California’s push to regulate artificial intelligence has riven Silicon Valley, as opponents warn the legal framework could undermine competition and the US’s position as the world leader in the technology.

Having waged a fierce battle to amend or water down the Bill as it passed through California’s legislature, executives at companies including OpenAI and Meta are waiting anxiously to see if Gavin Newsom, the state’s Democratic governor, will sign it into law. He has until September 30th to decide.

California is the heart of the burgeoning AI industry, and with no federal law to regulate the technology across the US – let alone a uniform global standard – the ramifications would extend far beyond the state.

“The rest of the world is certainly paying close attention to what is happening in California and in the US more broadly right now, and the outcome there will most likely have repercussions on other nations’ regulatory efforts,” says Yoshua Bengio, a professor at the University of Montreal and a “godfather” of AI.


Why does California want to regulate AI?

The rapid development of AI tools that can generate humanlike responses to questions has magnified perceived risks around the technology, ranging from legal disputes such as copyright infringement to misinformation and a proliferation of deepfakes. Some even think it could pose a threat to humanity.

US president Joe Biden issued an executive order last year aiming to set national standards for AI safety, but Congress has not made any progress in passing national laws.

Liberal California has often jumped in to regulate on issues where the federal government has lagged behind. The latest example is the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, put forward by state senator Scott Wiener. Of the various Bills filed in different states, California’s is the most likely to have a real impact, because the state is at the centre of the technological boom and home to top companies including OpenAI, Anthropic, Meta and Google.

Bengio said: “The big AI companies which have been the most vocal on this issue are currently locked in their race for market share and profit maximisation, which can lead to cutting corners when it comes to safety, and that’s why we need some rules for those leading this race.”


What does the Bill say?

Wiener has said his Bill “requires only the largest AI developers to do what each and every one of them has repeatedly committed to do: perform basic safety testing on massively powerful AI models”.

The Bill would require developers building large models to assess whether they are “reasonably capable of causing or materially enabling a critical harm”, ranging from malicious use or theft to the creation of a biological weapon. Companies would then be expected to take reasonable safeguards against those identified risks.

Developers would have to build a “kill switch” into any new models over a certain size in case they are misused or go rogue. They would also be obliged to draft a safety report before training a new model and to be more transparent – they would have to “report each artificial intelligence safety incident” to the state’s attorney-general and undertake a third-party audit to ensure compliance every year.

It is directed at models that cost more than $100 million (€90 million) to train, roughly the amount required to train today’s top models. But that is a fast-moving target: Anthropic chief executive Dario Amodei has predicted the next group of cutting-edge models will cost $1 billion to train and $10 billion by 2026.

The Bill would apply to all companies doing business in California, regardless of where they are based, which would in effect cover every company currently capable of developing top AI models, Bengio said.

It would introduce civil penalties of up to 10 per cent of the cost of training a model for developers whose tools cause death, theft or harm to property. It would also create liabilities for companies that provide computing resources to train those models and for auditing firms, making them responsible for gathering and retaining detailed information about customers’ identities and intentions. Failure to do so could result in fines of up to $10 million.

Who is for the Bill and who is against it?

Wiener and his colleagues say there is strong public support for new AI guardrails. He has also won qualified support from leading AI start-up Anthropic and Elon Musk, as well as actors’ union SAG-AFTRA and two women’s groups. On Monday, 100 employees at top AI companies including OpenAI, xAI and Google DeepMind signed a letter calling on Newsom to sign the Bill.

“It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks,” they wrote.

Critics – including academics such as Stanford AI professor Fei-Fei Li, venture capital firm Andreessen Horowitz and start-up accelerator Y Combinator – argue the Bill would hobble early-stage companies and open-source developers who publicly share the code underlying their models.

Senate Bill 1047 would “slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere”, warned OpenAI chief strategy officer Jason Kwon in a letter to Wiener last month. He echoed one of the most common complaints: that the senator was meddling in an area that should be dealt with at the federal level.


Opponents also say it would stifle innovation by piling onerous requirements on to developers and making them accountable for the use of their AI models by bad actors. It legislates for risks that do not yet exist, they add.

Dario Gil, director of research at IBM, said: “Philosophically, anticipating the consequences of how people are going to use your code in software is a very difficult problem. How will people use it, how will you anticipate that somebody will do harm? It’s a great inhibitor. It’s a very slippery slope.”

Dan Hendrycks, director of the Center for AI Safety (CAIS), which played a critical role in formulating the Bill, said opponents “want governments to give them a blank cheque to build and deploy whatever technologies they want, regardless of risk or harm to society”.

Hendrycks, who is also an adviser to Musk’s xAI, has come under fire from critics who cast CAIS as a fringe outfit overly concerned about existential risks from AI. Opponents have also expressed concerns that CAIS lobbied for influence over a “Board of Frontier Models” that the Bill would create, staffed with nine directors drawn from industry and academia and tasked with updating regulations around AI models and ensuring compliance.

Wiener rejected those arguments as “a conspiracy theory”.

“The opposition tried to paint anyone supporting the Bill as ‘doomers’,” he said. “They said these were science fiction risks; that we were focused on The Terminator [film]. We’re not, we’re focused on very real risks like shutting down the electric grid, or the banking system, or creating a chemical or biological weapon.”


How have the Bill’s authors tried to address concerns?

Wiener said he and his team have spent the past 18 months engaging with “anyone that would meet with us” to discuss the Bill, including Li and partners at Andreessen and Y Combinator.

One of their concerns was that requiring a kill switch for open-source models would prevent other developers from modifying or building on them for fear they might be turned off at a moment’s notice. That could be fatal for young companies and academia, which are reliant on cheaper or free-to-access open-source models.

Wiener’s Bill has been amended to exclude open-source models that have been fine-tuned beyond a certain level by third parties; such models will also not be required to have a kill switch.

Some of the Bill’s original strictures have also been moderated, including narrowing the scope for civil penalties and limiting the number of models covered by the new rules.

Will the Bill become law?

SB 1047 easily passed the state’s legislature. Now Newsom has to decide whether to sign the Bill, allow it to become law without his signature or veto it. If he does veto, California’s legislature could override that with a two-thirds-majority vote. But, according to a spokesperson for Wiener, there is virtually no chance of that happening. The last time a California governor’s veto was overridden was in 1980.

The governor is in a tough spot, given the importance of the tech industry to his state. But letting AI grow unchecked could be even more problematic.

Wiener said: “I would love for this to be federal legislation: if Congress were to act in this space and pass a strong AI safety Bill I’d be happy to pack up and go home. But the sad reality is that while Congress has been very, very successful on healthcare, infrastructure and climate, it’s really struggled with technology regulation ... Until Congress acts, California has an obligation to lead because we are the heartland of the tech industry.” – Copyright The Financial Times Limited 2024