California AI bill advances despite strong opposition from Fei-Fei Li and others: six key questions and answers explain everything

2024-08-17



On August 16, Beijing time, the controversial California Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (hereinafter the California AI safety bill, SB 1047) finally passed the California State Assembly Appropriations Committee after its key provisions were significantly weakened.

California Senator Wiener's team told TechCrunch that the bill incorporated several amendments proposed by AI company Anthropic and other opponents, and was ultimately passed by the California Appropriations Committee with several key changes, an important step toward becoming law.

“We have accepted a set of very reasonable proposed amendments, and I believe we have addressed the core concerns expressed by Anthropic and many others in the industry,” Senator Wiener said in a statement. “These amendments build on the significant changes I previously made to SB 1047 to accommodate the unique needs of the open source community, which is a critical source of innovation.”

SB 1047 still aims to prevent large AI systems from causing mass casualties, or cybersecurity incidents with losses of more than $500 million, by holding developers accountable. However, the bill now gives the California government less power to hold AI labs to account. With federal AI legislation stuck in limbo, the bill has become an important step for the United States on AI regulation.

However, some in the AI industry, such as Fei-Fei Li and Yann LeCun, believe the bill will ultimately damage California's, and even the United States', leading position in AI. Hundreds of entrepreneurs signed a letter of opposition drafted by the incubator Y Combinator (YC), more than 100 California academics published articles opposing it, and venture capital firm a16z set up a website listing the bill's “six sins.” Many AI bills are under discussion across the United States, but California's frontier AI model safety bill has become one of the most controversial.

For Chinese companies, the bill requires cloud service providers to collect customer information, IP addresses, payment methods, and other details, which makes it easier for regulators to trace who is training covered models. If regulation tightens further, this could become a compliance risk for Chinese AI companies that train on overseas computing power.
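
For illustration only, the sketch below shows the kind of customer record a cloud provider might keep to satisfy such traceability requirements; the field names and example values are hypothetical assumptions, since the bill itself only names broad categories such as identity, IP addresses, and payment method.

```python
# Hypothetical sketch of a customer record a cloud provider might retain for
# traceability. Field names and values are illustrative assumptions; the bill
# only describes categories such as identity, IP addresses, and payment method.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CustomerTraceRecord:
    customer_name: str                       # verified customer identity
    business_address: str                    # registration / contact details
    ip_addresses: List[str] = field(default_factory=list)  # IPs used to access compute
    payment_method: str = ""                 # e.g. wire transfer, card
    compute_purchased_flops: float = 0.0     # scale of compute acquired

record = CustomerTraceRecord(
    customer_name="Example AI Lab",
    business_address="123 Hypothetical Way, Anytown",
    ip_addresses=["203.0.113.7"],
    payment_method="wire transfer",
    compute_purchased_flops=2e25,
)
print(record)
```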

Now that the California AI bill has cleared this hurdle, what impact will it have on the AI industries in China and the United States? Titanium Media AGI answers six questions below to explain the bill and the story behind it.

1. What are the new constraints of the California AI Act? What is its main function?

California’s AI bill, SB 1047, seeks to prevent large AI models from being used to cause “serious harm” to humans.

The bill lists examples of “serious harm.” For example, bad actors could use AI models to create weapons that cause mass casualties, or direct AI models to plan cyberattacks causing more than $500 million in damages (by comparison, the CrowdStrike outage was estimated to have caused losses of more than $5 billion). The bill requires developers (that is, the companies that develop the models) to implement adequate safety protocols to prevent such outcomes.

SB 1047’s rules apply only to the world’s largest AI models: those that cost at least $100 million to train and use 10^26 floating-point operations (FLOPs) during training, a huge amount of computation, though OpenAI CEO Sam Altman has said GPT-4 cost roughly that much to train. These thresholds can be raised as needed.
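
To give a concrete feel for the 10^26 FLOPs threshold, here is a minimal sketch using the common rule of thumb that training compute is roughly 6 * parameters * training tokens; the rule of thumb and the parameter and token counts below are illustrative assumptions, not figures from the bill or from any disclosed model.

```python
# Rough check of whether a hypothetical training run crosses SB 1047's compute
# threshold, using the common approximation: FLOPs ~= 6 * parameters * tokens.
# The parameter and token counts are illustrative assumptions only.

THRESHOLD_FLOPS = 1e26  # compute threshold named in the bill

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * n_parameters * n_tokens

flops = estimated_training_flops(n_parameters=1.8e12, n_tokens=15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Above threshold" if flops >= THRESHOLD_FLOPS else "Below threshold")
```

Under this approximation, the hypothetical run above lands just over the line, consistent with the article's point that only the very largest next-generation models would be covered.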

Currently, few companies have developed public AI products large enough to meet these requirements, but tech giants such as OpenAI, Google, and Microsoft may soon do so. AI models (essentially large statistical engines that recognize and predict patterns in data) generally become more accurate as they scale, and many expect this trend to continue. Mark Zuckerberg recently said that the next generation of Meta’s Llama will require 10 times more computing power, which would make it subject to SB 1047.

Most notably, the bill no longer allows California’s attorney general to sue AI companies over negligent safety practices before a disaster occurs, a change suggested by Anthropic.

Instead, the California attorney general can seek injunctive relief to require a company to stop an action it deems dangerous, and can still sue AI developers if an AI model does lead to a catastrophic event.

2. Who will enforce the law? How?

The new version of the California AI bill no longer establishes the Frontier Model Division (FMD) as a standalone agency. However, it still creates the Frontier Model Committee, the core of the FMD, and places it within an existing government agency. The committee is in fact now larger, with 9 members instead of 5, and it will still set computational thresholds for covered models, issue safety guidance, and issue regulations for auditors.

The biggest adjustment in the revised bill is that the California Attorney General can no longer sue AI companies for neglecting safety issues before a catastrophic event occurs. The original bill provided that a company could be sued whenever auditors found it violating the law in its day-to-day operations. The change shifts the bill's regulatory focus to actual harm and will also reduce compliance pressure on companies.

Finally, the bill would provide whistleblower protections for employees if they attempt to disclose information about unsafe AI models to the California Attorney General.

Most US technology companies and AI startups operate in California and will be subject to the bill. Specifically, the bill focuses on “frontier AI models”: only developers whose model-training compute costs exceed $100 million face its regulatory requirements, and existing models are not covered. Going forward, Llama 4 and GPT-5 may be the main targets of regulation.

3. What are the penalties for violations? Fines start at $10 million, and models may be shut down or deleted.

The bill provides that the chief technology officer of a model developer must submit an annual certification to the FMD assessing the potential risks of its AI models, the effectiveness of its safety protocols, and how the company is complying with SB 1047. Similar to breach-notification rules, if an “AI safety incident” occurs, the developer must report it to the FMD within 72 hours of becoming aware of the incident.
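
As a minimal illustration of the 72-hour window described above, the sketch below computes the reporting deadline from the moment a developer becomes aware of an incident; the timestamp and the deadline calculation are assumptions for illustration, not procedures specified in the bill.

```python
# Illustrative calculation of the 72-hour incident-reporting deadline described
# above: the clock starts when the developer becomes aware of the incident.
# The awareness timestamp below is a hypothetical example.
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(became_aware_at: datetime) -> datetime:
    """Latest time by which the incident must be reported to the regulator."""
    return became_aware_at + REPORTING_WINDOW

became_aware = datetime(2026, 3, 1, 9, 30, tzinfo=timezone.utc)
print("Report due by:", reporting_deadline(became_aware).isoformat())
```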

If a developer fails to comply with any of these provisions, SB 1047 allows the California Attorney General to bring a civil action against the developer. For a model that cost $100 million to train, the fine could be as high as $10 million for a first violation and $30 million for subsequent violations, and these amounts rise as model training becomes more expensive.

In addition, if a model developer or computing cluster operator fails to comply with the above provisions and causes serious harm, and a court finds that a violation did in fact occur, the court may impose the following penalties:

1. For violations occurring in 2026 and thereafter, a first-time violator may be fined up to 10% of the average cost, on the cloud computing market, of training the in-scope model, and subsequent violations may be fined up to 30% of that cost (given the bill's $100 million threshold for in-scope models, a first-violation fine would be at least $10 million; see the illustrative calculation after this list).

2. The court may also issue an injunction, including but not limited to requiring modification, complete shutdown, or deletion of the in-scope model and all derivatives under its developer's control.

However, the remedies of modifying, shutting down, or deleting an in-scope model may only be applied when the model has caused death, serious personal injury, property damage, or a serious threat to public safety.
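
As a rough illustration of the fine structure above, the sketch below applies the 10% and 30% rates to a hypothetical training cost; the cost figure is an assumption for illustration, not data about any real model.

```python
# Illustrative upper-bound fine calculation based on the penalty structure
# described above: up to 10% of the cloud-compute cost of training the in-scope
# model for a first violation, up to 30% for subsequent violations.
# The training cost below is a hypothetical figure.

FIRST_VIOLATION_RATE = 0.10
SUBSEQUENT_VIOLATION_RATE = 0.30
COVERED_COST_THRESHOLD_USD = 100_000_000  # coverage threshold named in the bill

def maximum_fine(training_cost_usd: float, first_violation: bool) -> float:
    """Upper bound on the fine a court could impose for a covered model."""
    rate = FIRST_VIOLATION_RATE if first_violation else SUBSEQUENT_VIOLATION_RATE
    return training_cost_usd * rate

cost = 150_000_000  # hypothetical training cost, just above the threshold
print(f"First violation:      up to ${maximum_fine(cost, True):,.0f}")
print(f"Subsequent violation: up to ${maximum_fine(cost, False):,.0f}")
```

At exactly the $100 million coverage threshold, these rates reproduce the floors cited in the article: roughly $10 million for a first violation and $30 million for repeat violations.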

4. Who is opposing the bill? Why?

Despite strong opposition from members of the U.S. Congress, prominent AI researchers, large technology companies, and venture capitalists, the bill has so far moved through the California Legislature with relative ease. The amendments are likely to appease opponents of SB 1047 and present Governor Newsom with a less controversial bill that he can sign into law without losing the support of the AI industry.

Today, more and more technology professionals are opposing the California AI bill.

Anthropic said it is reviewing changes to SB 1047 before making a decision. Not all of the company's proposed amendments were adopted by Senator Wiener.

Well-known venture capital firm a16z listed the “six sins” of the California AI safety bill on its official website:

  1. The bill will have a chilling effect on AI investment and development in California.

  2. The bill penalizes developers based on vaguely defined outcomes for which no clear compliance test exists.

  3. The bill’s vague definitions, coupled with strict legal liabilities, create enormous uncertainty and economic risks for AI developers and business owners.

  4. The bill could force AI research underground and inadvertently make AI systems less secure.

  5. The bill creates a systemic disadvantage for open source and startup developers, who are at the heart of California’s innovation and small business.

  6. The bill suppresses AI research and innovation in the United States, providing an opportunity for countries such as China to surpass the United States in AI.

The founders of a16z (Andreessen Horowitz) argue that although the California AI bill is well-intentioned, its misguided measures could weaken the U.S. technology industry just as the future of technology stands at a critical crossroads, and that the United States needs leaders who recognize that now is a critical moment for smart, unified action on AI regulation.

Stanford University professor Fei-Fei Li wrote an article on the Fortune magazine website explaining her opposition in detail, arguing that the bill has four main problems: it overly punishes developers and could therefore stifle innovation; its “kill switch” would constrain open-source development and damage the open-source community; it would weaken AI research in academia and the public sector and could make it harder for academia to obtain funding; and it does not address the real potential harms of AI progress, such as bias or deepfakes.

In addition, hundreds of startups backed by the YC incubator jointly argued that the bill could harm the industry and the developer ecosystem in four ways:

  1. The bill should punish those who abuse the tools, not the developers. Developers often cannot predict every possible application of a model, and the perjury provision means developers could face prison over such misjudgments.

  2. Regulatory thresholds cannot fully capture the dynamics of technological development. Non-California companies will be able to develop AI technology more freely, which may affect innovation in California.

  3. A “Kill Switch” could prohibit the development of open source AI, inhibiting the collaboration and transparency of open source.

  4. The wording of the bill is rather vague and is likely to be interpreted arbitrarily by judges.

Yann LeCun, Meta's chief AI scientist and winner of the 2018 Turing Award, also opposes the bill. He worries that if the risks of in-scope models cannot be accurately assessed, the bill's joint-liability provisions could leave open-source platforms bearing responsibility.

Andrew Ng, a visiting professor in the Departments of Computer Science and Electrical Engineering at Stanford University, wrote that California's SB 1047 would stifle the development of open-source large models and that the bill should regulate AI applications rather than the large models themselves.

He believes the enormous harm that California's proposed SB 1047 would do to open-source large models is shocking. In his view, the bill makes a serious and fundamental mistake: what should be regulated are the generative AI products built on top of large models, not the open-source large models themselves.

Ng also notes that SB 1047 requires developers to protect open-source models from abuse, modification, and the development of illegal derivative AI products, yet how developers are supposed to do this, and how these behaviors are even defined, remains very vague, with no detailed rules.

Ng therefore strongly urged people to resist SB 1047, arguing that if it is actually passed it will have a devastating impact on open-source large-model innovation, and California will lose its momentum in AI.

5. Who supports the bill? Why?

In contrast to the opposing voices, there are currently some technology professionals who support the California AI Bill.

California Governor Newsom has not yet publicly commented on SB 1047, but he has previously spoken of his commitment to AI innovation in California.

At the same time, Turing Award winner and “Godfather of AI” Geoffrey Hinton, Turing Award winner Yoshua Bengio, Harvard Law School professor Lawrence Lessig, and Stuart Russell, professor at the University of California, Berkeley and author of the popular AI textbook “Artificial Intelligence: A Modern Approach,” jointly wrote to the California Legislature to express their “strong support” for the California AI safety bill.

The four scholars wrote that they are deeply concerned about the risks of AI and that California's AI safety bill is the minimum needed to regulate the technology effectively: “The bill has no licensing system and does not require companies to obtain permission from government agencies before training or deploying models. It relies on companies to self-assess risks and does not impose strict liability on them even in the event of a disaster. Relative to the risks we face, this is very light-touch legislation. Current law regulates sandwich shops more strictly than it regulates AI companies. It would be a historic mistake to strip out the bill's basic measures, a mistake that will become even more obvious within a year, when the next generation of more powerful AI systems is released.”

“Forty years ago, when I was training the first versions of the AI algorithms behind tools like ChatGPT, no one, myself included, could have predicted that AI would make such great strides. Powerful AI systems hold incredible promise, but the risks are also very real and should be taken very seriously. SB 1047 takes a very sensible approach to balancing these concerns. I remain passionate about AI’s potential to save lives by improving science and medicine, but we must have really strong legislation to address the risks. California is a great place to start for this technology because it’s where it took off,” Hinton said.

The four scholars also emphasized in the letter that whether these risks are unfounded or real, relevant parties must take responsibility for mitigating them. As "experts who know these systems best," they believe that these risks are possible and significant enough that we need to conduct safety testing and common-sense precautions.

California Sen. Scott Wiener, who authored the bill, said SB 1047 is designed to learn from past failures of social media and data privacy policies and protect citizens’ safety before it’s too late.

“Our attitude toward technology is to wait for something bad to happen and then do nothing about it,” Wiener said. “We don’t want to wait for something bad to happen. We want to get ahead of it.”

Supporters of the bill have also responded point by point to the main criticisms:

  • Developers will not go to jail for failing to predict model risks. First, startup developers and academics do not need to worry, because the bill does not apply to them. Second, the perjury clause in the bill takes effect only when a developer “intentionally” makes a false statement; an unintentional misjudgment of a model's capabilities will not trigger it (and the perjury clause has in any case been deleted in the latest amendments).

  • The bill does not create new liability. Under existing law, if a model causes harm, both the company developing the model and individual developers can already be sued, that exposure applies to models of every capability level, and any injured person can sue. California's new bill not only narrows the scope of regulation but also limits the right to sue to two entities: the California Attorney General and the Labor Commissioner.

  • The bill will not stifle innovation in California. It applies to all companies doing business in California, so even companies that move their headquarters out of the state must still comply (Zhidongxi note: California is the world's fifth-largest economy by GDP, with a complete technology ecosystem, and it is difficult for technology companies to decouple from it). When California passed its data privacy and environmental protection laws, many people claimed they would hinder innovation, yet California still leads in innovation.

  • Kill switches and security assessment requirements will not hinder the development of open source AI. The bill has now been amended to strengthen the protection of open source AI. The bill's requirements for emergency shutdown of models only apply to models within the control of developers, not uncontrolled open source models. The bill also establishes a new advisory committee to advocate and support safe and reliable open source AI development.

Under the bill, even a company that trained a $100 million model in Texas or France is covered by SB 1047 as long as it does business in California. Wiener said Congress has “barely done any legislation on technology in the last 25 years,” so he thinks California should set a precedent here.

Dan Hendrycks, director of the Center for AI Safety, said: “This bill is in the long-term interests of California and the U.S. industry, as major safety incidents may be the biggest obstacle to further development.”

Several former OpenAI employees also support the bill, including Daniel Kokotajlo, who voluntarily gave up his OpenAI options in exchange for the right to freely criticize the company. He believes the stagnation in AI progress that the bill's critics predict is unlikely to happen.

Simon Last, co-founder of Notion, an AI unicorn valued at tens of billions of dollars, wrote in support of the bill, arguing that with federal AI legislation difficult to pass, California, as a technology center for the United States and the world, bears an important responsibility. He believes that regulating foundation models will not only make them safer but will also help AI startups that build products on top of those models, reducing the burden on small and medium-sized companies.

“The goal of SB 1047 was — and always has been — to improve AI safety while still allowing innovation across the ecosystem,” said Nathan Calvin, senior policy advisor at the Center for AI Safety Action Fund. “The new amendments will support that goal.”

6. What will happen next?

SB 1047 will now go to the California State Assembly for a final vote. If the bill passes the Assembly, it will need to go back to the California State Senate for a vote due to the latest amendments. If the bill passes both, it will go to the desk of California Governor Gavin Newsom, who will make a final decision on whether to sign the bill by the end of August.

Wiener said he has not spoken to Newsom about the bill and does not know where the governor stands.