news

Turing giants split again! California's AI restriction bill, backed by Hinton, passes its initial vote over objections from LeCun, Fei-Fei Li, and Andrew Ng

2024-08-17



New Intelligence Report

Editor: So Sleepy

【New Intelligence Introduction】Despite strong opposition from AI luminaries, technology giants, start-ups, and venture capitalists, California's AI restriction bill has successfully cleared its initial stage.

As we all know, science-fiction plots aside, AI has never killed anyone in the real world or launched a large-scale cyberattack.

Yet some U.S. lawmakers remain hopeful that adequate safeguards can be put in place before this dystopian future becomes a reality.


Just this week, California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) took another important step toward becoming law.


Simply put, SB 1047 seeks to prevent AI systems from causing mass casualties, or cybersecurity incidents with more than $500 million in losses, by holding developers accountable.

However, due to strong opposition from academia and industry, California lawmakers made some compromises - adding several amendments suggested by AI startup Anthropic and other opponents.

Compared to the original proposal, the current version reduces California's power to hold AI labs accountable.


Bill address: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047

But even so, (almost) no one likes SB 1047.

AI heavyweights such as Yann LeCun, Fei-Fei Li, and Andrew Ng have repeatedly voiced their displeasure with a bill that, in their view, "stifles open source AI and forces AI innovation to be suspended or even stopped."




Joint open letters are also pouring in, one after another:

More than 40 researchers from the University of California, University of Southern California, Stanford University and California Institute of Technology have strongly called for the bill not to be passed.



In addition, eight members of Congress representing various districts in California also urged the governor to veto the bill.




LeCun even recycled the earlier call for a moratorium on AI research, with a twist: please pause AI legislation for six months!


So the question is: why did we say "(almost) no one" above?

Because the other two Turing Award giants, Yoshua Bengio and Geoffrey Hinton, strongly support the bill's passage.

They even feel that the current terms are, if anything, a bit too lenient.


As senior AI technology and policy researchers, we write to express our strong support for California Senate Bill 1047. SB 1047 outlines the bare minimum for effective regulation of this technology. It does not impose a licensing regime or require companies to obtain permission from a government agency before training or deploying a model, it relies on companies' own assessments of risk, and it does not even hold companies strictly liable when a disaster occurs. Relative to the scale of the risks we face, it is remarkably light-touch legislation. It would be a historic mistake to strip out the bill's basic measures, a mistake that will only become more glaring within a year, as the next generation of even more powerful AI systems is released.

Now, SB 1047 has passed California's Appropriations Committee with relative ease, despite strong opposition from some U.S. members of Congress, prominent AI researchers, Big Tech companies, and venture capitalists.

Next, SB 1047 heads to the California State Assembly floor for a final vote. Because of the newly added amendments, the bill must then return to the State Senate for another vote once it passes.

If it clears both votes, SB 1047 will land on the governor's desk to be vetoed or signed into law.

Which models and companies will be constrained?

Under SB 1047, the companies and developers that build covered models are responsible for preventing those models from being used to cause "significant harm."

Examples include creating weapons of mass destruction or launching a cyberattack that causes more than $500 million in losses. (For reference, CrowdStrike's global Windows blue-screen incident caused more than $5 billion in losses.)

However, SB 1047's rules apply only to very large AI models: those costing at least $100 million to train and using more than 10^26 floating-point operations (roughly on par with GPT-4's reported training cost).

Meta's next-generation Llama 4 will reportedly require ten times as much compute, which would put it under SB 1047's jurisdiction as well.

For open-source models and their fine-tuned versions, the original developers remain responsible, unless a derivative costs more than three times as much as the original model.
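To make the arithmetic concrete, here is a minimal Python sketch of the coverage test and liability rule as described above. The dollar and FLOP thresholds and the three-times-cost rule are simply the figures quoted in this article; every name in the code (the functions, fields, and example models) is hypothetical and not taken from the bill's text.

```python
# Illustrative sketch only: thresholds are the ones described in this article;
# all identifiers and example models are hypothetical, not part of the bill.
from dataclasses import dataclass

COST_THRESHOLD_USD = 100_000_000   # >= $100M training cost, as stated above
FLOP_THRESHOLD = 1e26              # > 1e26 training FLOPs, as stated above

@dataclass
class Model:
    name: str
    training_cost_usd: float
    training_flops: float
    base: "Model | None" = None    # set for fine-tuned / derivative models

def is_covered(model: Model) -> bool:
    """A model falls under SB 1047 only if it crosses BOTH thresholds."""
    return (model.training_cost_usd >= COST_THRESHOLD_USD
            and model.training_flops > FLOP_THRESHOLD)

def responsible_party(model: Model) -> str:
    """Original rule as described: for derivatives, liability stays with the
    original developer unless the derivative costs more than 3x the original."""
    if model.base is not None and model.training_cost_usd <= 3 * model.base.training_cost_usd:
        return f"original developer of {model.base.name}"
    return f"developer of {model.name}"

# A hypothetical frontier model and a cheap fine-tune built on top of it.
frontier = Model("frontier-llm", training_cost_usd=1.5e8, training_flops=2e26)
finetune = Model("frontier-llm-ft", training_cost_usd=5e6, training_flops=1e24, base=frontier)

print(is_covered(frontier))         # True: crosses both thresholds
print(responsible_party(finetune))  # liability stays with the frontier model's developer
```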

With rules that sweep in open-source derivatives this broadly, it is little wonder that LeCun's reaction was so fierce.

Additionally, developers must create testing procedures that address the risks posed by their AI models, and must hire third-party auditors each year to assess their AI safety practices.

For AI products built on top of these models, safety protocols must be established to prevent abuse, including an "emergency stop" button that can shut the entire AI model down.

And so on.

What does SB 1047 do now?

As amended, SB 1047 no longer allows California's attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred (a change suggested by Anthropic).

Instead, the attorney general can seek injunctive relief, requiring a company to stop a practice it deems dangerous, and can still sue AI developers if their models do cause a catastrophic event.

SB 1047 also no longer creates the new government agency originally written into the bill, the Frontier Model Division (FMD).

However, the core of the FMD, the Board of Frontier Models, is still created and housed within the existing Government Operations Agency, and it has in fact grown from five members to nine. The board will still set compute thresholds for covered models, issue safety guidance, and issue regulations for auditors.

SB 1047 also has looser language when it comes to ensuring AI models are secure.

Now, developers need only exercise "reasonable care" to ensure that their AI models do not pose a significant risk of catastrophe, rather than provide the "reasonable assurance" required previously.

In addition, developers now only need to submit a public "statement" outlining their safety practices, and no longer have to certify their safety test results under penalty of perjury.

There is also separate protection for open-source fine-tuned models: anyone who spends less than $10 million fine-tuning a model is not considered a developer under the bill, and liability remains with the model's original, larger developer.
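In terms of the illustrative sketch above, this amendment effectively swaps the three-times-cost test for a flat spending threshold. Again, only the $10 million figure comes from the article; the function name is hypothetical:

```python
FINE_TUNE_EXEMPTION_USD = 10_000_000   # the $10M fine-tuning threshold described above

def responsible_party_amended(model: Model) -> str:
    """Amended rule as described: spending under $10M on fine-tuning does not make
    you a 'developer', so liability stays with the original developer."""
    if model.base is not None and model.training_cost_usd < FINE_TUNE_EXEMPTION_USD:
        return f"original developer of {model.base.name}"
    return f"developer of {model.name}"
```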

Fei-Fei Li has written a critique of the bill

The impact of SB 1047 on the AI community can be seen in a column published by "AI Godmother" Fei-Fei Li in Fortune magazine:

“If enacted into law, SB 1047 will harm America’s nascent AI ecosystem, especially those segments that are already at a disadvantage: the public sector, academia, and small tech companies. SB 1047 will unnecessarily penalize developers, stifle the open source community, and restrict academic research, while failing to address real problems.”


First, SB 1047 would overly penalize developers and stifle innovation.

When an AI model is misused, SB 1047 holds liable not only the party responsible for the misuse but also the model's original developer. It is impossible for every AI developer, especially budding coders and entrepreneurs, to predict every possible use of their model. SB 1047 will force developers to pull back and act defensively, precisely what we should be trying to avoid.

Second, SB 1047 will constrain open source development.

SB 1047 requires that all models above a certain threshold include an "emergency stop button," a mechanism that can shut the program down at any time. If developers worry that the programs they download and build on could be deleted, they will be far more hesitant to write code or collaborate. This emergency stop button would be devastating to the open source community, the source of countless innovations not only in AI but in fields ranging from GPS to MRI to the Internet itself.

Third, SB 1047 would undermine AI research in the public sector and academia.

Open source development matters in the private sector, but it is even more vital to academia, which cannot advance without collaboration and access to models and data. If researchers cannot access the appropriate models and data, how can we train the next generation of AI leaders? The emergency stop button would further weaken academia, already at a disadvantage in data and compute. SB 1047 would deal a fatal blow to academic AI just when we should be investing more in public-sector AI.

Most concerning of all, SB 1047 does not address the real potential harms of AI progress, such as bias and deepfakes. Instead, it sets an entirely arbitrary threshold: a certain amount of compute, or a model that costs $100 million to train. Far from providing a safeguard, this measure will simply restrict innovation in every field, including academia.

Fei-Fei Li stressed that she is not against AI governance; legislation is essential for the safe and effective development of AI. But AI policy must support open source development, put forward uniform and reasonable rules, and build consumer confidence.

Clearly, SB 1047 does not meet these standards.

References:

https://techcrunch.com/2024/08/15/california-weakens-bill-to-prevent-ai-disasters-before-final-vote-taking-advice-from-anthropic/

https://techcrunch.com/2024/08/15/california-ai-bill-sb-1047-aims-to-prevent-ai-disasters-but-silicon-valley-warns-it-will-cause-one/

https://fortune.com/2024/08/06/godmother-of-ai-says-californias-ai-bill-will-harm-us-ecosystem-tech-politics/