
California's AI safety bill has been greatly weakened! The AI community is in an uproar, Fei-Fei Li publicly condemns it, and Chinese AI companies are watching closely

2024-08-16



Zhidongxi
Author: Chen Junda
Editor: Panken

Zhidongxi reported on August 16 that the controversial California Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (hereinafter the California AI Safety Act) has just passed review by the California Assembly Appropriations Committee, after being significantly weakened. With federal-level regulatory legislation proving difficult to pass, this is an important step for the United States on AI regulation. However, some in the technology industry believe it will ultimately undermine California's and the United States' leading position in AI.

The biggest change in the amended bill: the California Attorney General can no longer sue AI companies for neglecting safety issues before a catastrophic incident occurs. The original bill allowed companies to be sued if auditors found violations in their day-to-day operations. This shifts the bill's regulatory focus to actual harm and reduces the compliance burden on companies.

Most American technology companies and AI startups operate in California and will be subject to the Act. Specifically, the bill targets "frontier AI models": only developers whose model-training compute costs exceed $100 million face regulatory requirements, and existing models are not covered. Going forward, Llama 4 and GPT-5 may be its main regulatory targets.

For Chinese companies that make only minor adjustments on top of open-source models, it is highly unlikely they will fall within the scope of regulation. However, the bill requires cloud service providers to collect customer information such as identity, IP address, and payment method, which makes it easier for regulators to trace activity. If the rules are tightened further, some Chinese AI companies that use overseas computing power for training could face regulatory risk.

Companies within the scope of regulation must take proactive measures to prevent model abuse and must be able to shut down their models in an emergency. They will also be required to submit a public statement outlining their safety practices and to undergo an "annual inspection": developers must hire an independent auditor each year to assess compliance.

Once a violation occurs, a company may face fines of $10 million to $30 million, amounts that will rise as model training costs increase.

The AI community has split into two camps and is engaged in fierce debate.

2018 Turing Award winners Geoffrey Hinton and Yoshua Bengio expressed support, calling the California AI Safety Act the "minimum requirement" for effective regulation of AI technology.

Hundreds of entrepreneurs signed an opposition letter organized by YC, and more than 100 California academics also wrote letters of opposition. Venture capital firm a16z set up a website listing the bill's "six sins". Stanford professor Fei-Fei Li argued in a post that the bill would hinder AI investment and development in California, significantly harm the open-source ecosystem, and fail to address the substantive risks posed by AI.

However, California State Senator Scott Wiener, the bill's lead author, pushed back against some of the criticism, saying that a16z and YC had deliberately distorted the facts. The bill, he argued, is an optimization within the existing legal framework that makes regulation more targeted; it will not send large numbers of developers to jail, nor will it harm innovation or open-source AI.

Zhidongxi has reviewed the available material and summarized the bill and the debate around it into the following five questions and answers:

1. What is the scope of application? Most existing models are excluded; Llama 4 and GPT-5 may fall within the scope of regulation

To achieve effective supervision, the bill originally planned to create a "Frontier Model Division (FMD)" to enforce the relevant rules. After today's revision, that division has become a nine-member frontier model committee inside California's Government Operations Agency, with representatives from the AI industry, the open-source community, and academia, appointed by the California governor and legislature. The committee will still set the compute thresholds for covered models, issue safety guidance, and publish regulations for auditors.

As one of the centers of AI development in the United States and the world, California has attracted a large number of AI companies: more than 30 of the 50 companies on Forbes' 2024 list of top AI companies operate in California, and under this bill they would all need to comply with its provisions. In practice, however, the bill's scope is sharply limited by model capability; for the next few years, perhaps only the top players in the large-model field will actually be regulated by it.


▲Partial screenshot of California AI Regulatory Bill (Source: California Senate official website)

The bill decides whether a model falls within the scope of regulation, i.e. whether it is a "covered model", based on the computing power and compute cost used to train it. The relevant provisions can be summarized in the following two points (a rough sketch follows the list):

1. Before January 1, 2027, a model trained with more than 10^26 integer or floating-point operations (costing over $100 million at average market compute prices), or a model obtained by fine-tuning a covered model with more than 3×10^25 operations of compute, is listed as a covered model.

2. After January 1, 2027, these standards are administered by California's frontier model regulator: if it revises the compute thresholds, the new standards prevail; if it decides no revision is needed, the original standards continue to apply. In addition, after 2026 the $100 million figure is to be adjusted for inflation.
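To make these two rules concrete, here is a minimal illustrative sketch in Python of how the pre-2027 coverage test could be expressed. The function and constant names are hypothetical, introduced only for illustration, and this is one reading of the thresholds rather than the bill's own text.

```python
# Hypothetical sketch of the pre-2027 "covered model" thresholds described above.
# Names and structure are illustrative; they are not defined by the bill itself.

TRAINING_FLOP_THRESHOLD = 1e26      # more than 10^26 integer or floating-point operations
FINETUNE_FLOP_THRESHOLD = 3e25      # more than 3x10^25 operations when fine-tuning a covered model
COST_THRESHOLD_USD = 100_000_000    # roughly $100M at average market compute prices


def is_covered_model(training_flops: float,
                     training_cost_usd: float,
                     finetune_of_covered_model: bool = False,
                     finetune_flops: float = 0.0) -> bool:
    """One rough reading of the pre-2027 test; after January 1, 2027 the
    frontier model regulator may revise these thresholds."""
    if finetune_of_covered_model:
        # A derivative becomes covered if the fine-tuning compute alone is large enough.
        return finetune_flops > FINETUNE_FLOP_THRESHOLD
    # A base model becomes covered when both the training compute and its
    # market cost exceed the bill's thresholds.
    return (training_flops > TRAINING_FLOP_THRESHOLD
            and training_cost_usd > COST_THRESHOLD_USD)
```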

So which already-released models would fall within this range? There are many ways to estimate the computing power used to train a model; OpenAI once gave a rough approximation in a paper:

FLOPs ≈ 6ND

(where N is the number of parameters and D is the number of training tokens)

Using this approximation and publicly reported figures for open-source models, we can roughly estimate how much computing power current models use:

Llama 3 70B

It has 70 billion parameters and was trained on 15T tokens, so the compute used is roughly:

6 × 70B × 15T ≈ 6.3 × 10^24 FLOPs

Llama 3.1 405B

It has 405 billion parameters and was trained on 15.6T tokens, so the compute used is roughly:

6 × 405B × 15.6T ≈ 3.79 × 10^25 FLOPs


▲Llama 3.1 405B related information (Source: Hugging Face)

This estimate is rough and may differ considerably from the real figures, but it is still a useful reference: Llama 3.1 405B, the strongest open-source model, is not currently a covered model under the bill; a model would only become covered if it used roughly 2.6 times the computing power of Llama 3.1 405B. The sketch below reproduces this arithmetic.
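As a minimal sketch, assuming the publicly reported parameter and token counts quoted above, the 6ND estimate and its distance from the 10^26 threshold can be reproduced as follows:

```python
# Back-of-the-envelope training-compute estimates using FLOPs ~= 6 * N * D,
# where N is the parameter count and D the number of training tokens,
# compared against the bill's 10^26-operation threshold. The inputs are the
# public estimates quoted above, not official figures.

THRESHOLD_FLOPS = 1e26

models = {
    "Llama 3 70B":    {"params": 70e9,  "tokens": 15e12},
    "Llama 3.1 405B": {"params": 405e9, "tokens": 15.6e12},
}

for name, spec in models.items():
    flops = 6 * spec["params"] * spec["tokens"]
    margin = THRESHOLD_FLOPS / flops
    print(f"{name}: ~{flops:.2e} FLOPs, about {margin:.1f}x below the 1e26 threshold")

# Expected output:
# Llama 3 70B: ~6.30e+24 FLOPs, about 15.9x below the 1e26 threshold
# Llama 3.1 405B: ~3.79e+25 FLOPs, about 2.6x below the 1e26 threshold
```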

This is also supported by a statement from Senator Wiener in an open letter responding to the controversy over the bill. He wrote: "Among currently publicly released models, none has reached the 10^26 FLOP computing power threshold."

For a long time to come, most small and medium-sized model developers and independent developers may not need to worry about this bill. Given its compute and cost thresholds, its main regulatory targets are major AI players such as OpenAI, Meta, and Google, along with leading startups that have raised large amounts of funding.

According to public statements by OpenAI CEO Sam Altman, training GPT-4 cost roughly $100 million, though it is unclear whether that figure includes costs beyond compute. If GPT-5 or other subsequent OpenAI models are released, they may fall under the bill. Meta CEO Mark Zuckerberg has estimated that Llama 4 will need roughly 10 times the compute of Llama 3.1, which means Llama 4 could be one of the few open-source models listed as a covered model in the future.

2. What are companies' responsibilities? Developers must undergo "annual inspections", proactively prevent abuse, and equip a "kill switch"

Under the relevant provisions of the Act, the main responsibilities of covered-model developers include:

1. Implement administrative, technical, and physical safeguards to prevent unauthorized access to, misuse of, or unsafe post-training modification of covered models and covered-model derivatives under the developer's control.

2. Maintain the ability to promptly and fully shut down the model.

3. The original bill required model developers to sign an independent safety commitment ensuring they would not develop models or derivatives that risk causing critical harm (such as a cyberattack causing more than $500 million in losses, or death or serious personal injury); false statements in the commitment would be punishable as perjury.

The revised bill significantly weakens the legal force of this commitment: model developers now only need to submit a public statement outlining their safety practices, the provision no longer carries criminal liability, and the relevant entities can only be pursued after catastrophic harm has occurred.

4. If covered models or their derivatives could cause or materially enable critical harm, the developer may not use them for commercial or public purposes itself, nor provide them to others for such purposes.

5. Starting January 1, 2028, covered-model developers must hire an independent third-party auditor each year to assess, in detail and objectively, the developer's compliance with the Act, report any violations, and recommend corrections.

6. The revised bill adds protection for fine-tuning of open-source models: it states explicitly that if a developer spends less than $10 million fine-tuning a covered model, they will not be deemed the developer of the fine-tuned model, and responsibility remains with the developer of the original model.

In addition, the bill sets out obligations for cloud service providers. When providing customers with computing power sufficient to train a covered model, cloud providers should:

1. Obtain the customer's basic identity information and business purpose, including the customer's identity, IP address, and payment method and source, and keep records of the related compliance operations for seven years.

2. Assess whether the customer actually intends to use this computing power to train a covered model.

3. Be able to shut down the computing resources a customer is using to train or run a model.

Developers and cloud providers of covered models must also publish a transparent, uniform, publicly available price list and ensure there is no discrimination or anti-competitive conduct, although public entities, academic institutions, and non-commercial researchers may receive free or discounted access. Some cloud providers currently use price discounts to attract or support particular AI companies; such practices may come to light once the bill takes effect.

The bill stipulates that when a developer learns of an AI safety incident involving a model, it must report the incident to the regulator within 72 hours. Developers must also submit a compliance statement for a model within 30 days of the model or its derivatives being used for commercial or public purposes. In addition, whistleblowers are protected by law: companies and organizations may not discourage or retaliate against employees who disclose misconduct.

3. What are the penalties for violations? Fines start at $10 million, and models may be shut down or deleted

If model developers or computing-cluster operators fail to comply with these provisions and cause severe harm, the California Attorney General and the Labor Commissioner have the right to sue the relevant entities. If a violation is established, the court may impose the following penalties:

1. For violations in 2026 and later, entities face a fine of up to 10% of the average cloud-market cost of training the covered model for a first violation, and up to 30% of that cost for subsequent violations. Since covered models by definition cost more than $100 million to train, this works out to fines on the order of $10 million for a first violation and $30 million for later ones.

2. The court may also grant injunctive relief, including but not limited to requiring modification, complete shutdown, or deletion of the covered model and all derivatives under its developer's control.

However, the remedy of modifying, shutting down, or deleting a covered model may only be used when the model has caused death, serious personal injury, property damage, or a serious threat to public safety.

4. Who opposes the bill? a16z and YC lead a publicity offensive, and Fei-Fei Li and Yann LeCun tweet their doubts

Silicon Valley's tech giants, along with a large number of startups and investors, have expressed strong doubts about and dissatisfaction with the California AI Safety Act. The loudest voice comes from a16z, the well-known Silicon Valley venture capital firm. a16z has a rich AI portfolio that includes established leaders such as OpenAI and Mistral AI as well as newcomers such as World Labs, newly founded by Stanford professor Fei-Fei Li. These companies are large and well funded, and the models they develop and fine-tune may fall within the bill's regulatory scope.


▲Some of a16z’s objections (Source: stopsb1047.com)

a16z funded a website that gives visitors an objection-letter template they can edit and send directly from the site. On the same site, a16z lists what it sees as the California AI Safety Act's "six sins":

1. The bill will have a chilling effect on AI investment and development in California.

2. The bill penalizes developers based on vaguely defined outcomes for which no tests yet exist.

3. The bill’s vague definitions and strict legal responsibilities bring huge uncertainty and economic risks to AI developers and business owners.

4. The bill could force AI research underground, inadvertently reducing the security of AI systems.

5. The bill creates a systemic disadvantage for open source and startup developers, who are at the heart of California’s innovation and small businesses.

6. The bill inhibits AI research and innovation in the United States, providing opportunities for countries such as China to surpass the United States in AI.

The founders of several a16z-invested companies also expressed opposition to the California AI Safety Act. Fei-Fei Li wrote an article on the Fortune magazine website to explain in detail the reasons for her opposition, arguing that there are four main problems with the bill:

1. Overly punishing developers, thereby potentially stifling innovation.

2. The "kill switch" will restrict open source development and destroy the open source community.

3. Weakening AI research in academia and the public sector may also hinder academia from obtaining more funding.

4. It does not address the potential harms of AI advancement, such as bias or deep fakes.


▲ Fei-Fei Li opposes California AI safety bill (Source: X platform)

However, many netizens in the comments did not buy this argument, calling on Fei-Fei Li to disclose her financial ties to a16z.


▲Some netizens questioned Fei-Fei Li's neutrality (Source: X Platform)

As the most famous and influential incubator in the United States, YC is the cradle of many AI startups, most of which currently operate mainly in California. More than half of the 260 startups in YC's winter 2024 batch are AI-related. YC also believes the bill could harm the industry and the developer ecosystem:

1. The bill should punish those who abuse the tools, not their developers. Developers often cannot predict all possible applications of a model, and the perjury provision means developers could be sent to jail.

2. Regulatory thresholds cannot fully capture the dynamics of technological development. Non-California companies will be able to develop AI technology more freely, which may affect innovation in California.

3. “Kill Switch” (the ability for developers to shut down models) could prohibit the development of open source AI, inhibiting the collaboration and transparency of open source.

4. The bill's wording is rather vague and is likely to be interpreted arbitrarily by judges.

YC's statement was jointly supported by hundreds of startups. YC also held several AI regulation-related meetings in California to allow both sides to communicate.

Yann LeCun, Meta's chief AI scientist and a 2018 Turing Award winner, also opposes the bill. He is particularly concerned about its joint-liability provisions, worrying that open-source platforms could be held liable if they fail to accurately assess the risks of covered models.


Meta, the largest player in open-source AI models, wrote in an opposition letter that the bill forces open-source developers to take on enormous legal risk, because the requirement to guarantee model safety is unrealistic. It suggested California legislators look to laws in other countries and regions, such as the transparency and "red teaming" requirements in the EU AI Act. Microsoft, Google, and OpenAI have also spoken negatively about the bill.

Anthropic, an AI startup with an unusual public benefit corporation structure, is also among the bill's critics. Unlike other companies, however, it did not try to block the bill's passage; instead it took part in amending it, writing to the legislature to lay out its suggested improvements. Overall, Anthropic believes the bill should shift from "pre-harm enforcement" to "outcome-based deterrence", with stricter rules focused on frontier AI safety and no duplication of federal requirements.

5. Who supports the bill? Hinton and Bengio write in support, and the senator accuses a16z of spreading falsehoods

Although the California AI Safety Act has faced controversy since it was proposed, many scholars and industry insiders support this kind of regulation. On August 7, as the bill faced another wave of criticism, Turing Award winner and "godfather of AI" Geoffrey Hinton, fellow Turing Award winner and renowned AI scholar Yoshua Bengio, Harvard Law School professor and cyberlaw expert Lawrence Lessig, and UC Berkeley professor Stuart Russell, author of the widely used AI textbook Artificial Intelligence: A Modern Approach, wrote a joint letter to the California legislature expressing their "strong support" for the California AI Safety Act.


▲Four well-known scholars wrote a letter in support of the California AI Safety Act (Source: safesecureai)

The four scholars said they are deeply concerned about the risks of AI and that California's AI safety bill is the minimum requirement for effective regulation of the technology. They wrote:

The bill has no licensing system and does not require companies to obtain permission from government agencies before training or deploying models. It relies on companies to self-assess their risks and does not hold companies strictly liable even in the event of a disaster.

Relative to the risks we face, this is very permissive legislation. The rules currently governing AI are less stringent than those governing sandwich shops, and scrapping the bill's basic measures would be a historic mistake, one that will become even more obvious a year from now when the next generation of more powerful AI systems is released.

They also stressed that whether one regards these risks as speculative or real, all relevant parties must take responsibility for mitigating them. And "as a group of experts who know these systems best", they believe the risks are possible and significant enough to warrant safety testing and common-sense precautions.

The four experts also cited Boeing's recent safety scandals. Other complex technologies, such as aircraft and pharmaceuticals, have become very safe and reliable thanks to joint efforts by industry and government; when regulators relax the rules and rely on self-regulation, you get problems like Boeing's, which are deeply frightening for both the public and the industry.

The letter won support from many netizens, but some argued that Hinton and Bengio are no longer doing front-line work and therefore have no say on these issues.

Several former OpenAI employees also support the bill, including Daniel Kokotajlo, who voluntarily gave up his OpenAI options in exchange for the right to criticize OpenAI freely. He believes the stagnation of AI progress that the bill's critics warn of is unlikely to happen.


▲Related remarks by Kokotajlo (Source: X Platform)

Simon Last, co-founder of Notion, the AI productivity-tool unicorn valued in the tens of billions of dollars, is one of the few industry figures supporting California's AI safety bill. He wrote an op-ed in the Los Angeles Times, one of California's most influential outlets, arguing that with federal AI legislation hard to pass, California, as a technology center for the United States and the world, bears an important responsibility. He believes regulating foundation models will not only make them safer but also ease the burden on the AI startups, including small and medium-sized enterprises, that build products on top of them.

He also believes it is wrong to insist that "regulation should focus on harmful uses of AI and the misconduct of users rather than on the underlying technology": such uses are already illegal under laws at every level, while California's new bill provides exactly what previous regulation lacked, namely prevention of potential harm.

After YC and a16z launched their publicity campaign, Senator Wiener, the bill's lead author, sent a long letter directly responding to the accusations, saying many of the two organizations' statements about the Act were inaccurate and some were "highly inflammatory distortions of the facts", such as claims that the bill would send developers who misjudge model risks to jail or would ban open-source releases.


▲Wiener's response to YC and a16z (Source: safesecureai)

Wiener began reaching out to YC in early 2023, but he says he has still not received detailed feedback from YC on how to improve the bill, while a16z joined the legislative process late and has so far proposed no substantive amendments.

In the letter, Wiener responded to several of the opponents' core concerns:

1. Developers will not go to jail for failing to predict model risks. First, startup developers and academics need not worry, because the bill does not apply to them. Second, the perjury provision only applied when a developer "knowingly" made a false statement; an inadvertent misjudgment of a model's capabilities would not trigger it (and the provision was deleted in today's amendments in any case).

2. The Act does not create entirely new liability. Under existing law, if a model causes harm, both the developing company and individual developers can already be sued, that exposure applies to models of every capability level, and any injured party may bring a claim. California's new bill not only narrows the scope of regulation but also limits the right to sue to two entities: the California Attorney General and the Labor Commissioner.

3. The bill will not stifle innovation in California. It applies to every company doing business in California, so moving headquarters out of state does not remove the obligation to comply (Zhidongxi's note: California is the world's fifth-largest economy by GDP with a complete technology ecosystem, and it is difficult for technology companies to decouple from it). When California passed its data privacy and environmental protection laws, many claimed innovation would suffer, yet California continues to lead in innovation.

4. The kill-switch and safety-assessment requirements will not hinder open-source AI development. The bill has now been amended to strengthen protections for open-source AI: the emergency-shutdown requirement applies only to models under a developer's control, not to open-source models outside it, and the bill establishes a new advisory committee to advocate for and support safe, reliable open-source AI development.

Wiener also provided a quick-reference summary for people following the bill, outlining its six key points (see the figure below).
▲Key points of the bill (Source: safesecureai)

Conclusion: California has taken an important step, but the future of the bill remains unclear

Although the California AI Safety Act has drawn opposition from many industry insiders, polls show that California residents generally view the bill positively.

Once it is officially enacted, it may face court challenges from staunch opponents such as a16z, which could suspend its implementation until the U.S. Supreme Court rules.

Recently, the U.S. Supreme Court overturned the "Chevron doctrine", which for nearly 40 years underpinned regulators' authority by requiring judges to defer to an agency's interpretation of the law when statutory language is ambiguous. Its repeal reflects the current Supreme Court's generally conservative stance toward regulation, including technology regulation.