"AI Godmother" Fei-Fei Li personally wrote an article: California AI Safety Bill will damage the US ecosystem | Titanium Media AGI

2024-08-07

According to Titanium Media App on August 7, Fei-Fei Li, the "godmother of AI," a member of the U.S. National Academy of Engineering, the U.S. National Academy of Medicine, and the American Academy of Arts and Sciences, the inaugural Sequoia Chair Professor at Stanford University, and co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), published an article in Fortune this morning arguing that the AI safety bill about to be enacted in California will harm developers, academia, and the entire U.S. AI ecosystem, while failing to address the real potential harms of AI.

“Today, AI is more advanced than ever before. However, with great power comes great responsibility. Policymakers, civil society, and industry are seeking a governance approach to minimize potential harms and shape a safe, human-centered AI society. I applaud some of these efforts but am cautious about others; California’s Frontier AI Model Safety Innovation Act (SB-1047) falls into the latter category. This well-intentioned legislation will have significant unintended consequences, not only for California, but for the entire country,” said Fei-Fei Li.

It is reported that in July this year, the California Senate approved a version of the SB-1047 bill, which requires developers to prove that their AI models will not be used to cause harm. This imposes more restrictions than any of the more than 600 AI bills state legislators have proposed this year, which has drawn widespread attention. The controversial proposal may be signed into law by California Governor Gavin Newsom as early as August.

Under the bill's definitions, Meta's Llama-3 qualifies as a "frontier model" because its training cost exceeds $100 million. If someone uses the model for illegal purposes, Meta could also face severe penalties. Section 22603(a)(3)(4) requires developers seeking a "limited liability exemption" to submit certification to government agencies and to shut down a model when errors occur, while Section 22603(b) requires developers to report potential AI safety incidents involving any of their models. If a developer cannot fully control the derivative versions built on its model, responsibility for any safety incident falls on the original developer, which amounts to joint liability.

In addition, Section 22604(a)(b) of the bill stipulates that when customers use "frontier models" and their computing resources, developers must submit all customer information, including identity, credit card numbers, account numbers, customer identifiers, transaction identifiers, email addresses, and phone numbers. This information must be submitted annually, and user behavior and intent will be assessed. All user information must be retained for seven years and filed with the Customs and Border Protection agency.

California occupies a special place in the U.S. It hosts famous universities such as Stanford, Caltech, and the University of Southern California, and is home to Google, Apple, OpenAI, Meta, and other technology giants. Accordingly, Turing Award winner and Meta chief AI scientist Yann LeCun, Fei-Fei Li, the founding partners of venture firm Andreessen Horowitz (a16z), and Andrew Ng, adjunct professor of computer science at Stanford University, have all voiced their opposition.

Among them, Yann LeCun warned that the bill's "joint liability clause will put open source AI platforms at great risk… Meta will not be affected, but AI startups will go bankrupt." If such models are misused by others, the bill would hold AI developers civilly or even criminally liable for the models they develop.

Andreessen Horowitz believes that California's anti-AI bill, though well-intentioned, is misguided and could undermine the U.S. tech industry just as the future of technology stands at a critical crossroads. The United States needs leaders who recognize that now is the moment for smart, unified AI regulatory action.

Andrew Ng wrote that California's SB-1047 would stifle the development of open-source large models. He further argued that AI applications should be regulated, not the underlying large models themselves.

Fei-Fei Li believes that AI policies must encourage innovation, set appropriate limits, and mitigate the impact of those limits. Policies that do not do so will fail to achieve their goals at best and lead to unintended and serious consequences at worst. If SB-1047 is passed into law, it will damage the United States' fledgling AI ecosystem, especially those parts that are already disadvantaged by today's tech giants: the public sector, academia, and "small tech." SB-1047 will unnecessarily punish developers, stifle our open source community, and hinder academic AI research while failing to solve the real problems it is designed to solve.

Fei-Fei Li gave four reasons:

First, SB-1047 will overly penalize developers and stifle innovation. If an AI model is misused, SB-1047 will hold both the responsible party and the original developer of the model accountable. It is impossible for every AI developer (especially budding programmers and entrepreneurs) to predict all possible uses of their model. SB-1047 will force developers to back off and take defensive actions - exactly what we are trying to avoid.

Second, SB-1047 will hobble open source development. SB-1047 requires that all models above a certain threshold include a “kill switch,” a mechanism that can shut down a program at any time. If developers worry that the programs they download and build will be deleted, they will be more reluctant to write code and collaborate. This kill switch will destroy the open source community—the source of countless innovations, not only in AI, but in everything from GPS to MRI to the internet itself.

Third, SB-1047 will undermine AI research in both the public sector and academia. Open source development is important to the private sector, but it is also critical to academia, which cannot advance without collaboration and model data. Take, for example, computer science students working on open AI models. How will we train the next generation of AI leaders if our institutions don’t have access to the appropriate models and data? A kill switch will further undermine the efforts of these students and researchers, who are already at a data and compute disadvantage compared to big tech companies. SB-1047 will sound the death knell for academia, which should be doubling down on public sector AI investments.

Fourth, and most concerning, the bill does not address the actual potential harms of AI development, including bias and deepfakes. Instead, SB-1047 sets an arbitrary threshold, regulating models that use a certain amount of computing power or cost more than $100 million to train. Rather than providing safeguards, this measure will only limit innovation across a variety of fields, including academia.

Fei-Fei Li pointed out that today's academic AI models fall below this threshold, but if the United States rebalances private- and public-sector AI investment, academia would come under SB-1047's regulation, leaving the U.S. AI ecosystem worse off. The United States must therefore take the opposite approach.

"In many conversations with President Biden over the past year, I have expressed the need for a 'moon shot mentality' to advance AI education, research, and development in our country. However, SB-1047's restrictions are too arbitrary and will not only hit California's AI ecosystem, but will also have troubling downstream effects on AI across the country." Fei-Fei Li pointed out that she is not opposed to AI governance. Legislation is critical to the safe and effective development of AI. But AI policy must promote open source development, propose unified and reasonable rules, and build consumer confidence. SB-1047 does not meet these standards.

Fei-Fei Li emphasized that she has made a cooperative proposal to the bill's author, Senator Scott Wiener: let us work together to craft AI legislation that truly builds a technology-driven, human-centered future society.

“In fact, the future of AI depends on it. California, as a pioneering entity and home to our nation’s most robust AI ecosystem, is the heart of the AI movement, and progress in California will affect the rest of the country,” Fei-Fei Li concluded.