
The dust has settled on SB 1047! The governor vetoed it, and Fei-Fei Li and others have a new mission

2024-10-01


Machine Heart report

Editor: Zhang Qian

The original intention is good, but the approach is still open to question.

Just now, SB 1047, which has been debated for more than half a year, finally came to an end: California Governor Gavin Newsom vetoed the bill.

The full name of SB 1047 is the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act". It aims to establish clear safety standards for high-risk AI models to prevent them from being abused or causing catastrophic harm.

Specifically, the bill seeks to regulate artificial intelligence at the model level, applying to models trained above certain compute and cost thresholds. However, if the prescribed compute and cost thresholds were applied strictly, all mainstream large models on the market would be considered "potentially dangerous." Moreover, the bill requires the company that developed a model to bear legal responsibility for downstream uses or modifications of that model, which many believe would have a "chilling effect" on the release of open-source models.

Bill link: https://leginfo.legislature.ca.gov/faces/billtextclient.xhtml?bill_id=202320240sb1047

The bill was introduced in the California State Senate in February this year and has been controversial ever since. Fei-Fei Li, Yann LeCun, and Andrew Ng all opposed it. Some time ago, Fei-Fei Li personally wrote an article explaining the many adverse effects the bill could bring, and dozens of faculty and students at the University of California signed a joint letter opposing it (see "Fei-Fei Li personally wrote an article, and dozens of scientists signed a joint letter to oppose California's AI restriction bill"). However, many people supported the bill, including Musk, Hinton, and Bengio. Before the bill was submitted to the governor of California, the two sides engaged in heated debate.

Now, everything is settled. In his veto statement, Governor Newsom cited multiple factors that influenced his decision, including the burden the bill would place on AI companies, California's leadership in the field, and criticism that the bill may be too broad.

After the news was released, Yann LeCun expressed his gratitude to the California governor on behalf of the open-source community.

Ng affirmed Yann LeCun's efforts to explain the bill's shortcomings to the public.

However, some are happy and some are worried: California Senator Scott Wiener, the sponsor of the bill, said he was very disappointed with the result. He wrote in a post that the veto "is a setback for anyone who believes in the oversight of large corporations that are making critical decisions that impact public safety and welfare and the future of our planet."

It should be noted that the veto of SB 1047 does not mean California is turning a blind eye to AI safety issues, as Governor Newsom mentioned in his statement. He also announced that Fei-Fei Li and others will help lead California's effort to formulate responsible guardrails for the deployment of generative artificial intelligence.

California Governor: SB 1047 has many problems

Regarding SB 1047, California Governor Newsom had the final say. Why did he veto the bill? His statement provides the answer.

Statement link: https://www.gov.ca.gov/wp-content/uploads/2024/09/sb-1047-veto-message.pdf

An excerpt from the statement follows:

California is home to 32 of the world's 50 leading AI companies, pioneers of one of the most important technological advances in modern history. We lead in this field because of our research and education institutions, our diverse and motivated workforce, and our free-spirited cultivation of intellectual freedom. As a steward and innovator of the future, I take seriously my responsibility to regulate this industry.

SB 1047 has amplified the conversation about the threats that could arise from the deployment of AI. Central to the debate is whether the threshold for regulation should be based on the cost and amount of computation required to develop an AI model, or whether the system's actual risks should be evaluated regardless of these factors. This global discussion is happening as AI's capabilities continue to expand at an alarming pace. At the same time, strategies and solutions for addressing the risk of catastrophic harm are rapidly evolving.

By focusing only on the most expensive and largest-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may be just as dangerous as, or even more dangerous than, the models targeted by SB 1047.

Adaptability is crucial as we race to regulate a technology still in its infancy. This requires a delicate balance. While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in a high-risk environment, involves critical decision-making, or uses sensitive data. Instead, the bill applies stringent standards to even the most basic functions, so long as they are deployed by a large system. I do not believe this is the best approach to protecting the public from the real threats posed by this technology.

I agree with the author that we cannot wait for a major catastrophe to occur before taking action to protect the public. California will not abdicate its responsibility. Safety protocols must be adopted, proactive guardrails should be implemented, and there must be clear and enforceable consequences for bad actors. What I disagree with, however, is that to keep the public safe we must settle for a solution not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework that effectively regulates AI needs to keep pace with the technology itself.

To those who say there is no problem to solve, or that California has no role in regulating the potential national-security impact of this technology, I disagree. A California-only approach may well be warranted, especially absent federal action by Congress, but it must be based on empirical evidence and science.

The U.S. AI Safety Institute, part of the National Institute of Standards and Technology, is developing national-security risk guidance based on evidence-based approaches to guard against demonstrable risks to public safety.

Pursuant to an executive order I issued in September 2023, agencies across my administration are conducting risk analyses of potential threats and vulnerabilities arising from the use of AI in California's critical infrastructure.

These are just a few examples of the expert-led work we are doing to bring AI risk-management practices rooted in science and facts to policymakers.

Through these efforts, I have signed more than a dozen bills in the past 30 days regulating specific, known risks posed by AI.

More than a dozen bills signed in 30 days: California's intensive AI safety push

In the statement, Newsom mentioned that he signed more than a dozen bills in 30 days. These bills cover a wide range of topics, including cracking down on explicit deepfake content, requiring watermarks on AI-generated content, protecting performers' digital likenesses and the voices or likenesses of deceased figures, consumer privacy, and exploring the impact of incorporating artificial intelligence into teaching.

Link to the list of bills: https://www.gov.ca.gov/2024/09/29/governor-newsom-announces-new-initiatives-to-advance-safe-and-responsible-ai-protect-californians/

In response to the criticism that SB 1047's assessment of AI risks lacked scientific analysis, the governor announced that he has asked the world's leading generative AI experts to help California develop workable guardrails for deploying generative AI.

In addition to Fei-Fei Li, Tino Cuéllar, a member of the National Academy of Sciences committee on the social and ethical implications of computing research, and Jennifer Tour Chayes, Dean of the College of Computing, Data Science, and Society at the University of California, Berkeley, are also part of the effort.

Their work will focus on empirical, science-based trajectory analysis of frontier models, their capabilities, and the attendant risks. This work still has a long way to go.

Reference link: https://www.gov.ca.gov/2024/09/29/governor-newsom-announces-new-initiatives-to-advance-safe-and-responsible-ai-protect-californians/