2024-09-30
Editor: Editorial Department HYZ
[Xinzhiyuan introduction] Cause for celebration! Just now, the Governor of California announced that he would veto California's AI restriction bill. LeCun, Fei-Fei Li, and Andrew Ng excitedly spread the news and celebrated. Bengio and Hinton, who strongly supported the bill, have remained silent. OpenAI, Google, and Meta have all dodged the bullet.
Breaking!
Early this morning, California Governor Gavin Newsom officially announced that he had vetoed SB-1047.
For developers around the world, the death of SB-1047 means that open-source models from major players such as Meta and Google will remain available.
The decision was widely expected. Leading figures in the AI community rushed to share the news, and Andrew Ng, LeCun, and Fei-Fei Li, who had led the opposition, were especially delighted.
What does the California governor's veto mean?
SB-1047 is well and truly dead.
The "notorious" SB-1047 would have held developers accountable in order to prevent AI systems from causing mass casualties or triggering cybersecurity incidents with losses exceeding $500 million.
As soon as the news broke, it drew enormous opposition from academia and industry.
Bear in mind that 32 of the world's top 50 GenAI companies are based in California, and they play a key role in defining the future of AI.
According to a press release from the governor's office, Newsom signed 18 GenAI bills in the past month and vetoed only SB-1047.
Over the past 30 days, the governor has signed a series of bills to combat deepfake content, protect performers' digital likeness rights, and more.
The bill is well-intentioned, but in practice it has serious problems.
"I don't think this is the best approach to protecting the public from the real threats posed by AI," he said.
SB-1047 does not consider whether an AI system is deployed in a high-risk environment, involves critical decisions, or uses sensitive data. Instead, the bill applies strict standards even to the most basic function of deploying a large-model system.
In the defeat of SB-1047, the role of "AI godmother" Fei-Fei Li cannot be overlooked.
Stanford professor Fei-Fei Li said she was deeply honored to work with Stanford HAI to pave the way for responsible AI governance in California.
AI guru Andrew Ng also thanked Fei-Fei Li for her public opposition to SB-1047, saying her efforts would promote more rational AI policy that protects research and innovation.
Just a few days ago, as SB-1047 neared a final decision, Ng and LeCun were still anxiously campaigning and launching polls, fearing that, once passed, the bill would cast a chilling effect over open-source AI and the entire AI ecosystem.
Today, AI companies across California can finally breathe a sigh of relief.
The governor's veto letter: returned without signature
"I hereby return Senate Bill 1047 without my signature."
In the letter, the governor argued that SB-1047 overstates the threat posed by AI deployment.
Moreover, by focusing only on the most expensive large-scale models, the bill actually gives the public a "false sense of security."
He noted that even smaller proprietary models can be just as dangerous.
In short, the bill's approach to regulating AI comes at the expense of "curtailing innovation that benefits the public interest."
The governor said that applying the strictest standards to the most basic functions is not the best way to protect the public from the threats of AI technology. The best solution, in his view, would be one grounded in empirical analysis of the development trajectories and capabilities of AI systems, which SB-1047's plan is not.
Loathed by the AI heavyweights
For most people other than Bengio and Hinton, this outcome is a thoroughly satisfying one.
When it comes to SB-1047, many of the big names truly hate it.
The full story goes like this.
Last year, the governor of California signed an executive order stressing that California should approach GenAI more prudently and make AI more ethical, transparent, and trustworthy.
In February this year, California drafted a bill, SB-1047, the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," laying out more specific rules for the safe and transparent use of large models.
However, it contained many unreasonable provisions that all but singled out certain companies by name.
Models costing more than $100 million must be prevented from causing "significant harm"
For example, one provision stipulated that if a model whose development and training cost exceeds $100 million and whose training involves more than 10^26 floating-point operations is open-sourced and then used by someone for illegal ends, the model's developer would also face severe penalties.
Bear in mind that Meta's Llama 3 and Google's Gemma models, among others, meet these criteria.
This provision is, unsurprisingly, extremely controversial.
For example, if someone hacks into an autonomous driving system and causes an accident, should the company that developed the system also be held accountable?
Under the bill, developers would be required to evaluate derivatives of their models and prevent any harm they might cause, including cases where customers fine-tune the models, otherwise modify them (e.g., jailbreak them), or combine them with other software.
Yet once open-source software is released, people can download the model directly to their personal devices, so developers have no way of knowing what other developers or customers are doing with it.
In addition, the bill contains many vague definitions.
For example, "critical harms" from AI models are described as mass casualties, losses exceeding $500 million, or other "comparably severe" harms. But under what conditions would developers be held accountable? What would their liability be?
The bill is silent on this.
Moreover, the bill applies to AI models that cost more than $100 million to train, and to developers who spend more than $10 million fine-tuning existing models, bringing many small technology companies within its reach.
SB-1047 also contains other unreasonable provisions. For example, if a company makes its model available for use in other countries, it must submit all customer information, including customers' ID cards, credit card numbers, account numbers, and so on.
Developers must also create testing procedures that address AI model risks, and must hire third-party auditors annually to evaluate their AI safety practices.
For AI products built on top of such models, corresponding safety protocols would be needed to prevent abuse, including a "kill switch" capable of shutting down the entire AI model.
The bill was criticized for turning a blind eye to real risks
Many critics went further, arguing that the bill was simply baseless.
Not only would it hinder AI innovation, it would do nothing for the safety of today's AI.
More ironic still, the bill relied on a so-called "kill switch" to prevent the end of the world, while turning a blind eye to existing safety risks such as deepfakes and disinformation.
Although later amendments softened the wording and reduced the California government's power to hold AI labs accountable, SB-1047 would still have had a considerable impact on major players such as OpenAI, Meta, and Google.
For some startups, the blow could even have been devastating.
Now that the dust has settled, large companies and small startups alike can breathe a sigh of relief.
The Turing trio falls out
SB-1047 even caused the three Turing Award giants to "fall out" over it.
Big names led by LeCun, Fei-Fei Li, and Andrew Ng voiced their opposition and dissatisfaction publicly, time and again.
LeCun even repurposed the wording of the earlier open letter calling for a pause on AI research: please suspend AI legislation for six months!
The other two of the Turing trio, Yoshua Bengio and Geoffrey Hinton, surprisingly came out in strong support of the bill.
They even felt the current terms were, if anything, a bit too lenient.
As senior artificial intelligence technology and policy researchers, we write to express our strong support for California Senate Bill 1047.
SB 1047 outlines the basic requirements for effective regulation of this technology. It does not impose a licensing regime, does not require companies to obtain government permission before training or deploying models, relies on companies to assess risks themselves, and does not hold companies strictly liable even in the event of a disaster.
Relative to the scale of the risks we face, this is a relatively lenient piece of legislation.
Repealing the bill's basic measures would be a historic mistake, one that will become even more apparent a year from now as the next generation of more powerful AI systems is released.
But Bengio and Hinton are clearly not in the mainstream.
In summary, the positions of the various tech giants and leading figures are as follows.