
california vetoes! sb-1047 is dead, and openai, google, and meta survive the disaster

2024-09-30


california governor gavin newsom officially vetoed the sb-1047 bill early this morning!

it is worth mentioning that over the past 30 days, gavin has signed a total of 17 bills on the supervision of large models and the safe use of generative ai, and vetoed only sb-1047. clearly, he knows exactly what he is doing.

ai industry leader andrew ng, turing award winner yann lecun, stanford professor fei-fei li, and others played an important role in this pushback. ng in particular repeatedly called on technology professionals to oppose the bill on public occasions.

today is also an important day for developers around the world. they can continue to use large open source models from major american technology companies such as meta and google.

a brief introduction to the sb-1047 bill

"aigc open community" has written about the sb-1047 bill 6 times in total, and it is one of the media that pays the most attention to this event in china. let me briefly introduce this bill and why it will bring a lot of resistance to the development of open source large models and generative ai.

sb-1047 was introduced in california on february 7 this year. its full name is the "safe and secure innovation for frontier artificial intelligence models act," and it is mainly intended to strengthen the safety, transparency, and usage rules for large models.

but it contains many unreasonable provisions. for example, it covers large models whose development and training cost exceeds 100 million us dollars, such as meta's open source llama-3 series and google's gemma series. once such a model is open sourced and someone uses it to do something illegal, the original developer can also be severely punished.

in terms of supervision, when a large company opens its model to users in other countries, it needs to submit all customer information, including the customer's identity, credit card number, account number, customer identifier, transaction identifier, email, and phone number.

this information must be submitted once a year, along with an assessment of user behavior and intent. all user information must be retained for 7 years and also filed with the customs and border administration.

there are many similar unreasonable clauses. the people who drafted this bill wanted to completely kill off open source large models and the export of large models, so technology giants such as openai, meta, and google would have been the biggest victims of sb-1047.

in addition, california occupies a special place in science and technology. it is home to the headquarters of google, meta, openai, apple, intel, and tesla, as well as the world's top computer science schools such as stanford, uc berkeley, caltech, and the university of southern california, and it is considered one of the global centers of technology innovation.

had this bill been implemented, not only would large companies have been hit hard, but many small startups would have been all but destroyed. when the bill came out, many people predicted that some technology companies would move out of california.

reasons for vetoing the sb-1047 bill

according to the veto statement published on the california governor's official website, gavin said that 32 of the world's top 50 ai companies are based in california and are crucial to the development of and innovation in large ai models. sb-1047 has good intentions, but its implementation has some serious problems.

by focusing only on the most expensive, largest-scale ai models, sb-1047 establishes a regulatory framework that may give the public a false sense of security about controlling this rapidly evolving technology. smaller, specialized models may be just as dangerous as, or even more dangerous than, the large models targeted by sb-1047, and the bill would still hinder ai innovation.

additionally, sb-1047 is inflexible: it does not make enough allowance for different types of ai applications, which could lead to confusion and uncertainty during implementation.

gavin pointed out that the bill does not take into account whether an ai model is deployed in a high-risk environment, which matters because the same stringent regulatory measures may not be required in low-risk environments. at the same time, the bill neither clarifies which decisions count as critical decisions nor defines what sensitive data is, which may leave gaps in protecting personal privacy and data security.

gavin emphasized that a one-size-fits-all approach like sb-1047 would inhibit innovation and the development of ai technology in certain areas, and the bill's provisions may be difficult to implement because they do not provide clear guidance for different types of ai models and applications. the best way to protect the public from the real threats of ai technology is a more nuanced and targeted approach, rather than a one-size-fits-all solution.

below are the criticisms of the bill's many unreasonable provisions raised by andrew ng, yann lecun, fei-fei li, and others. you can also check out the "aigc open community"'s past interpretations of the bill.