2024-08-13
The EU hopes to take the lead in artificial-intelligence legislation and shape global rules, but it has been widely criticized for rushing out strict regulatory measures at an early stage of the industry's development. The law's actual implementation progress and effectiveness remain to be seen.
This article has 6,256 words and takes about 18 minutes to read.
Text | Fan Shuo, Caijing E-Law
Editor | Guo Liqin
The EU is once again leading the world in the speed of legislation, this time in the hot field of artificial intelligence. However, it remains to be seen how these vague terms will eventually be implemented.
In August, the world's first law to comprehensively regulate artificial intelligence, the European Artificial Intelligence Act (hereinafter referred to as the "AI Law"), officially came into effect.
What has attracted the most attention is that the provisions of the AI Law will be implemented in stages. The law came into effect on August 1, but only some of its provisions took effect on that date. According to the schedule, the prohibited practices it stipulates will apply six months after entry into force; the obligations and rules for general-purpose artificial intelligence will apply 12 months after August 1; after 24 months the law will be fully applicable, although some rules for high-risk AI systems will only begin to apply after 36 months.
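The staged timetable above can be read off mechanically from the entry-into-force date. A minimal sketch (the milestone labels and month offsets come from the article; the helper function is ours):

```python
# Illustrative only: deriving the AI Law's staged application dates
# from its entry-into-force date of 1 August 2024.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a number of calendar months."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

ENTRY_INTO_FORCE = date(2024, 8, 1)

MILESTONES = {
    "prohibited practices apply": 6,
    "general-purpose AI obligations and rules apply": 12,
    "law fully applicable": 24,
    "remaining high-risk AI system rules apply": 36,
}

for label, months in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, months).isoformat()}: {label}")
```

Running this prints the four dates in order, from 2025-02-01 for the prohibitions through 2027-08-01 for the last high-risk rules.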
This set of comprehensive AI regulatory rules, billed as the "strictest" in history, brings every entity in the AI industry chain within its scope of supervision, including AI system providers, users, importers, distributors and product manufacturers with a nexus to the EU market.
The AI Law also expands the regulatory toolbox: it not only introduces a risk-oriented hierarchical management model, but also designs a "regulatory sandbox" to reduce the compliance burden on small and medium-sized enterprises and start-ups. Its sharp "teeth" have also attracted attention: a company that violates the relevant provisions may be fined up to 35 million euros (about 270 million yuan) or 7% of its global annual turnover in the previous fiscal year, whichever is higher.
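The fine cap is simply the higher of the two figures. A minimal sketch (the amounts come from the article; the function name is ours):

```python
# Illustrative only: maximum AI Law fine as reported in the article --
# the higher of EUR 35 million or 7% of the previous fiscal year's
# global annual turnover.
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with EUR 1 billion in turnover faces a cap of EUR 70 million;
# a EUR 100 million company is still exposed to the flat EUR 35 million.
print(max_fine_eur(1_000_000_000))  # 70000000.0
print(max_fine_eur(100_000_000))    # 35000000.0
```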
From the outset of its formulation, the EU hoped to use the AI Law to set the direction for rules in the field of AI.
The AI Law was announced by Margrethe Vestager, the EU's digital chief, in April 2021, when she said: "The EU is taking the lead in developing new global norms to ensure that artificial intelligence is trustworthy." Thierry Breton, the European Commission's internal market commissioner, also said on social media that the law "will become an important guide to help EU start-ups and researchers lead the global AI competition."
However, the rapid introduction of the AI Law while the AI industry is still in its infancy has also drawn widespread criticism. Opponents argue that hasty attempts to regulate the underlying models would limit the use of the technology itself.
Xu Ke, director of the Digital Economy and Legal Innovation Research Center at the University of International Business and Economics, believes that the AI Law's staged implementation provisions are intended to give EU companies a sufficient buffer period, an approach worth drawing on when comparable legislation is enacted elsewhere.
After studying the clauses, Zhang Linghan, professor at the Institute of Data Rule of Law at China University of Political Science and Law and the Chinese member of the United Nations High-Level Advisory Body on Artificial Intelligence, said that although the AI Law is known as strict, its limited regulatory measures and innovative regulatory tools provide necessary room for flexible development for AI companies in the EU, while also raising compliance costs for non-EU companies.
Ning Xuanfeng, head of compliance at King & Wood Mallesons, believes that the actual effect and impact of the AI Law can only be judged once all of its provisions are in force, observed against the development and regulatory record of the AI industry at that time. The lesson for Chinese legislators is that high-risk AI systems involve multiple entities throughout the process of being put into use; the AI Law sets compliance requirements according to each entity's degree of participation and imposes fines on entities that violate them. In Ning Xuanfeng's view, China's current Interim Measures for the Administration of Generative Artificial Intelligence Services mainly focus on service providers as the object of governance. If China legislates for AI in the future, it could consider establishing a responsibility mechanism covering all parties in the AI system's value chain, which would help delineate the boundaries of each party's responsibility and thereby strengthen incentives for research, development and services.
01
Lessons from the EU's legislation
The EU has carried out detailed discussions covering all parties' concerns during the legislative process. These contents, as well as the orientation and thinking of the entire legislation, have also provided valuable references for other countries or regions.
The EU's fast legislative pace has been criticized internationally.
According to media reports, EU lawmakers held marathon negotiations in December 2023 to get the rules passed. But critics say the rules are far from complete, and that regulators have omitted important details that companies urgently need in order to comply. Some professionals estimate that the EU will need 60 to 70 pieces of secondary legislation to support the implementation of the AI Law.
Kai Zenner, a European Parliament adviser who participated in the drafting of the AI Law, admitted: "The law is quite vague. Time pressure left a lot of things unresolved. Regulators couldn't agree on these things, so it was easier to compromise."
Xu Ke said that the biggest criticism of the AI Law is that it takes a risk-based approach and does little to empower individuals. In addition, its future implementation will have to contend with a large number of coordination issues with other laws and regulations: aligning it with norms such as the General Data Protection Regulation (GDPR), the Digital Markets Act (DMA) and the Data Governance Act (DGA) will cause plenty of trouble. As a general law, the AI Law must also consider how it will be implemented across multiple sectors, including finance, medical care and transportation.
"In fact, it may cause more problems than it solves," Xu Ke believes.
Despite the criticism, the legislative thinking behind the AI Law has won recognition, mainly because it takes the overall development of the industry into account.
Zhang Linghan pointed out that although it is known as the most stringent AI regulatory law in history, the AI Law's regulatory measures are in fact limited, and its drafting also took into account the promotion of the overall development of the EU's AI industry.
Specifically, Zhang Linghan believes that, first, the AI Law provides exemptions for the development and use of some AI systems, including systems developed specifically for military, defense or national security purposes, systems developed specifically for scientific research, and free and open-source artificial intelligence systems.
Second, the AI Law proposes a series of supporting measures to reduce the administrative and financial burden on EU companies, especially small and medium-sized enterprises.
Third, the AI Law's phased compliance timetable and the regulatory sandbox system it creates respectively provide a measure of time and space for the development of the EU's AI industry.
Fourth, the extraterritorial effect of the AI Law will raise compliance costs for non-EU businesses, dampening their willingness to expand into the European market, which can to some extent reduce the competitive pressure on EU companies.
Xu Ke believes the phased implementation of the AI Law may have two causes. On the one hand, artificial intelligence is a fast-moving regulatory field, and there is still great uncertainty about how to respond to technological and industrial change after the law takes effect, so the industry needs a certain amount of time to let companies adjust their technical routes and business models. On the other hand, the AI Law is a rule of collaborative governance: translating the law into technical language and industry standards requires cooperation between regulators and enterprises, which likewise requires setting aside time for risk communication and coordination between them.
Wu Shenkuo, doctoral supervisor at the Law School of Beijing Normal University and deputy director of the Research Center of the Internet Society of China, believes that the AI Law has established a regulatory system with transparency and fairness as the core logic for the research and development and industrial application of artificial intelligence. It will have a long-term impact on the direction of AI research and development and market application, and will also change the market layout in Europe.
So, what insights will the two key regulatory tools designed in the AI Law bring to legislators in other countries?
The overall regulatory framework of the "AI Law" is based on four risk levels of artificial intelligence applications from high to low, similar to a "risk pyramid", with corresponding risk prevention mechanisms established for each category.
According to the potential impact of artificial intelligence on users and society, applications are divided into four levels: unacceptable risk, high risk, limited risk, and minimal risk.
At the most extreme level, AI systems or applications that pose unacceptable risks, such as those considered a clear threat to people's safety, daily life and fundamental rights, will be banned from use outright. Developers of such AI systems will be fined up to 6% of their global turnover in the previous fiscal year. (For details, see: China and Europe achieve breakthroughs in AI legislation at the same time: Setting up "traffic lights" for ChatGPT?)
[Figure] Four risk levels of artificial intelligence. Source: European Commission official website
For high-risk artificial intelligence systems, the AI Law stipulates a full-process risk management regime covering the period both before and after market entry. Before entering the market, companies need to establish and maintain a risk management system, conduct data governance, develop and keep technical documentation up to date, and provide regulators with all necessary information.
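The "risk pyramid" described above can be sketched as a simple lookup from tier to regulatory response. A minimal illustration (the tiers and the treatment of the unacceptable and high-risk levels come from the article; the one-line summaries for the two lower tiers follow the Act's general scheme and are our paraphrase):

```python
# Illustrative only: the AI Law's four-tier "risk pyramid" and the broad
# regulatory response each tier attracts, highest risk first.
RISK_TIERS = {
    "unacceptable": "banned outright (clear threat to safety, daily life or fundamental rights)",
    "high": "full-process risk management: risk system, data governance, technical documentation",
    "limited": "lighter obligations, chiefly transparency (our paraphrase of the Act's scheme)",
    "minimal": "no additional obligations (our paraphrase of the Act's scheme)",
}

def regulatory_response(tier: str) -> str:
    """Return the broad regulatory treatment for a given risk tier."""
    return RISK_TIERS[tier]

for tier in RISK_TIERS:
    print(f"{tier}: {regulatory_response(tier)}")
```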
Wu Shenkuo noted that risk classification and grading, as a basic methodology, will be reflected to varying degrees in the legislation of different countries and regions. Its significance and institutional value lie in ensuring proportionate supervision: while meeting core regulatory concerns and strategic demands, it leaves the necessary flexible space for AI to develop.
In China, a risk-oriented classification and grading system is likewise reflected in the supervision of algorithms and generative AI. In September 2021, the Cyberspace Administration of China issued the Guiding Opinions on Strengthening the Comprehensive Governance of Internet Information Service Algorithms, which explicitly proposed promoting the classified security management of algorithms, effectively identifying high-risk algorithms, and implementing precise governance. In July 2023, the Interim Measures for the Management of Generative Artificial Intelligence Services, issued by the Cyberspace Administration of China and other departments, likewise proposed that generative artificial intelligence services be subject to inclusive, prudent, classified and graded supervision.
Ning Xuanfeng believes that risk grading and classification is a progressive regulatory strategy: one adopted to meet the actual needs of regulating AI technology before the social risks it may cause are fully understood.
Xu Ke believes that classification and grading means regulators must allocate regulatory resources in proportion. In the future, AI will be embedded in thousands of industries, much like office software, so grading and classification should remain one of the basic approaches to AI regulation. However, the current risk-based classification and grading should be abandoned, because it sees only the static risks of AI and not its dynamic benefits; a high-risk AI system, for example, often also delivers high benefits.
The classification and grading in the AI Law differs from the existing classification and grading logic in China. According to Xu Ke, "classification and grading" corresponds to a single English word, classification, but in China it covers two concepts: "classification" and "grading". More notably, with the advent of general-purpose artificial intelligence, the classic scenario-based classification of AI applications in China's earlier policies may need to be adjusted. For example, the Internet Information Service Algorithm Recommendation Management Regulations govern five main types of "application algorithm recommendation technology" information services: generation and synthesis, personalized push, sorting and selection, retrieval and filtering, and scheduling and decision-making. Generative AI, however, no longer fits this line of thinking: large models can be applied in every field.
Xu Ke explained that even if the classification scheme is adjusted, regulators can still apply graded management to artificial intelligence. China grades AI by its degree of impact. For example, the Interim Measures for the Administration of Generative Artificial Intelligence Services single out generative AI services with public-opinion attributes or social-mobilization capabilities and require such service providers to complete algorithm and large-model filings. The logic is that some AI systems have a significant impact on national and social order, so regulators take different regulatory measures toward them. However, Xu Ke believes this does not mean China's future AI legislation will carry forward the same measures; what is foreseeable is that regulators will weigh various factors to determine an AI system's impact and set corresponding regulatory measures.
Zhang Linghan also said that as generative AI becomes more versatile, China's grading and classification system will need timely adjustment as the technology develops.
Carried over from the earlier draft, the AI Law introduces the "regulatory sandbox" system commonly seen in financial technology regulation.
A "regulatory sandbox" creates a supervised, controllable safe space in which enterprises, especially small and medium-sized enterprises and start-ups, can enter the "sandbox" and, under the strict supervision of regulators, actively participate in developing and testing innovative AI systems before putting them on the market. If major risks emerge during development and testing, they should be mitigated immediately; if the risks cannot be mitigated or controlled, open testing should be suspended.
The "regulatory sandbox" was first created by the UK Financial Conduct Authority (FCA) in 2015. According to Xu Ke, the "regulatory sandbox" system is an EU initiative to support innovation, aiming to achieve technological innovation under controllable risk conditions. It can create a controlled environment in which specific regulatory measures are applied to certain AI applications, giving potentially risky AI room for trial and error.
Wu Shenkuo also said that the "regulatory sandbox" is one of the features of the AI Law. Faced with new technologies and applications such as AI, it can help or promote continuous dialogue between EU regulators and the regulated, addressing the dynamic balance between technological development and regulatory concerns.
Currently, several EU member states are piloting a “regulatory sandbox” system in the field of AI.
France has been piloting the "regulatory sandbox" system since 2022, focusing on the education industry; five companies are currently participating in the pilot.
In May 2024, Spain sought opinions on implementing the "regulatory sandbox". Spain made clear that high-risk AI systems, general-purpose AI and foundation models in eight areas, including biometrics, critical infrastructure, and education and training, are subject to the "regulatory sandbox" system, and provided more detailed rules on project access, document submission, risk management, exit conditions and more.
Norway and Switzerland have also run similar pilots. For example, Ruter, a Norwegian public transportation provider, conducted a risk assessment of its online travel-recommendation AI algorithm under the "regulatory sandbox". The five pilot companies in Switzerland work on R&D in fields such as unmanned agricultural machinery, drones, machine translation, operational error correction, and parking scheduling.
In China, the "regulatory sandbox" system is also used in financial technology supervision. On January 31, 2019, the State Council agreed in the "Approval of the Comprehensive Pilot Work Plan for Comprehensively Promoting the Opening-up of Beijing's Service Industry" that Beijing should explore the "regulatory sandbox" mechanism on the premise of compliance with laws and regulations. On December 5, 2019, the People's Bank of China approved and supported Beijing to take the lead in carrying out the pilot of financial technology innovation supervision in the country and explore the Chinese version of the "regulatory sandbox".
In Xu Ke's view, the "regulatory sandbox" is not only a technological innovation but also a regulatory one. The system in fact runs experiments on two fronts: experimental governance of the regulated, and letting regulators test the rationality and necessity of regulatory rules within the sandbox. A mature "regulatory sandbox" is a collaborative innovation between regulators and regulated entities: regulators adjust their rules based on feedback from the "sandbox", while regulated entities adjust their business models and technology-development directions based on the "sandbox's" testing and verification.
But Zhang Linghan cautioned that the "regulatory sandbox" places high demands on regulators' monitoring and evaluation capabilities, and its actual effects and industry impact remain to be seen.
02
Global impact remains to be seen
The EU has always been at the forefront of global digital legislation and has attempted to export standards globally through the "Brussels effect."
The "Brussels effect" refers to the process by which the EU regulates its own internal single market, multinational companies accept these standards through compliance, and EU standards gradually become world standards. The implementation of GDPR is a good example. GDPR, the regulation governing personal-data privacy in the EU, came into effect on May 25, 2018. Multinational technology companies brought their data processing into line with GDPR's requirements and applied the same standard to data processing outside the EU, turning the EU standard into a global one.
Zhang Linghan believes that the EU is an important market for AI, but its own AI industry is relatively weak. The AI Law attempts to reproduce the GDPR global governance framework, further extend the "Brussels effect", and thereby gain bargaining chips in global industry competition and redistribute benefits. The world is watching to see whether the AI Law can live up to expectations.
The implementation of the AI Law rests on a prerequisite: that both local EU technology companies and multinational technology companies are willing to take on the compliance challenge.
Cecilia Bonefeld-Dahl, director general of DigitalEurope, warned that this approach leads to bad regulation and will hinder Europe's future competition with the United States in founding new artificial intelligence companies. "The additional compliance costs for EU companies are further reducing our profits," she said. "While the rest of the world is hiring programmers, we're hiring lawyers."
Zhang Linghan said that the entry into force of the AI Law has brought multi-dimensional challenges to technology companies operating in Europe, and it has made corresponding provisions in various aspects such as AI product development, testing, deployment and application. In her opinion, technology companies must not only increase their investment in compliance costs to ensure the construction of a compliance system in line with the EU, but also continue to evaluate and monitor the target market, and make necessary adjustments in the R&D process and functional design to meet the high standards of the AI Law in terms of security and transparency.
Ning Xuanfeng believes the most immediate impact is that, for providers of high-risk AI systems, the AI Law will at minimum show up in the compliance costs of adapting to the relevant regulatory requirements; for AI systems posing unacceptable risks, the relevant entities may not even be able to keep providing those systems, which could in turn cause corresponding economic losses.
So, how should China’s AI companies respond?
At the compliance level, Zhang Linghan suggested that, first, Chinese companies should comprehensively evaluate the requirements of the AI Law, adjust their compliance strategies dynamically and in good time, and establish internal compliance management systems; second, they should strengthen their capacity for technological innovation and formulate risk plans tailored to the technical characteristics and product functions of their AI systems; finally, they should step up international cooperation and exchanges, closely tracking overseas legislative and enforcement developments to enhance their international competitiveness.
Xu Ke believes the EU will not succeed in reproducing GDPR's "Brussels effect". GDPR's success had three prerequisites: broad jurisdiction, high legal requirements, and high fines. Although the AI Law matches GDPR on all three, an important difference is that data needs to flow across borders, while artificial intelligence systems can be partitioned. Data's mobility lets regulators indirectly influence the world by controlling data flowing into and out of the EU, but this has no counterpart in AI regulation: AI companies can simply segment their markets and need not comply with EU rules when building AI businesses outside the EU.
Xu Ke noted that the AI Law also contains two important exemptions. One is the exemption for AI research and development: many companies can do R&D in the EU while providing services outside it. The other is the open-source exemption, which means the law restricts only some closed-source AI systems; Google, for example, can indirectly influence products and services in the EU market through open-source artificial intelligence.
Xu Ke also pointed out that GDPR grants individuals very strong rights, and these private rights have prompted individuals and NGOs to initiate legal proceedings, achieving supervision through litigation. The AI Law, by contrast, essentially stipulates product liability for AI systems and grants individuals no new rights, so its implementation can rely only on the EU's administrative enforcement. Under the principle of sovereignty, such enforcement can in practice only take place within the EU, which also poses a challenge to the AI Law's global influence.
However, Sun Yuanzhao, a legal scholar in the United States, believes that although the AI Law's compliance requirements may inconvenience business operations, that is true of any compliance requirement. Viewed positively, they may to some extent prevent major safety accidents, and if an accident does occur, stakeholders can quickly pool ideas to identify the problem and a solution, which also helps build social confidence and promote the healthy, orderly development of the market as a whole.