
Chinese Expert on the United Nations High-Level AI Advisory Body: Technological Development Makes AI Risks More Concentrated

2024-09-21


Beijing News Shell Finance (reporter Bai Jinlei) The 2024 Beijing Culture Forum was held in Beijing from September 19 to 21. Zhang Linghan, a professor at the Institute of Data Rule of Law of China University of Political Science and Law and a Chinese expert on the United Nations High-Level Advisory Body on Artificial Intelligence, attended the parallel forum "Cultural Trends: Emerging Business Forms and Technology Integration" and delivered a keynote speech on the theme of "Guiding and Regulating the Healthy Development of Artificial Intelligence."
Speaking about the comprehensive risks posed by artificial intelligence, Zhang Linghan argued that the boundaries between the legal entities of "technology supporters, service providers and content producers" have been gradually erased as the technology has developed, and the associated risks have become increasingly concentrated. As a result, the world has begun to actively discuss principles for artificial intelligence governance, and intergovernmental international organizations have become an important voice in that discussion.
Taking 2024 as an example: on March 21, 2024, the United Nations General Assembly unanimously adopted the resolution "Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development," calling for the opportunities brought by AI technology to be seized and for technological innovation to benefit all of humanity. In May 2024, the "AI for Good" Global Summit was held in Geneva, where participants discussed the digital divide in the context of artificial intelligence, combating AI-generated disinformation in the biggest election year, achieving sustainable development of AI, and AI risk identification and oversight.
In Zhang Linghan's view, the EU's risk-tiered governance system is currently at the forefront. The EU has established a four-tier AI risk regulatory framework and built a differentiated system of obligations: on the one hand, it strictly restricts the development and deployment of AI that carries significant social risks; on the other, it eases the spread of low-risk AI technologies across the EU. In addition, the EU has imposed four special obligations on general-purpose AI with systemic risk: risk testing involving independent experts, prior review of training data, disclosure of data copyright information, and infrastructure security.
She also shared China's long-term plan for AI governance. In the first step, the industry goal for 2020 was for the overall level of AI technology and applications to keep pace with the world's most advanced, while the governance goal was to initially establish ethical norms, policies and regulations for AI in some fields. In the second step, the industry goal for 2025 is to achieve major breakthroughs in basic AI theory and applications, while the governance goal is to initially establish a system of AI laws and regulations, ethical norms and policies, and to develop the capacity to assess and control AI security. In the third step, the industry goal for 2030 is for AI theory, technology and applications to reach a world-leading level overall, while the governance goal is to build a more complete system of AI laws and regulations, ethical norms and policies.
Edited by Cheng Zijiao
Proofread by Zhao Lin