
Shi Qian, Xiao Chao | Building a Solid Defense Line for the AI Safety Supervision System

2024-08-11


The picture shows Tesla's Optimus robot. Photo by our reporter Yuan Jing
The Third Plenary Session of the 20th CPC Central Committee proposed to "strengthen the construction of the network security system and establish an artificial intelligence safety supervision system." This is an important deployment by the CPC Central Committee to coordinate development and security and to respond actively to the security risks of artificial intelligence. To anticipate and effectively address the risks arising from AI technology and the real-world impact of AI applications, while promoting the application and development of the technology, we must analyze in depth the new characteristics of the social risks created by intelligent technology; predict, constrain, and guide those risks in a forward-looking and scientific manner; and build a comprehensive governance system for AI-related social risks that balances development and security, thereby injecting momentum into more efficient social governance.
Four new characteristics of social risks in the era of artificial intelligence
At present, the world has entered a "risk society" era of high uncertainty. Artificial intelligence technology is changing the operating logic and rules of modern society, and the values and behaviors of human society are being systematically reconstructed. The resulting social risks exhibit four new characteristics: they are contagious, cascading, derivative, and latent.
First, AI increases the speed and scope of risk-information diffusion, making social risks contagious. AI technology lowers the difficulty of generating information and extends the breadth and depth of its diffusion, greatly expanding the reach of risk events. At the same time, AI can identify and amplify users' emotions and behaviors through sentiment analysis and behavior prediction. If a risk event triggers strong emotional reactions such as panic or anger, an AI system may preferentially recommend and disseminate such information, further inflaming public sentiment and making risk information spread all the more rapidly. Without effective supervision and control, this can seriously affect social security and social order.
Second, AI blurs the boundary between virtual and real social systems, making social risks cascading. AI technology tightly couples people with people, people with things, and things with things, blurring the boundary between the virtual and real worlds and forming a complex networked system. Many social activities are conducted on virtual platforms yet embedded in real situations: a data leak in the virtual world may lead to identity theft in the real world, and public opinion on social media can quickly shape real-world behavior and decision-making. Conversely, security incidents in the real world affect the security of the virtual world through data transmission, and this virtual-real interaction produces a cascading effect in social risks. Under the empowerment of intelligent technology, different social systems are highly interconnected and many decision processes are automated, so a wrong decision by an automated system can trigger a chain reaction and the cascading spread of risk.
Third, AI triggers secondary risks and risk chains, making social risks derivative. The application of AI technology may produce initial risks, and the process of resolving or mitigating them can in turn trigger a series of secondary risks, forming a complex risk chain. Risk events can spread rapidly across fields and be transmitted at multiple levels. For example, a cyber attack not only leads to information leakage but may also, by affecting the Internet of Things, cause real-world accidents such as traffic jams and medical-equipment failures. Data errors or flawed algorithms can likewise generate secondary risks: algorithmic bias, for instance, produces unfair decisions that often lead to social dissatisfaction and conflict. Risk events usually change public behavior and psychology, further deriving new secondary risks and extending the risk chain.
Fourth, artificial intelligence conceals the impact of some risk events, giving social risks latency. AI technology relies on large amounts of data for training and operation and on algorithms for decision-making. Data leakage or abuse may have no immediate, obvious impact, and decision processes and outcomes may show no problems in the short term; over time, however, the cumulative effects of data abuse and algorithmic bias gradually emerge. AI is also highly dependent on technical infrastructure and automated systems, whose failures or design defects may be hard to detect in the short term but can accumulate into systemic risk. Intelligent technology likewise has a profound long-term effect on social behavior and psychology: for example, prolonged use of intelligent recommendation systems may create an information-cocoon effect, narrowing users' horizons and reducing the diversity and inclusiveness of society.
Four strategies for achieving “governance for AI”
To respond effectively to the social risks and new risk characteristics that AI development may bring, the traditional risk-governance model must be adapted in targeted ways. While using AI technology to improve the efficiency of risk identification and early warning, achieving "governance by AI," we must also prevent and warn against the risks that AI applications themselves introduce, realizing "governance for AI" and building a two-way governance logic in which technology empowers social governance and governance prevents the abuse of technology.
First, use multimodal data to identify complex interactions and improve the accuracy of risk identification and early warning. Multimodal data are data about the same object obtained from different fields or perspectives, appearing in different forms or in different formats of the same form. Multimodal data fusion can exploit the complementary information across sources, characterize objects more comprehensively, identify or infer complex interactions, and enable real-time perception, correlation analysis, and situation prediction for the giant, complex social system. It improves the accuracy of risk identification, supports a more flexible risk-assessment and early-warning framework, makes risk management forward-looking and dynamic, and integrates the otherwise separate decision processes of early warning and response, providing basic methods and tools for collaboration among decision-makers. Data of different modalities, such as text, images, audio, and sensor data, should therefore be drawn from different fields and analyzed jointly, applying machine learning, deep learning, and related techniques to identify the complex interactions among them. Governments, enterprises, social organizations, and academic institutions should collaborate, establish cross-departmental information-sharing mechanisms, integrate multimodal data resources, and build a real-time early-warning system that discovers and warns of potential risks in time.
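The fusion-and-warning idea above can be sketched in miniature. The following is an illustrative example only, not a method from the article: it combines normalized risk signals from several hypothetical modalities (text sentiment, image analysis, sensor anomalies) by weighted late fusion and maps the fused score to a coarse warning tier. All function names, weights, and thresholds are invented for illustration.

```python
# Illustrative late-fusion risk scoring (hypothetical names and numbers).

def fuse_risk_scores(modality_scores: dict[str, float],
                     weights: dict[str, float]) -> float:
    """Weighted average of per-modality risk scores, each in [0, 1]."""
    total_weight = sum(weights[m] for m in modality_scores)
    return sum(modality_scores[m] * weights[m]
               for m in modality_scores) / total_weight

def warning_level(score: float) -> str:
    """Map a fused score to a coarse early-warning tier."""
    if score >= 0.8:
        return "red"
    if score >= 0.5:
        return "orange"
    return "green"

# One hypothetical event: text sentiment is alarming, imagery less so,
# sensor readings moderately anomalous.
scores = {"text": 0.9, "image": 0.4, "sensor": 0.7}
weights = {"text": 0.5, "image": 0.2, "sensor": 0.3}
fused = fuse_risk_scores(scores, weights)
print(warning_level(fused))  # prints "orange"
```

In a real system, each per-modality score would come from its own model (e.g. a sentiment classifier for text), and the weights would be learned or calibrated rather than fixed by hand.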
Second, formulate differentiated risk-governance plans matched to local levels of intelligent-technology development and application, to prevent risk stress from concentrating. Because regions differ in economic and social development and in their level of AI adoption, their risk-response capabilities are uneven, as are the cross-regional propagation and evolution of social risks. Governance plans must therefore fit actual regional conditions, reducing local stress through multi-actor, multi-region collaboration to avoid structural damage to the social system. This requires first assessing the application level of AI technology in each region and analyzing its risk-response capabilities, including emergency resources, technical infrastructure, and management capacity, to identify regional strengths and weaknesses. Strategies should then be tailored accordingly: economically developed regions can focus on strengthening technical security and data protection, while less developed regions need better infrastructure and wider technology adoption. Policy support and capital investment can raise the application level of intelligent technology in technologically lagging regions and narrow interregional gaps in technology and risk-response capability.
Third, balance technological development against risk regulation and optimize the total cost of risk governance, so that digital and intelligent technology is both "controllable" and "active". AI risk governance must weigh "security" against "development", avoiding both the economic and social losses caused by governance failure and the loss of economic and social benefits caused by over-governance. The total cost of AI risk governance comprises several elements, including risk-prevention costs, the social losses incurred when the governance mechanism fails, and the social benefits forgone when it is effective but over-constrains. The intensity of supervision can be adjusted to the maturity of the technology and the application scenario: loose supervision in a technology's early stage to encourage innovation, and stronger supervision at maturity to ensure safety and standardization. When formulating governance strategies, the different cost elements should be quantified and subjected to cost-benefit analysis, so that the plan chosen controls risk effectively without unnecessary governance cost. At the same time, to reduce the cost of risk governance, policy and financial support can encourage enterprises and research institutions to innovate in security technology, ensuring that security measures develop in step with AI innovation.
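The cost-benefit logic described above can be made concrete with a toy calculation. This is a hypothetical sketch, not a model from the article: total expected cost is taken as prevention cost plus expected loss from governance failure plus benefits forgone to over-regulation, and the plan minimizing that total is selected. All figures are invented for illustration.

```python
# Hypothetical cost comparison of three governance plans (invented numbers).

def total_cost(prevention: float, failure_prob: float,
               failure_loss: float, benefit_loss: float) -> float:
    """Prevention cost + expected failure loss + forgone social benefit."""
    return prevention + failure_prob * failure_loss + benefit_loss

plans = {
    # Loose oversight: cheap, but failures are likely.
    "loose":  total_cost(prevention=10, failure_prob=0.30,
                         failure_loss=200, benefit_loss=0),
    # Moderate oversight: balanced cost profile.
    "medium": total_cost(prevention=30, failure_prob=0.10,
                         failure_loss=200, benefit_loss=10),
    # Strict oversight: failures rare, but innovation is suppressed.
    "strict": total_cost(prevention=60, failure_prob=0.02,
                         failure_loss=200, benefit_loss=40),
}

best = min(plans, key=plans.get)
print(best, round(plans[best]))  # prints "medium 60"
```

With these made-up numbers, the moderate plan wins, echoing the article's point that both under-governance (high expected failure loss) and over-governance (high forgone benefit) inflate the total cost.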
Finally, we should proactively assess the social risks that AI development may cause and formulate sound supervision systems and standards for them. AI technology often develops and iterates faster than policies and laws can be made or adjusted, so governance policies and standards lag behind. It is therefore necessary to embed AI applications in social governance and to formulate safety-supervision systems and industry standards for AI-related social risks, ensuring that policy and law keep pace with the technology while supporting the construction and development of application scenarios. The development trends of AI should be analyzed regularly and assessed comprehensively across technical, ethical, legal, economic, and social dimensions; an AI risk early-warning mechanism should be established to discover and predict the potential risks of emerging technologies in time; and flexible, forward-looking policies and regulations should be studied and formulated. In addition, we should actively participate in setting international standards and rules, build transnational cooperation platforms, and promote cooperation and exchange among countries on AI governance to jointly address global technological risks and challenges.
(Author’s unit: School of Economics and Management, Tongji University)
Text: Shi Qian, Xiao Chao | Photo: Yuan Jing | Editor: Chen Yu | Editor-in-Chief: Yang Yiqi
Please indicate the source when reprinting this article.