
The 12th Internet Security Conference Digital Security Summit was held

2024-08-05


Source: People's Post and Telecommunications News

On July 31, the 12th Internet Security Conference (ISC.AI 2024) Digital Security Summit was held in Beijing. Under the theme "Building a Big Security Model and Leading the Security Industry Revolution", the summit called on the industry to reshape the security system with large models and safeguard the steady development of the digital economy. The conference brought together numerous academicians and experts, along with executives from domestic and foreign companies such as 360, Huawei, and Microsoft, to analyze in depth the security challenges and solutions arising from the development of artificial intelligence technology and to jointly explore new paths for digital security driven by big security models.

The security issues surrounding artificial intelligence cannot be ignored, and industry and research urgently need to work together to explore new solutions. The widespread application of AI technology is driving the transformation and upgrading of various industries while injecting strong momentum into the development of the security industry. Wu Shizhong, an academician of the Chinese Academy of Engineering, said that the new security problems created by AI technology have become a reality, and the security industry will quickly enter a new AI-driven era. However, research on large model security has only just begun; both security research and the security industry must keep pace with scientific and technological progress and application innovation if they are to serve and safeguard development.

"At present, the security responsibilities and risks of AI application systems are seriously unbalanced." Wu Jiangxing, an academician of the Chinese Academy of Engineering and director of the National Digital Switching System Engineering Technology Research Center (NDSC), emphasized the dilemma facing the current AI system. He proposed that based on the intrinsic security architecture, AI application system security issues can be discovered or corrected. AI application systems must contain the necessary diversity and relatively correct axioms of the intrinsic security structure to find relatively reliable results in untrustworthy processes. In the intelligent era, it is necessary to choose the right technical path to achieve a safer AI application system.

With the rapid development of artificial intelligence technology, every industry and product is being reshaped, and the security industry is facing a revolution of its own. Zhou Hongyi, founder of 360 Group and chairman of the ISC conference, said that the essence of reshaping security with AI is to make security "autonomous". As a vendor with both digital security capabilities and AI technology, 360 was the first in China to launch a security large model, the "360 Security Big Model", built on deep full-stack large-model technology, the world's largest security knowledge base, Asia's largest team of senior security experts, and broad, battle-tested security scenario coverage, and has deployed it in business scenarios including endpoints, security operations, and security services. At present, 360's full line of security products has integrated security large-model capabilities, and all users who purchase 360 standard products receive the large-model standard capabilities free of charge, popularizing large models and advancing the shift toward new quality productivity in the security industry.

As generative artificial intelligence pushes AI development into a new stage, AI governance has also become a prominent issue. Gong Ke, director of the Haihe Laboratory of Advanced Computing and Critical Software and executive dean of the China Institute for the Development of New Generation Artificial Intelligence Strategy, pointed out in his speech that AI governance should follow three principles: being conducive to development, serving people, and being grounded in ethics. It is necessary to clearly recognize the shortcomings of the technology itself, as well as the risks of human abuse, misuse, and malicious use of AI technology. He emphasized that to prevent AI from "doing evil", it is necessary to establish clear ethical standards, increase transparency, strengthen accountability, improve laws and regulations, and build in security mechanisms. In addition, we must adhere to open innovation and develop a safe and controllable AI ecosystem founded on innovation. (Su Deyue)