
Hauling cooking oil right after kerosene? AI can tell the difference! Academicians and experts stress that artificial intelligence safety needs a "human in the loop"

2024-08-03


Summary: Artificial intelligence and human society in fact have a symbiotic relationship, encompassing assistive roles, human-computer interaction, human-in-the-loop arrangements, and competition.


It was recently revealed that tank trucks hauled cooking oil right after carrying kerosene, without cleaning their tanks in between. Such "rough" practices strike the public's food-safety nerve. An "AI eye" built on artificial intelligence may be able to catch such violations in time.

"Utilizing existing and newly added cameras at gas stations and adopting cloud-based video analysis mode, we can identify behaviors during oil unloading, refueling, gas unloading, gas filling, liquid unloading, and liquid filling, and alarm, count, and analyze abnormal actions to establish a smart gas station platform." On the 2nd, the 12th Internet Security Conference Shanghai AI Summit held at the China-Israel Innovation Park in Putuo District revealed that by deploying an AI video intelligent analysis platform on China Telecom's cloud resources and landing in energy companies, it has completed AI analysis of behaviors in the oil unloading area with 300 video streams, realizing tanker truck identification, fire extinguisher identification, oil product detection, and oil pipeline detection.


China-Israel (Shanghai) Innovation Park.

[AI deepfake fraud surges 3,000%]

In fact, AI large models are already empowering social governance, helping solve people's livelihood problems and improving their sense of security. Wei Wenbo, deputy general manager of the digital business department of China Telecom Artificial Intelligence Technology Co., Ltd., gave examples: smart catering supervision keeps kitchens and stoves visibly clean ("bright kitchens, bright stoves"); street- and village-level security watches over the elderly and guards against drowning; garbage disposal and electric-vehicle parking are supervised intelligently; even riding an electric vehicle without a helmet can be picked out automatically.

However, AI itself is bound to raise security issues of its own. From AI face-swapping to AI voice cloning, the 2024 Artificial Intelligence Security Report shows that AI not only amplifies existing cybersecurity threats but also introduces new ones, driving an exponential rise in cybersecurity incidents. In 2023, AI-based deepfake frauds increased by 3,000%, and AI-based phishing emails increased by 1,000%.

Liu Quan, a foreign academician of the Russian Academy of Natural Sciences and deputy chief engineer of the CCID Research Institute, cited a survey of IT industry leaders on large models such as ChatGPT: 71% of respondents believed that generative AI would bring new risks to corporate data security. To prevent leaks of sensitive data, technology companies such as Microsoft and Amazon have gone so far as to restrict or ban their employees from using generative AI tools.


Shanghai AI Summit.

[Securing the "Battle of a Hundred Models" by "Fighting Models with Models"]

Multimodal generation is flourishing and large models keep emerging. In China alone, 117 large models have so far passed dual registration. According to the "Beijing Artificial Intelligence Industry Large Model Innovation and Application White Paper (2023)", as of October last year China had 254 suppliers of large models with one billion or more parameters. Domestic large models with hundreds of billions of parameters have already been launched, on the same order as the roughly 100 billion neurons in the human brain. IDC predicts that China's AI large-model market will reach 21.1 billion US dollars by 2026.

As artificial intelligence enters a critical period of large-scale application, new risks, from privacy leakage and algorithmic discrimination to data bias, pose severe challenges to its high-quality development. He Fan, chief product officer of 360 Digital Intelligence Group, said that in the era of large models, security must be ensured by "fighting models with models": using the security big data accumulated over the past 20 years, combined with expert knowledge, to train a high-level security large model.

Chen Xiaohang, deputy general manager of the Information Network Department of China Telecom Shanghai Branch, also told the Jiefang Daily Shangguan News reporter that in the "Battle of a Hundred Models", large models start out as "blank sheets of paper" and require constant "education" and "training". Content must therefore be controlled at the source and spam filtered out along the way, to keep large models from being "misled".
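The article gives no detail on how such source-side filtering works in practice. As a rough illustration, cleaning a training corpus before it reaches a model might look like the sketch below, where the blocklist, length threshold, and duplicate check are all assumed heuristics rather than the carrier's actual pipeline.

```python
# A minimal sketch of source-side corpus filtering before model training.
# The heuristics below (blocklist, length check, duplicate removal) are
# illustrative assumptions, not any real production pipeline.
BLOCKLIST = {"free prize", "click here", "casino"}  # assumed spam markers

def is_clean(doc: str, seen: set) -> bool:
    text = doc.strip().lower()
    if len(text) < 50:                    # drop near-empty fragments
        return False
    if any(marker in text for marker in BLOCKLIST):
        return False
    if text in seen:                      # drop exact duplicates
        return False
    seen.add(text)
    return True

def filter_corpus(docs):
    seen: set = set()
    return [d for d in docs if is_clean(d, seen)]

if __name__ == "__main__":
    corpus = [
        "Click here to win a free prize!!!",
        "A long, informative article about gas-station safety procedures and AI monitoring.",
    ]
    print(filter_corpus(corpus))  # only the informative document survives
```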


ISC Internet Security Conference.

[The competition between humans and machines requires more "humans in the loop"]

As is well known, the "black box" nature of AI models makes their decision-making hard to explain, which matters most in high-risk fields such as finance and healthcare: regulators and industry customers need to understand the basis of a model's decisions, for instance to ensure the transparency and fairness of credit-scoring models. At the same time, AI models themselves may harbor security vulnerabilities. Hackers can, for example, attack with adversarial examples, causing a model to produce incorrect outputs on seemingly normal inputs.
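To make the adversarial-example risk concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM): a small perturbation along the sign of the loss gradient can push a model toward a wrong output. The tiny network and random input are stand-ins for illustration, not any system discussed at the summit.

```python
# Minimal FGSM sketch: nudge a seemingly normal input so the model's
# prediction may flip. Model and input here are illustrative stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)   # a "normal" input
y = model(x).argmax(dim=1)                   # the model's original prediction

# FGSM: take one step on the input in the direction that increases the
# loss with respect to the current prediction.
loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.5                                # perturbation budget (illustrative)
x_adv = (x + epsilon * x.grad.sign()).detach()

print("original prediction:   ", y.item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```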

In the eyes of Academician Liu Quan, AI's pronounced "double-edged sword" character has brought humanity great benefits along with huge challenges. In fact, artificial intelligence and human society have a symbiotic relationship, encompassing assistive roles, human-computer interaction, human-in-the-loop arrangements, and competition. Indeed, as weak artificial intelligence develops rapidly toward strong artificial intelligence, 90% of staffed positions could be replaced by AI and become "unmanned positions", while the 10% who refuse to "lie flat" will need to command massive data and super-scale AI operating capabilities.

In the movie "The Wandering Earth 2", the robot MOSS voices the concept of "human in the loop" while also harboring the idea of "destroying humanity". This underscores that only AI iteration led by humans can form a closed loop between humans and machines. The same goes for AI security: at no point can machines do without human intervention. Liu Quan said that, in essence, artificial intelligence cannot replace the human way of thinking, nor can it fully replace humans; otherwise, the development of AI would lose its original meaning.