
Using AI to fight AI: how far has the attack-and-defense relay between face-swapping and forgery detection come?

2024-09-14


Seeing is no longer necessarily believing: a "portrait photo" or a "personal video" may become a new tool for criminals. During the 2024 National Cybersecurity Publicity Week, AI face-swapping, voice cloning, and other infringements of personal privacy once again became a focus of public attention. The topic was also hotly debated at the Bund Conference, held a week before Cybersecurity Week.
From the Bund Conference on the banks of the Huangpu River to the National Cybersecurity Week on the shores of the Lingdingyang estuary, questions about how AI can be used for good continued throughout September, and reflections on how to combat the risks of AI face-swapping resonated in two riverside cities, Shanghai and Guangzhou.
How does AI face-swapping technology affect the security of our digital identities? Which AI tools can accurately detect and prevent deepfake risks? And at the legal and ethical level, how do we balance innovative applications of face-swapping technology against its potential harms? With these questions in mind, Nandu reporters went looking for answers, from simulated attack-and-defense challenges to deployed applications, from academia to industry.
Beyond the testing ground: a group of young people start an AI attack-and-defense relay
AI face-swapping is a deepfake technique that uses deep learning algorithms to tamper with facial images. As the technology iterates, it improves production efficiency in industries such as entertainment and e-commerce, but it also brings serious risks. In early September, a new "Nth Room" case built on deepfake technology surfaced in South Korea; Korean women fell into an "AI face-swapping panic" and even posted pleas for help on Chinese social networks.
At the same time, on the Bund in Shanghai, a deepfake attack-and-defense challenge gathered a group of young "AI counter-forgers". During the Bund Conference, VisionRush from the Institute of Automation of the Chinese Academy of Sciences, one of the teams in the AI Innovation Competition · Global Deepfake Attack and Defense Challenge (hereinafter the "deepfake challenge"), open-sourced its competition model in response to the panic among Korean women. Other teams then took the baton and launched the Deepfake Defenders "code relay": teams from the Chinese Academy of Sciences, the University of Macau, the Ocean University of China, and others announced they would open-source their competition models, hoping to lower the technical threshold, strengthen technical exchange, and curb the abuse of deepfake technology.
During the Bund Conference's AI Innovation Competition · Global Deepfake Attack and Defense Challenge, participating teams launched the Deepfake Defenders "code relay".
Notably, JTGroup, a joint team of scholars from Hong Kong and Macau, performed strongly and won the championship in the image track of the deepfake challenge. Wu Haiwei, a postdoctoral fellow at the City University of Hong Kong, competed in the image track alongside junior colleagues from the State Key Laboratory of Internet of Things for Smart City at the University of Macau. Chen Yiming, a researcher at the same laboratory, captained the championship team. Speaking about open source, he laughed in an interview with Nandu reporters: "Our research stands on the shoulders of 'giants' who were very willing to contribute their code selflessly. Why shouldn't we do the same? Our team does research and development under the principle of technology for good, and we enter competitions like this hoping to apply technology to things the public actually needs."
JTGroup achieved a 97.038% deepfake recognition rate across multiple forgery types and scenarios. How? Chen Xiangyu, a team representative, told Nandu reporters: "This challenge came close to real-world data scale. We learned many new techniques for deepfake generation and recognition in a very short time, and we kept adjusting the details of our algorithm in response. That may be one reason we won."
Nandu reporters learned from the organizers that the deepfake challenge is one of the most authoritative competitions in computer vision (CV), with problems set by Ant Digital's ZOLOZ and Tianji Lab. The competition dataset combines public data with forged data: the forged images cover more than 50 generation methods seen in real scenarios, and the forged audio and video involve more than 100 combined attack methods. The training data released by the organizing committee totals more than one million samples.
"This competition gives technical researchers a chance to practice in a highly realistic simulated industrial environment, which helps integrate industry, academia, and research, and cultivates responsible, hands-on talent," said Zhou Tianyi, deputy director of the frontier artificial intelligence center at Singapore's Agency for Science, Technology and Research, a co-organizer of the competition. Taking research out of the testing ground and into real scenarios is also the teams' goal. Wu Haiwei of JTGroup told Nandu reporters: "AI-assisted detection algorithms are already used in the qualification review stage on platforms such as Taobao and Tmall, to stop criminals from forging business licenses and identity documents when applying to open stores. Compared with purely manual review, this cuts costs and improves efficiency. In the future, we hope our laboratory's results can be applied in more corporate and government scenarios, such as national anti-fraud apps, and contribute to deepfake anti-fraud and technology for good."
Standards and applications: building a protective net against AI face-swapping risks
Of course, the fight against deepfakes goes far beyond a single competition. The industry has responded in turn, from standards to applications, jointly building a protective net against AI face-swapping risks.
A Nandu reporter tries out Ant Group Tianji Lab's interactive deepfake detection equipment at the Bund Conference technology exhibition.
At the Bund Conference technology exhibition in Shanghai, Ant Group's Tianji Lab demonstrated an interactive deepfake detection device. In a Nandu reporter's hands-on test, after a real-time photo was taken, the system generated several different deepfaked "Nandu reporter" images (shown above). The detection arm of Tianji Lab then examined each image in turn and correctly identified that only the last one was the real portrait; the rest were all deepfakes.
At the same time, the "Technical Specifications for False Digital Face Detection in Financial Applications" standard was officially released at the conference. The standard specifies functional, technical, and performance requirements for false-digital-face detection services in the financial sector and proposes corresponding testing and evaluation methods. It is reportedly China's first "AI face-swapping" detection standard for financial scenarios. Liu Yong, secretary-general of the Zhongguancun Fintech Industry Development Alliance and dean of the Zhongguancun Internet Finance Research Institute, said in his speech: "The release of this standard provides a basis for the security testing and evaluation of false digital faces in financial scenarios, and fills a gap in the field."
In Nansha, Guangzhou, at the Personal Information Protection sub-forum of the 2024 National Cybersecurity Publicity Week, 15 enterprises and institutions, including Nanfang Media Group, the Fifth Electronics Research Institute of the Ministry of Industry and Information Technology, the Guangdong branch of the National Internet Emergency Center, Pengcheng Laboratory, Guangdong Telecom, Guangdong Mobile, Tencent, Huawei, Ping An Group, China Guangfa Bank, OPPO, BYD, Sangfor, Shenzhen CESI, and Jingyuan Security, announced the official launch of the Guangdong Provincial Network Data Security and Personal Information Protection Association and jointly issued an initiative on promoting network data security and personal information protection. AI face-swapping risks will reportedly be among the association's topics of concern.
China Mobile's interactive zone for AI face-swapping and real-time forgery detection. (Source: 2024 National Cybersecurity Publicity Week)
At the cybersecurity expo during this year's Cybersecurity Week, major security companies also showcased new security products and technologies. In China Mobile's interactive zone for real-time detection of deep-synthesis face swaps, Nandu reporters tried an AI face-swapping simulation system. In the demo, a deepfake-generated video was accurately flagged as a forgery by the real-time detection technology, with the whole process taking only about five seconds. Wang Xiaoqing, deputy manager of China Mobile's information security management and operation center, explained that the system can counter scenarios such as AI face-swapping, AI voice cloning, and document forgery. It has been piloted in real-name network-access services: the system can spot AI-generated faces during real-name verification in real time, proactively blocking "AI face-swapping" risks in online card opening and replacement, with a detection accuracy above 90%.
Indeed, how to let ordinary people enjoy the protection of deepfake detection technology is an important question for current research and development. Zhou Wenbai, a professor at the School of Cyberspace Security of the University of Science and Technology of China (USTC), emphasized this in an interview with Nandu reporters. The Intelligent Cognitive Security Laboratory at his school has taken a new step: in early September, what is billed as the world's first on-device AI face-swap anti-fraud detection technology, developed by the laboratory in cooperation with Honor, went live on Honor phones. After the user taps "start detection", the technology detects faces in the video and estimates the probability that a face has been replaced by AI; when the probability is too high, it warns the user that a suspected face swap has been detected. The laboratory has a track record here: as early as 2019 it open-sourced DeepFaceLab, a deep-learning tool for video face swapping, and in 2020 the team entered the Deepfake Detection Challenge (DFDC), the world's largest such competition, organized by Facebook, MIT, and others, finishing runner-up.
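The article describes the on-device flow only at a high level (detect faces in the video, estimate a swap probability, warn above a threshold) and does not disclose the actual model. The sketch below illustrates just that workflow; `score_frame` is a hypothetical stand-in for the real classifier, stubbed here so the flow can be exercised end to end.

```python
# Minimal sketch of an on-device "suspected face swap" alert flow:
# sample frames from a live video, score each with a detector, and
# raise an alert when the averaged swap probability crosses a threshold.
from typing import Callable, Iterable, Tuple


def detect_face_swap(frames: Iterable[bytes],
                     score_frame: Callable[[bytes], float],
                     threshold: float = 0.8) -> Tuple[float, bool]:
    """Return (mean swap probability, alert flag) over the sampled frames."""
    scores = [score_frame(f) for f in frames]
    if not scores:
        return 0.0, False
    mean = sum(scores) / len(scores)
    return mean, mean >= threshold


# Stub detector for illustration: frames tagged b"fake" score high.
stub = lambda frame: 0.95 if frame.startswith(b"fake") else 0.05

prob, alert = detect_face_swap([b"fake-frame-1", b"fake-frame-2"], stub)
if alert:
    print("suspected face swap detected")  # what the UI would show the user
```

In a real deployment the frame scorer would be a trained neural classifier running on the device's NPU; averaging over several frames, as above, is a common way to smooth out per-frame noise before alerting the user.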
What problems remain in using AI to defeat AI?
From attack-and-defense competitions to industry applications, the efficiency and practicality of using AI to weed out the fake and keep the real is self-evident. What cannot be ignored, however, is that the risks are still growing rapidly. Yao Weibin, creator of the Global Deepfake Attack and Defense Challenge and technical director of Ant ZOLOZ, told Nandu reporters that the number of AIGC-generated attacks this year has increased roughly tenfold over last year, and that similar situations have occurred not only in China but also in South Korea, the Philippines, Indonesia, and other countries.
Despite the emergence of many detection technologies, we still seem unable to fully avoid the risks deepfakes bring. Why? Zhou Wenbai told Nandu reporters that moving a technology from theoretical research to practical use is a process, and it meets all kinds of challenges before it lands. "In fact, I think the most important thing is not how perfect the technology is. To really solve the problem, only about 5% depends on technology; the remaining 95% depends on improving laws, regulations, and implementation standards." He added that public education by the media, content platforms, and other actors is essential, and that the public needs a stronger sense of security awareness.
Zhang Bo, a member of the artificial intelligence security governance committee of the Cyber Security Association of China, also offered advice in a media interview: "AI face-swapping usually accompanies transaction-fraud scenarios such as video calls. Watch details like the facial contours and background lighting in the other party's video for anomalies. If necessary, ask the other party to quickly raise their head, nod, or turn their head, and check for abnormal details to confirm the video is real."
From a technical perspective, what directions could improve deepfake detection accuracy? Zhang Bo believes identity authentication and digital watermarking can be combined to raise the cost of forgery. Zhou Wenbai told Nandu reporters there are two aspects. One is to feed the detection model more high-quality deepfake data of different types: like a person learning, the more data a deep learning model has seen, the stronger its recognition ability may be. "The other is active protection at the data-publishing end. For example, before our data is published on a social platform, the platform should embed robust, traceable identification information (such as a watermark) in some way. This identifier spreads along with the media; by checking its integrity, semantics, and other content, one can judge whether the material has been damaged or forged," Zhou Wenbai said. It is worth noting that many video platforms, Douyin among them, now label videos suspected of being AI-generated.
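Zhou's "active protection at the publishing end" can be illustrated with a toy example: before distributing media, the platform attaches a keyed identifier bound to the content, so any later edit (including a face swap) breaks verification. The sketch below uses a plain HMAC over the raw bytes as the identifier; the platform name, key, and publisher ID are hypothetical, and a production system would instead embed a robust, imperceptible watermark that survives re-encoding, which this sketch does not attempt.

```python
# Toy illustration of publish-time traceable identification:
# the platform tags content with a keyed MAC; verification later
# fails if the bytes have been altered in any way.
import hashlib
import hmac

PLATFORM_KEY = b"platform-secret-key"  # hypothetical platform signing key


def tag_media(content: bytes, publisher_id: str) -> str:
    """Compute a traceable identifier bound to both content and publisher."""
    msg = publisher_id.encode() + b"|" + content
    return hmac.new(PLATFORM_KEY, msg, hashlib.sha256).hexdigest()


def verify_media(content: bytes, publisher_id: str, tag: str) -> bool:
    """Check that the content still matches the identifier it was published with."""
    return hmac.compare_digest(tag_media(content, publisher_id), tag)


original = b"frame-bytes-of-original-video"
tag = tag_media(original, "user123")

print(verify_media(original, "user123", tag))       # untouched content verifies
print(verify_media(b"swapped-bytes", "user123", tag))  # edited content fails
```

The limitation is exactly the one Zhou hints at: a bare MAC detects any change, but it lives beside the file rather than inside it. Robust watermarking aims to embed the identifier in the pixels themselves so it survives compression and cropping while still exposing semantic tampering.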
Zhang Bo also suggested that, on the management side, the formulation and implementation of relevant laws, regulations, and supervisory measures should be accelerated to govern the uses and application scenarios of AI. In Zhou Wenbai's view, standards such as the "Technical Specifications for False Digital Face Detection in Financial Applications" can guide enterprises and institutions in providing services and applying the technology according to defined specifications, preventing the technology from threatening privacy or data security in use and thereby harming users' rights or even national interests.
There are also opportunities and challenges in cultivating talent for the field. Wang Yaonan, an academician of the Chinese Academy of Engineering and president of the China Society of Image and Graphics, pointed out that in recent years there have been many malicious uses of AI face-swapping for fraud abroad, causing economic and property losses, reputational damage, and other harms. Facing a global technical challenge, cultivating AI talent with practical skills is urgent. Wang Wei, CTO of Ant Digital, proposed "promoting innovation through competition": grounded competition problems can improve participants' ability to solve real problems in real application scenarios and cultivate interdisciplinary talent, while gathering global AI elites lets them exchange ideas in live exercises and raise the global level of countermeasures.
Written by: Southern Metropolis Daily reporter Xiong Runmiao