
Strengthen Information Protection to Prevent Deepfake Crime Risks

2024-09-16


Recently, news broke in South Korea that "about 220,000 perpetrators used deepfake technology to change faces, create obscene content of unspecified women, and spread it widely," causing panic in Korean society and bringing the social and criminal risks of deepfakes back into the public eye.
As research and development in artificial intelligence (AI) continues to deepen, deepfake technology has grown increasingly mature. Its high realism, technological accessibility, low cost, and personal relevance give it considerable practical value. From AI "digital humans" to face replacement and reshoots in film and television, deepfake technology is finding ever wider application across industries.
However, deepfake technology is a sharp double-edged sword: while we enjoy its dividends, we cannot ignore the enormous potential risks, which may make crime more efficient, more concealed, and more harmful to society. At the individual level, deepfake technology can be used to appropriate a victim's image for profit without consent, infringing personal portrait rights. Worse still, it can be used by fraudsters as a disguise: at the beginning of this year, a finance employee at a Hong Kong company was defrauded of HK$200 million by scammers impersonating a company executive. For celebrities and politicians, deepfake technology can cause even greater damage and social harm. American singer Taylor Swift was synthesized into indecent photos that went viral on social platforms, seriously damaging her personal and commercial image and disrupting social and public order. In addition, deepfake technology may be used to forge evidence, undermining the credibility of photos and videos, thereby interfering with judges' judgments and obstructing judicial justice.
To guard against the risks of deepfake technology, the first task is to strengthen the protection of personal biometric information. From the perspective of data governance, deepfake production involves three stages: obtaining data (mainly biometric information), processing data, and outputting data (the fake content). Abusing deepfake technology is, in essence, obtaining another person's biometric information without authorization or consent and then using that information to impersonate them. Improving the protection mechanism for personal biometric information therefore requires strict control over both the input and the output of deepfake information, achieving governance at the source and control at the end point.
First, strengthen the "informed consent" mechanism for collecting biometric information. Because personal biometric information is unique and cannot be changed, any leak or abuse causes great and long-lasting harm to the information subject, so the highest level of personal information protection should apply. Under the Personal Information Protection Law of the People's Republic of China and related laws and regulations, exemption from notification is permitted only in specific circumstances such as national security, public security, criminal investigation, and judicial proceedings. Outside those circumstances, any party collecting or using personal biometric information must strictly implement the informed consent mechanism: collection without explicit consent is a violation. It must also be made clear that information an individual has disclosed online does not imply consent to the reprocessing of their biometric information. As deepfakes gain more commercial application scenarios, operators' misconception that "online information can be used at will" must be dispelled and the obligation to inform fulfilled. Moreover, information users must ensure that notification is effective: it should specify who will use the data, the purpose of collection, the processing involved, and where the information will flow, and it should be delivered in a sufficiently prominent manner, such as pop-up windows that cannot be dismissed before a set time.
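To make the notification elements above concrete, the following minimal sketch models a consent record whose fields mirror those elements. The structure, field names, and validity check are illustrative assumptions, not requirements drawn from the law.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical consent record: fields correspond to the notification elements
# named above (who uses the data, purpose, processing, information flow).
@dataclass
class BiometricConsentRecord:
    subject_id: str          # the person whose biometric data is collected
    data_user: str           # who will use the data
    purpose: str             # stated purpose of collection
    processing: str          # described processing behaviour
    data_flow: str           # where the information will flow
    explicit_consent: bool   # consent given through a prominent, explicit prompt
    granted_at: datetime
    valid_for_days: int = 365

    def is_valid(self, now: datetime | None = None) -> bool:
        """Collection or use is allowed only with explicit, unexpired consent."""
        now = now or datetime.now()
        return self.explicit_consent and now <= self.granted_at + timedelta(days=self.valid_for_days)

# Example: a record lacking explicit consent never validates.
record = BiometricConsentRecord(
    subject_id="subject-001", data_user="studio-A", purpose="film face replacement",
    processing="face-swap model training", data_flow="not shared with third parties",
    explicit_consent=False, granted_at=datetime.now(),
)
print(record.is_valid())  # False
```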
Second, improve platform supervision and tracing mechanisms. For areas where deepfakes are heavily used, such as secondary creation based on real people's images, platforms should strengthen supervision and tracing through a combination of technical review and manual review, for example by strictly checking whether a secondary creation carries an authorization record from the person whose information is being used, as sketched below. If the content is unauthorized or the authorization certificate is forged, the platform should prohibit dissemination of the other person's information and take corresponding restrictive measures, such as banning the account.
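The sketch below illustrates the review logic just described. The function name, decision labels, and registry structure are hypothetical and do not represent any platform's actual system.

```python
# Hypothetical platform-side check: before a secondary creation using someone's
# likeness is published, look for an authorization record from the depicted person.
def review_secondary_creation(uploader_id: str,
                              depicted_subject_id: str,
                              authorization_registry: dict[str, set[str]]) -> str:
    """Return the platform's decision for a secondary creation using someone's likeness."""
    authorized = authorization_registry.get(depicted_subject_id, set())
    if uploader_id in authorized:
        return "approve"  # a valid authorization record from the depicted person is on file
    # No authorization (or a forged one that fails verification): stop dissemination
    # and apply restrictive measures such as banning the account.
    return "block_and_restrict_account"

# Example: subject-001 has authorized only creator-A.
registry = {"subject-001": {"creator-A"}}
print(review_secondary_creation("creator-B", "subject-001", registry))  # block_and_restrict_account
```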
Third, establish a stricter accountability and punishment mechanism through targeted legislation. With respect to falsifying identity information, China's Criminal Law currently provides two relevant offenses: one requires impersonating a functionary of a state organ, with the fraud covering property, honors, treatment, affections, and so on; the other targets identity rights and interests in specific procedures such as examinations. The crime of infringing citizens' personal information, meanwhile, does not cover the "illegal use of biometric information." This means that under current law, relying on these offenses alone makes it difficult to effectively combat abuses such as deepfake "spokespersons" through criminalization. It is therefore necessary to create a new offense that specifically regulates the theft of other people's identities. In this area, U.S. legislation that treats the transfer, use, or infringement of another person's identity identification data as identity theft can provide some reference for China. In the future, China's Criminal Law could add a crime of "identity theft": anyone who uses another person's biometric information without the information subject's consent, where the circumstances are serious (for example, the content is clicked or forwarded 500 times or more), would be treated as committing identity theft; where other crimes such as fraud are also involved, the offense carrying the heavier punishment would apply.
A sharp blade needs a scabbard, and a beast needs a cage. Deepfake technology does have promising application prospects, but building a complete system of application, supervision, and punishment is the prerequisite for it to truly become a handy tool for humanity. To this end, all relevant parties must first establish such a system: define its boundaries with a complete legal framework, constrain its application with efficient supervision, and guide its development with sound values and ethics, so that technology serves people and deepfake technology stays on the right track to benefit humanity. (The authors are researchers at the Zhejiang Digital Development and Governance Research Center and prosecutors at the Pingyang County People's Procuratorate of Zhejiang Province.)