AI makes rumor-mongering easier and rumors more “scientific”?

2024-07-17

Source: Legal Daily

Reporter Zhang Shoukun

"Pictures don't lie, and experts have verified it too."

Recently, Tianjin resident Li Meng (pseudonym) had a heated argument with her mother over a "popular science article". Her mother firmly believed the article was genuine because it contained videos, pictures, and research conclusions attributed to various doctors and medical teams; Li Meng examined it carefully, found that it was AI-generated, and noted that the platform had already debunked it, so it had to be false.

The article was about cats: a girl played with a cat, contracted a supposedly terminal disease, and was left disfigured beyond recognition. Because of this article, Li Meng's mother firmly opposed her keeping a cat, fearing she would suffer the same fate. Li Meng said, half amused and half exasperated, "I really hope my mother would spend less time on the Internet."

Li Meng's mother is not the only one who has been deceived by AI rumors. Recently, public security organs in many places have released a number of cases related to the use of AI tools to spread rumors. For example, the organization that published the false news of "Xi'an explosion" could generate 4,000 to 7,000 false news articles a day at its peak, with a daily income of more than 10,000 yuan. The actual controller of the company, Wang Moumou, runs 5 such organizations and operates 842 accounts.

Experts interviewed by Legal Daily pointed out that convenient AI tools have greatly reduced the cost of rumor-mongering and increased the magnitude and spread of rumors. AI rumor-mongering has the characteristics of low threshold, batch production, and difficulty in identification, and it is urgent to strengthen supervision and cut off the profit chain behind it.

Using AI to fabricate false news

The spread was rapid and many people were deceived

On June 20, the Shanghai police issued a notice stating that two brand-marketing employees had fabricated false information such as a "stabbing at Zhongshan Park subway station" in order to attract attention; they have since been placed under administrative detention. One detail in the notice stands out: one of the fabricators used AI video-generation software to produce a fake video of the subway attack.

The reporter found that in recent years, the use of AI to spread rumors has occurred frequently and spread very quickly. Some rumors have caused considerable social panic and harm.

Last year, in the case of a missing girl in Shanghai, a group maliciously fabricated and hyped rumors such as "the girl's father is her stepfather" and "the girl was taken to Wenzhou" by using "clickbait headlines" and "shocking style". The group used AI tools to generate rumor content and published 268 articles in 6 days through a matrix of 114 accounts, with many articles receiving more than 1 million hits.

The Cyber Security Bureau of the Ministry of Public Security recently announced another case. Since December 2023, reports of "hot water gushing out of the ground in Huyi District, Xi'an" had circulated repeatedly online, along with rumors that "the hot water is coming up because of an earthquake" or "because an underground heating pipe ruptured". Investigation showed the rumors had been generated by AI from plagiarized content.

Recently, outrageous "big news" such as "A high-rise residential building in Jinan caught fire and many people jumped to escape" and "An old man doing morning exercises found a living person in a grave near Yingxiong Mountain in Jinan" circulated widely online and attracted considerable attention. The Cyberspace Administration of the Jinan Municipal Party Committee promptly refuted the stories through the Jinan Internet Joint Rumor Refutation Platform, yet many people were still taken in by the seeming proof of "pictures as truth".

A research report released in April this year by the New Media Research Center of Tsinghua University's School of Journalism and Communication showed that among AI rumors over the past two years, those concerning the economy and enterprises accounted for the highest proportion, at 43.71%; over the past year, such rumors grew by 99.91%, with industries such as food delivery and express delivery hit hardest.

So, how easy is it to create fake news using AI?

The reporter tested several popular AI applications on the market and found that, given only a few keywords, each could generate a "news report" within seconds, complete with event details, comments, and follow-up actions. Add a time and place, attach pictures and background music, and the result is a news report indistinguishable from the real thing.

The reporter also found that many AI-generated rumors were padded with phrases such as "It was reported that", "The relevant departments are conducting an in-depth investigation into the cause of the accident and taking remedial measures", and "The general public is reminded to pay attention to safety in daily life". Once posted online, such content is hard for readers to tell apart from genuine news.

Beyond fake news, AI can also be used to produce pseudo-science articles, pictures, dubbed videos, and face-swapped videos with imitated voices. After manual fine-tuning and the mixing-in of some genuine content, they become difficult to identify.

Zeng Chi, a researcher at the Center for Journalism and Social Development at Renmin University of China, said that the splicing nature of generative AI closely resembles how rumors work: both "create something out of nothing", producing information that looks real and plausible. AI makes rumor-mongering simpler and more "scientific": by summarizing patterns and splicing plot elements from trending events, it can quickly create rumors that match people's "expectations" and therefore spread faster.

"Internet platforms can use AI technology to reverse-detect spliced images and videos, but the content itself is hard to censor. No one currently has the ability to block rumors completely, especially given how much unverifiable or ambiguous information is out there," Zeng Chi said.

Fabricating content to profit from traffic

Suspected of committing multiple crimes

The "rumor-mongering efficiency" of some AI software is astonishing: one such counterfeit-news program could generate 190,000 articles a day.

According to the Xi'an police who seized the software, articles saved in it over a 7-day period totaled more than 1 million, covering current affairs, social hot topics, daily life, and more. The operators behind the accounts published these "news" items on various platforms and then profited through the platforms' traffic-reward programs. The accounts involved have since been banned by the platforms, the software and its servers have been shut down, and the police are continuing to investigate.

Behind many AI rumor-mongering incidents, the rumor-mongers' motives mainly come from attracting traffic and making profits.

“Use AI to mass-produce popular copywriting, and suddenly you will become rich,” “Let AI help me write promotional articles, and finish 3 articles in 1 minute,” “Graphic and text creation, AI automatically writes articles, and can easily produce 500+ orders a day, and can operate multiple accounts, even novices can easily get started”... The reporter searched and found that similar “get rich” articles are circulating on many social platforms, and many bloggers are pushing them in the comment section.

In February this year, the Shanghai Public Security Bureau discovered that a short video titled "an artist had a tragic fate and died with hatred" appeared on an e-commerce platform, which triggered a large number of likes and reposts.

After investigation, it was found that the content of the video was fake. After the video publisher was arrested, he confessed that he ran an online store of local specialties on an e-commerce platform. Due to poor sales, he fabricated eye-catching false news to attract traffic to the online store account. He did not know how to edit videos, so he used AI technology to generate text and videos.

Zhang Qiang, a partner at Beijing Yinghe Law Firm, told the reporter that using AI to fabricate online rumors, especially fabricating and deliberately spreading false information about dangers, epidemics, disasters, or police incidents, may constitute the crime of fabricating and deliberately spreading false information under the Criminal Law. If the rumor harms the reputation of an individual or a company, it may constitute the crime of defamation or the crime of damaging commercial reputation and product reputation. If it affects securities or futures trading and disrupts the market, it may constitute the crime of fabricating and spreading false information about securities and futures trading.

Continue to improve rumor-refuting mechanisms

Clearly label synthetic content

In order to govern the chaos of AI fraud and deepen the governance of the network ecology, relevant departments and platforms have introduced a number of policies and measures in recent years.

As early as 2022, the Cyberspace Administration of China and other departments issued the "Regulations on the Management of Deep Synthesis of Internet Information Services". The regulations stipulate that no organization or individual may use deep synthesis services to produce, copy, publish, or disseminate information prohibited by laws and administrative regulations, or to engage in prohibited activities such as endangering national security and interests, damaging the national image, infringing on the public interest, disrupting economic and social order, or infringing on the lawful rights and interests of others. Providers and users of deep synthesis services must not use such services to produce, copy, publish, or disseminate false news information.

In April this year, the Secretariat of the Central Cyberspace Affairs Commission issued a notice launching a special campaign to "clear up and rectify 'self-media' that chase traffic unscrupulously", which requires stronger labeling and display of information sources: content generated with AI or other technologies must be clearly marked as technologically generated, and content that is fictional or dramatized must be clearly labeled as fictional.

For content suspected of being produced with AI, some platforms now append a notice at the bottom reading "Content suspected to be AI-generated; please discern carefully", explicitly add a "fictional" label to content containing fictional or dramatized elements, and take measures such as banning accounts that break the rules. Some large-model developers have also said they will watermark model-generated content through backend settings to inform users.

In Zhang Qiang's view, people do not yet understand generative AI well and lack experience in dealing with it, so it is essential for the media to remind the public to scrutinize AI-generated information. At the same time, enforcement must be stepped up, with rumors and fraud produced through AI investigated and corrected promptly.

Zheng Ning, director of the Law Department of the School of Cultural Industry Management at Communication University of China, believes that the existing rumor-refuting mechanism should be further improved. Once a piece of information is identified as a rumor, it should be marked immediately and pushed again to users who have viewed the rumor to provide a rumor-refuting prompt to prevent the rumor from spreading further and causing greater harm.

It is worth noting that some people may have no intention of spreading rumors: they simply post AI-synthesized content online, where it is forwarded in large numbers and believed by many, thereby causing harm.

In this regard, Zeng Chi believes the simplest precaution is for the relevant departments or platforms to adopt rules requiring that all AI-synthesized content carry a label such as "This image/video was synthesized by AI".