2024-08-17
【Text/Observer.com Liao Yiheng】
With the arrival of the AI era, some scholars regard this year's US election as the first in which artificial intelligence has been used at scale to influence voters. Many events in this campaign have confirmed the fears of many Americans that the intervention of large language models in elections will damage the American democratic system. If the deepfake audio incident in Slovakia's 2023 election was merely one of the first cases of AI interfering in an election, the ongoing US election can be regarded as a full-blown contest in which generative AI is a participant.
In early August, Trump accused Harris's team of using AI to fake photos showing thousands of people waiting for her as she got off the plane for a Detroit rally. Harris's team denied the accusation, and fact-checkers from several media outlets, including Reuters, cleared her name, confirming that the large crowd at the rally was real. Still, the episode shows how anxious the American public has already become about AI's involvement in elections.
A crowd waits at Romulus Airport in Michigan on August 7 for Vice President Kamala Harris and Minnesota Governor Tim Walz to arrive for a campaign rally. Donald Trump, furious over the size of the crowd, made false accusations about photos of the event. Internet photo
Coincidentally, on July 26 this year, Musk shared a deepfake video of Harris on his own platform X, captioned "This is amazing" with a laughing emoji. The video splices together real footage from Harris's previous campaign ads with a synthetic narration, in Harris's voice, designed to inflame gender and racial tensions in the United States: "I am a woman and a person of color. If you criticize anything I say, then you are sexist and racist."
The video received over 100 million views in three days and caused a sensation. In fact, it was not recorded by Harris herself but was synthesized by generative artificial intelligence, complete with a cloned Harris voice that was quite realistic. Yet Musk did not label the video as fake in his post.
Harris's campaign team lodged a strong protest, emphasizing in a statement: "This incident is a clear example of how realistic AI-generated images, videos, and audio clips are being used to mock and mislead in politics as the US presidential election approaches."
The video Musk reposted on his social platform X was synthesized by generative artificial intelligence and combined with real campaign material, making it highly realistic. Internet image
In fact, AI's intervention in the US election has been unfolding since the beginning of this year. Before Biden withdrew from the race, an "AI Biden" disrupted the New Hampshire primary. According to reports, on January 21 some voters received robocalls in Biden's synthesized voice urging them to "save" their votes for the general election in November, falsely claiming that anyone who voted in the party primary would be unable to vote in November. Hany Farid, a digital forensics expert at the University of California, Berkeley, later confirmed that the voice on the calls had been forged with "relatively low-grade" AI technology.
Then, on February 25, Steve Kramer, a political consultant who had worked for Democratic presidential candidate Congressman Dean Phillips, admitted that he had hired magician Paul Carpenter to clone Biden's voice with AI software, and that he had planned the robocall episode, claiming the calls were merely meant to "remind the public to guard against misleading applications of AI."
Magician Paul Carpenter describes to the media how he was commissioned to produce the fake "Biden robocall" audio. Internet photo
The AI contest between the two parties in the US presidential race has already begun, and the Democratic Party has now seen for itself how boldly and skillfully the Republican side deploys AI.
In March this year, photos of Trump with Black voters kept appearing on American social media. The investigative team of the BBC documentary program Panorama found that these images shared a common feature: they all portrayed the Black community as supporting the former president and implied that everyone would vote Republican. The pictures promoted a strategic narrative: that Trump is now very popular among Black Americans. Notably, Black voters were key to Biden's victory in 2020.
These deepfakes were eventually exposed, but while they circulated they carried no watermarks or labels indicating that they were not real. Some careful netizens could tell that the photos' lighting and textures were distorted, but not everyone has the time and judgment to scrutinize them.
Trump embraces a group of Black women. The photo was later confirmed to be a deepfake created by generative artificial intelligence. Internet photo
Investigation found that some of these photos came from accounts satirizing Trump but spread widely after being reposted, while others were generated by Trump's own fervent supporters. The creator of one picture told the BBC: "I didn't say it was a real photo." The answer is dispiriting, because before generative AI made deepfake photos possible, most people defaulted to "seeing is believing."
Beyond the disinformation spread spontaneously by Trump's fervent followers, the Trump campaign itself is stepping up its attention to and investment in AI. Campaign finance records show that the Trump team, the Republican National Committee, and related fundraising committees paid more than $2.2 million to companies such as Campaign Nucleus, owned by Trump's former campaign manager Brad Parscale. Campaign Nucleus's services include using generative AI to draft customized emails, analyzing large datasets to gauge voter sentiment, finding persuadable voters, and amplifying the social media posts of "anti-woke" influencers. The emphasis is on using AI to build profiles of political supporters and to steer voter preferences through personalized targeting.
It is worth noting that some technology leaders have also shifted their political stance and begun backing Trump, which looks like a two-way embrace. The iconic figure among them is Elon Musk: after the attempted assassination of Trump, Musk officially announced on his social platform X that he supports the former president's candidacy and is willing to fund his campaign.
Although some Silicon Valley leaders' break with their usual stance to back the Republican Party is directly related to the drawbacks of diversity policies, the deeper reason is their calculation about the future of the technology industry. In short, money and communication platforms have become tightly coupled with generative AI, exerting an unexpected force on the election.
The secret of a tightly woven black box
Having seen generative AI's unprecedented ability to muddy the waters in elections, we cannot help but ask: by what mechanism does it intervene?
Deepfakes generated by AI span text, audio, images, and video. These elements are woven into the informational and cognitive fabric of election propaganda and have in effect become bound to the model of political communication. Generative AI embeds itself deep in the election process, challenges voters' perceptions, produces the twin effects of reinforcement and destabilization, and thereby manipulates voters.
Framework for analyzing US election propaganda. Source: Sun Chenghao, "The intervention of generative artificial intelligence in US election propaganda: paths, scenarios and risks," July 2024. [3]
The US election propaganda mechanism consists of three stages: input, processing, and output. Candidates and parties campaign to win elections; at bottom this is an exercise in persuasion, getting voters to process and digest the propaganda so that they ultimately make the desired choice.
With generative AI's deep empowerment, the processes of US election propaganda, including voter registration, voter data analysis, election forecasting, strategy formulation, election-process tracking, message delivery, and voter assistance, have taken on a new form.
In election forecasting, voter data analysis, strategy formulation, and voter assistance, generative AI is for now performing on a largely benign track. Specifically, it helps campaign teams and decision-makers analyze races faster and in finer detail, generating charts in real time; helps candidates build voter profiles and devise finely segmented strategies; improves the quality of candidate-voter interaction, for instance through automated email replies that strengthen the feedback loop; and acts as an election encyclopedia, providing voters with necessary and timely information.
In other areas, however, generative AI's deep application has clearly exposed problems. In voter registration, the original aim was to raise turnout by proactively generating emails or phone calls, but the "AI Biden robocall" episode at the start of the year proved that the same tools can just as readily "shine" at suppressing voter turnout.
In election-process tracking and message delivery, the positive potential lies in spotting abnormal activity, preventing online fraud and cyberattacks, maintaining electoral order, and identifying voter segments so that election information reaches them accurately. But in the incident of the Trump photos with Black voters and the earlier Slovakia audio incident, generative AI played the opposite role: maintaining order and accurate delivery flipped into disrupting order and delivering deepfakes.
The damage that an AI-disrupted campaign ecosystem does to voter cognition shows up in two ways: destabilizing voters' perceptions and reinforcing them.
As for the destabilizing effect: in the fake photos of Trump with Black supporters, generative AI produced a series of deepfake images aimed at winning over the Black community, especially its younger members. Precisely targeted, these deepfakes were quietly pushed to voters, challenging prior perceptions within the Black electorate and unsettling hesitant young voters.
As for the reinforcing effect: the speed at which generative AI produces content gives a campaign the advantage of saturation messaging, and the quality of the copy can hit voters' "pain points" more precisely and attract their donations. Moreover, anthropomorphized AI, such as politician chatbots, can bring parties closer to voters and enable efficient communication and feedback.
Notably, generative AI's flexible reinforcing effect and its capacity to produce and precisely push massive volumes of content in a short time are rapidly amplified during particular phases of political communication, easily triggering the explosive spread of deepfake disinformation. These situations tend to occur at the start of an election, before the various propaganda actors have entered the field, and during the pre-election silence period, when they have been forced to clear out of it.
How information processed by generative AI undermines voter cognition. Source: Sun Chenghao, "The intervention of generative artificial intelligence in US election propaganda: paths, scenarios and risks," July 2024. [3]
Moreover, some generative AI systems carry underlying ideologies; the famous ChatGPT, for example, was found in one test to have a distinctly left-leaning stance. Neil Postman argued in his analysis of media technology that "the medium is the metaphor": a medium carries powerful implications of its own that can change how people think and redefine reality. We already felt media's power to shape voters in Trump's era of "Twitter campaigning" and "Twitter governance." Now that generative AI has powerfully augmented media platforms, that metaphor can easily be magnified without limit.
Towards a Cyber Election
In March this year, experts convened by the Brookings Institution held a panel discussion on the risks that artificial intelligence and disinformation pose to elections. In their view, the risks generative AI poses to elections today concentrate in three areas: legislation, technology, and its deep integration with communication mechanisms.
Screenshot from the seminar video, from left to right: Darrell M. West (senior fellow, Center for Technology Innovation), Soheil Feizi (associate professor of computer science, University of Maryland), Shana M. Broussard (commissioner, Federal Election Commission), and Matt Perault (director of the technology policy center at the University of North Carolina at Chapel Hill)
At the legislative level, the US federal government is clearly not ready to legislate against AI interference in elections; most of the responsibility still falls on state legislatures and the major private media platforms.
On October 30, 2023, President Biden signed Executive Order 14110 on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," the most comprehensive US approach to AI governance to date, covering everything from new industry standards for AI safety to privacy protection, civil rights, workers' interests, technological innovation, government use of AI, and America's international leadership.
To some extent, this executive order can be read as a roadmap for future US legislation on AI safety, helping to avoid the earlier fragmented governance trajectory and to pull regulation back from scattered state-level proposals into systematic federal management. But this comprehensive approach to AI governance is still in its infancy, while the problems generative AI is causing in domestic elections are already bearing down.
At the technical level, although some have proposed identifying deepfake content through digital watermarks, the experts say this is unreliable: watermarks can easily be stripped out during the generation process, a limitation at the level of the underlying technology that is hard to solve in the short term. The quickest path to results is to hope the private sector voluntarily provides review services; while no substitute for better legislation, it can at least play a role.
As for effects on the dissemination mechanism, experts at the meeting noted that disinformation really only sways the 5 to 10 percent of voters who remain undecided, while the vast majority settled on their camp early. Other experts added that we need not be overly pessimistic about the rise of deepfake disinformation:
First, fake news makes up only a small share of what ordinary people consume. "A 2020 study found that the average American spends about 7.5 hours a day on media, of which about 14% involves news, mostly from television. Another recent study estimated that for the average American adult Facebook user, even in the months before the 2020 US election, less than 7% of the content they saw was news-related."
Second, the deepfake disinformation that does exist is, research shows, concentrated among a small number of Americans. This may relate to algorithmic information cocoons, but such content usually cannot reach the great majority of the online public, especially now that AI-assisted audience segmentation has shrunk the user base matched to any given piece of content.
Beyond strengthening federal legislation and pressing technology companies to jointly rein in generative AI, other remedies under discussion include mandatory "disclaimers," requiring campaign teams to increase transparency, and strengthening public education.
The "disclaimer" proposal came from a bipartisan group of senators led by Amy Klobuchar. The problem is that attaching a disclaimer to content of unknown authenticity inclines people to distrust it, weakening the credibility of everything carrying that label, and the effect of the associated messaging, even when the labeled content happens to be true.
Requiring campaign teams to increase transparency means asking them to disclose proactively when and how AI systems are used. But this approach currently lacks sufficient trust and assurance: it places responsibility on the candidates themselves, and if a candidate is unwilling to take responsibility, refuses to disclose, or deliberately conceals the behavior, gathering evidence becomes very costly, especially given the technical barriers involved.
Moreover, if the penalty is only a fine or some minor administrative sanction, candidates may simply price it in. Once the benefits of misusing generative AI exceed the cost of punishment, candidates effectively gain a chance to "legally" pay for a powerful weapon. Faced with the temptation of winning the presidency, how high must legislators set the cost of breaking the law to make anyone afraid?
At the same time, in both the fake Trump photos and the Biden AI robocalls, the initiators were not the candidates themselves (at least not on the surface), so the division of responsibility is also at issue; spreading liability across thousands of supporters has little deterrent effect.
Public education is a quick and effective remedy, though not a complete one. Not everyone can digest deepfake disinformation, especially when it mixes truth with falsehood. Regular, active voter education during the campaign, teaching people how to recognize deepfake content and process information objectively, would greatly help voters trapped in information cocoons or with little formal education.
In any case, the era of elections deeply entangled with generative AI has arrived. How to tell true information from false, how to guard against opponents, and how to build regulatory mechanisms and evaluate the effectiveness of public policy have become challenges both parties must face in this election.
Notes:
[1] AP: A parody ad shared by Elon Musk clones Kamala Harris' voice, raising concerns about AI in politics.
https://apnews.com/article/parody-ad-ai-harris-musk-x-misleading-3a5df582f911a808d34f68b766aa3b8e
[2] The Paper: As the US election approaches, the dilemma of disinformation amid the AI contest.
https://www.thepaper.cn/newsDetail_forward_27359556
[3] Sun Chenghao: The intervention of generative artificial intelligence in US election propaganda: paths, scenarios and risks, July 2024, pp. 4-5.
[4] Brookings: The dangers posed by AI and disinformation during elections, March 2024.
https://www.brookings.edu/events/the-dangers-posed-by-ai-and-disinformation-during-elections/
[5] Brookings: Regulating general-purpose AI: Areas of convergence and divergence across the EU and the US, May 2024.
https://www.brookings.edu/articles/regulating-general-purpose-ai-areas-of-convergence-and-divergence-across-the-eu-and-the-us/
[6] Brookings: Misunderstood mechanics: How AI, TikTok, and the liar's dividend might affect the 2024 elections, January 2024.
https://www.brookings.edu/articles/misunderstood-mechanics-how-ai-tiktok-and-the-liars-dividend-might-affect-the-2024-elections/
This article is an exclusive contribution to Guancha.com. It reflects solely the author's personal views, not the platform's, and may not be reproduced without authorization; violators will be held legally liable.