As millions of people around the world head to the polls in 2024, there have been many warnings that we could face unprecedented artificial intelligence (AI)-based efforts to mislead voters. So far, I'm mostly seeing satire.
In late 2023, Karen Rebelo of the Indian fact-checking outlet BOOM Live stumbled upon a video that appeared to feature an AI-generated voice clone of a politician. She wasn't sure what to do. Her fact-checking job is to expose falsehoods, but she couldn't debunk the video based on her gut alone.
The video was poorly produced, but the audio sounded disturbingly real. "It was eerily similar to their voices. It didn't sound robotic in any way," Rebelo said.
It took her months to find an expert who could test the video and confirm her suspicions. The analysis found that a popular AI tool, one that lets users upload voice samples to generate text-to-speech, had likely been used.
Rebelo worried that a flood of false AI-generated content would appear ahead of India's general elections this spring. But, to her surprise, she was wrong. At least so far.
Is it a deepfake or just an ugly cartoon?
In 2024, AI tools have already been used to spread fake election endorsements, bot comments, and calls for election boycotts. Even so, experts say we still rarely see true deepfakes: content that is indistinguishable from real video or that has worrying consequences.
Sophie Murphy-Byrne, senior government affairs manager at Logically, an AI-based company that fights disinformation, said her team conducted 224 fact-checks related to the Indian elections, but only 4% of those were on AI-generated content. “We found that they were using mostly cheap fakes rather than deepfakes,” Murphy-Byrne said.
The term "deepfake" refers to AI-generated but realistic video, images, and audio that convincingly mimic the appearance of real people. "Cheap fakes," by contrast, are media altered with simple and easily accessible methods, such as speeding footage up, slowing it down, or cutting parts out.
Anyone who has uploaded plenty of their own photos, videos, and audio online can be imitated with deepfakes. The greater concern is that the technique will be used not just to ridicule experts, politicians, and leaders of major institutions, but also to spread conspiracy theories, sow distrust, and undermine democracy.
Rest of the World, a media outlet focused on non-Western countries, has been tracking examples of AI being used in elections. Russell Brandom, one of the editors on the project, has noticed that so far AI is mostly being used to create content resembling crude political cartoons, deployed mainly for trolling.
“It’s hard to say they’re trying to trick anyone,” Brandom said.
Using AI pays off right now: it has a novel look and attracts attention. "I think there's also the thrill of transgression," he added.
Distinguishing real from fake
Politicians' faces are often inserted into memes or popular movie scenes that viewers can easily recognize.
In India, a Bollywood scene of a man hanging from a cliff and letting go has been used to depict political betrayal. Narendra Modi, who in June became India's prime minister for a third term, was swapped in for a musician walking on stage with American rapper Lil Yachty. In another altered clip, the rapper Eminem appears to endorse a South African opposition party. To Brandom, this content does not suggest that anyone would believe it or change their vote because of it.
Jānis Sārts, director of the NATO Strategic Communications Centre of Excellence, a research institute, came to a similar conclusion. Like Rebelo, he was surprised at how little AI has been used, noting that it most often appears as satire.
But just because something is funny doesn't mean it's harmless. Some experts use the term "hahaganda" to describe the use of humor in propaganda. "Humor can be very powerful. It can be one of the best ways to overcome communication barriers," Sārts said.
"The constant flow of mostly harmless AI-generated content still has the potential to create an infodemic," Murphy-Byrne said: a state in which misinformation spreads like a virus during a crisis, and thinking critically and distinguishing truth from fiction becomes increasingly difficult.
Sārts estimates that about 80 percent of AI-generated political content is not intentionally misleading. The remaining 20 percent is another matter.
Spam and voter discouragement
The Rest of the World directory lists cases where the intent seems more sinister than the execution. A Chinese spam campaign used AI images to cast doubt on the fairness of the US election and portray one of the candidates, President Joe Biden, in a negative light. In Bangladesh, AI videos were used to fake a candidate's withdrawal from the election. In Pakistan, an AI-generated call to boycott the election spread online.
There are other cases outside Rest of the World's focus. In the US state of New Hampshire, an AI-generated robocall was used to discourage people from voting. In Slovakia, fake audio of a candidate discussing election rigging was released two days before the public voted.
For now, Rebelo thinks language is a major obstacle for AI propaganda creators. India has 22 officially recognized languages, and the technology is most advanced in English. But as the tools become more widely available, she says, authentic-looking content is likely to appear in other languages as well.
She said there was a huge misinformation problem in India and she couldn't see why the political actors spreading it would draw the line at AI.
"They are looking for every technological advantage they can get, because this is how they have always behaved," Rebelo said. "AI technology is getting better and better. There is only one direction it can move: it is becoming more and more sophisticated."
Making deepfakes convincing is hard.
Sārts said the technology exists to create propaganda deepfake videos, but it still takes knowledge and resources to do so.
"There are many simpler ways," Sārts said. "People spreading disinformation haven't yet learned how to use AI well enough." The U.S. election in November could be a big test, he said.
Murphy-Byrne also believes the barriers to using such technology are lowering. She said Russia spent millions of dollars spreading propaganda messages during the 2016 U.S. election campaign.
"With these new AI technologies, the execution of such campaigns no longer depends on large, skilled and well-organized teams," she said in a recent webinar. "Now these kinds of campaigns can be created by the average individual in their bedroom for as little as $400 using open source technology."
Sārts and Murphy-Byrne both highlight the dangers of personalization, where AI can generate content on a given topic from a variety of perspectives, tailoring it to the beliefs of different people.
Concerns about the future
Another risk, Murphy-Byrne said, is "flooding the zone": producing so much content that information becomes hard to track and mistrust takes hold.
AI can also mislead in simpler ways. Rebelo and Brandom warn that advances in AI technology could be used as cover to claim that real but unflattering audio or video is fake. BOOM Live has already reported such cases.
But Sārts said there are also ways AI could help those trying to debunk propaganda campaigns, particularly by measuring whether propaganda had an impact. "Causality is hard to prove right now," he said. "AI gives us hope that we can get to that by analyzing emotional changes in language, for example."
Logically has developed a tool that uses AI to quickly identify claims that are harmful or need fact-checking. Another tool in development will help monitor content across multiple platforms, including smaller platforms with weak content moderation, where harmful content often appears before reaching mainstream sites.
The Spanish fact-checking outlet Maldita.es found that large online platforms took no visible action against 45% of the disinformation posts about the recent EU elections identified by European fact-checkers.
Some of this content received millions of views. Inaction rates were even higher for disinformation about immigration and election integrity, where 57% and 56% of content, respectively, was left untouched.
The response to AI-generated content doesn't seem much different. X (formerly Twitter) and TikTok, as well as Meta, the parent company of Instagram and Facebook, have said they will start labeling such content, but so far the social media platforms have been slow to act, the experts interviewed for this article said.
"I think labeling everything is a good first step," Brandom said. "You've seen some labels, but there's a lot of content out there that has been up for a long time and is clearly AI-generated but isn't labeled. Nonprofit journalism organizations can verify this, so why can't the platforms?"