Fact check: How AI influences election campaigns – DW – 10/13/2024
A video shows a blue-eyed, blond man in a white shirt checking his ballot paper. Another scene in the same video shows a group of veiled women walking down the street. This video was published on the X account of the far-right AfD party in the eastern German state of Brandenburg ahead of state elections. A similar video has been viewed close to 900,000 times.
These videos try to appeal to our emotions by depicting a frightening future and offering simple solutions. None of the content is real — the videos were created with the help of artificial intelligence (AI).
This content can be produced quickly — and it’s cheap and easy. Compared to other, more elaborate AI videos, it’s quite easy to spot that these videos are fake. But if that’s the case, why are they created? DW Fact check looked into this phenomenon of so-called softfakes.
Unlike deepfakes, which imitate voices, gestures and movements so convincingly they can be mistaken for the real thing, these softfakes make no attempt to hide that they are computer-generated.
‘Softfakes’ in political election campaigns
Such softfakes are increasingly used in political election campaigns. Maximilian Krah, then the AfD's lead candidate for the European elections, posted numerous AI-generated images on his TikTok account.
The unnatural faces are a dead giveaway — none of the people shown there are real.
France has also seen political parties create AI-generated images ahead of the EU and presidential elections that were meant to stir up emotions.
A study that looked at social media accounts of all French parties during the election campaigns found that far-right parties were particularly prone to using such softfakes. Not a single image was labeled as AI-generated, even though that’s what all parties agreed to in a Code of Conduct ahead of the European Parliament elections.
They were to “abstain from producing, using, or disseminating misleading content.” AI-generated content is explicitly mentioned in the code of conduct. Still, parties such as The Patriots, National Rally and Reconquete widely used such content.
These types of images have also appeared ahead of the 2024 US presidential election. Former US President Donald Trump posted an AI-generated image meant to portray US Vice President Kamala Harris addressing a group of people in communist-style uniforms — a ploy to paint Harris as a communist at heart.
The problem of such content goes beyond disinformation and the distribution of fake news. It creates alternative realities. Artificial versions of reality are portrayed as being more real than reality itself.
What influences our perception?
But do we accept clearly AI-generated videos and images of an alternative reality as reality simply because of the sheer mass of content?
Back in the 1970s, scientists started looking into people's reactions to robots that looked and acted almost human. The Japanese robotics engineer Masahiro Mori coined the term "uncanny valley" to describe the unease people feel when a robot looks almost, but not quite, human.
“We get actually more uncomfortable because we notice a disconnect between what we think it is and what is in front of us,” Nicholas David Bowman, editor-in-chief of Journal of Media Psychology and associate professor at the Newhouse School of Public Communications at Syracuse University, told DW.
“It makes us uncomfortable, because we cannot reconcile. We are feeling this uncanniness because we know it is wrong.”
But what happens when AI-generated images pass through the uncanny valley and we no longer find them creepy?
“Once we pass the uncanny valley effect, we won’t even know it. We will probably not know the difference,” he said.
But we aren’t there yet. “People are having those gut reactions when they see a video. This is our best detector as to whether or not something is AI-generated or real,” he said.
It gets tricky when people try to ignore that gut feeling because they want to believe the fake is real, he said. "People can turn that off — I am not trying to detect because I already agree with the beliefs and it is aligning with what I want to see," Bowman added. "If you are a partisan, far left or far right, and you see content that is not real, you just don't care because you agree with the content."
Influence of AI poses a risk to our information environment
The use of deepfakes and softfakes in election campaigns is on the rise. That’s also something Philip Howard has noticed. He’s the co-founder and president of the International Panel on the Information Environment (IPIE), an independent global organization dedicated to providing scientific knowledge on threats to our information landscape.
For a recent study, the IPIE surveyed over 400 researchers from more than 60 countries. More than two-thirds believe that AI-generated videos, voices, images and text have already negatively impacted the global information environment. More than half believe these technologies will have a negative impact over the next five years.
“I do think we should be past the point of industry self-regulation,” Howard told DW.
“Now, the AI firms are auditing themselves. They’re grading their own homework,” he added.
But that, he says, is not enough due to the lack of independent scrutiny.
“If we can get regulators to push for independent audits so that independent investigators, journalists, academics can look under the hood, then I think we can turn things around.”
This article was originally published in German.