Election interference increasingly relies on artificial intelligence and deepfakes. That’s why a viral public service ad uses them to sound the alarm.
“In this election, bad actors are going to use AI to trick you into not voting,” the ad says. “Don’t fall for it. This threat is very real.”
The “Don’t Let AI Steal Your Vote” video features Hollywood stars such as Rosario Dawson, Amy Schumer, Chris Rock and Michael Douglas. But several of them aren’t real: Douglas, Rock and Schumer appear as deepfakes.
“The artists involved in this were super excited about it,” Joshua Graham Lynn, CEO and co-founder of RepresentUs, the national and nonpartisan anti-corruption organization behind the video, told Scripps News.
“Everyone you see there has given us their likeness or personally volunteered. They were all super excited to do this to help get the vote out because they know this is a very important election,” Lynn added.
RELATED STORY | Scripps News was deepfaked to see how AI could influence the election
The video, which has been viewed more than 6 million times on YouTube, warns voters to pay closer attention to what they see and hear online.
“If something seems off, it probably is,” the real Rosario Dawson says in the video.
“Right now it’s so hard to tell what’s real and what’s fake on the internet,” Lynn said. “You can watch a news video and sometimes not know whether it was made entirely by AI.”
“Technology is developing rapidly and, more importantly, malicious actors will always be at the forefront,” he added.
Disinformation experts and community leaders have pointed out that AI-generated content is being used to sow chaos and confusion around the elections. The Department of Homeland Security, ABC News reported, warned state election officials that AI tools could be used to “create fake election data; impersonate election personnel to gain access to sensitive information; generate fake voter calls to overwhelm call centers; and more convincingly spread false information online.”
“And so what we want is for voters to use their brains,” Lynn said. “Be skeptical if you see something that tells you not to participate. If you see something about a candidate you support, question it. Double-check.”
While deepfakes can be used to spread disinformation about elections, experts warn they can also be used to destroy the public’s trust in official sources, facts or their own instincts.
“We have situations where we all start to doubt the information we come across, especially information related to politics,” Purdue University professor Kaylyn Jackson Schiff told Scripps News. “And with the election environment we’re in, we’ve seen examples of claims that real images are deepfakes.”
Schiff said this phenomenon, this widespread uncertainty, is part of a concept called “the liar’s dividend.”
It’s “being able to credibly claim that real images or videos are fake, due to widespread awareness of deepfakes and manipulated media,” she said.
RELATED STORY | San Francisco is suing websites used to create deepfake nude photos of women and girls
Schiff, who is also co-director of Purdue’s Governance and Responsible AI Lab, and Purdue University doctoral candidate Christina Walker have been tracking political deepfakes since June 2023, recording more than 500 cases in their Political Deepfakes Incidents Database.
“A lot of the things we capture in the database, the communication purpose is actually satire, so almost more akin to a political cartoon,” Walker told Scripps News. “It’s not always that everything is very malicious and intended to cause harm.”
Still, Walker and Schiff say some of the deepfakes aim to inflict “reputational damage,” and that even parody videos intended for entertainment can take on new meaning when shared out of context.
“It is still a concern that some of these deepfakes, initially propagated for fun, could mislead individuals who do not know the original context if that message is later reshared,” Schiff said.
While the deepfakes in the “Don’t Let AI Steal Your Vote” video are difficult to spot, Scripps News took a closer look and found visual artifacts, such as shadows that disappear. Deepfake technology has improved, but Walker said there are still telltale signs, for now.
“These could be extra fingers or missing fingers, blurred faces, or writing in the image not being quite right or not lining up. These things can all indicate that something is a deepfake,” Walker said. “As these models get better, it becomes harder to tell. But there are still ways to check the facts.”
Fact-checking a deepfake or other video that provokes an emotional response, especially around the elections, should start with official sources such as secretaries of state or vote.gov.
“We encourage people to look for additional sources of information, especially when it comes to politics and close to an election,” Schiff said. “And also just thinking about who the source of the information is and what motivations they might have for sharing that information.”
“If there’s anything that says to you as a voter, ‘Don’t go to the polls. Things have changed. There’s a disruption. Things have been postponed. You can come back tomorrow,’ double-check your sources. That’s the most important thing right now.”