Strategists in both parties are bracing for an October or November surprise in the form of a deepfake video clip that they worry could upend the final days of the presidential race, recognizing how advanced AI counterfeits have become.
Lawmakers in both parties have warned for months that sophisticated foreign actors are trying to influence the outcome of the presidential election, but experts say an October AI-generated bombshell could emerge from a domestic source as well.
Both former President Trump and Vice President Harris have been the targets of deepfake images.
An AI-enhanced video circulated on social media platforms this week that falsely showed Trump fumbling, bumbling and totally losing control while he worked at a McDonald’s drive-thru window, a takeoff on his recent campaign appearance at a restaurant in Bucks County, Pa.
Trump’s high-profile shift at the drive-thru was also the target of a disinformation campaign: the Feasterville restaurant was so flooded with phony reviews and ratings that Yelp, the online restaurant ratings app, was forced to disable reviews for the business.
A fake photo that surfaced in August purported to show a younger Trump years ago sitting next to convicted sex offender Jeffrey Epstein on a private plane. The image circulated on the social media platform X, racking up hundreds of thousands of views, but was later determined by experts to be AI-generated.
Trump claimed without evidence in August that Harris had used AI to fabricate fake images of massive crowds at her campaign rallies, even though those crowds were witnessed by reporters who attended the events.
“She’s a CHEATER. She had NOBODY waiting and the ‘crowd’ looked like 10,000 people!” Trump posted on social media after he saw photos showing throngs of people cheering for Harris at an airplane hangar outside Detroit.
Meanwhile, the Microsoft Threat Analysis Center reported this week that “Russian actors” are trying to tarnish Harris with deepfake videos portraying her in an unflattering light.
According to Microsoft, Russian-language accounts on X posted an AI-enhanced video falsely depicting Harris making “a crass reference” to the assassination attempts on Trump and saying that he refused “even to die with dignity.”
“Deepfake technology has gotten better over the years, certainly more so than when we first really saw it in 2016,” said Democratic strategist Rodell Mollineau. “It has gotten more nuanced.”
“It’s not the deepfake itself” that poses the biggest danger, he said, but the unwillingness of social media companies to investigate it and stop it from spreading.
“Thinking specifically about Twitter and Elon Musk,” Mollineau added.
Musk, the controlling owner of X, the popular social media platform formerly known as Twitter, has repeatedly come under criticism from Democrats for sharing fake or misleading information.
Musk in August recirculated a doctored Kamala Harris campaign ad that used an AI-generated voiceover to make it appear that Harris had described herself as unfit for the Oval Office. Musk, who has endorsed Trump, later defended the move as sharing something that people should have recognized as parody.
While these fake images and videos so far appear to have had little impact on voters, political strategists and non-partisan experts expect the problem to get worse over the next 12 days and warn a potential bombshell claim could affect the outcome in some states.
“I worry — if I think about what thing is going to turn the race on its head in the last two weeks — the thing we have not yet seen that everyone who pays attention to this stuff has been focused on for two years is when is there going to be some misinformation, disinformation, fake news, particularly deepfake-driven, that comes along … and it creates chaos and havoc” in the final days of the campaign, journalist John Heilemann warned this week on the “Hacks on Tap” podcast.
Veteran Republican strategist Mike Murphy, who was part of the podcast discussion, agreed that disinformation, especially AI-generated content, could spread confusion through the electorate.
Charlie Kirk, a prominent conservative social media personality, warned his followers to look out for “insanely desperate stuff from Democrats.”
“Expect fake AI generated crap about Trump coming soon. Stay focused AND VOTE!” he posted on the social media platform X.
Experts worry that social media companies will be slow to take down disinformation and AI-generated deepfakes that pop up in the run-up to Election Day.
“The threat here is, from a candidate perspective, that once something gets out there, it’s really hard to unlearn it for voters, even if it’s not true. … We think it’s going to be very difficult to get social media companies to crack down,” said Joshua Graham Lynn, the CEO and co-founder of RepresentUs, a non-partisan organization that tracks threats to democracy.
Lynn said his organization is especially concerned about efforts to confuse voters about when and where to vote, or possibly fake warnings about threats at a polling place that could keep people at home on Election Day.
His group has tried to “inform voters that there could be an attack to ‘pre-bunk’ the attack, so that if they’re told, ‘Don’t go vote, there’s a threat at your polling station, or the polling station is closed or you’re supposed to go tomorrow,’ that would be an opportunity where voters might say, ‘That doesn’t sound right, maybe I should check,’” he said.
“It’s much more likely that it would be one or two polling stations in one or two critical districts that could tip the election,” he said.
Tom Barrett, a Republican running for a congressional seat in Michigan, is facing calls for an investigation after his campaign ran an ad in a Black-owned newspaper that listed the wrong date for Election Day: Nov. 6 instead of the correct date of Nov. 5.
Lynn, the head of RepresentUs, said advances in technology have dramatically boosted the ability of individuals to spread disinformation through social media.
“What used to take 10,000 Internet trolls sitting in a warehouse somewhere in Russia to attack our elections now can be done by one person in a basement. Anybody has the power to crank out so much content and optimize it so it gets sticky,” he said. “They might release hundreds of things that don’t stick until they find what does, and then they push that viral.”
Senate Intelligence Committee Chairman Mark Warner (D-Va.) told reporters this month that he was concerned about foreign agents spreading disinformation related to the disaster relief efforts in North Carolina and Georgia — two important presidential battleground states.
Warner on Thursday wrote a letter to American Internet domain registrars urging them to take immediate steps to crack down on the use of their services by foreign operatives to influence the election.