Trump Shares AI Photo of Kamala Harris—What It Means for the US Election

Donald Trump shared an AI-generated image of Kamala Harris on social media ahead of her address at the Democratic National Convention in Chicago.
The DNC, which ends Thursday, has seen President Joe Biden, first lady Jill Biden, Hillary Clinton and the Obamas address Democrats. Harris’s running mate, Tim Walz, is scheduled to speak on Wednesday.
With many polls showing a Harris lead over Trump, Republicans have been attempting to land an effective punch against the vice president. However, they lost many of their lines of attack after Biden withdrew as the Democratic presidential candidate.
The Trump campaign has focused heavily on what it calls Harris’s “radical” agenda, relying on her legislative and voting record as proof.
Ahead of Harris’s appearance, Trump took to X, formerly Twitter, to share a fake, AI-created image of the vice president speaking at a Chicago stadium.
The image shows the back of what is meant to be Harris onstage under a hammer and sickle flag with an audience of supporters in Mao suits and the word “Chicago” lit up in red over the crowd.
The photo, posted by Trump on August 18, has the hallmarks of AI image generation with several visual inconsistencies beyond the cartoonish nature of its premise—the faces of most of the audience members along the front row of the photo are warped and distorted. The other flags at the back of the stadium are blurry and do not match immediately recognizable crests or patterns.
Newsweek has contacted a media representative for Kamala Harris via email for comment.
Donald Trump’s spokesperson, Steven Cheung, told Newsweek that the photo “seems like an accurate description of Comrade Kamala and her Communist positions.”
While the post may be recognizable as satire, Trump’s use of an AI-generated image of his opponent is significant in an election contest where the misinformation threat posed by AI is greater than ever.
Although companies like OpenAI have said they have adapted their technology to protect the public against AI-assisted misinformation during the 2024 election, the threat of the technology’s misuse is both tangible and felt by voters. A survey published by Elon University in May 2024 found that 73 percent of Americans believe it is “very” or “somewhat” likely that AI will be “used to manipulate social media to influence the outcome of the presidential election.”
Seventy percent of respondents also said it would be likely that the election “will be affected by the use of AI to generate fake information, video and audio material.”
Trump has already caused controversy by sharing AI-generated images depicting Taylor Swift supporting him, despite the pop star not having publicly endorsed either Trump or Harris. Some of Swift’s fans felt that there should be consequences for the former president’s use of artificially generated images.
“Him posting this should be illegal. He knows it’s not real. He knows it’s AI but still posted it to interfere with the election,” one fan wrote on X.
After Trump posted the AI-generated image of Harris online, Newsweek spoke to several experts in campaign ethics, law, and AI about what punishments or liability he could face for sharing false AI images of the vice president during the election.
John Zerilli, a professor in AI, Data & Law at the University of Edinburgh, Scotland, said there were limited grounds for action against using AI images, like the one Trump shared, in the United States.
“In the U.S., image privacy is based on one’s ‘right of publicity,’ but this would only prevent Trump from using Kamala’s image for commercial purposes,” Zerilli said.
“An image of Kamala used in a political campaign probably won’t fall under the scope of Kamala’s right of publicity.
“Another option would be for Kamala to seek protection against publicity which places her ‘in a false light in the public eye’—but even then, I suspect that political campaigning muddies the waters.”
“There might be a basis for her to allege defamation, but both because it’s not defamatory to call someone a communist and because we’re dealing with a political campaign, I don’t believe that would fly,” he said.
Luke McDonagh, an associate professor of law at the London School of Economics and Political Science and an expert on intellectual property rights, said that while some jurisdictions, such as Singapore, have laws against “fake news,” which could include AI images of political rivals, there is no equivalent in the U.S.
“Right now, one problem for Harris is that by the time she succeeded in a defamation action, the election cycle will have moved on, or indeed, the election may be over,” McDonagh said.
“It can take months to resolve such legal cases. So the short answer is that the law does not provide a rapid tool to combat these kinds of fake images.”
Jonathan Bright, head of AI for Public Services at the U.K.’s Turing Institute, added there was little chance that the Harris campaign would pursue legal action against such images, not least because of the attention it would create.
“Indeed, I think we are going to see lots more of this type of image where (like most other political campaigning) it will be used to try and raise the salience of particular issues in the minds of voters, even if the fact that the images are manipulated is readily apparent to most people,” Bright said.
“For example, slightly manipulated videos drawing attention to the issue of Biden’s age were widely distributed over the last few years. It didn’t matter if they were ‘debunked’ because it raised the salience of this issue in the minds of voters.
“This image seems to be a similar kind of play.”
Much as the sheer number of falsehoods and misleading statements Trump has made throughout his political campaigning, starting from his 2016 entry into politics, has proved difficult to counter, the speed with which AI can be used to create and share bogus or hoax media makes regulating it even more challenging.
Cary Coglianese, the Edward B. Shils Professor of Law and Professor of Political Science at Penn Carey Law, who has written on the topic of regulating AI, added that even if there are remedies for taking action against fake images, with fewer than 80 days until the election, none could sufficiently resolve the issues before Americans vote.
“In fact, they could potentially backfire by giving more attention to false information—which would presumably advance Trump’s goal of spreading falsehoods further,” he said.
It was only earlier this month that Trump supporters falsely accused the Harris campaign of using AI to enhance the size of crowds at rallies.
“This is a typical strategy deployed by Trump of what can be called preemptive projection: accuse his opponent of cheating or playing unfair, which then softens up his followers (and perhaps swing voters) so they accept his own obvious cheating,” Coglianese added.
“No one should be surprised that this is occurring or will continue to occur in the weeks to come. The risk for Trump is whether it actually works, as it will undoubtedly raise in some swing voters’ minds the question of whether they want someone who traffics so much in falsehoods to occupy the White House.
“In the end, this seems hardly a tactic to bring over the independent voters that he will need to prevail in November, even if it seems motivating to his base.”
