AI and Elections: How Generated Content Influences Political Opinion
Gesa Feldhusen
29.10.2024
AI-generated images are increasingly shaping political discourse and election campaigns, both in the USA and in Europe. They offer new opportunities to influence opinions, but also harbour risks - from distorted realities to the reinforcement of stereotypes. This article examines the opportunities and challenges of AI images in a political context.
The next presidential election in the USA is coming up, and the campaign is also highly relevant for us in Europe. It is a campaign that shows how content generated by artificial intelligence (AI) can be used as a tool to influence political discourse - and this, it seems, primarily in and from the conservative to right-wing political spectrum. In the European context, there are examples of the AfD in Germany and the SVP and FDP in Switzerland, among others, using AI-generated content for political campaigns. That technological developments shape political discourse is undisputed, but looking back to the 2024 European elections and ahead to the presidential election, I wonder:
How will AI-generated images influence current and future elections?
Of course, the complexity of this question is beyond the scope of a blog post of this length, but I would still like to try to approach the topic with a few thoughts.
The Influence of Visual Content
AI-generated images are not the first visual content to influence opinions and distort realities; images have long been a means of shaping political opinion. Images and photographs, whether in the media or in governments' political communication, enter political discourse as tools for creating tolerance and understanding and thus carry the ability to shape politics (Sontag, 1977; Johnson, 2018; Bleiker, 2018). Nor are the current elections the first in which technologically generated content has been used to influence discourse: during the 2018 US elections, for example, a deepfake video of Obama made the rounds (Mak, 2020).
However, we are currently seeing a new far-reaching dimension that is being fuelled by AI technologies. We are increasingly seeing visual content online that is generated by AI. If I Google ‘Swiss Citizen’, images of Swiss passports appear; if I search for ‘German Citizen’, the results show a German passport and an AI-generated image of a white, blonde woman proudly holding a German flag. And when I search for ‘Gambian Citizen’, the first three images are of an AI-generated woman, while very few images of a passport are shown.
These search queries illustrate that AI-generated content is widespread and often represents Western perspectives - a consequence of the problem that AI images are based on biased data sets that reproduce stereotypes (Turk & Turk, 2024). Such images can be generated by any internet user or even by search engines themselves, and in turn end up on social media and in the results of new search queries. It is not always clear to users browsing these platforms which images are AI-generated. As a result, opinion-forming in the digital space is becoming increasingly complex.
The Election Campaign in the USA as a Forerunner
‘Trump Promotes A.I. Images to Falsely Suggest Taylor Swift Endorsed Him’ was the headline in the New York Times in August. AI-generated images made it look as if Taylor Swift were calling on her fans to vote for Trump. Other generated images showed Black women voters standing together behind Trump (Spring, 2024). Trump and his supporters are using AI tools to create the impression that, contrary to expectations, Black and FINTA people will vote for him in large numbers on 5 November.
That politicians can use such images - and with them the identities of public figures and marginalised groups - for their election campaigns is highly problematic.
Even if some of this content seems absurd, it should be viewed highly critically, because such images ‘[...] become useful tools for spreading false, sometimes racist messages with a clear political bias - and candidates and their supporters are among those who share them on social media’ (Merica et al., 2024). On closer inspection, many of these images are recognisable as AI-generated, but this is often lost in the media overload, and the tools are getting better and better. Many users are unable to distinguish AI-generated images from real photographs. As the Digital Barometer 2024 by the Risk Dialogue Foundation shows, users lack the skills they need to navigate the internet safely and independently. Even before ChatGPT and AI-generated images, it was known that people have difficulty assessing the authenticity of virtual images (Jingnan, 2024). Even though it is important that we as users develop the skills needed to navigate the digital world safely, the responsibility cannot be shifted onto individuals alone. Platforms must be regulated more strictly in order to control the distribution of AI-generated content. However, as is generally the case with topics such as disinformation or hate speech, platforms take little responsibility - probably also because AI harbours new, profitable business models for them (Fichter, 2024).
AI Content in the German & Swiss Political Landscape
We are also seeing the use of AI-generated content in European election campaigns: Maximilian Krah, former AfD lead candidate, used numerous AI-generated images on his TikTok account to mobilise voters during the campaign for the 2024 European elections. One, for example, shows a young man with blonde hair, blue eyes, prominent cheekbones and a white shirt marking his cross on the ballot paper in blue ink (Breuer, 2024). This not only suggests a certain image of the typical AfD voter, but also employs visual features and symbols that ideologically draw on far-right stereotypes and are specifically associated with AfD supporters.
In Switzerland, the FDP recently launched an AI-generated election poster featuring ‘climate stickers’ blocking road traffic (Aregger, 2023). Similar scenes have occurred in the past, but the image was staged in such a way (blocking an ambulance) that it fitted perfectly into the FDP's election campaign - above all to fuel sentiment against climate activists and their actions.
SVP National Councillor Andreas Glarner also used AI at the beginning of the year, when he had videos made of National Councillor Sibel Arslan (Green Party) in which she makes xenophobic statements and calls on people to vote for him. Arslan took legal action against Glarner for misuse of identity (Conzett, 2024). In response, Glarner dismissed the AI-generated content as a joke and a gag, trivialising an issue whose potential danger that reaction only underlines. This content is not funny: it significantly influences how political discourse is conducted, conveys a false picture of public opinion and makes it even more difficult for citizens to judge which sources they can trust. Not to mention that Glarner misused Arslan's identity to spread right-wing populist statements.
Rayna Breuer (2024) summarises the problems associated with this in her article for Deutsche Welle: ‘Alternative realities are being created. They create a world in which artificial versions of reality appear more real than reality itself.’
Conclusion
I observe current developments with great concern: AI-generated content that manipulates political discourse and manufactures supposed public sentiment is appearing ever more frequently in the conservative to right-wing spectrum. When the AfD creates fictitious voters, Trump supporters instrumentalise marginalised groups or SVP politicians misuse identities to score points in the election campaign, this shows once again that the regulation of digital technologies is being discussed too late. Although these mechanisms are not new, they have reached a worrying new dimension. It is crucial that AI-generated content is clearly labelled and that platforms and the operators of AI tools are held more accountable. In such a tense political situation, we do not need content that creates false realities and further undermines trust in our institutions.