Hate speech, false claims, and CIB target voters during EU Elections 2024

Karley Chadwick
Lead, Trust and Safety, Resolver, a Kroll business
Ayushman Kaul
Data journalist, Resolver, a Kroll business

Resolver’s analysis of the online discourse leading up to and during the EU elections found that narratives around the farmer protests, the misuse of Gen AI technology to create political imagery, and the execution of coordinated inauthentic behavior (CIB) to amplify anti-migrant narratives were used to influence and disrupt democratic discourse. Our analysts observed spikes in false and polarizing claims targeting European governments and prominent politicians on social media, often coinciding with key events during the electoral campaign. These narratives polarize communities and can be leveraged by fringe and extremist groups to justify off-platform acts of violence in several countries heading to the polls, including Slovakia, Germany and Denmark.

Over the electoral campaign, Resolver’s Imminent Threat team provided 24/7 coverage, alerting partners to imminent and ongoing acts of violence targeting politicians participating in the election. Notable flashpoints included the attempted assassination of Slovak Prime Minister Robert Fico on May 15 and violent incidents in the German city of Mannheim on May 31 and June 4 that resulted in the death of a police officer and the injury of Heinrich Koch, a politician with the Alternative für Deutschland (AfD) party, respectively. Risk intelligence provided by our team was instrumental in helping partners tackle the spread, across their platforms, of violative content that could have galvanized other users to commit similar attacks.

Approximately 359 million eligible voters were able to participate in elections across the 27 EU countries between June 6-9. Elected candidates from 10 official parties will form the new European Parliament. At present, the center-right European People’s Party Group (EPP) has amassed the most seats with 190, a gain of 14. At the time of publication, election results remain preliminary as vote counting is still underway.

Disinformation targets politicians and electoral processes

Social media plays an integral role in uniting the large and diverse collective of voters in the EU, with usage across the continent averaging 59%, according to data compiled by Eurostat. With this growing dependence on social media to inform the public comes a sharp increase in malinformation, a term used to describe disinformation actors sharing information that stems from the truth but is presented in an exaggerated and miscontextualized manner to mislead a target audience.

Analysts at Resolver observed coordinated networks of accounts exhibiting signs associated with inauthenticity, including high levels of activity, being used to disseminate misrepresented or doctored images and statistics that were presented as legitimate across various mainstream social media platforms.

Meanwhile, analysis by the European Digital Media Observatory, a fact-checking network, found that EU policies and institutions were the most popular targets of mis- and disinformation amplified online ahead of the EU elections, accounting for 15% of all cases detected by the organization in May 2024. This included the use of old and unrelated photos providing false information on how to vote, old video clips that discouraged individuals from voting, and manipulated and AI-generated images that discredited senior EU politicians.

Examples of posts using manipulated and miscontextualized images to target senior EU politicians across social media platforms, including images falsely depicting Ursula von der Leyen being arrested for corruption. Source: (Resolver)

Researchers have also identified multiple foreign influence operations targeting the EU elections. On May 2, EUvsDisinfo published an investigation highlighting a network of coordinated websites that impersonated Western media outlets and published content from Russian state-owned media. These articles sought to stoke political polarization ahead of the vote around multiple politically significant issues, including EU support for Ukraine, the enforcement of environmental regulations in the farming sector, and levels of migration to the continent.

CIB used to amplify anti-migrant narratives

Migration has been an evergreen topic of public discourse in Europe, with public interest in the issue notably spiking following the 2015 migrant crisis. Since then, multiple far-right and far-left groups on the continent have leveraged the issue to amplify polarizing and anti-migrant narratives, recruit new members and organize off-platform acts of violence targeting migrant and religious minority communities.

Resolver has seen increased use of CIB to amplify anti-migrant narratives designed to exacerbate pre-existing tensions in various European countries. This disinformation tactic is characterized by coordinated activity from accounts exhibiting bot-like and inauthentic behavior, including engaging in high volumes of activity, posting on disparate topics and in different languages, and engaging in hashtag and topic spamming in a bid to hijack the ‘trending’ or ‘for you’ sections on mainstream platforms.
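To make these behavioral signals concrete, the sketch below flags a single account against the indicators described above. It is an illustrative example only: the thresholds, field names and function name (such as posts_per_day and inauthenticity_signals) are hypothetical assumptions chosen for demonstration, not Resolver’s detection methodology.

```python
# Illustrative sketch only: the thresholds and field names below are
# hypothetical assumptions, not Resolver's detection logic.
from dataclasses import dataclass, field


@dataclass
class AccountActivity:
    posts_per_day: float                         # average posting volume
    languages: set = field(default_factory=set)  # languages observed in posts
    topics: set = field(default_factory=set)     # distinct topics posted about
    hashtag_repeat_ratio: float = 0.0            # share of posts reusing the same trending hashtags


def inauthenticity_signals(account: AccountActivity) -> list[str]:
    """Return the bot-like signals an account exhibits."""
    signals = []
    if account.posts_per_day > 50:               # unusually high volume of activity
        signals.append("high_volume")
    if len(account.languages) >= 3:              # posting in several different languages
        signals.append("multi_language")
    if len(account.topics) >= 10:                # posting on many disparate topics
        signals.append("disparate_topics")
    if account.hashtag_repeat_ratio > 0.8:       # hashtag and topic spamming to hijack trends
        signals.append("hashtag_spamming")
    return signals


# Example: an account posting 120 times a day in four languages while spamming hashtags
suspect = AccountActivity(posts_per_day=120,
                          languages={"en", "de", "fr", "pl"},
                          topics={f"topic_{i}" for i in range(12)},
                          hashtag_repeat_ratio=0.9)
print(inauthenticity_signals(suspect))
# ['high_volume', 'multi_language', 'disparate_topics', 'hashtag_spamming']
```

In practice, signals like these would only be one input among many; coordination across accounts, network analysis and content review are needed before any judgment of inauthenticity is made.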

Examples of posts on mainstream platforms promoting anti-migrant conspiracy theories, including the Great Replacement and the Kalergi Plan. Source: (Resolver)

Some of the most widely amplified narratives relate to the Great Replacement and the Kalergi Plan, two infamous anti-migrant conspiracy theories espousing the belief that the white indigenous populations of Western European countries will be replaced by non-white migrants and refugees. These narratives share a broader theme of driving distrust in European governments and in the integrity of the European political system. Such theories are often deployed to justify antisemitic and Islamophobic hate targeting minority and migrant communities living across the continent.

Resolver analysts observed CIB networks amplifying these pre-existing narratives by disseminating false information, including miscontextualized or graphic images and videos, designed to stoke tensions in European communities. Examples of false claims amplified ahead of the election included disinformation that Polish nationals would be mobilized to fight for Ukraine, that 97% of those on Spain’s minimum basic income are migrants, and allegations that white Irish citizens are being made homeless in efforts to house incoming refugees.

Examples of posts amplifying false claims that migrants account for 97% of those on basic minimum income in Spain. Source: (Resolver)

As inauthentic networks engaging in CIB flood users’ personalized or ‘similar’ pages and recommendations, everyday users become increasingly exposed to their content, distorting their view of what is actually taking place within a given event. In the context of an election, CIB networks can very quickly alter public discourse by amplifying policy issues such as national security, migration and the integrity of candidates, which can affect voters’ willingness to support them.

Political Gen AI images deployed to influence voters

Ahead of the election, public anxieties over the misuse of Gen AI technology in information operations targeting voters in elections taking place in 2024, including the EU election, prompted 27 large technology companies and social media platforms to sign the Tech Accord to Combat Deceptive Use of AI in 2024 Elections on February 16, 2024. Signatories to the agreement pledged to collaborate with one another to detect and counter harmful AI content designed to sway voters across online platforms.

In the context of the EU elections, the misuse of Gen AI services to create synthetic content impersonating key political or otherwise prominent figures did not pose as high a risk as in the recent Indian general election. However, our analysts observed some misuse of popular Gen AI services to create imagery related to highly contentious election issues, such as the farmer protests and migration.

Examples of users misusing Gen AI platforms to produce political imagery related to contentious election issues. Source: (Resolver)

Provocative images of hay bales stacked outside the Eiffel Tower in protest and of homeless individuals camping on the streets of Ireland were shared across social media by users attempting to sway voters, particularly in countries with ongoing farmer protests such as France, Spain and Belgium.

In addition to augmenting the capabilities of malicious actors engaging in influence operations targeting European voters, the proliferation of synthetic content across social media platforms can fuel increased levels of public skepticism regarding the authenticity of information encountered online. This declining trust can then be exploited by unscrupulous political actors to evade oversight and accountability by casting doubt over the authenticity of any incriminating evidence.

Conclusion

Our analysis of the information environment around the EU elections found that, despite the adoption of robust Trust and Safety measures by mainstream platforms, bad actors continued to leverage those platforms’ reach with voters to amplify malinformation, synthetic content and anti-migrant narratives designed to exacerbate social tensions and undermine public confidence in the integrity of the election. Moreover, false and inflammatory narratives related to national security and migration were amplified by coordinated networks of accounts exhibiting behaviors associated with inauthenticity.

While the use of CIB in influence operations on social media platforms is not a new issue, and public awareness of its existence is growing, content disseminated via such inauthentic networks is, by its very nature, hard to detect as false. That difficulty, coupled with mounting distrust in government driven by the rising cost of living, a perceived lack of transparency and residual anger over COVID-19 measures, has bolstered bad actors’ ability to remain on platform and proliferate harmful content to users.

Our Platform Trust and Safety solutions offer partners a fully managed service designed to enhance community safety, with integrated content and bad actor detection and rapid alerting to the latest emerging threats, including the misuse of Gen AI services, promotion of hate speech, mis- and disinformation, and threats of violence.

 
