A mass stabbing in the town of Southport on 29 July 2024 triggered a wave of violent disorder across the UK, fueled by false information about the assailant’s identity spreading online. Resolver worked alongside our social platform partners to understand and mitigate the tactics of far-right actors exploiting platform functionalities to incite violence and broadcast anti-migrant hate.
Our investigation reveals how malicious groups and individuals coordinated activity across an array of platforms and exploited various platform functionalities to spread inflammatory content and organize on-the-ground unrest, all while evading content moderation efforts. Their tactics ranged from coordination via accounts on private messaging apps to monetized live streams on mainstream platforms and the creation of short-form video to flood conversations across platforms. Below is an analysis detailing how these tactics played a role in spurring violent disorder across the country.
False claims about the attacker’s identity go viral
Hours after the mass stabbing, a now-deleted social media post falsely identified the assailant as “Ali al-Shakati”, a Muslim “asylum seeker” who had ‘recently’ arrived in the UK. A review of the term “Ali al-Shakati” across social platforms between 20 July and 12 August 2024 revealed over 68,000 mentions from 48,000 unique authors. Engagement with these posts peaked on 29-30 July with over 28 million impressions. Altogether, posts referencing this fictitious persona accrued more than 79 million impressions across social platforms over the examined time frame.
The spike in engagement pre-dates the release of the suspect’s real identity by the UK courts on 1 August. Under UK law, minors cannot be identified in the media before turning 18; however, the judge presiding over the case made a rare exception due to the growing public disorder around the incident.
Far-right actors leveraged the information vacuum created by heightened public interest in the case and the lack of credible information available, stoking further divisions and promoting false, anti-migrant narratives. Accounts belonging to far-right influencers and organizations were among those amplifying this false information to their large followings on mainstream and alt-tech platforms.
Other posts shared by the same accounts promoted Islamophobic, dehumanizing and anti-migrant narratives. Some of this content included explicit calls for violence against migrants and unverified claims that the incident was terror-related. Analysts at Resolver also observed multiple instances of users employing hateful Generative AI (Gen AI) imagery to claim that the anti-migrant protests represented “British patriots rising”.
Private messaging apps used to organize violent demonstrations
Graphic video footage depicting violent clashes between protesters and police, including the use of police dogs, was widely amplified across mainstream platforms, alt-tech platforms and private messaging apps. In particular, accounts belonging to popular far-right and manosphere influencers (the manosphere being a collection of online communities characterized by misogynistic views and opposition to feminism) reposted clips of the clashes, justified the violence as a struggle against foreign “barbarians” and shared claims of alleged police brutality against anti-migrant protesters.
Posts amplifying such narratives amassed millions of views across multiple social platforms, while the spike in online discourse around the topic helped propel this incendiary content into the “trending” sections of social platforms, further boosting its reach among domestic audiences.
Simultaneously, accounts belonging to far-right groups and neo-Nazi active clubs on a private messaging app were used to incite users and organize anti-migrant protests. Posts in these accounts provided lists of targets, including addresses and details of immigration services across the UK, exhorted users to participate in the riots, and offered advice on how to maintain anonymity and deal with authorities if arrested.
Private messaging and encrypted messaging platforms are often favored by fringe and extremist groups to coordinate real-world actions. The heightened privacy and anonymity offered by such platforms, and the ability of individual “chat groups” to serve as echo chambers for false and inflammatory narratives, can provide ideal conditions for fringe and extremist beliefs to thrive.
Monetized live streams used to broadcast anti-migrant violence
As anti-migrant violence raged across the country, recorded and live-streamed footage of the public disorder, including attacks on public institutions and hotels housing asylum seekers, was widely broadcast across mainstream social media platforms.
These streams predominantly consisted of bystander footage and were broadcast by anti-migrant users, far-right activists, protest participants and other users dedicated to streaming protests across the UK. Meanwhile, the “live comments” sections of such streams featured comments from other users that glorified the public disorder and called for further attacks on migrant communities.
Several broadcasters employed panoptic live streams featuring feeds from different platforms and locations, in effect displaying multiple streams on one screen and further expanding their coverage of the public disorder across the country. The reach of such content was also boosted by users employing third-party multistream services (software that allows a single live stream to be published to multiple platforms) to broadcast simultaneously across several platforms.
Analysts at Resolver also discovered multiple instances of far-right groups on a private messaging app promoting monetized accounts on mainstream video platforms, revealing coordination efforts across these platforms. These accounts live streamed various anti-migrant demonstrations taking place around the country and may be affiliated with or supported by these far-right groups.