A mass stabbing in the town of Southport on 29 July 2024 triggered a wave of violent disorder across the UK, fueled by false information spread online about the identity of the assailant. Resolver worked alongside our social platform partners to understand and mitigate the tactics of far-right actors exploiting platform functionalities to incite violence and broadcast anti-migrant hate.
Our investigation reveals how malicious groups and individuals coordinated activity across an array of platforms and exploited various platform functionalities to spread inflammatory content and organize on-the-ground unrest, all while evading content moderation efforts. These efforts ranged from coordination via accounts on private messaging apps to monetized live streams on mainstream platforms and short-form videos created to flood conversations across platforms. Below is an analysis detailing how these tactics helped spur violent disorder across the country.
False claims about the attacker’s identity go viral
Hours after the mass stabbing, a now-deleted social media post falsely identified the assailant as “Ali al-Shakati”, a Muslim “asylum seeker” who had “recently” arrived in the UK. A review of the term “Ali al-Shakati” across social platforms between 20 July and 12 August 2024 revealed over 68,000 mentions from 48,000 unique authors. Engagement with these posts peaked from 29-30 July, with over 28 million impressions. Altogether, posts referencing this fictitious persona accrued more than 79 million impressions across social platforms over the examined time frame.

Graph showing the frequency of mentions and the reach of posts featuring ‘Ali al-Shakati’ across mainstream social platforms between 20 July and 12 August 2024.
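To illustrate the kind of aggregation behind these figures, below is a minimal sketch in Python. It assumes a hypothetical export of posts with illustrative column names (platform, author_id, timestamp, text, impressions); it is not a description of Resolver’s internal tooling.

```python
# Minimal sketch: counting mentions, unique authors and impressions for a term
# within a fixed date window. The CSV path and column names are illustrative.
import pandas as pd

posts = pd.read_csv("posts_export.csv", parse_dates=["timestamp"])

# Restrict to the examined window and to posts naming the fictitious persona.
window = posts[
    (posts["timestamp"] >= "2024-07-20")
    & (posts["timestamp"] <= "2024-08-12")
    & posts["text"].str.contains("ali al-shakati", case=False, na=False)
]

print("mentions:", len(window))                          # total matching posts
print("unique authors:", window["author_id"].nunique())  # distinct accounts
print("impressions:", window["impressions"].sum())       # cumulative reach

# Daily series behind the spike/peak comparison (e.g. 29-30 July).
daily = window.resample("D", on="timestamp")["impressions"].agg(["count", "sum"])
print(daily.loc["2024-07-29":"2024-07-30"])
```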
The spike in engagement pre-dated the release of the suspect’s real identity by the UK courts on 1 August. Under UK law, minors cannot normally be identified in the media before turning 18; however, the judge presiding over the case made a rare exception due to growing public disorder around the incident.
Far-right actors leveraged the information vacuum created by heightened public interest in the case and the lack of credible information available to stoke further divisions and promote false, anti-migrant narratives. Among them were accounts belonging to far-right influencers and organizations, who amplified the false claims to their large followings on mainstream and alt-tech platforms.
Posts shared by the same accounts also promoted Islamophobic, dehumanizing and anti-migrant narratives. Some of this content included explicit calls for violence against migrants and unverified claims that the incident was terror-related. Analysts at Resolver also observed multiple instances of users employing hateful Generative AI (Gen AI) imagery to claim the anti-migrant protests represented “British patriots rising”.

Examples of posts that promoted false information and called for violence against minorities across mainstream and alt-tech platforms.

Examples of posts that employed hateful Gen AI imagery to promote Islamophobic narratives across mainstream social platforms.
Private messaging apps used to organize violent demonstrations
Graphic video footage depicting violent clashes between protesters and police, including the use of police dogs, was widely amplified across mainstream platforms, alt-tech platforms and private messaging apps. In particular, accounts belonging to popular far-right and manosphere influencers (the manosphere being a set of online communities characterized by misogynistic views and opposition to feminism) reposted clips of the clashes, justified the violence as a struggle against foreign “barbarians” and shared claims of alleged police brutality against anti-migrant protesters.

Far-right and manosphere influencers shared clips of clashes between protesters and police and attempted to justify the violence as an indigenous struggle against foreign “barbarians”.
Posts amplifying such narratives amassed millions of views across multiple social platforms, while the spike in online discourse around the topic helped propel this incendiary content into platforms’ “trending” sections, further boosting its reach among a domestic audience.
Simultaneously, accounts belonging to far-right groups and neo-Nazi active clubs on a private messaging app were used to incite users and organize anti-migrant protests. Posts in these accounts provided lists of targets, including addresses and details of immigration services across the UK, exhorted users to participate in the riots, and offered advice on how to maintain anonymity and deal with authorities if arrested.

Far-right groups and neo-Nazi active clubs used accounts on a private messaging app to organize and incite users to participate in violent anti-migrant protests.
Private and encrypted messaging platforms are often favored by fringe and extremist groups to coordinate real-world actions. The heightened privacy and anonymity offered by such platforms, along with the ability of individual “chat groups” to serve as echo chambers for false and inflammatory narratives, can provide ideal conditions for fringe and extremist beliefs to thrive.
Monetized live streams used to broadcast anti-migrant violence
As anti-migrant violence raged across the country, recorded and live streamed footage of the public disorder, including attacks on public institutions and hotels housing asylum seekers, was widely broadcast across mainstream social media platforms.
These streams predominantly consisted of bystander footage and were broadcast by anti-migrant users, far-right activists, protest participants and other users dedicated to streaming protests across the UK. The “live comments” sections of such streams featured comments from other users that glorified the public disorder and called for further attacks on migrant communities.

Example of a user employing a panoptic live stream to expand coverage of the public disorder.
Several broadcasters employed panoptic live streams featuring feeds from different platforms and locations, in effect displaying multiple streams on one screen and further expanding their coverage of the public disorder across the country. The reach of such content was also boosted by users employing third-party multistream services (software that allows users to publish a live stream to multiple platforms simultaneously) to broadcast across several platforms at once.

Comments under such live streams glorified the violence and called for further attacks.
Analysts at Resolver also discovered multiple instances of far-right groups on a private messaging app promoting monetized accounts on mainstream video platforms, revealing coordination efforts across these platforms. These accounts live streamed various anti-migrant demonstrations taking place around the country and may be affiliated with, or supported by, the far-right groups promoting them.

Examples of posts by far-right groups on a private messaging app that promoted monetized livestreams hosted on mainstream platforms.
Resolver analyzed the subscribers and views gained by three such monetized accounts that were uploading graphic footage of the public disorder between 27 July and 12 August 2024. This analysis revealed that, on average, the three accounts grew their views by 28% and their subscribers by 21% over the examined time frame.

Graphs showing the distribution of views and subscribers gained by the monetized accounts between 27 July and 12 August 2024.
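As a worked example of how the growth figures above are derived, the following is a minimal sketch using invented placeholder numbers rather than the actual account data; the account names and start/end counts are purely illustrative.

```python
# Minimal sketch: percentage growth in views and subscribers per account
# across the 27 July - 12 August window, then averaged across accounts.
def pct_growth(start: float, end: float) -> float:
    """Percentage change from the start of the window to the end."""
    return (end - start) / start * 100

# Hypothetical (account, start_views, end_views, start_subs, end_subs) rows.
accounts = [
    ("account_a", 100_000, 132_000, 5_000, 6_100),
    ("account_b", 250_000, 315_000, 12_000, 14_400),
    ("account_c", 80_000, 101_000, 3_000, 3_650),
]

view_growth = [pct_growth(sv, ev) for _, sv, ev, _, _ in accounts]
sub_growth = [pct_growth(ss, es) for _, _, _, ss, es in accounts]

print(f"average view growth: {sum(view_growth) / len(view_growth):.0f}%")
print(f"average subscriber growth: {sum(sub_growth) / len(sub_growth):.0f}%")
```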
Third-party donation services linked in the video descriptions and comments under such live streams can also provide these actors with another pathway to profit from the proliferation of this content.
Short-form video clips depict graphic clashes
Over the same time period, Resolver observed a surge in short-form video content featuring graphic violence, including clashes between protesters and police and incidents of looting and arson targeting hotels housing migrants. This included posts that glorified such attacks using tags such as “enough is enough”, describing the hotels as “pedo-Jihad hotels” and calling the migrants living there “spongers”.
Other short-form clips shared on social media depicted Muslims or non-white Britons carrying out violent attacks on white people. This content was also used to incite other right-wing users and was presented as evidence of apparent “two-tier” policing, a conspiratorial narrative alleging that the British political and judicial system is biased in favor of non-white immigrants.
Such content also featured descriptions referencing the “Muslim Defense League” (MDL), a term used to refer to self-defense groups formed by Muslim communities, and claims of “migrant gangs” assaulting citizens with machetes while being ignored by law enforcement.

Other short-form video clips promoted allegations of MDL groups attacking white people.

Examples of short-form videos claiming that violence carried out by minority communities was being ignored by law enforcement.
Comments under short-form videos depicting violent clashes between protesters and police included hate speech and threats of violence. Such comments also appeared under seemingly unrelated videos that do not violate platforms’ community guidelines.

Users employed dehumanizing language in comments under non-violative short-form videos hosted on a mainstream platform.
The short duration, visual nature, and shareability of these videos make them ideal vectors for spreading disinformation and inflammatory imagery at tremendous velocity. When such content is further algorithmically elevated into feeds and recommendations, it can rapidly engage a mass audience and trigger real-world harms.
Conclusion
The rapid cross-platform spread of disinformation and graphic content during fast-moving, emotionally charged events poses major challenges for content moderation teams. In many cases, false narratives around the attacker’s identity and motives spread faster than platform moderators could act. Far-right actors exploited a porous information ecosystem, leveraging mainstream platforms’ reach, fringe sites’ lax moderation, and private messaging apps’ encryption to incite violence and spread disinformation while evading detection by platform moderators.
According to Henry Adams, Trust and Safety – Partnerships and Strategy at Resolver, a Kroll business, “so-called ‘mainstreaming’ sits at the heart of the Southport disorder”, with “the language of well-known far-right communities repeated, practically verbatim, across much of the online discourse”. He adds that “by repeatedly tracing these narratives back to the extreme communities that seek to manipulate the public consciousness, we continue to show it up for what many politicians have called it: thuggery, and hateful thuggery at that. But hate that has nonetheless cut through”.
Our Platform Trust and Safety solutions offer partners a fully managed service designed to enhance community safety, with integrated content and bad-actor detection and rapid alerting to the latest emerging threats, including those surfacing through live stream and short-form video formats.