The Challenges of Moderating Mis- and Disinformation Online: A Threat-Focused Perspective

Richard Stansfield
Lead Subject Matter Expert, Mis- and Disinformation, at Resolver, a Kroll Business
5 minute read
Mis- and disinformation, content moderation, trust and safety

The risk posed by mis- and disinformation is more complex than is often appreciated. What may initially appear to be an innocent conspiracy theory or an insignificant departure from the truth can, through persistent exposure, become the trigger for more dangerous narratives to take root, driving radicalisation towards manufactured scenarios.

The long, slow erosion of the online information environment leaves social media users exposed to potentially problematic content in a way that risks the target audience being radicalised into believing falsehoods over facts. At a time when ever-increasing amounts of false information are available to social media users, the risk of harm from mis- and disinformation has never been greater.

This raises the question: is simply removing this content from view really the best course of action? And how can trust and safety efforts be structured to limit harm without creating new vulnerabilities?

Regulatory complexity and moderation gaps

Recent changes in trust and safety approaches to content moderation, combined with the implementation of the Online Safety Act (OSA) in the UK and the Digital Services Act (DSA) in the European Union (EU), leave social media platforms at odds over how to approach potentially harmful false information. This leaves users exposed to mis- and disinformation while regulators and Trust & Safety teams reconsider how best to deal with the spread of such content across their platforms.

Ultimately, this requires a robust, approachable methodology that gives users and platforms a clear, accessible view of the facts, which they can use as part of a strategy for reducing the harm caused by mis- and disinformation. Easily digestible factual information is therefore a cornerstone of how Trust & Safety practitioners can help reduce the risk of harm from false information.

Resolver’s trust and safety risk intelligence work has shown that individuals who intentionally spread harmful content adjust their tactics to evade content moderation, for example by couching falsehoods in opinion or satire, or by shifting platforms when enforcement tightens.

Addressing limitations with moderation and regulation

Reductions in platform-centric content enforcement in favour of community-driven moderation can be helpful in leveraging a user base to respond quickly to mis- and disinformation risks. However, this approach also comes with a number of concerns, such as potential bias, manipulation, scalability, privacy risks and burnout. There is also a risk of community-driven moderation being ‘gamed’: Resolver has observed community labelling being used in attempts to have content removed on the basis of knowingly false information.
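To illustrate one common mitigation against this kind of gaming (a generic sketch, not a description of Resolver’s or any specific platform’s system), community reports can be weighted by each reporter’s track record, so that accounts with a history of false reports carry less influence:

from collections import defaultdict

# Sketch: weight community reports by each reporter's historical accuracy,
# so coordinated false-flagging campaigns carry less weight than raw counts.
class ReportAggregator:
    def __init__(self, prior_accuracy: float = 0.5):
        # Every reporter starts at a neutral prior.
        self.accuracy = defaultdict(lambda: prior_accuracy)

    def record_outcome(self, reporter_id: str, report_was_valid: bool) -> None:
        # Exponential moving average over past report validity.
        alpha = 0.2
        self.accuracy[reporter_id] = (
            (1 - alpha) * self.accuracy[reporter_id]
            + alpha * float(report_was_valid)
        )

    def weighted_score(self, reporter_ids: list[str]) -> float:
        # A raw count treats ten false-flaggers like ten honest users;
        # reliability weighting does not.
        return sum(self.accuracy[r] for r in reporter_ids)

agg = ReportAggregator()
agg.record_outcome("user_a", True)   # user_a's past reports held up
agg.record_outcome("user_b", False)  # user_b filed a false report
print(agg.weighted_score(["user_a", "user_b", "user_c"]))

Under a scheme like this, a coordinated group of bad-faith reporters loses influence as their reports are repeatedly found invalid, while honest reporters gain it.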

Since coming into force in March 2025, the OSA has created a “false communications offence” covering messages or social media posts that convey information the sender knows to be false, where there is intent to cause non-trivial harm. This potentially leaves large swathes of false information in a grey area as to how the regulations may be enforced by platforms.

According to a 2024 UNESCO study, as many as two thirds of influencers surveyed revealed that they do not fact-check information before sharing it, instead using engagement and popularity metrics to determine what they share.

Although many of those polled said they would be open to training on how to spot mis- and disinformation, the finding highlights how large swathes of content may slip through regulatory action or content moderation. Both intentional and accidental spreaders of false information are then left with an opportunity to exploit platforms to spread potentially problematic content without consequence.

Understanding how mis- and disinformation exploit reach and trust

Resolver’s analysis shows that social media users who intentionally spread harmful content online are acutely aware of the policies they are up against and moderate their behaviour to evade content moderation, ensuring they retain their often large audiences. In this way, large volumes of content from both intentional and accidental spreaders of false information are likely to fall through the cracks unless they meet specific criteria.
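As a simple illustration of the kind of evasion detectors must contend with (a minimal sketch of a generic technique, not Resolver’s methodology), consider keyword obfuscation: substituting look-alike characters defeats naive string matching unless the text is normalised first:

import unicodedata

# Sketch: normalise common character substitutions used to slip banned
# phrases past keyword-based moderation.
SUBSTITUTIONS = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t",
    "@": "a", "$": "s",
})

def normalise(text: str) -> str:
    # Fold accented and homoglyph characters to ASCII, then undo leetspeak.
    folded = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    return folded.lower().translate(SUBSTITUTIONS)

def matches_blocklist(text: str, blocklist: set[str]) -> bool:
    cleaned = normalise(text)
    return any(term in cleaned for term in blocklist)

# "V4ccine h0ax" evades a naive substring match but not the normalised one.
print(matches_blocklist("V4ccine h0ax exposed!", {"vaccine hoax"}))  # True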

One approach is to increase access to information and provide targeted education to fill potential gaps left by trust and safety policy or regulatory action, combating the spread of mis- and disinformation more holistically.

Emphasis on education, and putting the onus on users to spot misinformation, is a model adopted in Finland, which makes media literacy a core curriculum subject from the age of six. There is evidence to suggest that this is a positive, and certainly proactive, approach: in 2023, Finland ranked highest in the European Media Literacy Index. Whether this demonstrably counters misinformation taking root at scale, however, remains under-researched.

How ease of access and understanding can influence the spread of mis- and disinformation

The COVID-19 pandemic taught us that, despite regular communication and provision of information to the public, access to information is not always an easy solution to the problem of disinformation, particularly where that information is hard to understand or reach.

Often, particularly with political or medical information, accurate information is provided to the public in a way that is difficult for the lay person to interpret, leading to misunderstanding or misinterpretation: a perfect environment for mis- and disinformation to proliferate.

As a result, many people favour social media and influencers as a source of news because of a perceived ease of access and understanding compared with more complex sources of information. A recent Pew survey of news consumption habits in the US found that one in five Americans said they regularly get their news from influencers on social media.

Similarly, in 2024, just over half of UK internet users reported that social media was their primary source of news. Conflicting data suggests that trust in influencers varies by subject: while as many as 69% of people trust influencer product recommendations over those of brands, as few as 15% trust influencers generally.


These insights are reinforced by polling data collated by Ipsos in 2024, which found that general trust in influencers differs greatly by generation, with younger generations believing influencers more often. This data does not account for influencers willingly attempting to mislead their audiences through conspiracy theories, unproven statements or ideological disinformation, but it suggests that as many as 8 million of the UK’s internet-using population may be susceptible to believing false information from influencers alone.
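As a rough reconstruction of how an estimate of that order of magnitude can be reached (the article does not state its inputs, so the figures below are illustrative assumptions, not data from Ipsos), combining an assumed UK internet-user base with the 15% general-trust figure cited above yields roughly 8 million:

# Illustrative arithmetic only; these inputs are assumptions, not figures
# taken from the article or from Ipsos.
uk_internet_users = 53_000_000  # assumed UK internet-using population
general_trust_rate = 0.15       # "as few as 15% trust influencers generally"

susceptible = uk_internet_users * general_trust_rate
print(f"{susceptible / 1e6:.1f} million")  # ~8.0 million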


In historic cases of intentional disinformation spreaders, there is evidence that deplatforming high-profile individuals can sometimes galvanise their existing audiences, leading to migration to alternative platforms with less oversight.

In the case of the former owner of InfoWars, Alex Jones, removal from mainstream platforms galvanised his already committed audience and drove them off-platform to his new service, Banned.Video, where he is free to operate without platform oversight or content moderation, while retaining access to the substantial audience he grew on mainstream platforms before being blocked from using them.

A Resolver review of online traffic to Alex Jones’ website between January and March 2025 revealed that it averaged 2.1 million visits per month over the examined time frame, with almost 80% of this traffic originating from direct searches for the website, predominantly from users in the US, Australia, Germany, Canada and the United Kingdom. This highlights a recurring threat pattern: when disinformation is pushed off mainstream platforms, it often re-emerges in less moderated environments, where it may become more extreme and harder to monitor.
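For clarity on how those two figures relate (the monthly breakdown below is hypothetical and chosen only to reproduce the reported derived numbers), the average and the direct-traffic share are computed like this:

# Hypothetical monthly visit counts; the article reports only the derived
# figures (a 2.1 million monthly average, ~80% direct traffic).
monthly_visits = {"2025-01": 2_250_000, "2025-02": 1_980_000, "2025-03": 2_070_000}
direct_visits = 5_040_000  # assumed total direct-traffic visits for the quarter

total = sum(monthly_visits.values())
print(f"average per month: {total / len(monthly_visits):,.0f}")  # 2,100,000
print(f"direct share: {direct_visits / total:.0%}")              # 80%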


Trust & Safety partners such as Resolver can work with platforms, regulators and audiences to understand what mis- and disinformation looks like, how it is used to deceive and how to detect it day to day. In this way, the influence of disinformation becomes weaker and less compelling, and the associated harms driven by deception can be reduced significantly.

What is the solution?

There is no single solution – and no single actor can address the problem alone. Platforms, regulators, trust and safety professionals, educators, and communities all play a role in building resilience to disinformation.

It’s increasingly clear that platforms alone cannot shoulder the responsibility of determining truth online. With so many complexities associated with safety and regulation, education and awareness play a crucial role in how false information can be fought.

Over the long term, using information and digital literacy to teach people how words can be manipulated, how numbers can be used to present false statistics, how imagery can be used to mislead and how history can be distorted or misrepresented, it is possible to inoculate social media users before they encounter false information.

Resolver’s comprehensive Trust and Safety Intelligence helps some of the largest social media platforms and service providers monitor and mitigate the spread of mis- and disinformation. To learn more about how our human-in-the-loop methodology blends automated detection with threat intelligence drawn from a cross-disciplinary team of human experts, please reach out.
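For readers unfamiliar with the pattern, “human-in-the-loop” generally means automated detection handles clear-cut cases while uncertain ones are routed to human analysts. The sketch below illustrates that generic triage pattern with assumed thresholds; it is not a description of Resolver’s actual system:

# Generic human-in-the-loop triage pattern; thresholds are assumptions.
def triage(score: float, auto_action: float = 0.95, auto_clear: float = 0.05) -> str:
    """Route content based on an automated classifier's risk score."""
    if score >= auto_action:
        return "enforce"       # high-confidence automated action
    if score <= auto_clear:
        return "allow"         # high-confidence automated clearance
    return "human_review"      # uncertain cases go to expert analysts

for score in (0.99, 0.50, 0.02):
    print(score, triage(score))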
