What We Call Threats: Evolving Taxonomies and the Role of Regulation

Oliver Clements
Lead Risk Detection Engineer, Resolver Trust & Safety
Dani Williams
Lead Subject Matter Expert, Child Endangerment, Resolver
· 5 minute read

This is the second installment in our series, “20 Years in Online Safety: Reflecting, Evolving, and Adapting,” leading up to the 20th anniversary of Resolver’s presence in the Trust & Safety community on Nov. 25, 2025. Originally launched as Crisp Thinking in 2005, Resolver has seen multiple generations of online safety professionals carry its mission forward. For this series, we asked our team and leadership to reflect on the journey so far and the road ahead as we continue our mission to protect children online.


In the ever-shifting landscape of the internet, what we define as an “online threat” is constantly evolving. From the early days of viruses and phishing scams to today’s concerns about algorithmic manipulation and peer-on-peer abuse, our understanding of harm in online spaces has deepened, and our response has had to keep pace.

As digital risks evolve, so too must the online threat taxonomies we use to identify and act on harm. This blog traces how Resolver’s own taxonomy of risk has adapted across two decades of regulation, innovation, and online safety challenges, and how regulation has shaped both our understanding of harm and our response to it.

How online threats have evolved over the last 20 years

In the internet’s early days, online threats were largely technical: malware, hacking, and phishing dominated security discussions. But as platforms proliferated and the internet became a collection of social spaces, harm became more human.

The shift was gradual but significant. Today, threats are no longer defined just by code, but by behavior:

  • Harms to children, such as grooming and child sexual abuse material (CSAM)
  • Peer-on-peer abuse, including sextortion and harassment among minors
  • Disinformation and manipulation, especially during elections and public health crises
  • Algorithmic harms, like the amplification of hate speech or harmful content

These shifts reflect broader social, technological, and political changes. They didn’t just change the risks we monitor. They changed how we classify, prioritize, and act on them. What was once seen as a fringe concern is now central to online safety policy.

How taxonomies help categorize online harm

In online safety, definitions shape action. And definitions start with taxonomies. For two decades, Resolver has built taxonomies — systems for classifying and labeling the risks our analysts and AI detect. A good taxonomy shapes everything, from policy enforcement decisions to reporting frameworks and client transparency.
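
To make this concrete, here is a minimal, hypothetical sketch of how a hierarchical risk taxonomy might be modeled in code. The category names, definitions, and structure below are illustrative placeholders only, not Resolver’s actual taxonomy:

    from dataclasses import dataclass, field

    @dataclass
    class RiskCategory:
        """One node in a risk taxonomy: a label, a definition, and sub-risks."""
        label: str
        definition: str
        children: list["RiskCategory"] = field(default_factory=list)

        def find(self, label: str) -> "RiskCategory | None":
            """Depth-first lookup of a category by label."""
            if self.label == label:
                return self
            for child in self.children:
                if (match := child.find(label)) is not None:
                    return match
            return None

    # Hypothetical slice of a taxonomy; real categories are far more granular.
    harms_to_children = RiskCategory(
        label="Harms to children",
        definition="Risks targeting or involving minors.",
        children=[
            RiskCategory("Grooming", "An adult building trust with a minor to enable abuse."),
            RiskCategory("CSAM", "Child sexual abuse material."),
            RiskCategory("Sextortion", "Coercing a minor using intimate images."),
        ],
    )

    print(harms_to_children.find("Sextortion").definition)

A structure like this is what lets a single detection label drive enforcement decisions, reporting, and client transparency consistently.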

Over the years, our definitions have evolved from narrow lists of illegal content to nuanced, socially informed categories of harm. What counts as “threatening” has expanded to reflect lived experience, not isolated platform policy. We have consistently refined our categorization of risks to reflect the rapidly evolving risk landscape.

This process is driven by close collaboration with our clients and leading NGOs, ensuring that our framework remains current, relevant, and aligned with emerging threats and regulatory expectations. By integrating real-world insights and expert perspectives, we maintain a dynamic and responsive approach to risk classification.

The impact of regulations on threat definitions and taxonomies

Governments, platforms, and researchers use taxonomies to classify threats. These frameworks help define what counts as harmful, guide enforcement, and shape public understanding.

In recent years, three landmark laws — the UK Online Safety Act (OSA), the EU Digital Services Act (DSA), and Australia’s Online Safety Act (AOSA) — have reshaped how platforms define, detect, and disclose harm. Many others are emerging globally.

Each has its own framework, but all share a common goal: to hold online platforms accountable for the safety of their users. Regulation demands precision and transparency, with clear and consistent understanding of what’s harmful and why.

  • The UK Online Safety Act (OSA) was passed into law in October 2023 with the stated mission to “make the UK the safest place to be online by regulating platforms that host user-generated content.” It requires platforms to actively moderate harmful content to protect users, particularly children.
  • The EU Digital Services Act (DSA) became fully enforceable in February 2024. It aims to align digital regulation across the EU and increase platform accountability in moderation, advertising, and algorithmic decisions.
  • Australia’s Online Safety Act (AOSA) became effective in January 2022. It sets out industry-wide codes for illegal and restricted content.

But tensions remain. Technical definitions often fail to capture social impact. For instance, a post may not breach policy but still cause harm — especially to vulnerable users.

Why Resolver created a unified Trust & Safety taxonomy

In response to the sweeping legislative change and evolving risk landscape, Resolver created our first unified Trust & Safety taxonomy, our most comprehensive framework to date. With this, we ensure our clients can define and enforce their policies accurately and effectively in line with new legislative requirements.

The methodology behind the unified taxonomy combined two key approaches:

  • Legislative alignment: We reviewed each legislative act in detail to ensure every risk it references was added. Our subject matter experts then drilled into each risk, defining what it entails in a real-world sense and, where appropriate, adding more granularity to risk types. This gives our clients the flexibility to moderate and action content appropriately and to extract as much information as possible from their data. (A simplified illustration of this cross-jurisdiction mapping follows this list.)
  • Empirical review: We conducted an in-depth review of every classification our automation and human analysts had applied across our partners’ services over a 12-month period. This allowed us to leverage the millions of items we have accurately classified for our valued T&S partners and to ensure that important risks sitting outside the legislation are still covered within our taxonomy.
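
As a simplified illustration of the legislative-alignment idea, the sketch below maps a unified risk label to jurisdiction-specific category names. The mapping and category strings are hypothetical placeholders; the actual statutory definitions are considerably more detailed:

    # Hypothetical mapping from a unified taxonomy label to the category
    # language of each regulation. The strings below are illustrative
    # placeholders, not the statutes' exact definitions.
    UNIFIED_TO_REGULATION = {
        "csam": {
            "uk_osa": "Child sexual exploitation and abuse (priority illegal content)",
            "eu_dsa": "Illegal content: child sexual abuse material",
            "au_osa": "Class 1 material",
        },
        "minor_sextortion": {
            "uk_osa": "Intimate image abuse / threats",
            "eu_dsa": "Illegal content: sexual extortion",
            "au_osa": "Image-based abuse",
        },
    }

    def regulatory_label(unified_label: str, jurisdiction: str) -> str:
        """Translate a unified risk label into one regulation's terminology."""
        return UNIFIED_TO_REGULATION[unified_label][jurisdiction]

    print(regulatory_label("csam", "uk_osa"))

A lookup in one direction like this is the simple case; the harder work, as described above, is defining each unified label precisely enough that the translation holds in every territory.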

The result is a living framework that uses shared language across the key pieces of online safety legislation in different jurisdictions. This gives the taxonomy a shared vocabulary that is relevant and understood across all territories, creating a common understanding of what online risks are globally.

By unifying the language we use to define and label risk, our new T&S taxonomy enables us to help partners navigate the differing definitions of illegal and harmful content, and the differing compliance mechanisms, across jurisdictions. It also opens up greater opportunities to share intelligence and knowledge across regions and industries, helping combat bad actors online and bridging the legal, technical, and human perspectives of harm.

Challenges in harmonizing global safety standards

Regulation plays a critical role in formalizing threat definitions. It compels platforms to act, sets standards for transparency, and creates accountability.

However, challenges persist. Threats are ever-evolving, and keeping pace with emerging risks like AI-generated content is difficult. Legal frameworks also vary globally: what is illegal or harmful in one country is not necessarily considered so in another. Despite these hurdles, regulation has driven major improvements in how platforms detect, report, and respond to harm.

The impact of the new online safety laws is being felt strongly across the industry, prompting a real-time reshaping of the online landscape. While major platforms are adapting to rising compliance standards, some fringe and high-risk platforms have chosen to withdraw from regulated markets rather than align with new safety and transparency expectations. The seriousness of this shift is demonstrated by the significant fines already issued for noncompliance and by ongoing regulatory scrutiny, including 47 live Ofcom investigations. Furthermore, companies operating in the EU now submit transparency reports to regulators, detailing their content moderation decisions and the role AI plays in their safety approach.

Emerging threats: What next for online safety?

Looking ahead, new threats are already on the horizon:

  • AI-generated content and deepfakes, which are becoming increasingly sophisticated
  • Behavioral manipulation, including nudging and dark patterns
  • Cross-platform and cross-border harms, which challenge enforcement

To address these, we need dynamic, adaptable taxonomies that reflect lived experience, not just technical violations. That’s why our taxonomy isn’t static. At Resolver, a dedicated team makes regular updates to the risks and definitions in our taxonomy, ensuring our partners have the most up-to-date coverage in a constantly evolving online risk landscape and that we remain compliant with new and amended legislation.

Because in this space, today’s taxonomy can’t protect tomorrow’s user.

Rethinking threats: A call to action

Definitions matter. They shape policy, enforcement, and public understanding. As threats evolve, so must our frameworks — and our regulations. We need inclusive, child-centered, and context-aware approaches that recognize the complexity of online harm.

At Resolver, we see taxonomy not as compliance, but as collaboration — a shared language of safety that helps platforms, regulators, and experts respond faster and smarter to the risks that matter most. After 20 years of evolution, one thing remains constant: the way we define a “threat” determines how — and who — we protect.

As we reflect on two decades of protecting children and safeguarding online spaces, we’re also looking ahead to the next frontier of Trust & Safety: the proactive, intelligent elimination of CSAM.

Resolver’s new Unknown CSAM Detection Service represents the culmination of 20 years of learning, evolving, and purpose. It’s built to identify, prevent, and remove child sexual abuse material at speed and scale, while protecting the humans behind the screen.

Learn more about how we’re redefining child safety for the next generation.

 
