The Human at the Heart of the Machine: A 20-Year Lesson in Online Safety

Jon Best
Vice-President, Human Intelligence, Resolver Trust & Safety
Frances McAuley
Director of Product, Resolver Trust & Safety
James Preston
Director of Engineering, Resolver Trust & Safety
· 3 minute read

This is the third installment in our series: “20 Years in Online Safety: Reflecting, Evolving, and Adapting,” leading up to the 20th anniversary of Resolver’s presence in the Trust & Safety community on Nov. 25, 2025. Originally launched as Crisp Thinking in November 2005, we’ve had multiple generations of online safety professionals carry the mission forward. For this series, we asked our team, managers, and leadership to reflect on the journey we’ve taken so far and the road ahead, as we continue in our core mission to protect children online.


Twenty years ago, Resolver was founded on a mission we thought we understood: protecting children from predators online. We entered the field as pioneers, building the first digital shields to guard against a tangible, devastating threat. We were fighting a known enemy.

But the last two decades have taught us that the landscape of online harm is a fluid, evolving entity. The battleground shifted under our feet. We witnessed the rise of something we never envisioned: the weaponization of social platforms to foster ideologies of self-harm, where bad actors actively encourage vulnerable young people to take their own lives.

As we mark our 20th anniversary in the industry, we’re not just celebrating longevity. We are reflecting on the single most critical lesson we have learned in the fight for online safety: technology is the weapon, but humanity is the intelligence, the conscience, and the core of any effective strategy.


From manual rules to intelligent tools

In the beginning, our work was a manual, painstaking art. Our first line of defense was the human mind, meticulously crafting thousands of tailored rules to find specific patterns in text. This foundational work was effective for a time — a digital game of cat and mouse against individuals. But the nature of harm is to adapt.

We saw it hide in plain sight. An innocuous acronym, “CTB,” began to appear. To an automated system, it was meaningless noise. But to a vulnerable person and the network targeting them, it was a sinister code: “catch the bus,” a goad to commit suicide. This was a turning point. It proved that static rules were not enough. We realized we couldn’t just build a higher wall; we had to build a smarter shield.

This began our second decade, a chapter defined by enrichment and innovation. We invested heavily in machine learning, image analysis, and large language models. This shift allowed us to enrich the content we reviewed with contextual intelligence, going beyond surface-level signals.

These tools didn’t replace our analysts — they became a force multiplier. They empowered our teams to see deeper, act faster, and understand threats across linguistic and cultural boundaries. This fusion of human expertise and technological power became our core philosophy.

A deeper definition of safety

Detecting harm is only one part of the equation. True safety requires a more systemic, more compassionate approach. We learned that playing whack-a-mole with harmful content was a losing battle. We had to go further. Our strategy evolved to focus on two key principles. First, we moved from mere takedowns to active support, ensuring that every intervention also signposted vulnerable individuals to life-saving help. Second, we shifted our focus from individual pieces of content to the actors behind the risk. This meant understanding who they are, where they operate, and the networks they form.

This actor-centric view, powered by actor-level aggregation, allows us to see the bigger picture. It enables us to move beyond simply removing a single post and instead dismantle entire networks, disrupting their ability to cause harm at scale.

This work carries a heavy human cost. The psychological toll is not confined to the front-line analysts reviewing harrowing content. It extends deep into our organization, affecting the software engineers who build detection models and the risk architects who design the logic to intercept harm.

We knew we had a profound responsibility to protect our protectors. That’s why we built a comprehensive well-being program — a three-pronged approach of preventative hiring, proactive resilience training, and reactive access to mental health professionals for those who need it.

Our vision for the next 20 years

If two decades on the front line have taught us one thing, it is this: an algorithm cannot, on its own, understand the sinister nuance of a meme or the coded danger in a new slang term. It cannot, on its own, care.

This is where the future of trust and safety lies. The only path to truly effective AI is through human expertise. Technology provides the scale, but it is our dedicated analysts — people who are deeply knowledgeable and care deeply — who identify emerging threats and provide the critical ground truth needed to teach our systems. An AI is only as effective as the people who train it.


This human-centered approach to technology is the lesson we have carried through 20 years of change. It is the core of Resolver’s philosophy, and it is the principle that will guide us as we lead the development of truly intelligent, compassionate safety systems for the next 20 years.

As we reflect on two decades of protecting children and safeguarding online spaces, we’re also looking ahead to the next frontier of Trust & Safety: the proactive, intelligent elimination of CSAM. Resolver’s new Unknown CSAM Detection Service represents the culmination of 20 years of learning, evolving, and purpose. It’s built to identify, prevent, and remove child sexual abuse material at speed and scale, while protecting the humans behind the screen.

Learn more about how we’re redefining child safety for the next generation.

More from Trust & Safety’s 20th Anniversary series:

  1. Two Decades of Protection: Resolver’s Constant Evolution in Online Child Safety
  2. What We Call Threats: Evolving Taxonomies and the Role of Regulation
  3. From “Chicken Soup” to Catastrophe: The Dangers of an English-Only Trust & Safety Model