For 17 years, child safeguarding defined my career in law enforcement. It was a relentless, emotionally raw, yet deeply rewarding mission. Today, that enduring purpose is overshadowed by a terrifying reality: despite significant global efforts, the vast majority of child sexual abuse material proliferates unseen, an insidious echo in the digital machine that continues to haunt countless victims.
I vividly recall one case that stretched over three long years. When the guilty verdict was finally read, overwhelming relief washed over me, and I broke down. That immense journey left a weight I still carry, and it compels me to this day. I can only imagine the trauma those victims endure, a trauma that fuels my mission today. That deep-seated desire to prevent another child from walking that path remains my core motivation.
In my role at Resolver, this mission feels more urgent than ever. The online proliferation of Child Sexual Abuse Material (CSAM) has reached an unimaginable scale. It’s a deluge, a digital shadow stretching across the globe:
- In the UK alone, the Internet Watch Foundation receives a report every 74 seconds.
- A sobering report from the Financial Action Task Force, citing research from the University of Edinburgh, estimates that approximately 300 million children globally, or one in eight, are affected by online sexual abuse and exploitation.
These staggering figures dwarf the number of identified victims and catalogued files, showing that vast quantities of abuse are never captured at all.
For years, our primary weapon in this fight focused on the known. I witnessed the development of the UK’s Child Abuse Image Database (CAID). It was a revolutionary step forward, a game-changer. CAID dramatically reduced the re-victimisation of children, delivered the efficiencies needed to tackle the scale of the problem, and streamlined court proceedings. It was undoubtedly instrumental in catching thousands of offenders by matching known images and videos.
However, the core operational strength of these databases is also their fundamental limitation. Their power to efficiently manage and cross-reference millions of media files is constrained to what is already known. Novel CSAM passes through undetected.
Offenders are smart. They know that even a slight alteration to an image produces a new digital fingerprint, or ‘hash’, and that more deliberate edits can slip past even sophisticated perceptual hashing algorithms. As detection tools evolve, so do circumvention techniques. A strategy focused solely on refining known-match technologies will ultimately fail.
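To make that limitation concrete, here is a minimal sketch, using only Python’s standard library and a placeholder byte string standing in for a real image, of why exact-match databases are so brittle: flip a single bit and the cryptographic hash changes completely. This is a generic illustration, not CAID’s or Resolver’s actual tooling.

```python
import hashlib

# Generic illustration of why exact-match hash databases miss altered copies:
# a one-bit change to a file produces a completely different cryptographic hash.

original = bytes(range(256)) * 1000          # stand-in for an image file's raw bytes
altered = bytearray(original)
altered[0] ^= 0x01                           # flip a single bit -- imperceptible in a real image

print(hashlib.sha256(original).hexdigest())        # digest of the "known" file
print(hashlib.sha256(bytes(altered)).hexdigest())  # entirely different digest for the altered copy

# Perceptual hashing (PhotoDNA, pHash-style approaches) compares visual
# features rather than raw bytes, so it tolerates small edits -- but heavier
# cropping, re-encoding, or overlays can still push an altered copy outside
# the similarity threshold used to declare a "match". That threshold gap is
# where novel or deliberately modified material slips through.
```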
The true limitations of this approach were seared into my memory by a single case. An individual, identified through an undercover operation, was found to be abusing his own grandchildren. For 30 years, he lived a respected life in his community while secretly committing the most depraved acts. On his devices, we discovered hundreds of thousands of images and videos. None were in CAID. None were known. For three decades, he had operated with impunity.
That discovery brought home the terrifying reality: what CAID could see was merely the tip of a colossal iceberg. This is reinforced by INHOPE’s finding that 84% of material encountered by its hotlines is “new” – content that bypasses our defences simply because it has never been seen before.
This is where the fight becomes deeply personal, not only for those working the cases, but for the victims. I’ve had countless conversations with survivors and their families who ask the one question we can never properly answer: “Can you get the images taken off the internet?”
The honest answer is, bluntly, no. Once an image has been shared, it is practically impossible to stop it from being shared again. This reality inflicts a unique and perpetual form of trauma. For a victim, there is no peace, no closure. There is only the gnawing fear that tomorrow, or next year, or in ten years, the worst moment of their life could reappear on a screen, shared by a stranger. Their trauma is not a memory; it’s a digital ghost, an echo in the machine that never fades.
This status quo is neither sustainable nor morally acceptable. We cannot continue to chase shadows. We cannot build our entire child protection strategy on a foundation of reaction. Every day an image of abuse persists online, unidentified as CSAM, is another day an offender operates and another day a victim lives in fear.
The goal must be a world where a child’s trauma never becomes viral content; a world where we identify and stop the spread of unknown abuse material before it retraumatises a victim. This is the next frontier in child safeguarding. It’s a monumental technological challenge, but more than that, a moral imperative. For every child who has suffered, and for every child we can still protect, we must do more: we must start seeing the unseen.
This monumental challenge demands innovative solutions. At Resolver, we are committed to turning the tide against child exploitation by making it possible to see the unseen. Learn how our groundbreaking Unknown CSAM Detection Service, powered by the Roke Vigil AI CAID Classifier, is designed to detect and classify unknown CSAM at global scale.
About the Author: Jon Best is Vice President of Human Intelligence at Resolver, formerly Crisp. Before joining Resolver, he spent over 16 years with West Yorkshire Police, specialising in child safeguarding, intelligence, and complex investigations. Jon now leads efforts to advance technology and human expertise to better protect victims and prevent exploitation in the digital age.