Today, March 18th, is National Child Exploitation Awareness Day in the UK. The day, first introduced in 2014 by the National Working Group Network (NWG), aims to encourage people to “think, spot and speak out” against abuse and to adopt a zero-tolerance approach to adults developing inappropriate relationships with children, or to children exploiting and abusing their peers.
The NWG, founded in 2008, is a UK-based charity that works to combat child sexual exploitation (CSE). The organization was created when awareness of online child exploitation was low and social media was still in its infancy: Facebook had only just reached 100 million users, Twitter (as X was formerly known) and YouTube were only a few years old, Instagram was still two years away and TikTok eight, while the regulations governing such platforms had barely begun to take shape.
At this time, understanding of child exploitation was far more localised to one’s immediate community; every school provided “Stranger Danger” lessons, and parents warned their children against accepting sweets and treats from people they did not know.
Thankfully, 17 years later, awareness has grown; but in the 11 years since the first National Child Exploitation Awareness Day, so, unfortunately, has the scale of abuse. The proliferation of the internet and the ubiquity of social media platforms have given predators unprecedented global access to children, making law enforcement efforts that much more challenging.
The Evolution of Child Exploitation Online
Whilst the technological differences between 2008 and 2025 are vast, awareness of child exploitation online has also grown; terms such as ‘CSAM’ (Child Sexual Abuse Material) and ‘predator’ have entered everyday language, and global media outlets are highlighting these issues. “Devil in the Family: The Fall of Ruby Franke”, a recent streaming documentary detailing how a family-vlogging influencer abused her children, has shone a powerful spotlight on the problem.
Whilst technology is continuously changing, the essence of child sexual abuse has not. Predators, whether online or not, have always existed, and their grooming tactics remain consistent. The rise of social media has only changed the vehicle of predation. It’s about power, it’s about manipulation and it’s never the child’s fault.
The development of technology has amplified everything: it has expanded offenders’ access to children and increased the scale of abuse, but it has also increased the intelligence opportunities to identify offenders, prevent offences and prosecute them where they do occur.
However, this is complex. In 2008, online child exploitation was far more localised: predators offended within their own communities, whereas they now have access to children all over the world. With global communities and global technology platforms, investigations spanning different national legal jurisdictions and regulatory frameworks can become extremely complicated. Prevention becomes ever more important.
This week, the UK’s Online Safety Act (OSA) comes into force, mandating that platforms block, report and remove known CSAM and put prevention practices in place. While a serious obligation, this only addresses previously identified content, leaving a gap as new CSAM is constantly created. It is a step in the right direction, but it is only the first step in a long journey.
Predatory Behavior: What Content Moderation Misses
Predators use online platforms to abuse children, collect and trade illegal imagery and connect with other offenders. They actively evade detection, manipulating platform policies to maintain their access. While all social media platforms prohibit child sexual abuse and use technology to try to block CSAM, these tools detect only already-classified material, making them partially effective at best.
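To illustrate why such tools catch only previously identified material, consider the following minimal sketch (in Python, purely for illustration). It assumes a shared list of hashes of already-classified imagery, such as those distributed through industry hash-sharing schemes; the `KNOWN_HASHES` set and function names are hypothetical, and production systems use perceptual hashing (PhotoDNA-style) rather than the exact cryptographic hash shown here.

```python
import hashlib

# Hypothetical stand-in for a list of hashes of previously identified
# abuse material, as shared by hotlines and hash-sharing schemes.
KNOWN_HASHES: set[str] = {
    "placeholder_hash_1",
    "placeholder_hash_2",
}

def file_digest(data: bytes) -> str:
    """Exact cryptographic hash of an uploaded file. Real systems use
    perceptual hashes so that minor edits to an image still match."""
    return hashlib.sha256(data).hexdigest()

def is_known_material(data: bytes) -> bool:
    """True only if this exact file has been seen and classified before.
    Newly created abuse material is, by definition, not in the list,
    which is why hash matching alone leaves a detection gap."""
    return file_digest(data) in KNOWN_HASHES
```

Because the lookup can only succeed for material that has already been found, classified and hashed, newly created CSAM passes straight through this layer until someone identifies it.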
Research by Protect Children found that 29% of CSAM consumers encountered the material on social media platforms, and that “high numbers of respondents reported having at some point felt afraid that they may, thought about or directly contacted children after viewing CSAM”. Content moderation helps block illegal content, with platforms using automated detection and human review to enforce guidelines. Community guidelines are written around content, with the underlying question: “do we want this content posted on our platform?”
From these community guidelines, platforms have built detection capabilities and reporting mechanisms, meaning content should be automatically detected and removed if it violates policy. Content that does not automatically qualify for enforcement is put before a moderator, who assesses it for the most appropriate action, be that removal, fact-checking or another measure.
This is the next barrier platforms put up against child sexual abuse and child exploitation. The dual process results in the majority of violative content being removed and, where the content is particularly egregious, in the user’s account being terminated and reported to law enforcement. Together, these capabilities enforce each platform’s community guidelines.
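As a rough illustration of that dual process, the sketch below shows one way such routing might be wired together. The thresholds, labels and escalation actions are assumptions for illustration only, not a description of any specific platform’s pipeline.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    ALLOW = auto()
    REMOVE = auto()
    HUMAN_REVIEW = auto()
    REMOVE_TERMINATE_AND_REPORT = auto()

@dataclass
class Assessment:
    matches_known_hash: bool  # hit against a list of known material
    classifier_score: float   # 0..1 estimated likelihood of a violation
    egregious: bool           # e.g. suspected child sexual abuse content

def route(a: Assessment,
          auto_threshold: float = 0.95,
          review_threshold: float = 0.5) -> Action:
    """Route content to automated enforcement or human review.
    Threshold values are illustrative assumptions."""
    if a.egregious and (a.matches_known_hash or a.classifier_score >= auto_threshold):
        # Confidently detected egregious content: remove it, terminate
        # the account and escalate to law enforcement / reporting bodies.
        return Action.REMOVE_TERMINATE_AND_REPORT
    if a.classifier_score >= auto_threshold:
        return Action.REMOVE
    if a.classifier_score >= review_threshold:
        # Not clear-cut: queue for a human moderator to choose removal,
        # fact-checking or another measure.
        return Action.HUMAN_REVIEW
    return Action.ALLOW
```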
Predatory users work very hard to understand how these systems operate and ensure they can maintain a presence on a platform without being detected by what they post. They want to stay on these platforms because that is where they can access children and content posted by or containing children. They have become increasingly adept at this and Resolver Trust & Safety Intelligence continuously monitors how they adapt to new detection capabilities or the implementation of new policies.
It is only right that CSAM and other egregious child abuse content sits at the forefront of content policies, but as predatory users adapt to them, enforcement becomes a game of whack-a-mole, leaving a gap in which predators proficient at skirting community guidelines can operate.
Whilst content-based policies limit egregious content, an enforcement strategy that incorporates behavioural moderation focuses on understanding a user’s conduct and context. One simple and commonly employed methodology, for example, is to review the number of moderation actions enforced against a particular user and assign a rating or score to their account. Another is to track user reports, applying a ‘three strikes and you’re out’ policy to accounts repeatedly flagged as problematic by the community.
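A minimal sketch of those two metric-based approaches, with hypothetical weights, field names and strike limit, might look like this:

```python
from dataclasses import dataclass

STRIKE_LIMIT = 3  # "three strikes and you're out" (illustrative value)

@dataclass
class UserRecord:
    user_id: str
    moderation_actions: int = 0  # enforcement actions previously taken against the account
    community_reports: int = 0   # times the account has been flagged by other users
    strikes: int = 0

def risk_score(user: UserRecord) -> float:
    """Assign a simple weighted score based on past enforcement and reports.
    The weights are assumptions for illustration only."""
    return 2.0 * user.moderation_actions + 1.0 * user.community_reports

def register_report(user: UserRecord) -> str:
    """Track community reports and apply a three-strikes policy."""
    user.community_reports += 1
    user.strikes += 1
    if user.strikes >= STRIKE_LIMIT:
        return "suspend_account"
    return "log_and_monitor"
```

Even a simple score like this makes repeat offenders visible over time; its limitation, as set out below, is that it can only count behaviour that has already been detected or reported.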
Metric-based behavioural enforcement has an impact. However, some behaviour still goes undetected or unreported, the most serious of which is child grooming. This behaviour is not always obvious to an untrained eye and is highly unlikely to trigger content moderation action, as predators skilfully exploit veiled speech and other ‘tradecraft’, such as misrepresenting their age, in order to engage with children.
Resolver repeatedly sees examples of grooming on social media and gaming platforms resulting in ‘off-siting’, where contact migrates to a more private environment such as an encrypted messaging app. Reaching beyond reactive enforcement to proactively identify dangerous behavioural signals is the gold standard in creating a safe and secure environment for children on a platform.
Predatory Behavior in Plain Sight: Content of Interest to Predators
Additionally, Resolver encourages platforms to consider what we refer to as content of interest to predators (COITP). This term covers non-sexual, legal and often everyday lifestyle content featuring children that is consumed by predators for sexual gratification.
The vast majority of this content is innocently created by parents and children and posted in public spaces on social media platforms. However, Resolver has observed predators requesting that children make content which is non-egregious in itself but which they consume for sexual gratification, potentially avoiding moderation efforts.
The most commonly collected content includes images and videos of children doing gymnastics, playing in swimming pools, at the beach, in their school uniforms and interacting with adults. Our analysts observe predators consuming, curating and sharing large volumes of COITP among predator communities across social platforms for the purposes of sexual gratification.
We also identify predators engaging with children, making suggestions as to how they can make subtle changes to their content to increase their view count, followers or other online engagement. They often then move these conversations to encrypted messaging platforms where more egregious solicitation and grooming can take place in a lower risk environment for the predator.
Given that most mainstream platforms do not currently prohibit the curation of COITP, especially where that content is not explicit, addressing the predatory consumption of content depicting children requires an intelligence-led, cross-platform approach to detection and enforcement. COITP acts as a ‘gateway’ to predation and speaks to the importance of behavioural detection alongside content moderation in keeping a platform safe for children and in disrupting predators’ efforts to establish seemingly benign relationships that enable abuse elsewhere.
A Global Problem Requires a Global Solution
Resolver provides partners with cross-platform, behaviour-based intelligence assessments of the potential harm posed by users identified as a risk to children. This enables a holistic assessment of risk and, where policies are in place, ensures those users are removed from the platform.
We monitor for signals of child predation in whatever form they take and provide robust feedback on the efficacy of a platform’s technical capabilities in addressing both content and behaviour of concern. Crucially, we look beyond the aperture of law enforcement to identify and report on worrying indicators of risk, such as users seeking and assembling COITP, which is problematic in itself and can be a precursor to even more serious risks to children.
If an offender in one country can abuse a child in another country, we are all invested in solving this problem. We need global solutions to a global problem. As Akhim Dev, director of the documentary “The Children in the Pictures”, said in his speech to the UN, “it takes a village to raise a child, but it’ll take a global village to keep that child out of the pictures.”
Enhancing Online Safety with Resolver
For platforms, Resolver provides in-depth and holistic Trust and Safety Intelligence, well suited to mitigating the misuse of their platform or service by predators and others looking to exploit children.
Our human-in-the-loop methodology blends automated detection with threat-actor intelligence drawn from a team of experienced human analysts, ensuring our partners are always the first to know and the first to act, while our forward-looking insights help your Trust and Safety teams stay compliant with online safety legislation coming into force in 2025.