COITP: How predators hide in plain sight

Resolver

The proliferation of Child Sexual Abuse Material (CSAM) is one of the most damaging forms of online harm. Predators who distribute such content continuously evolve their tactics to acquire, consume and share harmful material while evading identification and enforcement by platforms and the authorities.

One popular tactic that allows online predators to skirt platform moderation is to leverage the large volumes of legal content uploaded by users across major social media platforms and “curate” this material for consumption by the offender and a wider community of predators online. Common examples of such legal content, also known as ‘content of interest to predators’ (COITP), include everyday images and media depicting minors in a wide variety of sporting settings, taken from innocent online photo albums uploaded by users across social networking platforms. COITP also acts as a gateway to more egregious harms, including the targeted grooming of minors frequenting social media platforms.

This methodology exploits gaps in the trust and safety policies enforced by mainstream and alt-tech platforms, allowing the malicious networks that distribute this material to operate across a diverse range of social media platforms, content hosting, and private messaging services.

Curating COITP

The “curation” of COITP represents one of many online harms targeting minors congregating in online spaces.

COITP largely comprises legal content that poses little to no risk when considered in isolation. In fact, even when curated in volume, such content is likely to remain unnoticed by most users frequenting social networking sites. As a result, assessing individual pieces of content is no longer effective in identifying these types of predator networks. Instead, moderators should focus on collections of content and on the processes these groups use to collate and distribute it.
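As a purely illustrative sketch of what a “conduct over content” approach might look like, the Python below scores an account on curation behavior, such as collection size, growth rate and cross-platform redistribution, rather than on the severity of any single item, and queues high-scoring accounts for human review. All signal names, weights and thresholds here are assumptions for the sake of example, not Resolver’s actual methodology.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    """Hypothetical behavioral signals for one account (illustrative only)."""
    items_curated: int               # media items saved or reposted into collections
    items_added_last_7d: int         # recent collection growth
    share_of_minor_subjects: float   # 0..1, fraction of curated media depicting minors
    outbound_platforms: int          # distinct external services the collection is reposted to

def curation_risk_score(a: AccountActivity) -> float:
    """Combine conduct signals into a 0..1 score; weights and caps are assumed values."""
    volume = min(a.items_curated / 500, 1.0)           # unusually large collections
    velocity = min(a.items_added_last_7d / 100, 1.0)   # rapid collation
    spread = min(a.outbound_platforms / 5, 1.0)        # cross-platform distribution
    return (0.4 * volume + 0.3 * velocity + 0.3 * spread) * a.share_of_minor_subjects

def triage(accounts: dict[str, AccountActivity], threshold: float = 0.5) -> list[str]:
    """Return account IDs whose conduct, not any single item, warrants analyst review."""
    return [acct for acct, activity in accounts.items()
            if curation_risk_score(activity) >= threshold]
```

The point of the sketch is that no single image needs to be violative for the account to surface: the collection-level behavior is what triggers review.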

According to John-Orr Hanna, Chief Intelligence Officer for Resolver, a Kroll business, ‘as platforms continue to incrementally improve their ability to identify and remove known and unknown CSAM, Resolver has noted an increase in offenders curating large collections of legal imagery of children.’ COITP is rarely created by the predators themselves, allowing malicious networks engaging in its distribution to hide in plain sight while still operating across a diverse range of online communication, networking, and content hosting services.

The proliferation of COITP occurs ‘wherever there is an open door’, says David Hunter, VP of Platform Trust & Safety at Resolver, adding that ‘bad actors have been observed to create and consume these collections on any platform which allows such behavior. This includes content curation platforms, file storage, video sharing platforms (VSPs), forums and even open storage buckets belonging to general blogs and websites.’


Gateway to more explicit harms

The threat posed by COITP is further magnified by a growing body of research suggesting that such content acts as a gateway to more egregious content and offline harms.

In particular, John-Orr adds, ‘whilst the CSAM element (as defined in law) is critical it is also important for platforms to take a wider view of the problem space and focus on those users who are exploiting platform features to collate content for consumption. In many cases these COITP signals and behaviors often indicate wider intent’.

Cover page for report investigating CSAM user behavior on the dark web. (Source: Protect Children)

In September 2021, Protect Children, a Finnish NGO, conducted a series of surveys on the dark web that sought to gain insight into the behaviors of individuals who consume CSAM imagery. This analysis revealed that 52% of respondents claimed to have felt afraid that viewing CSAM might lead to sexual acts against a child. A further 44% of respondents said that viewing CSAM made them think about seeking direct contact with children, while 37% claimed to have sought direct contact with children after viewing CSAM.

The organization went on to conclude that ‘while a direct causal connection between CSAM use and direct offending cannot be drawn’, the results ‘indicate a significant correlation between the two’, adding that ‘this is especially the case if certain predisposing facts exist’.

Creating communities of predators

When a predator discovers an image of a child that brings them sexual gratification, they can often find other offenders by reviewing the comments, likes, follower/subscriber lists, or other platform features associated with the post or account.
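One way a platform could surface such clusters is to look at co-engagement: accounts that repeatedly like, comment on, or follow the same curated collections. The sketch below is a minimal, hypothetical illustration; the shape of the engagement data and the `min_shared` cutoff are assumptions, not a description of any specific platform’s tooling.

```python
from collections import defaultdict
from itertools import combinations

def co_engagement_pairs(engagements: dict[str, set[str]],
                        min_shared: int = 3) -> dict[tuple[str, str], int]:
    """
    `engagements` maps a flagged collection/post ID to the set of accounts that
    liked, commented on, or followed it. Returns account pairs that co-engaged
    with at least `min_shared` flagged collections (illustrative cutoff), so the
    wider network, not just the original uploader, can be queued for review.
    """
    pair_counts: dict[tuple[str, str], int] = defaultdict(int)
    for accounts in engagements.values():
        for pair in combinations(sorted(accounts), 2):
            pair_counts[pair] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_shared}
```

Pairs that repeatedly surface together across flagged collections could then be escalated alongside the original accounts for analyst review.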

COITP imagery represents a significant threat to the privacy of individual users frequenting popular social networking sites. By amplifying innocent images of users as part of their curated collections, predators are driving large volumes of views and engagement to images and media that were originally intended for very small and selective audiences.

The distribution of this content can also lead to the doxxing of child victims, exposing them to a wider audience of malicious actors operating online. Finally, by leveraging everyday photos intended for sharing with family and friends, predators can exploit online communication and networking platforms to gain a “shop window” into the normal family lives of others for their own sexual gratification.

Augmenting capabilities with Gen AI

The widespread accessibility of generative machine learning models (Gen AI) has also augmented the capabilities of predators to generate COITP and other harmful imagery such as CSAM.

In particular, Resolver has identified several instances of offender networks sourcing images and videos of minors from social media to create a database that is subsequently used to generate harmful synthetic media for their personal consumption. The availability of tutorials on mainstream and alt-tech social networks that teach new users how to exploit image generator services and create bespoke Low-Rank Adaptation (LoRA) models for generating COITP has exacerbated this trend, spawning a thriving open source community of users dedicated to expanding and refining such capabilities over the long term.

Example of a post on a dark web forum discussing the use of Gen AI technology to produce bespoke synthetic imagery of minors. (Source: Resolver)

The increasing sophistication and use of Gen AI technologies by malicious actors to generate synthetic COITP presents a significant risk for platform officials, particularly as offenders exploit loopholes in the trust and safety processes adopted by most mainstream platforms to govern what constitutes violative generated content.

Conclusion

Addressing the challenge posed by the curation of COITP will require trust and safety and law enforcement professionals to upgrade their technical solutions and place greater emphasis on conduct over content in their platform enforcement strategies. Rather than looking at individual images or videos in isolation, there is a need to take a more holistic view of community behavior to identify harm.

Given that most mainstream platforms do not currently prohibit the curation of content which falls below the legal CSAM threshold, especially content that is not explicit, addressing the proliferation of COITP will require moderators to move beyond reviewing the “severity” of content in isolation. Instead, their analysis should emphasize insights drawn from network and human intelligence that examine the curators’ intent.

Resolver’s Platform Risk Intelligence product employs a human-assisted AI approach, combining AI and machine learning algorithms trained since 2005 to track online risk signals with network and human intelligence. This approach assesses, analyzes and triages potential threats. Our analyst-in-the-loop approach ensures that our partners are always the first to know and first to act on the highest-impact risks to their platforms.
