From sextortion to generative AI: emerging threats targeting minors online in 2023

Resolver
November 2, 2023

Digital threats to children online are escalating, according to the 2023 Global Threat Assessment report released by the WeProtect Global Alliance. The report documents an alarming increase in the scale and complexity of the digital threats facing minors and calls for the urgent adoption of several corrective measures to address them.

These measures include the implementation of safety by design, the alignment of legal and regulatory frameworks governing online harms across the globe, and the application of public health approaches to violence prevention, among others.

Cover of the WeProtect Global Threat Assessment 2023

As a stakeholder in the online trust and safety community, Resolver, a Kroll business, contributed insights and data points to this year’s Global Threat Assessment report, drawn from multiple sources including social media, gaming platforms, and online communities on the deep and dark web. We reiterated the call for action aimed at preventing the continued victimization of minors online.

Beyond a sharp spike in the volume of child sexual abuse material (CSAM) reports to regulatory agencies across the globe (including an 87% increase in the number of relevant reports processed by the US National Center for Missing & Exploited Children (NCMEC) since 2019), the adoption of generative technologies such as large language models (LLMs) and text-to-image services by predator communities has also enabled new forms of online abuse. The cross-platform trade of self-generated CSAM, the grooming and financial sexual extortion of minors on popular social media and gaming platforms, the curation of content of interest to predators (COITP), and the proliferation of AI-generated CSAM are all of grave concern.

What are the key trends in CSAM in 2023?

1. Children from marginalized and economically underprivileged communities are at increased risk of online sexual harm

Increased internet access among minors in recent years has exposed them to a wider range of risks, including sexual exploitation and abuse. According to the latest figures compiled by The Economist Impact, 54% of respondents aged 18 to 20 globally had experienced at least one type of online sexual harm. The same study found that respondents who self-identified as part of a minority group were more likely to have experienced online sexual abuse, with 65% of LGBTQ+ respondents reporting having suffered such harm.

2. Increase in ‘self-generated’ CSAM

According to data collected by the Internet Watch Foundation (IWF), the proportion of webpages actioned by the organization that featured ‘self-generated’ sexual imagery rose from 27% to 78% between 2018 and 2022. Children aged 11-13 appear most often in reports of such imagery, with girls in this age group representing 50% of all reports actioned by the IWF in 2022.

3. Minors at increased risk of grooming and financial sexual extortion on social media and gaming platforms

NCMEC received more than 10,000 reports of grooming and financial sexual extortion in 2022, compared with just 139 reports the previous year, an increase of roughly 7,200%. The growing severity of these schemes, many reportedly orchestrated by offshore criminal syndicates, prompted the US FBI to issue a public safety alert. Online multiplayer games also expose children to sexual harm at alarming speed: a Resolver investigation found that online predators can lock minors into high-risk grooming conversations as little as 19 seconds after the first message, with the average grooming interaction in such environments lasting 45 minutes.

Statistics from the WeProtect Global Threat Assessment 2023

4. AI-generated CSAM represents a new and fast-developing form of abuse

The past year has seen a growing number of cases in which online predators have misused popular generative AI services to create CSAM for both personal and commercial use. In a five-week period in 2023, the IWF investigated 29 reports of URLs suspected of containing AI-generated CSAM, seven of which were found to contain synthetic media. The same investigation revealed that some offenders had posted this synthetic media on image-sharing platforms while promoting links to content depicting “real children” hosted on other platforms, some of it behind a paywall.

Conclusion

The 2023 Global Threat Assessment highlights the growing volume and diversity of threats facing minors in online spaces. Financial sexual extortion of minors on social media and gaming platforms and the misuse of generative AI to produce CSAM have both grown substantially over the past year and are likely to pose a significant challenge to enforcement and regulatory bodies over the long term. The report identifies three urgent corrective measures that can help address the risks posed by online predators on a global scale: broader investment in public health approaches, centering children’s rights and perspectives when prioritizing prevention-focused interventions, and global alignment on legislation governing online harms.
