February 2022 marks the 19th annual Safer Internet Day, but this year feels different.
After years of successfully raising awareness of online safety and global discussion about how to improve it, 2022 will be a year of action for online platforms, governments, and organizations committed to creating a safer internet.
Consider the following three trends:
- Acceleration: Online harms have dramatically increased during the pandemic.
- Regulation: More governments are proposing and passing online safety bills.
- Transformation: Safety Tech is now an established industry that is constantly innovating.
Although these three trends are converging to make the internet safer, questions remain. Will Trust & Safety teams be able to keep up with accelerating online harms? Which regulations will pass and require immediate attention from online platforms? And what changes will bad actor groups, who prey on the most vulnerable, make to their tradecraft in order to evade detection?
Here’s what we know today and what we can expect in 2022.
Acceleration—online harms show no signs of slowing
COVID-19 accelerated our online lives faster than anyone could have anticipated, speeding up the online revolution by perhaps as much as five years. Two years into the pandemic, we are accustomed to a much more dynamic online experience than ever before and exposed to a growing number of different online safety risks.
Top of mind are new online child safety concerns. Not only are more minors online, but there are also more offenders at home, unobserved, and spending more time seeking victims.
Resolver recently worked closely with WeProtect Global Alliance and PA Consulting Group to provide data and perspective on this critical global risk to children. We were troubled to find that the pandemic has allowed offender tactics to evolve, resulting in new and previously unseen forms of crime.
Using first-party data, we found several types of growing risks to children. They include the tools offenders use to exploit children, masked language techniques, the role of cloud sharing, and the retraumatization of survivors through offender-created fake profiles.
- Offenders continually seek new tools. Over half (56.8%) of all discussion on known offender dark web forums was about new tools that allow detection evasion and make offending more secure. (The two other discussion categories were social media platforms [32.8%] and direct messaging [10.4%].)
- Masked language hides harmful content in gaming. In gaming, veiled or hidden use of grooming and child sexual abuse material (CSAM) terminology grew by more than 13% from 2019 to 2020, leading us to discover 50% more harmful content.
- Cloud sharing fuels interactions with harmful content. Resolver’s analysis showed that instances of user engagement with harmful content relating to child sexual exploitation and abuse (CSEA) surged from 5.5 million in Q1 2020 to nearly 20 million in Q1 2021.
- Offenders retraumatize survivors using fake profiles. Many online offender groups list known CSAM survivors and their online preferences. In Q1 2021, Resolver identified 3,324 unique pieces of posted content, each with as many as 2,000 interactions. This “network effect” illustrates the malignancy of that sharing, with each interaction perpetuating the exploitation of the survivor.
Children are not the only ones at risk. There are also new risks amplified by digital chatter, such as public health misinformation, which simply did not exist at this scale before. Bad actor groups have taken advantage of the pandemic to spread health-related and other misinformation, and as the number of bad actors around the globe grows, so do the speed and scale of the harmful content they spread.
During these first two turbulent pandemic years, COVID-19 pushed the discussion of online safety to the fore, and it remains a priority for the public and for national policymakers. Regardless of the pandemic’s future, these risks will continue to permeate our digital world.
Regulation—more regulatory bills and policies are coming
While government agencies have discussed regulation for more than five years, we are now seeing conversations move from consultations into more tangible policy papers and draft bills. In 2021, governments, tech platforms, civil society, and, most importantly, users began planning for when, not if, these regulatory changes will happen. They are discussing what the various regulations will cover and, critically, how they will be implemented.
Here are some prominent examples of government regulatory action in 2021:
- Australia’s Parliament passed its Online Safety Bill in June 2021, making online service providers more accountable for the online safety of those using their services. It covers cyberbullying and image-based abuse and puts forth a strong set of definitions of harmful content.
- The UK Government’s Online Safety Bill, proposed in May 2021, is being widely discussed. Its specific mandates seek to make the UK the “safest place in the world to be online while defending free expression.”
- In the EU, the Digital Services Act and Digital Markets Act put forth a single set of new rules applicable across the entire EU. They aim to create a safer digital space for users and a level playing field that boosts innovation, growth, and competitiveness by allowing smaller companies and startups to compete with the very large players.
- The U.S. has introduced initiatives to increase online safety and regulation of the internet. Many of these initiatives are starting at the state level or revisiting Section 230, such as the recent reintroduction of a bill designed to open the door for federal and state lawsuits against online companies that host child sexual exploitation content.
What seems like isolated, market-by-market actions will likely begin coalescing into a global precedent for online harms policies. Expect to see more of them in 2022, pressing online platforms to assess their Trust & Safety capabilities against a global map of requirements.
Transformation—new Safety Tech innovations are here
In 2021, the Safety Tech community gained global recognition as a formal sector in the UK, the U.S., and beyond. According to the UK government, Safety Tech was one of the fastest-growing sectors of the UK tech industry last year, with revenues increasing 40% and 500 new jobs created.
According to the cyber and advanced technology investor Paladin Capital Group, the U.S. now has a large and fast-growing Safety Tech industry as well. It includes 8,800 Safety Tech professionals spread across more than 160 dedicated online safety technology businesses. That’s up from fewer than 100 businesses just five years ago.
That certainly reflects the growing need to address online safety, which has also sparked rapid AI and machine learning innovation. However, digital transformation alone will not solve the online safety problem. Transformation requires global collaboration and continued, proactive intelligence gathering on the behaviors and tradecraft of bad actor groups.
No individual entity will win this battle on its own. Safety Tech is a new and evolving landscape, and now more than ever, we will need different stakeholders to work together in new ways. To be truly successful, we as a global community must align policy, operational, and AI teams to make robust and evidence-driven decisions and interventions.
We understand this at Resolver and are already working with our Trust & Safety customers on the transformation needed to be at the cutting edge of these new, safety-related technologies. Our experience across gaming and immersive worlds, our effectiveness against new risks such as medical misinformation, and our truly global understanding of culture and context will allow us to continue robustly supporting our customers through these growing challenges.
2022 is certainly going to be a year of action. Resolver looks forward to working further with our global partners across government, big tech, civil society, and NGOs to create a safer digital world for everyone.