The UK Government just took another meaningful step toward addressing online abuse, publishing a draft Bill that sets out comprehensive legislation across a range of online harms and puts digital platforms, especially social media sites, on notice.
Since 2005, Resolver has worked alongside the UK Government to make sure legislation keeps up with changes in the online environment, originally collaborating on the country’s first online child protection laws and more recently advising on the real-world challenges of implementing the recommendations in the Online Harms White Paper. In particular, Resolver has advised that legislation should focus squarely on the bad actors who cause the harms – not solely on the destinations for the content they post.
During that period, we’ve witnessed a dramatic rise in online activity among bad-actor, agenda-driven activists and interest groups. At a time when platforms are being called upon to double their online safety efforts, these organized groups are exploiting them to coordinate and execute harmful narratives or, worse, to put those most vulnerable in harm’s way.
Trust & Safety teams are constantly challenged to identify and counter the individuals and groups who manipulate and exploit their platforms. The global reach of these actors and their constantly evolving tradecraft create a difficult environment for every social media, video sharing, gaming, dating, review and specialty platform.
It has also placed platforms’ reputations under a public microscope. The Online Safety Bill represents another milestone in the Government’s fight to keep UK users safe from online harms. It also represents a further increase in the already significant Trust & Safety responsibilities of social platforms.
What’s expected of platforms
The UK Government’s Online Safety Bill is the culmination of two years’ work in response to the issues first raised in the April 2019 Online Harms White Paper. According to the Bill, platforms would be required to:
- Consider the risks their sites may pose to the youngest and most vulnerable people, and act to protect children from inappropriate content and harmful activity.
- Take action to tackle illegal abuse, including swift and effective action against hate crimes, harassment and threats directed at individuals.
- Act on content that is lawful, but still harmful, such as abuse falling below the threshold of a criminal offense, encouragement of self-harm and mis/disinformation.
- Take responsibility for tackling fraudulent user-generated content, such as romance scams and fake investment opportunities posted by users.
The final legislation, when introduced to Parliament, will reportedly contain provisions that also require companies to report child sexual exploitation and abuse (CSEA) content identified on their services.
The Bill also assigns a financial consequence should platforms fail to fulfill their duty of care: fines of up to 10% of turnover (annual revenue) or 18 million pounds ($25 million), whichever is greater. Ofcom, the UK’s communications regulator, would also have the power to block access to sites that don’t comply with the new legislation.
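To make the penalty threshold concrete, here is a minimal sketch in Python; the turnover figures in the example are hypothetical, and only the 10% and 18 million pound thresholds come from the Bill itself.

```python
def max_fine_gbp(annual_turnover_gbp: float) -> float:
    """Maximum fine under the draft Bill: the greater of 10% of
    annual turnover (revenue) or a flat 18 million pounds."""
    return max(0.10 * annual_turnover_gbp, 18_000_000)

# Hypothetical example: a platform with 500 million pounds in annual
# turnover faces up to 50 million pounds, since 10% of turnover
# exceeds the 18 million pound floor.
print(max_fine_gbp(500_000_000))  # 50000000.0
print(max_fine_gbp(100_000_000))  # 18000000 -- the flat floor applies
```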
The draft Bill goes on to reserve powers for Ofcom to pursue criminal action against named senior managers whose companies do not comply with its requests for information. That option would only come into effect if platforms fail to comply, following a review slated for two years after the new regulations are put in place.
A difficult duty of care
In addition to tackling a gamut of online harms, the Bill also calls for safeguards around democratic and journalistic content, ensuring freedom of expression is maintained while a platform fulfills its duty of care. This highlights a growing challenge facing today’s Trust & Safety teams: how to keep users safe from harm while also protecting their freedom of speech.
Social media platforms in particular are ground zero for societal issues, including polarizing conversations about vaccinations, social justice, values and politics. While illegal content is a clearly defined online harm, many platform conversations require additional background on the issues and individuals involved to distinguish harmful content from merely unpleasant content.
In other words, language is complicated. The UK Government’s announcement recognized this challenge, going so far as to say safeguards “might include having human moderators take decisions in complex cases where context is important.” Conversely, measures to automate the removal of harmful content could backfire, such as “AI moderation technologies falsely flagging innocuous content as harmful.”
But our experience has shown that the solution is understanding the context and the individuals behind the harmful or hateful content. The recent online abuse directed at footballers is a case in point: various groups outside the sport exploited football’s huge media profile as a vehicle to spread harm and hate across social media platforms.
Bad actors evolve and so do their strategies for inflicting harm on others. It is critical for platforms to have the latest intelligence on their tactics and tradecraft.
A window of opportunity
With the stakes higher than ever for getting this right, now is the time for Trust & Safety teams to revisit how they address online harms as this new legislation, which has a high likelihood of approval, moves through review.
By working with an experienced risk intelligence partner who understands which actors and groups originate and amplify the wide spectrum of online harms, platforms can gain an early-warning advantage over illegal and harmful content. They can remain compliant with the requirements outlined in the new UK Online Safety Bill. And, most importantly, they can keep their users safe.
At Resolver, our 15 years of experience have led to the creation of our Risk Intelligence Graph, which allows us to identify organized groups, and the actors within them, that create and share hateful and harmful content online. These groups use high-profile issues such as Covid-19 and racism against footballers to spread disinformation; by identifying the individuals within these groups, we can help platforms be the first to act and prevent online harm before it ever happens.
As a result, our Platform Trust & Safety solution has contributed to the safe, daily online experiences of more than two billion users, covering an estimated 450 million children. To find out how Resolver can help your platform, contact us today.