Time for Platforms to Act: Navigating Ofcom’s New UK Online Safety Regulations

In a landmark move to protect users from illegal and harmful online content, Ofcom officially published the first set of codes of practice and guidance under the UK Online Safety Act (OSA) on December 16, 2024. This represents a significant step toward holding online platforms and services with users in the UK directly accountable for the safety and wellbeing of their users, especially children, women, and vulnerable individuals.

The new regulations require platforms, including social media, search engines, messaging apps, dating sites, and even file-sharing services, to take proactive measures to identify and address a wide range of illegal content, such as terrorism, hate speech, fraud, and child sexual abuse material (CSAM).

What Do These New Regulations Mean for Tech Firms?

Under the UK Online Safety Act, platforms must:

  • Conduct illegal content risk assessments to understand the threats posed by harmful material on their platforms
  • Appoint senior accountability figures who will be responsible for compliance with safety standards
  • Improve moderation capabilities, ensuring content is reviewed quickly and efficiently
  • Enhance reporting mechanisms, making it easier for users to flag harmful content and receive timely responses

Platforms must also take specific action to protect children from sexual exploitation and grooming and ensure greater transparency in how they moderate content.

For companies in scope of the OSA, the clock is ticking. They have until March 2025 to complete their illegal content risk assessments; from that point, they must have processes in place to mitigate illegal content risks and begin removing harmful material from their platforms, or face punitive financial sanctions for non-compliance.

What are the key actions required of tech companies under the UK Online Safety Act?

Risk assessment and compliance

Risk Assessment & Compliance Support: Platforms must conduct thorough risk assessments to identify illegal content such as CSAM, fraud, hate speech, and terrorist content. The complexity and volume of online risks make it difficult for platforms to stay compliant and accurately assess threats across diverse regions, languages, and user behaviours.
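
To make the duty concrete, here is a minimal sketch of how a platform might record and prioritise an illegal content risk assessment. The harm categories, scales, and scoring below are illustrative assumptions, not Ofcom’s prescribed methodology.

```python
# Illustrative only: a minimal illegal-content risk register.
# Categories, scales, and scoring are assumptions, not Ofcom's methodology.

RISK_SCALE = {"low": 1, "medium": 2, "high": 3}

# (harm category, estimated likelihood, estimated severity)
assessments = [
    ("CSAM",              "low",    "high"),
    ("fraud",             "high",   "medium"),
    ("hate speech",       "medium", "medium"),
    ("terrorist content", "low",    "high"),
]

def risk_score(likelihood: str, severity: str) -> int:
    """Simple likelihood x severity product; higher means more urgent."""
    return RISK_SCALE[likelihood] * RISK_SCALE[severity]

# Rank harms so mitigation work can be prioritised and documented.
for harm, likelihood, severity in sorted(
    assessments, key=lambda a: risk_score(a[1], a[2]), reverse=True
):
    print(f"{harm}: score {risk_score(likelihood, severity)} "
          f"(likelihood={likelihood}, severity={severity})")
```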

Moderation and reporting under the OSA

Enhanced Moderation and Reporting: Effective moderation is at the heart of the UK Online Safety Act. To meet these standards, platforms will need to understand the networks and actor behaviours on their platforms without sacrificing the quality or speed of moderation.
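
As a rough illustration of balancing speed and quality, the sketch below prioritises a queue of user reports by severity while letting older reports age upward so nothing is starved. The categories, weights, and field names are assumptions for demonstration, not a mechanism the OSA prescribes.

```python
# Illustrative sketch: prioritising a moderation queue by severity and age.
# Severity weights and field names are assumptions for demonstration.
import heapq
import time

SEVERITY = {"csam": 100, "terrorism": 90, "fraud": 50, "hate_speech": 40}

def priority(report: dict, now: float) -> float:
    # More negative = reviewed sooner; age nudges old reports up the queue.
    age_minutes = (now - report["reported_at"]) / 60
    return -(SEVERITY.get(report["category"], 10) + age_minutes)

now = time.time()
reports = [
    {"id": 1, "category": "fraud",       "reported_at": now - 3600},
    {"id": 2, "category": "csam",        "reported_at": now - 60},
    {"id": 3, "category": "hate_speech", "reported_at": now - 7200},
]

queue = [(priority(r, now), r["id"], r) for r in reports]
heapq.heapify(queue)
while queue:
    _, _, report = heapq.heappop(queue)
    print(f"review report {report['id']} ({report['category']})")
```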

The OSA mandates that platforms protect minors to stay in compliance

Protecting Minors & Vulnerable Groups with Human Intelligence: The protection of children and vulnerable groups online is at the heart of OSA regulations. However, fully automated tools often miss subtle signs of abuse, grooming, or other risky behaviours, leaving platforms exposed to potential harm. Protecting minors requires nuanced, empathetic approaches, particularly when it comes to identifying age-inappropriate material or safeguarding vulnerable groups from exploitation.
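
One common human-in-the-loop pattern is to let automation act only on near-certain cases and route ambiguous signals to trained reviewers. The sketch below is a hypothetical illustration of that routing; the thresholds and labels are assumptions, not Resolver’s or Ofcom’s actual rules.

```python
# Hypothetical human-in-the-loop routing; thresholds are assumptions.

AUTO_ACTION_THRESHOLD = 0.95   # near-certain violations actioned automatically
HUMAN_REVIEW_THRESHOLD = 0.40  # ambiguous band routed to trained reviewers

def route(item_id: str, model_score: float) -> str:
    if model_score >= AUTO_ACTION_THRESHOLD:
        return f"{item_id}: auto-action and log for audit"
    if model_score >= HUMAN_REVIEW_THRESHOLD:
        # Subtle signs of grooming or abuse often land in this middle band,
        # where human judgement outperforms automation.
        return f"{item_id}: escalate to human reviewer"
    return f"{item_id}: no action, keep monitoring"

for item, score in [("post-101", 0.97), ("chat-202", 0.55), ("img-303", 0.10)]:
    print(route(item, score))
```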

Platforms must take swift action against illegal content under the new OSA

Swift Response to Illegal Content: In today’s rapidly evolving digital landscape, tech companies must act quickly to address illegal content. Without real-time threat detection and alerting systems, platforms face significant risks and consequences, including legal penalties, reputational damage, and erosion of user trust.
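
For a sense of what real-time alerting means in practice, here is a minimal, hypothetical sketch: detections are timestamped the moment they are emitted so time-to-action can be measured and fed into compliance reporting. The names and structure are assumptions, not any specific product’s API.

```python
# Hypothetical real-time alerting loop; names and structure are assumptions.
import queue
import time
from dataclasses import dataclass, field

@dataclass
class Alert:
    item_id: str
    category: str
    detected_at: float = field(default_factory=time.time)

alerts: "queue.Queue[Alert]" = queue.Queue()

def on_detection(item_id: str, category: str) -> None:
    # Emit an alert as soon as content is flagged, not in a nightly batch.
    alerts.put(Alert(item_id, category))

def handle_alerts() -> None:
    while not alerts.empty():
        alert = alerts.get()
        latency = time.time() - alert.detected_at
        # Takedown or escalation would happen here; the measured latency
        # feeds transparency and compliance reporting.
        print(f"actioned {alert.item_id} ({alert.category}) in {latency:.3f}s")

on_detection("post-42", "terrorist content")
handle_alerts()
```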

Now Is the Time to Act

With Ofcom’s commitment to enforcing these new safety duties starting in March 2025, tech companies need to act now. The consequences of non-compliance can be severe, with potential fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater; in extreme cases, sites that fail to meet these safety standards can be blocked in the UK.

For platforms in the tech sector, the UK Online Safety Act isn’t just a set of regulations to comply with—it’s a call to create safer, more secure online environments. Resolver Trust and Safety is here to guide you through this process and ensure your platform meets the new legal obligations while providing a safe, enjoyable experience for your users.

Why Resolver Trust and Safety?

Resolver Trust and Safety has been helping partners across the tech sector, including VLOPs, VLOSEs, AI model makers, NGOs, regulators, and government bodies, navigate the complexities of online Trust and Safety for 20 years. By combining AI-powered technology with the expertise of a diverse, cross-disciplinary team of subject matter experts operating across multiple verticals, regions, and languages, Resolver helps our partners rise to meet their obligations under the UK Online Safety Act.

Our highly skilled in-house experts apply a human-in-the-loop methodology, pairing their judgement with bespoke AI algorithms developed and trained over the last two decades to efficiently identify harmful content across the surface, deep, and dark web. We apply behavioral intelligence, using cutting-edge data science and human analysis to illuminate harmful behaviors and malicious networks engaging in complex harms such as drug dealing, grooming, and abuse.
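
As a toy example of the kind of network signal such analysis might start from, the sketch below groups accounts that share an identical indicator (for instance, the same off-platform link) and flags clusters for human review. The data and the notion of an "indicator" here are illustrative assumptions; real behavioral intelligence is far richer than this.

```python
# Toy example: flag accounts sharing an identical indicator for human review.
# The data and the "indicator" notion are illustrative assumptions only.
from collections import defaultdict

events = [
    ("account_a", "hxxp://example-illicit.shop"),
    ("account_b", "hxxp://example-illicit.shop"),
    ("account_c", "hxxp://unrelated.example"),
    ("account_d", "hxxp://example-illicit.shop"),
]

by_indicator: dict[str, set[str]] = defaultdict(set)
for account, indicator in events:
    by_indicator[indicator].add(account)

# An indicator shared by several accounts suggests a cluster worth review.
for indicator, accounts in by_indicator.items():
    if len(accounts) >= 3:
        print(f"possible network around {indicator}: {sorted(accounts)}")
```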

By integrating human intelligence into every stage of detection, we provide a more nuanced, empathetic approach to protecting vulnerable groups, particularly minors, from harmful interactions online. Our real-time threat detection and alerting systems ensure swift responses to emerging risks, enhance transparency and reporting mechanisms, and minimize legal, reputational, and user-trust issues.

For more information on how we can help you stay compliant with the OSA, contact us today. Together, we can build a safer digital world.
