This is the seventh installment in our series: “20 Years in Online Safety: Reflecting, Evolving, and Adapting,” leading to the 20th anniversary of Resolver’s presence in the Trust & Safety community on Nov. 25, 2025. Originally launched as Crisp Thinking in 2005, we’ve had multiple generations of online safety professionals carry the mission forward. For this series, we asked our team, managers, and leadership to reflect on the journey we’ve taken so far and the road ahead, as we continue in our core mission to protect children online.
In an age where “frictionless” has become the gold standard of design, slowing users down can feel counterintuitive. Convenience and speed are prized; hesitation is seen as a flaw. But in online safety, friction isn’t failure: it’s protection. It’s the small, intentional moments that give users pause to reflect and make safer choices.
As Resolver celebrates two decades of protecting children and shaping safer online spaces, we’ve learned that the most powerful safety interventions don’t always happen in detection systems or moderation queues. Sometimes they happen in the milliseconds between a thought and an action: the moment friction asks someone to stop and think.
In our work with global platforms, we’ve seen how quickly a friendly exchange can turn manipulative. Grooming rarely begins with overt harm; it starts with pace, trust, and uninterrupted access. For Trust & Safety leaders, that’s a design problem as much as a behavioral one. Every second of pause becomes a safeguard: a chance to interrupt escalation before your detection tools ever need to activate.
Why friction matters
Online grooming and other forms of harm thrive in environments built for instant connection — where messages flow freely, boundaries blur, and manipulation moves faster than detection. Analysts reviewing real cases describe the same pattern: rapid message pacing, emotional anchoring, and sudden shifts to private or encrypted spaces. Grooming depends on uninterrupted environments because speed silences reflection. Perpetrators exploit frictionless systems to build trust before safety tools or teams can intervene.
Early in our journey, we realized that blocking content alone couldn’t stop harm from spreading. Grooming and exploitation evolved faster than static rules could adapt. This marked a crucial turning point: shifting from defense to proactive investigation, where we began tracing behavior, not just words or content. Identifying grooming attempts, off-platform migration, and early risk signals became the blueprint for the proactive, predictive intelligence we deliver to partners today.
Introducing friction, whether a prompt, a pause, or a permission check, interrupts that rhythm. It breaks the cognitive flow and creates a protective moment: a space for reflection. A user who is asked “Are you sure you want to share this?” or “Do you know this person?” has a chance to reconsider. A predator whose message is delayed loses the sense of control that grooming depends on.
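To make the idea concrete, here is a minimal sketch in TypeScript of what such a friction gate might look like. Everything in it (the action names, the prompt copy, the delay values, and the `confirmWithUser` helper) is a hypothetical illustration, not a real platform API or Resolver’s implementation.

```typescript
// Hypothetical sketch: gating risky actions behind a reflective prompt and a pause.
// Action names, prompt copy, and delays are illustrative assumptions.

type RiskyAction = "share_personal_info" | "accept_unknown_contact" | "move_to_private_chat";

interface FrictionPrompt {
  message: string; // the reflective question shown to the user
  delayMs: number; // the minimum pause before the action can proceed
}

const PROMPTS: Record<RiskyAction, FrictionPrompt> = {
  share_personal_info: { message: "Are you sure you want to share this?", delayMs: 2000 },
  accept_unknown_contact: { message: "Do you know this person?", delayMs: 2000 },
  move_to_private_chat: { message: "This moves your chat somewhere more private. Continue?", delayMs: 1500 },
};

// In a real product this would render UI and await the user's choice; stubbed here.
async function confirmWithUser(prompt: FrictionPrompt): Promise<boolean> {
  await new Promise((resolve) => setTimeout(resolve, prompt.delayMs)); // the enforced pause
  console.log(prompt.message);
  return true; // stand-in for the user's actual answer
}

// The risky action only runs after the pause and an explicit confirmation.
async function withFriction(action: RiskyAction, proceed: () => void): Promise<void> {
  if (await confirmWithUser(PROMPTS[action])) proceed();
}
```

The point of the sketch is its shape, not its specifics: the action cannot fire until both the delay and the question have run their course, which is exactly the protective moment described above.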
When used thoughtfully, friction doesn’t frustrate users; it empowers them. It restores agency and awareness in a space where speed can so easily strip them away. For Trust & Safety leaders, the takeaway is clear: building in the right kind of friction isn’t a UX compromise; it’s a safeguard built into the fabric of platform design.
The psychology of a pause
Offline, we navigate behavior through subtle signals — tone, timing, and body language. A raised eyebrow, a pause in conversation, or a look of discomfort prompts reflection. Online, those cues disappear. Without them, impulsivity increases, and empathy fades. Behavior becomes detached from consequence.
Behavioral science describes this as the Online Disinhibition Effect — the way digital distance lowers inhibition and heightens risk-taking. When social feedback disappears, users feel less accountable for what they say or share. The result is faster escalation, reduced empathy, and greater vulnerability to manipulation.
Thoughtful friction reintroduces those missing signals. Nudges that ask a user to pause, or short message delays, act as artificial versions of that raised eyebrow — subtle cues that trigger the same cognitive shift from reaction to reflection. Even a small prompt like, “Would you like to rethink this post?” helps the brain move from impulse to self-awareness.
For young users especially, those pauses can be protective. They restore agency in moments of emotional or social pressure — giving a child time to recognize manipulation and choose to disengage before harm escalates. For Trust & Safety analysts, those micro-pauses also buy something critical: time. Time to detect risk patterns, flag grooming behaviors, and disrupt harm before it spreads.
Well-placed friction doesn’t limit connection — it rehumanizes it. It gives users back the pause that fast-moving design often takes away, and turns milliseconds into moments of safety.
Behind the design: Why friction works
Online friction is grounded in behavioral science. Research into cognitive processing and emotional regulation shows that micro-pauses interrupt impulsive decisions and restore reflective control.
Three key behavioral models illustrate why:
- Lawless Space Theory explains how digital environments erode shared norms of accountability. When tone, empathy, and timing cues disappear, users act without the social feedback loops that regulate behavior offline.
- The General Aggression Model (GAM) shows that even short delays between impulse and action disrupt emotional arousal cycles and re-engage empathy and self-regulation.
- The Behavioral Change Support System (BCSS) framework demonstrates how prompts and nudges can recreate the missing feedback loops of in-person interaction, turning interface cues into subtle reminders of presence and consequence.
From a broader perspective, research into brain-friendly design reinforces why friction matters. Studies of neurodevelopmental conditions, such as ADHD and autism, show that difficulties often arise not from the conditions themselves but from environments lacking supportive design, which increases vulnerability to both exploitation and reactive harm. These sensitivities are not isolated to clinical groups; they exist along a spectrum that affects all users, especially adolescents navigating rapid emotional and social development.
Friction points only work when they target known perpetrator pathways or coincide with high-risk decision moments. When applied well, a two-second delay or a simple reflective prompt can have an outsized impact. It’s not about slowing users down — it’s about giving the brain the space it needs to choose safely. For Trust & Safety teams, that cognitive pause is measurable prevention — the moment where design and behavioral science intersect to stop harm before it begins.
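As a sketch of what targeting high-risk decision moments could mean in code, the snippet below scores a few of the behavioral signals mentioned earlier (message pacing, account age, a shift toward private channels) and picks an intervention. The signals, weights, and thresholds are invented for illustration; they are not Resolver’s production logic.

```typescript
// Hypothetical sketch: applying friction only where risk signals concentrate.
// Signals, weights, and thresholds are illustrative assumptions.

interface InteractionSignals {
  messagesPerMinute: number;        // rapid pacing is a recurring grooming marker
  accountAgeDays: number;           // very new accounts carry elevated risk
  requestedPrivateChannel: boolean; // sudden shifts to private or encrypted spaces
}

type Intervention = "none" | "reflective_prompt" | "short_delay" | "escalate_to_review";

function chooseIntervention(s: InteractionSignals): Intervention {
  let risk = 0;
  if (s.messagesPerMinute > 20) risk += 2;  // unusually fast message pacing
  if (s.accountAgeDays < 7) risk += 1;      // account created within the last week
  if (s.requestedPrivateChannel) risk += 2; // attempted migration off the main channel

  if (risk >= 4) return "escalate_to_review"; // pause the flow and flag for analysts
  if (risk >= 2) return "short_delay";        // e.g. a two-second delivery delay
  if (risk >= 1) return "reflective_prompt";  // a gentle "are you sure?" nudge
  return "none";                              // good actors never feel the friction
}
```

The design choice worth noting is the default: most interactions return "none", so the cost of friction is paid only where the behavioral evidence warrants it.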
The OSA: When regulation meets design
The UK’s Online Safety Act (2023) formalizes what many in the safety community have long known: platform design choices influence user behavior. By requiring structured risk assessments and proactive mitigation, the OSA implicitly endorses behavioral safety mechanisms, like friction, as part of responsible design. Features like age-verification gates, content-sharing prompts, and controlled connection speeds are no longer seen as UX obstacles — they’re part of a platform’s duty of care.
However, the Act stops short of prescribing how friction should be implemented. This is where design integrity becomes critical. Used superficially, friction can devolve into a tick-the-box exercise — a meaningless speed bump that users quickly learn to ignore. Used intelligently, it becomes a context-aware safety control, targeting high-risk interactions, integrating with moderation workflows, and adapting as abuse tactics evolve.
Friction applied with intent doesn’t just slow users down — it proves due diligence. It shows that safety has been designed into the platform’s core architecture, not bolted on after harm occurs.
Resolver’s approach to friction
Online safety is both a technical and behavioral challenge. At Resolver, we’ve learned that friction must be designed, not imposed. It requires behavioral science, domain expertise, and intelligent systems working in sync to anticipate harm — not just respond to it.
Our behavioral, data, and engineering teams collaborate daily to refine interventions in live environments. Over two decades, that approach has shaped three core principles:
- Behavior-informed intervention design: Our analysts and subject matter experts study the behavioral patterns of both victims and perpetrators. Their insights shape every rule, flag, and prompt within our systems, ensuring friction doesn’t just interrupt but educates and redirects. When a young user hesitates before sharing personal information, that moment reflects hours of behavioral analysis embedded in a single, well-timed design decision.
- Dynamic, data-driven enforcement: Our cross-functional teams ensure interventions adapt in real time. Instead of static bans or blanket restrictions, we apply graded, context-aware responses, from temporary cooling-off periods to progressive limitations, that reflect intent, risk level, and user context (a minimal sketch of this graded model follows below). This model preserves community trust while neutralizing emerging risks early.
- Ethical collaboration with platforms: Resolver works directly with partners’ product, policy, and trust teams to embed safety into product DNA from the first design sprint. We don’t just hand over technology; we co-create policies, test interventions in live settings, and calibrate thresholds until friction feels invisible to good actors but impenetrable to bad ones.
These pillars unite human insight with adaptive automation. They allow our systems to intervene before harm escalates, making friction not a barrier to engagement but a blueprint for prevention. This framework is also the foundation for Resolver’s predictive online safety model, which identifies early behavioral signals and intervenes before they become harm.
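Here is a small sketch of the graded enforcement idea from the second principle above. The tier names, durations, and thresholds are hypothetical; they stand in for whatever a platform’s policy team would actually calibrate.

```typescript
// Hypothetical sketch of graded, context-aware enforcement.
// Tiers, durations, and thresholds are illustrative assumptions.

type EnforcementTier =
  | { kind: "nudge" }                                 // reflective prompt only
  | { kind: "cooling_off"; minutes: number }          // temporary pause on messaging
  | { kind: "progressive_limit"; features: string[] } // restrict specific features
  | { kind: "human_review" };                         // route to Trust & Safety analysts

// Responses scale with both the user's history and the risk of the current
// behavior, rather than jumping straight to a static, blanket ban.
function escalate(priorViolations: number, riskScore: number): EnforcementTier {
  if (riskScore >= 0.9) return { kind: "human_review" };
  if (priorViolations >= 3) return { kind: "progressive_limit", features: ["direct_messages", "media_upload"] };
  if (priorViolations >= 1) return { kind: "cooling_off", minutes: 30 };
  return { kind: "nudge" };
}
```

Each rung leaves room for a good-faith user to correct course, while repeated or high-risk behavior climbs quickly toward human review.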
The future of safer design
The next era of online safety won’t be defined solely by faster algorithms or smarter moderation; it will be shaped by how we design behavior, creating systems that anticipate risk before it occurs. Every pause, every prompt, every productive delay becomes a safeguard woven into the user experience, with the potential to prevent harm before it begins.
Behavioral design transforms online safety from a reactive function into a proactive discipline. It gives users — especially children — the time, space, and context to make safer choices in environments built for speed.
Looking ahead: Design as prevention
Two decades in, one lesson endures: prevention is about timing as much as technology. It depends on empathy, evidence, and design that serves the human, not just the user. By combining technical innovation with psychological insight, Resolver has helped clients build digital ecosystems that are not only safer, but also more resilient, inclusive, and responsive to emerging threats.
As digital behavior evolves, so must our safeguards. The next frontier of Trust & Safety will be defined by predictive, intelligent prevention — systems that detect risk signals before harm escalates and protect the humans behind every screen.
A new standard in proactive CSAM elimination
As we reflect on two decades of protecting children and safeguarding online spaces, we’re also looking ahead to the next frontier of Trust & Safety: the proactive, intelligent elimination of CSAM. Resolver’s new Unknown CSAM Detection Service represents the culmination of 20 years of learning, evolving, and purpose. It’s built to identify, prevent, and remove child sexual abuse material at speed and scale, while protecting the humans behind the screen.
Learn more about how we’re redefining child safety for the next generation.
More from Trust & Safety’s 20th Anniversary series:
- Two Decades of Protection: Resolver’s Constant Evolution in Online Child Safety
- What We Call Threats: Evolving Taxonomies and the Role of Regulation
- From “Chicken Soup” to Catastrophe: The Dangers of an English-Only Trust & Safety Model
- The Human at the Heart of the Machine: A 20-Year Lesson in Online Safety
- From Reactive to Predictive: Why It’s No Longer Enough to Spot What’s Already Happened
- Wearing Many Hats: The Power of Generalists and Specialists in Online Safety