Enhancing online safety: Typology of Harms explained

Resolver

As part of Resolver’s work as a responsible stakeholder in the online trust and safety community, Adam Hildreth, founder of Crisp, and Australia’s eSafety Commissioner, Julie Inman Grant, served as co-chairs of a World Economic Forum (WEF) Global Coalition for Digital Safety workstream made up of senior representatives from industry, government, civil society and academia, convened to compile the Typology of Online Harms. The typology is intended to serve as a foundation for multi-stakeholder discussions and cross-jurisdictional dialogue, built on a common terminology and a shared understanding of online safety.

What is the Typology of Online Harms?

Recognizing the complex nature of online safety, the typology builds upon prior research into risk classification, including the EU CO:RE 4 Cs framework, and classifies threats into content, contact and conduct risks. This framework is based on the understanding that online harms can occur throughout the production, distribution and consumption of material (content risks), but can also arise from online interactions with others (contact risks) and from behavior facilitated by technology (conduct risks).

How does it help?

By framing online harms through a human rights lens, the typology emphasizes the impacts on individual users and aims to provide a broad categorization of harms to support global policy development. It can help governments establish a shared language for identifying online harms, and it can support civil society organizations seeking to take part in multi-stakeholder discussions that advocate for a safer online ecosystem. The typology can also help companies, including those at an early stage, by providing insight into the range of online harms their users might encounter, as well as the impacts of those harms on victims.

Typology Overview


Topic 1: Threats to personal and community safety

Content risks

  • Child sexual abuse material (CSAM): Any representation by whatever means of a child engaged in real or simulated explicit sexual activities or any representation of the sexual parts of a child for primarily sexual purposes.
  • Child sexual exploitation material (CSEM): Content that sexualizes and is exploitative of a child, whether or not it depicts the child’s sexual abuse.
  • Pro-terror material: Material that advocates engaging in a terrorist act because it counsels, promotes, encourages or urges engaging in a terrorist act; provides instruction on engaging in a terrorist act; or directly praises engaging in a terrorist act in circumstances where there is a substantial risk that such praise might lead a person to engage in a terrorist act.
  • Content that praises, promotes, glorifies or supports extremist organizations or individuals: This includes content that encourages participation in, or intends to recruit individuals to, violent extremist organizations – including terrorist organizations, organized hate groups, criminal organizations and other non-state armed groups that target civilians.
  • Violent graphic content: Content that promotes, incites, provides instruction in or depicts acts including murder, attempted murder, torture, rape and kidnapping of another person using violence or the threat of violence.
  • Content that incites, promotes or facilitates violence: Includes content that contains direct and indirect threats of violence and intimidation.
  • Content that promotes, incites or instructs in dangerous physical behavior: Content that promotes, incites or provides instruction in activities conducted in a nonprofessional context that may lead to serious injury or death for the user or members of the public.

Contact risks

  • Grooming for sexual abuse: When someone uses the internet to deliberately establish an emotional connection with a young person to lower their inhibitions and make it easier to have sexual contact with them. It may involve an adult posing as a child in an internet application to befriend a child and encourage them to behave sexually online or to meet in person.
  • Recruitment and radicalization: Includes posting content, or engaging with individuals, for the purpose of recruiting them to a designated or dangerous organization.

Conduct risks

  • Technology-facilitated abuse (TFA): Using digital technology to enable, assist or amplify abuse or coercive control of a person or group of people.
  • Technology-facilitated gender-based violence: A subset of technology-facilitated abuse that captures any act that is committed, assisted, aggravated or amplified by the use of information and communication technologies or other digital tools, resulting in or likely to result in physical, sexual, psychological, social, political or economic harm or other infringements of rights and freedoms on the basis of gender characteristics.

Content, contact and conduct risks

  • Child sexual exploitation and abuse (CSEA): Can refer to content (e.g. CSAM), contact (e.g. grooming) and conduct (e.g. live streaming).

Topic 2: Harm to health and well-being

Content risks

  • Material that promotes suicide, self-harm and disordered eating: Content that promotes suicidal or self-injurious behavior. Includes content that promotes, encourages, coordinates or provides instructions on: suicide; self-injury, including depictions of graphic self-injury imagery; and eating disorders, including expressing a desire for an eating disorder, sharing tips or coaching on disordered eating, or encouraging participation in unhealthy body measurement challenges.
  • Developmentally inappropriate content: Includes children’s access to pornography, particularly of a violent or extreme nature, and graphic, violent material.

 

Topic 3: Hate and discrimination

Online hate and discrimination can negatively affect a person’s mental health, general well-being and online engagement. It can also, in the most extreme cases, lead to harassment and violence offline.

Content risks

  • Hate speech: Any kind of communication in speech, writing or behavior that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of their inherent/protected characteristics – in other words, based on their religion, ethnicity, nationality, race, color, ancestry, gender or other identity factor. Includes dehumanization, which targets individuals or groups by calling them subhuman, comparing them to animals, insects, pests, disease or any other non-human entity.

Conduct risks

  • Algorithmic discrimination: An algorithm-driven decision that results in the denial of financial and lending services, housing, insurance, education enrollment, criminal justice, employment opportunities, healthcare services or access to basic necessities, such as food and water.

 

Topic 4: Violation of dignity

Conduct risks

  • Online bullying and harassment: The use of technology to bully someone – to deliberately engage in hostile behavior to hurt them socially, emotionally, psychologically or even physically. This can include abusive texts and emails; hurtful messages, images or videos; excluding others; spreading damaging gossip and chat; or creating fake accounts to trick or humiliate someone.

Contact risks

  • Sexual extortion: The blackmailing of a person with the help of self-generated images of that person in order to extort sexual favors, money or other benefits from them, under the threat of sharing the material without the consent of the person depicted (e.g. posting images on social media).

Topic 5: Invasion of privacy

Conduct risks

  • Doxxing: The intentional online exposure of an individual’s identity, personal details or sensitive information without their consent and with the intention of placing them at risk of harm.
  • Image-based abuse: Sharing, or threatening to share, an intimate image or video without the consent of the person shown. An “intimate image/video” is one that, where there is a reasonable expectation of privacy, shows nudity, sexual poses, private activity such as showering, or someone without the religious or cultural clothing they would normally wear in public.

Topic 6: Deception and manipulation

Content risks

  • Disinformation and misinformation: Misinformation is the dissemination of false information by people who may unknowingly share or believe it, without intent to mislead. Disinformation is the deliberate and intentional spread of false information with the aim of misleading others. Includes gendered disinformation, which specifically targets women political leaders, journalists and other public figures, employing deceptive or inaccurate information and images to perpetuate stereotypes and misogyny.
  • Deceptive synthetic media: Content that has been generated or manipulated via algorithmic processes (such as artificial intelligence or machine learning) to appear as though based on reality, when it is, in fact, artificial and seeks to harm a particular person or group of people. Includes deep fakes, which are realistic – although fake – images, audio or video clips that show a real person doing or saying something that they did not actually do or say.

Conduct risks

  • Impersonation: Posing as an existing person, group or organization in a confusing or deceptive manner.
  • Scams: Dishonest schemes that seek to manipulate and take advantage of people to gain benefits such as money or access to personal details.
  • Phishing: The sending of fraudulent messages, pretending to be from organizations or people the receiver trusts, to try and steal details such as online banking logins, credit card details and passwords from the receiver.
  • Catfishing: The use of social media to create a false identity, usually to defraud or scam someone. People who catfish often make up fake backgrounds, jobs or friends to appear as another person. Using this fake identity, they may trick someone into believing they are in an online romance before asking the person to send money, gifts or nude images.

Moving forward

By understanding different types of online harm, stakeholders can work collaboratively to develop effective policies, interventions and innovations that promote a safer digital ecosystem while respecting human rights and fostering positive online behaviors. Resolver looks forward to co-chairing the next phase of this essential work, where our research will focus on methods for identifying new and evolving online harms and on the most effective responses for mitigating these threats.
