The metaverse was top of mind at South by Southwest last week. Billions of dollars are pouring into this new virtual world, but that investment isn’t necessarily ensuring user safety, which makes online safety in the metaverse an enormous concern.
More broadly, there will be a substantial economic opportunity as we incorporate more creative possibilities and events into Web 3.0 and the metaverse. How we interact, work, play, learn, make purchases, receive healthcare, and experience entertainment are all ripe for big changes. Bloomberg estimates that by 2024, the metaverse may be an $800 billion market.
This dramatic evolution of the web is driven not only by the development of more robust online experiences but also by the sheer number of people using the internet. At the end of 1999, 484 million people were online, roughly 8% of the world’s population. By March 2021, that number had grown to more than 5 billion people, or 65% of the global population.
That dramatic increase in users, plus the growth in products, services, and experiences offered in the metaverse, brings significant opportunity along with many additional risks. Government regulators, online safety organizations, and global consumer brands are already watching the evolution of these risks very closely. Some are risks we are aware of today; others will be unique to these new digital domains.
Online safety policies vary by region and platform, and critical online safety concerns include violent extremism, hate speech, child sexual exploitation, graphic violence, harassment, bullying, and medical misinformation.
While these safety risks will, of course, still be present in Web 3.0, there are three unique aspects of the metaverse that will cause additional and serious safety concerns.
Moving from a digital-only to a ‘physically augmented’ world
In Web 2.0, most unwanted attention takes place through text-based content and photos. In Web 3.0, though, we’ll move through the digital world in ways that combine graphics, 3D imagery, and auditory content to create a more immersive experience. That immersion, however, also expands the risk of harm, because our “virtual” personal space can be violated.
These interactions are already commonplace on gaming platforms, where bad actor groups can cause greater stress and harm through interactive conversations. Add in haptic technology, which uses tactile sensations to simulate the sense of touch, and assaults can be even more intrusive and stressful than a hate-filled text thread. Mary Anne Franks, president of the Cyber Civil Rights Initiative, says that research shows abuse in virtual reality is “far more traumatic than in other digital worlds.”
Metaverse platforms that focus on Web 3.0 security and safety from the start will likely gain more trust and, therefore, more traction with users. Consider the physical world, where you feel much more comfortable and secure walking down a well-lit street with cameras and an occasional police presence. Compare that to how uncomfortable it feels to walk down a dark alley with broken streetlights and walls lined with rubbish and shadows.
People don’t like spending time where they feel unsafe. That applies not only to the physical world but also to digital worlds such as the metaverse. People prefer to spend time where they know safety measures are in place. A proper and thorough Trust & Safety approach will have a positive impact on user growth and experience.
Centralized environments meet decentralized ones—with transient users
In Web 2.0, technology platforms centralize their environments by setting policies that govern users’ safety within their domain or online platform.
As we move into Web 3.0, three different models will emerge:
- Centralized models. Single entities will run these, much as large social media companies currently manage an environment, set its rules, and create its boundaries.
- Multiverse models. Users will travel between digital worlds, each with unique owners and rules, much like how, in the real world, we move between countries by showing our passports. Users will maintain their identities as they move freely from one environment to another and keep control over their data as they “travel.”
- Decentralized models. There will always be environments in the digital world that are less regulated, just as there are Tor networks and the dark web today. Such decentralized Web 3.0 environments will allow a new level of exciting opportunities and creativity for users who can build and grow communities. However, this freedom also means bad actor groups will have more room to espouse alt-right views or to promote child endangerment, misinformation and disinformation, violent extremism, radicalization, illegal activity, fraud, and new forms of scamming.
Maintaining online safety in multiverse and decentralized environments will be more complex. Blockchain, new commercial models, and different identity and “Know Your Customer” solutions will play a role, just as passports and identity cards do now. The metaverse will replicate the real world by offering events such as music concerts, shopping, and casinos. Ensuring these are safe will require many of the tools and patterns we have now, such as checking someone’s age to enter a casino, albeit through a digital lens. These solutions already exist and are becoming increasingly common.
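To make that digital age check concrete, here is a minimal sketch of how a virtual venue might verify a signed age credential before admitting a user. It is illustrative only: the names (`AgeCredential`, `admit_to_venue`) are hypothetical, the year-only age math is a simplification, and an HMAC with a demo key stands in for the public-key signatures a real identity provider or verifiable-credential scheme would use.

```python
from dataclasses import dataclass
from datetime import date
import hashlib
import hmac
import json

# Hypothetical issuer key. A real deployment would verify an identity
# provider's public-key signature, not a shared secret.
ISSUER_KEY = b"demo-issuer-secret"


def _sign(payload: bytes) -> str:
    return hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()


@dataclass
class AgeCredential:
    subject_id: str   # pseudonymous avatar ID, not a real-world identity
    birth_year: int   # the only attribute the venue needs to see
    signature: str    # issuer's signature over the two fields above

    def payload(self) -> bytes:
        return json.dumps({"sub": self.subject_id, "by": self.birth_year}).encode()


def issue_credential(subject_id: str, birth_year: int) -> AgeCredential:
    """Stand-in for a trusted identity provider issuing a signed credential."""
    cred = AgeCredential(subject_id, birth_year, signature="")
    cred.signature = _sign(cred.payload())
    return cred


def admit_to_venue(cred: AgeCredential, min_age: int = 21) -> bool:
    """A virtual casino's door check: verify the issuer, then the age."""
    if not hmac.compare_digest(_sign(cred.payload()), cred.signature):
        return False  # credential was not issued by a trusted party, or was altered
    # Year-only comparison keeps the sketch short; real checks use full dates.
    return date.today().year - cred.birth_year >= min_age


cred = issue_credential("avatar-7f3a", 1998)
print(admit_to_venue(cred))  # True: the holder is over 21
```

The design point mirrors the passport analogy above: the venue never learns who the user is, only that a party it trusts has attested to the single attribute it needs.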
New users: some experienced, many not
The sheer volume of activity possible in the metaverse, coupled with the number of new participants, can create potentially damaging user experiences. Until now, it has mostly been experienced technology platforms that host online events; soon, any company in the physical world that wants to hold an online experience can do so, even if it lacks the technology background or trust and safety (T&S) expertise.
Bad actor groups will continue evolving their tradecraft, and the emergence of Web 3.0 will provide them with new environments and methods. However, we have a unique opportunity right now to anticipate and proactively address their tactics in this new online world. Just as we continuously need to determine what “safe” looks like today in Web 2.0, we need to assess what being safe means going forward. It’s nowhere near enough to merely react to harms as they appear.
Monitoring and detecting online harms and attacks will become more complex, and teams will need to learn how to stay ahead of evolving threats and create appropriate policies to address them. To succeed in the metaverse, T&S teams must recognize the increasing speed at which changes are happening and learn how best to adapt to new risks. Historically, they have only had to contend with scale, language, and multimedia. Now there is also physicality, albeit through virtual elements.
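One way to keep policy adaptable at that speed is to treat detection rules as data rather than code, so a new threat can be met with a rule update instead of a redeploy. The sketch below shows that pattern in its simplest form; it is not Resolver’s technology, and the rule names and regex patterns are invented for illustration. Production systems layer ML classifiers, behavioural signals, and human review on top of anything this simple.

```python
import re
from dataclasses import dataclass, field


@dataclass
class Rule:
    name: str
    pattern: re.Pattern
    severity: int  # 1 = log, 2 = queue for human review, 3 = act immediately


@dataclass
class HarmMonitor:
    """Rules live as data, so policy can adapt as bad-actor tradecraft evolves."""
    rules: list[Rule] = field(default_factory=list)

    def add_rule(self, name: str, pattern: str, severity: int) -> None:
        self.rules.append(Rule(name, re.compile(pattern, re.IGNORECASE), severity))

    def score(self, text: str) -> list[tuple[str, int]]:
        """Return (rule name, severity) for every rule the message trips."""
        return [(r.name, r.severity) for r in self.rules if r.pattern.search(text)]


monitor = HarmMonitor()
# Invented starter rules, purely illustrative.
monitor.add_rule("crypto_scam", r"free\s+nft\b.*\bwallet", severity=2)
monitor.add_rule("grooming_signal", r"\bdon'?t\s+tell\s+your\s+parents\b", severity=3)

print(monitor.score("Claim your free NFT! Just connect your wallet here."))
# -> [('crypto_scam', 2)]

# When a new scam pattern emerges, ship a rule update rather than new code:
monitor.add_rule("fake_giveaway", r"\bsend\s+\d+\s+eth\b.*\bget\b", severity=2)
```

The same separation of policy from machinery is what lets a T&S team respond to a new harm in hours rather than release cycles, whatever the underlying detection technology.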
Meeting the trust and safety needs of the metaverse
As online harms continue to evolve, the Resolver suite of Platform Trust & Safety solutions continues to tune and adjust. At Resolver, we know that Web 3.0 is both similar to and different from Web 2.0. We continue to grow and evolve our safety solutions and AI technology to create a safe internet in all its forms, changing as fast as, if not faster than, Web 3.0 itself.
Just as in the dynamic real world, it’s critical to constantly assess safety. At Resolver, we embrace the fact that the online environment will always change and evolve, and we are committed to keeping pace as effectively as possible. By understanding how threats are changing, we know how to continually protect your platform’s integrity and your users’ safety.
Resolver is the most trusted provider of Platform Risk Intelligence for global Trust & Safety teams. Since 2005, our industry-leading online safety experts have trained our AI technology to identify risk signals embedded within the digital chatter of bad actor groups, so online platforms can anticipate what they’ll do next.
We’re a founding member of the Online Safety Tech Industry Association (OSTIA) and the WeProtect Global Alliance, and we provide vital research to WeProtect’s annual Global Threat Assessment. Our efforts contribute to the safe daily online experiences of more than two billion users, including an estimated 450 million children. It’s why our customers are always first to know and first to act.