Harmful content takes its toll on brands' social media teams

Resolver
· 3 minute read

Harmful content has always been a necessary evil when engaging directly with consumers. For most brands, it used to be like cosmic radiation from the sun: always there, spiking periodically, but not terribly hazardous at low doses. Then 2020 exploded, releasing Chernobyl-level contamination that has decimated the growth strategies of social media managers.

The unrelenting torrent of hate speech, mind-numbing misinformation, pure rage, and raw negativity has also taken its toll on the mental health and well-being of social media managers around the globe. Already burned out by long hours, 24/7 demands, and high stress, more and more of them are exiting their roles as an act of self-preservation.

The escalation of online hate was a wake-up call for strategist Amy Brown, who became known for managing a sassy Twitter account at fast-food giant Wendy’s from 2012 to 2017. She said her career in social media has been a “front-row seat to the rise of the alt-right and QAnon” and like “watching a car crash in slow motion.”

“There’s only so much trauma one single human brain can process,” Brown told Digiday. “It takes a huge mental toll to consume this content, day in and day out.”

She commended social media platforms for recent measures to curtail disinformation and extremism, such as adding warning labels to content and removing harmful accounts, but the social media teams managing brand accounts on those sites are still stretched beyond human capacity.

The role of social media manager has become exceedingly difficult

Even before 2020, social media managers faced challenges in content moderation, fraud and risk, and security, and those challenges intensified as participation on social media platforms soared. Social media users now number 4.14 billion (53% of the world’s population), with two million more people joining every day.

According to Domo’s Data Never Sleeps 8.0 report, every minute of every day, Facebook users share 150,000 messages and upload 147,000 photos; WhatsApp users share 41,666,667 messages; Instagram users post 347,222 stories; Twitter gains 319 new users; YouTube users upload 500 hours of video; Reddit sees 479,452 people engage with content; and TikTok is installed 2,704 times. On top of that, the average user spends a disturbing 15% of their waking hours on social media.

What’s more, the conversations being monitored are no longer limited to indexed websites and open social media channels. The Deep Web, where closed conversations occur, is reportedly 500 times larger than the Surface Web, and its sprawl of private groups, forums, and chats stretches social media managers’ time even further as they try to keep a manual eye on these bustling closed channels.

Swamped by that tidal wave of content, social media managers have never been more important, and their job has never been harder. Once confined to managing and promoting their brand, they’re now pulled into online abuse related to systemic racism, police violence, political divisiveness, and pandemic restrictions, all while remaining responsible for maintaining a safe space for consumers to connect with their brand.

The reality of being in the social media trenches

According to a recent study by West Virginia University, social media managers rate their mental health at 6.35 out of 10 on an ordinary day. During a crisis, that rating drops by nearly two points to 4.52. During COVID-19, social media managers rated their mental health at 4.63, and 88% of respondents said the pandemic has made their work much harder.

Months of working from home have blurred the boundaries between the personal and the professional, adding to the burden. How do you safely moderate content from home when doing so risks exposing family members to toxic or graphic material?

Months of being in crisis mode have been exhausting. Social media professionals feel they have to be everywhere at once and available 24/7, creating content, responding to comments, and interacting with their communities. Many are expected to wear the hats of graphic designer, visual editor, copywriter, strategist, community manager, and data analyst.

And when they’re not contributing to crisis communications, social team members are expected to stay plugged into the 24-hour news cycle, “doomscrolling” (consuming large quantities of negative online news) in order to do their jobs.

The cumulative effect has social media managers questioning how much more they can take. Without the support or resources to manage their mental health, some are leaning on each other to manage their burnout while others are asking leadership to help their social teams unplug.

A long-term solution for resilience and retention

Social media management is no longer relegated to the person with fingers on the keyboard. It’s an enterprise-wide responsibility that needs to include communications, risk, legal, and regulatory teams. And if brands continue down the path of losing their social media teams to burnout, it also becomes the responsibility of HR and recruitment.

At the same time that social media teams are pulled into online abuse, they also have to deal with incidents that can travel around the globe, largely undetected, in less than 24 hours via digital conversations. And when an incident reaches mainstream social media, organizations have less than 15 minutes to respond with a holding statement (let alone determine whether it’s a real issue).

According to our own consumer research, more than half of consumers expect a formal response to the situation in less than one hour, and 59% expect that response to come from the CEO.

Ultimately, this makes it increasingly difficult to discover, define, and defend against business-critical incidents and issues threatening the entire organization.

But there is an opportunity to evolve with the times by adopting new practices that combine technology with human expertise. Doing so can ease the heavy, albeit vital, burden of social media moderation while protecting social media teams from the continued anguish of viewing harmful and hateful content.

An early-warning risk intelligence solution combines artificial intelligence with human intelligence to quickly identify and remove distressing or disturbing content, enabling social media managers to focus on consumer engagement free from disruption by individuals or groups intent on doing harm to their brand.
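
To make that combination concrete, here is a minimal Python sketch of the triage pattern such a solution implies: a machine-generated score routes near-certain harmful posts to automatic removal, escalates the ambiguous middle band to a human review queue, and lets benign engagement through untouched. The score_toxicity stub, the thresholds, and the queue are all assumptions invented for illustration, not Resolver’s actual implementation.

```python
# Minimal sketch of AI-plus-human content triage, under assumed thresholds.
from dataclasses import dataclass
from queue import Queue

AUTO_REMOVE_THRESHOLD = 0.95   # near-certain harmful: filter automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # ambiguous: escalate to a human reviewer

HARMFUL_TERMS = {"hate", "threat"}  # toy lexicon; a trained model replaces this

@dataclass
class Post:
    post_id: str
    text: str

def score_toxicity(text: str) -> float:
    """Toy scorer: share of flagged words, scaled into [0, 1]."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(word in HARMFUL_TERMS for word in words)
    return min(1.0, 5 * hits / len(words))

def triage(post: Post, review_queue: "Queue[Post]") -> str:
    """Route a post: auto-remove, escalate to humans, or publish."""
    score = score_toxicity(post.text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed"        # the social team never has to see it
    if score >= HUMAN_REVIEW_THRESHOLD:
        review_queue.put(post)  # humans decide the ambiguous middle band
        return "escalated"
    return "published"          # benign engagement flows through untouched

if __name__ == "__main__":
    queue: "Queue[Post]" = Queue()
    print(triage(Post("1", "love this burger"), queue))  # published
    print(triage(Post("2", "hate hate hate"), queue))    # removed
```

The design point is the middle band: automation absorbs the unambiguous worst so that scarce human attention, and the emotional cost that comes with it, is spent only on genuinely uncertain cases.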

Creating an environment where social media managers feel truly supported requires a long-term solution that protects them and your brand from harmful content, “hatejacking,” and orchestrated anti-corporate activism that can have serious, negative repercussions.

Your social team deserves a working environment that supports their immediate mental health and well-being, and your brand deserves the protection of an early-warning solution that detects damaging digital conversations 24/7.
