Why Moderation Matters: The Impact of Toxic Content on Brands and Advertisers


By Matthieu Boutard, President and co-founder, Bodyguard.ai

Amongst the plethora of changes at Twitter, one of the most concerning is the potential impact on moderation and freedom of expression on the platform. Although Elon Musk has stated that his commitment to moderation remains “absolutely unchanged”, many high-profile users have gone public with their doubts, and several controversial banned accounts have re-emerged with new vigour. As a result, a number of big advertisers, including Volkswagen, Carlsberg and United Airlines, have paused their advertising while they await the fallout.

But why does moderation matter? Is it as simple as a choice between free speech and a “hellscape” with nothing in between, or something more complex?

Last week, we published our inaugural whitepaper examining online toxicity aimed at businesses and brands. We analysed over 170 million pieces of content, across 1,200 brand channels, in six languages, over one year. In doing so, we identified almost nine million comments we would define as toxic or harmful, including racism, misogyny and moral harassment.

Every one of those comments represents an attack on a brand and on the real people working on the frontline of social media, customer service or technical support. The mental health impact on moderators cannot be overlooked: burnout from toxicity is costly to brands and businesses in terms of employee turnover. And it can affect your customers too: 40% of people would leave a platform on their first encounter with toxic content.

Advertisers need to tread a fine line between moderating to protect their brand, teams and customers, and doing what might be seen as censorship. Social media and other platforms can be a useful conduit for customer feedback, good and bad, holding brands to account. Those that summarily remove all negative comments face considerable backlash from consumers. This was the case when Nestlé faced a Greenpeace campaign highlighting its controversial use of palm oil in its products. By seeming to duck the issue and censor all negative comments, it gave the campaign additional momentum. There is also a danger that naysayers will simply migrate to another channel, outside the brand's control, where toxic comments about the brand cannot be as closely monitored.

The lifecycle of online comments, particularly on social media, is another complicating factor. The average tweet, for example, has a lifespan of just 18 minutes, so unless instant moderation is in place, a toxic comment can do immediate and irreversible damage. A trained human moderator, however, takes around ten seconds to analyse and moderate a single comment, meaning our whitepaper's dataset would have taken around 54 years to moderate manually. When confronted with hundreds, if not thousands, of comments every minute, it is impossible for human teams to keep up.
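That 54-year figure is simple arithmetic. As a back-of-the-envelope sketch, assuming the ten seconds per comment and 170 million comments cited above:

```python
# Back-of-the-envelope check: how long would one trained human moderator
# take to review the whitepaper's full dataset?
comments = 170_000_000       # pieces of content analysed in the whitepaper
seconds_per_comment = 10     # approximate human review time per comment

total_seconds = comments * seconds_per_comment
years = total_seconds / (60 * 60 * 24 * 365)

print(f"{years:.1f} years")  # ~53.9 years, i.e. around 54 years
```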

Automated solutions fare little better. Current machine-learning algorithms on social networks have an error rate of between 20% and 40%, meaning only around 62.5% of hateful content is removed. At Bodyguard, by combining intelligent machine learning with a human team of linguists, quality controllers and programmers, we provide a real-time solution that detects, moderates and, if necessary, removes up to 90% of toxic and hateful comments immediately. Our solution can also account for context and culture, recognising the difference between friends interacting with “colourful” language and hostile comments directed at a brand or its representatives. This nuance is what sets us apart and has attracted clients across sports, media and brands to our company.
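To illustrate the general shape of such a hybrid approach (a minimal sketch only, not Bodyguard's actual system; the model, thresholds and function names are assumptions for illustration): a classifier scores each comment, high-confidence toxic content is removed automatically, and only the ambiguous middle band is routed to human reviewers.

```python
# Illustrative human-in-the-loop moderation pipeline (hypothetical;
# not Bodyguard's actual system). A toxicity classifier scores each
# comment; clear cases are handled instantly, borderline cases go
# to humans who can judge context and culture.
from dataclasses import dataclass

REMOVE_THRESHOLD = 0.90  # hypothetical: auto-remove above this score
REVIEW_THRESHOLD = 0.50  # hypothetical: escalate to a human above this

@dataclass
class Decision:
    action: str   # "remove" | "human_review" | "keep"
    score: float


def classify_toxicity(comment: str) -> float:
    """Placeholder for a trained toxicity model returning P(toxic)."""
    raise NotImplementedError


def moderate(comment: str) -> Decision:
    score = classify_toxicity(comment)
    if score >= REMOVE_THRESHOLD:
        return Decision("remove", score)        # instant, automated
    if score >= REVIEW_THRESHOLD:
        return Decision("human_review", score)  # human judges nuance
    return Decision("keep", score)
```

Routing only the uncertain middle band to people is what lets a pipeline like this stay fast enough for an 18-minute tweet lifespan while still handling the contextual cases that pure automation gets wrong.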

As customers increasingly choose to transact and interact with brands online, building the trust of communities is vital. Toxicity cannot be allowed to pollute communications channels, but there also needs to be room for differing points of view, criticism and feedback. Collectively, brands and business leaders need to be proactive in their approach to make the internet a safer, more inclusive place for all.
