
Mastering content moderation: How brands can win the war against toxic comments

Posted August 22, 2018

It’s been a busy year of troll-fighting for Twitch.

Like many companies in the gaming space, the live-streaming video platform that allows viewers to watch online gamers in action has been combatting unpalatable content. IRL, the new site category Twitch introduced in late 2016, has been a target for troublemakers who have used the platform to harass fellow gamers. But by updating its community guidelines, suspending users who don’t abide by the rules and even banning those who bully others outside of the platform, Twitch is hoping to tamp down the hate and make its users “proud of the Twitch community.”

Twitch is far from the only company shoring up its defenses in this fashion. On the web and behind the scenes, a battle is in full swing. On one side stand the brands, working to build online communities free of hate speech, violence and graphic content. On the other you’ll find an army of commenters hurling toxic posts.

For most brands, content moderation is the weapon of choice, as it allows organizations to manage user comments and maintain the integrity of their platforms. The onus is on brands to moderate negative and offensive content in a way that creates a safe customer experience (CX) for their users.

It’s a lofty endgame, but with the right policy, technology and human intelligence, brands can combat toxic user-generated material.

The current state of toxic content

Most of us picture internet trolls as spiteful individuals who wreak havoc from their parents’ basements, but the fact is that internet users of all kinds post toxic content. Back in 2014, research firm YouGov reported that nearly 30 percent of Americans had made malicious online comments or “trolled” someone online.

More recently, online commenting platform Disqus partnered with Wired magazine to break malicious comments down by state. Overall, about a quarter of those who posted comments made at least one that was considered toxic. The study found that Vermont was the source of the most toxic comments (12.2 percent), followed by Iowa (10.3 percent) and Nevada (10.1 percent). The most toxic time of day was 3 a.m., when 11 percent of all comments were “mean.”

There is also the growing issue of fake content produced by spammers and bots. A study by data scientist Jeff Kao found that more than 1.3 million comments submitted to the Federal Communications Commission (FCC) between April and October 2017 alone were faked.

Customizing a solution for your company

There’s no one-size-fits-all solution to content moderation. Rather, brands have to determine their sensitivity level, what they’re willing to accept and the actions they’re comfortable taking.

For Disqus, the approach to content moderation wasn’t always black and white. “We deliberated for months, trying to understand our point of view around moderation … and looking at free speech and people’s right to voice their opinions,” says Sam Holland, director of product with Disqus. “It’s very sensitive, but we feel you have to be clear on your policy, and always stand with that.”

In recent years, artificial intelligence (AI) has increased in popularity as a way to keep trolls at bay. Brands and social networks can harness AI’s ability to enforce rules on restricted words, recognize questionable language and images, send out alerts and place comments into pending, delete or approve mode. For example, Google’s Perspective is a public application programming interface (API) that uses natural language processing to score how likely a comment is to be perceived as toxic, so that platforms can flag, hold or remove it.
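To make that concrete, here is a minimal sketch of what a call to a toxicity-scoring service like Perspective can look like. It assumes Python with the requests library and a valid API key; the endpoint and field names follow Perspective’s public documentation, but treat the snippet as an illustration rather than a drop-in integration.

```python
import requests

# Comment Analyzer endpoint documented for Google's Perspective API.
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"


def toxicity_score(comment: str, api_key: str) -> float:
    """Return a 0-1 estimate of how likely a comment is to be perceived as toxic."""
    payload = {
        "comment": {"text": comment},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=payload)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]
```

A score close to 1 means the model considers the comment very likely to be perceived as toxic; most teams pair a score like this with their own thresholds and review rules rather than acting on it directly.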

Equally important, however, is the human element, a vital component of the content moderation process. When overseen by a human agent, AI stands to make much-needed improvements to online environments. Human agents can catch toxic content that technology might miss, report back on trends, escalate serious issues and make delicate judgment calls about suspending or banning users from an online community.
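As a rough sketch of how that division of labor can be wired up, the snippet below routes each comment by its model score: clear-cut cases are rejected or approved automatically, while borderline ones land in a human review queue. The thresholds are invented for illustration and would be tuned to each community’s own policy.

```python
from enum import Enum, auto


class Action(Enum):
    APPROVE = auto()
    HUMAN_REVIEW = auto()  # ambiguous content goes to a moderator
    REJECT = auto()


# Illustrative thresholds only; real values depend on your policy and model.
AUTO_REJECT_ABOVE = 0.90
HUMAN_REVIEW_ABOVE = 0.50


def triage(score: float) -> Action:
    """Map a toxicity score to a moderation action."""
    if score >= AUTO_REJECT_ABOVE:
        return Action.REJECT
    if score >= HUMAN_REVIEW_ABOVE:
        return Action.HUMAN_REVIEW
    return Action.APPROVE
```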

How content moderation builds consumer trust

When abusive content is delivered in association with a brand — whether it appears on Facebook or Twitter, in a gaming forum, within a blog post or in a product review — consumers can align that hateful language and ideology with the company.

“The way you deal with negative comments is a reflection of your brand values and brand personality,” says James Heaton, president of Tronvig Group, a Brooklyn-based brand strategy agency. The content moderation calls that a company makes, along with the sentiment and tone expressed in its response, are attributes of your organization, Heaton explains. “So if you haven’t thought that through, or you have incongruent ways of dealing with inappropriate content, problems can occur.”

Heaton’s advice for protecting your brand while retaining customer trust is to empower those who communicate with consumers on your behalf. “The solution is to first clarify what you stand for as a brand, then operationalize it and give autonomous authority to the people on the frontline to behave in accordance with your established principles,” he says.

By working with a CX partner that’s aligned with your brand’s principles, policies and culture, you can be confident in its ability to manage user-generated content on your behalf and avoid delays in response time, which Heaton says “often allows the situation to expand, and perhaps fester.”

Measuring your content moderation success

Every content moderation action is a step toward creating a more positive and pleasant experience for your users. But how can you tell if your strategy is working?

An important part of building brand trust is measuring key performance indicators (KPIs) to gauge the quality of your user content. Keep an eye on your pre-moderation rate, the proportion of submitted comments that are successfully moderated before they appear online, to determine whether you’re effectively managing all submitted comments or need to add agents to your team. If you’re using a reactive approach, wherein agents review posts after they’re visible online, measure the number of comments that your team deems offensive. If the volume is high, you may need additional protective barriers.
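For teams that want to track these numbers programmatically, the short sketch below computes the two metrics described above. The function names and the monthly figures in the usage example are hypothetical.

```python
def pre_moderation_rate(reviewed_before_publish: int, total_submitted: int) -> float:
    """Share of submitted comments that were moderated before appearing online."""
    return reviewed_before_publish / total_submitted if total_submitted else 0.0


def reactive_offense_rate(flagged_after_publish: int, total_published: int) -> float:
    """Share of already-visible comments that agents later deemed offensive."""
    return flagged_after_publish / total_published if total_published else 0.0


# Hypothetical monthly figures, for illustration only.
print(f"{pre_moderation_rate(9200, 10000):.0%}")   # 92%
print(f"{reactive_offense_rate(340, 10000):.1%}")  # 3.4%
```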

Toxic online comments aren’t going away, but effective content moderation can keep your company and your customers safe. It requires effort, but with the help of a strong customer experience team, your brand — and your online community — can come out victorious.

