
More than a game: Why player communities need content moderation

Posted August 16, 2023

To borrow famous words from 1986's The Legend of Zelda, "It's dangerous to go alone!"

But while much has changed in the games industry and the world at large, the words ring true to this day. It's still dangerous to go alone, albeit for new, different reasons.

For many, gaming is a social activity. Before school, after work or between errands, players hop online and come together to raid dungeons, battle bosses and explore new worlds.

As genuine and positive as communities can be for players and games companies alike, noncompliant and offensive player behavior can have the opposite effect. Unchecked abuse can ruin the player experience and turn would-be fans away.

This puts the onus on games companies to develop and maintain effective content moderation strategies that uphold a welcoming environment for their players. No player should have to go it alone; they must be protected.

There's a lot on the line. According to a TELUS International survey, it takes only two or three exposures to inappropriate or inaccurate user-generated content (UGC) for 40% of respondents to lose trust in a brand. And an overwhelming majority (78% of respondents) say it is a brand's responsibility to create a positive and welcoming online user experience.

Let's take a closer look at the fundamental role of content moderation in gaming.


Using AI to protect gaming communities

With over three billion gamers worldwide today, a figure that Statista expects to keep rising, moderation of player-generated text, video and audio is no small task.

To handle the scale, many gaming companies are implementing automation and technology powered by artificial intelligence (AI) to establish a first line of defense against bad actors.

These AI-powered approaches are relevant no matter what form player content takes:

  • Images and video content: For visual content, image processing algorithms and computer vision can detect and block violative pictures and videos, limiting or outright preventing their spread within player communities and even sparing human moderators from unnecessary exposure.
  • Text content: Natural language processing (NLP) and sentiment analysis can pick up on rule-breaking content in the written word, proactively preventing the use of words or phrases that violate community guidelines or laws (see the sketch after this list).
  • Audio content: In gaming, a great deal of player interaction is verbal. To keep players safe, AI-backed technology also holds great promise in audio content moderation.
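
To make the text moderation piece above a little more concrete, here is a minimal sketch of a first-line filter, assuming a simple blocklist plus a placeholder scoring function standing in for a trained NLP model. The pattern list, threshold and function names are illustrative assumptions, not a production system.

```python
import re

# Illustrative blocklist; real deployments rely on curated, regularly
# updated term lists plus trained NLP models, not this toy example.
BLOCKLIST_PATTERNS = [
    re.compile(r"\bbuy\s+gold\s+cheap\b", re.IGNORECASE),  # spam-style phrase
    re.compile(r"\bkys\b", re.IGNORECASE),                 # abusive shorthand
]


def toxicity_score(message: str) -> float:
    """Placeholder for a trained classifier's toxicity score (0.0 to 1.0).

    Here it simply treats all-caps shouting as a weak signal so the
    sketch runs end to end; a real system would call an NLP model.
    """
    letters = [c for c in message if c.isalpha()]
    if not letters:
        return 0.0
    return sum(c.isupper() for c in letters) / len(letters)


def first_line_filter(message: str, review_threshold: float = 0.8) -> str:
    """Return 'block', 'review' or 'allow' for a single chat message."""
    if any(p.search(message) for p in BLOCKLIST_PATTERNS):
        return "block"                    # clear rule violation
    if toxicity_score(message) >= review_threshold:
        return "review"                   # uncertain: queue for a human
    return "allow"


if __name__ == "__main__":
    for msg in ["gg everyone, nice raid!", "BUY GOLD CHEAP at example-site"]:
        print(msg, "->", first_line_filter(msg))
```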

Altogether, these systems form a critical layer in any games company's content moderation operations. But as the player population grows, and online discourse continues to evolve, it takes more than tech alone to maintain welcoming player communities.

A hybrid approach to content moderation in games

According to another TELUS International survey, more than half of respondents (51%) said they have seen "algospeak" on social media and in gaming communities.

Algospeak is the practice of using codewords, slang, deliberate typos and emojis to get past automated filters and AI. It's a phenomenon as fascinating as it is elusive, which made it an ideal topic for a recent episode of our podcast, Questions for now.

Keeping up with algospeak isn't easy, nor is it something that can be done by automation alone.
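
One reason automation alone struggles here is that algospeak is built to slip past exact-match filters. The sketch below shows the kind of normalization step a moderation pipeline might apply before any filtering runs; the substitution table, function name and examples are assumptions for illustration, not an exhaustive or production-ready mapping.

```python
import unicodedata

# Illustrative substitution map for common "algospeak" obfuscations
# (leetspeak digits, symbol swaps). A real system would maintain a much
# larger, continuously updated table informed by human moderators.
SUBSTITUTIONS = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t",
    "@": "a", "$": "s", "!": "i",
})

ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}


def normalize(message: str) -> str:
    """Reduce a chat message to a canonical form before running filters."""
    # Normalize Unicode (e.g. full-width letters) and drop zero-width characters
    text = unicodedata.normalize("NFKC", message)
    text = "".join(c for c in text if c not in ZERO_WIDTH)
    # Undo simple character substitutions and lowercase everything
    text = text.lower().translate(SUBSTITUTIONS)
    # Crudely collapse repeated letters ("haaate" becomes "hate");
    # note this is lossy and would need care in a real pipeline.
    collapsed = []
    for c in text:
        if not collapsed or collapsed[-1] != c:
            collapsed.append(c)
    return "".join(collapsed)


print(normalize("h4\u200bTE"))  # prints "hate"
```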

By combining automation with human game moderators — referred to as a human-in-the-loop approach — studios have a better chance of not just flagging toxic content, but also considering more nuanced context, intent and tone. It's a delicate balance, since human content moderators need to understand relevant community guidelines, regional laws and cultural contexts, while also protecting honest discourse.

Consider the popularity of live-streamed esports, for example, and the merits of a hybrid approach become especially clear. Automation can prevent or detect explicit or offensive content and, in certain circumstances, flag it to human moderators for further investigation or intervention. Side by side, technology and game moderators can each bring their best attributes to bear in preserving a positive player experience.
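
As a rough sketch of how that division of labor can look in practice, the snippet below routes each message based on an automated classifier's confidence: high-confidence violations are actioned automatically, the ambiguous middle band is queued for human review, and everything else passes through. The thresholds, class names and labels are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ModerationDecision:
    message: str
    score: float   # confidence from an automated classifier, 0.0 to 1.0
    action: str    # "remove", "human_review" or "allow"


@dataclass
class ReviewQueue:
    items: List[ModerationDecision] = field(default_factory=list)

    def enqueue(self, decision: ModerationDecision) -> None:
        self.items.append(decision)


def route(message: str, score: float, queue: ReviewQueue,
          remove_at: float = 0.95, review_at: float = 0.60) -> ModerationDecision:
    """Route a message based on an automated classifier's confidence.

    High-confidence violations are removed automatically; ambiguous cases
    go to human moderators, who can weigh context, intent and tone.
    Thresholds are illustrative and would be tuned per title and region.
    """
    if score >= remove_at:
        action = "remove"
    elif score >= review_at:
        action = "human_review"
    else:
        action = "allow"
    decision = ModerationDecision(message, score, action)
    if action == "human_review":
        queue.enqueue(decision)
    return decision


queue = ReviewQueue()
print(route("borderline trash talk", 0.72, queue).action)  # human_review
print(len(queue.items))                                     # 1
```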

Setting gamers up for success

What's often missing from the discussion around content moderation is that players, not just studios, have a role in maintaining a safe community. As online gaming grows, there are more and more brands empowering their players with the means to look out for one another.

A notable way this is achieved is through in-game reporting systems, which enable players to identify and flag content that evaded a game's moderation. When done right, moderators are alerted immediately and can act quickly to protect players. As an added benefit, what players flag becomes data that can be analyzed, helping those in charge of content moderation operations further hone their strategies in line with emerging trends.
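
To illustrate how flagged reports can become usable data, here is a hypothetical sketch of a report record and a simple aggregation; the field names and categories are assumptions, and a real system would pair this with proper storage, deduplication and privacy controls.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class PlayerReport:
    # Hypothetical fields an in-game reporting form might capture
    reporter_id: str
    reported_id: str
    category: str    # e.g. "harassment", "cheating", "spam"
    context: str     # chat excerpt or match identifier
    created_at: datetime


def top_report_categories(reports, n=3):
    """Aggregate reports by category so trends can inform moderation strategy."""
    return Counter(r.category for r in reports).most_common(n)


reports = [
    PlayerReport("p1", "p9", "harassment", "match_42", datetime.now(timezone.utc)),
    PlayerReport("p2", "p9", "harassment", "match_42", datetime.now(timezone.utc)),
    PlayerReport("p3", "p7", "spam", "lobby_chat", datetime.now(timezone.utc)),
]
print(top_report_categories(reports))  # [('harassment', 2), ('spam', 1)]
```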

These tools offer an opportunity for gaming brands to reward good behavior, as well. However, incentivizing players to report harassment can be a complex issue. While rewards can encourage players to report abusive behavior, they can also lead to false reports or gaming the system to obtain those rewards. This suggests that, much like automation, providing tools for players to self-moderate is an important piece of the puzzle, but must work in harmony with complementary strategies.

Partners in a better player (and moderator) experience

In November 2022, Ubisoft and Riot Games announced they were joining forces for a new research project aimed at making online video game spaces safer. The project, Zero Harm in Comms, creates a shared database of anonymized data, which is used to train the game publishers' systems for detecting and mitigating disruptive behavior. What makes the collaborative project a key asset to the industry is the sheer volume of data the brands have access to and how that data can be used to develop a robust framework for the industry.

The Fair Play Alliance is tackling a similar goal, creating a forum for the games industry — from major publishers to small upstarts — to share best practices surrounding healthy gaming communities.

Partnerships like these are helping to create a safer space for gamers. So too, it must be said, are partnerships between games companies and outsourcing providers capable of maintaining trust and safety within the player experience.

Games companies are tasked with navigating an uncertain industry that nonetheless demands excellence. There's a lot to account for, and while many may seek to create thriving communities, few enter with knowledge of how to keep them safe. If you are looking to make serious content moderation upgrades that will make a difference in your overall experience, reach out to our team of industry experts.

