
What's next for content moderation in 2023 and beyond?

Posted January 5, 2023
[Illustration: a robot and human working together, symbolizing the future of content moderation]

As the digital world continues to evolve and expand, so too will the way companies approach the monumental task of reviewing the content users share online in order to create a safer, more welcoming environment for all.

To better understand the content moderation landscape, we've explored topics such as the evolving nature of digital content, the increasing sophistication of AI in content moderation, wellness strategies for content moderators and current regulations and legislation around the world.

To kick off 2023, we checked in with some experts in the field to share their thoughts on what's next for content moderation.

The metaverse is here

Keeping the internet user-friendly is no small feat, especially as consumers begin to spend more and more time in virtual worlds. With 5G subscriptions expected to jump from 12 million to more than four billion between now and 2027, the metaverse, which can't work without the fifth-generation mobile network, represents the next evolution of the internet. In fact, TELUS International recently conducted a survey on metaverse customer experiences, and 72% of respondents indicated they believe that brand interactions in the metaverse will one day replace those in the real world, or that brands will use a hybrid approach combining metaverse and in-person interactions. The survey also revealed that a majority of consumers (65%) believe the metaverse will be considered mainstream within the next five years.

Moreover, the switch from 2D to 3D AI-based content moderation will require new policies, along with a clear definition of banned behavior, and many brands are in the midst of determining how best to shape these new worlds. Kavya Pearlman, founder and CEO of XR Safety Initiative (XRSI), says that moderating the metaverse is "like trying to moderate reality." Between the massive convergence of technologies and more consumers capitalizing on the opportunity to push the boundaries in the virtual world, companies may struggle to keep up with the content that's created there.

Human moderators are still needed

Implementing community policies to prevent users from cyberbullying and harassing others is a must. From cyber bouncers at metaverse events like concerts and esports tournaments, to algorithms that can detect whether an account has been compromised or manipulated by bad actors, brands will need to keep a close watch on this new virtual world. To do so effectively, human moderators will likely continue to play a big role in moderating online content and behaviors, working hand-in-hand with AI.

According to Robert Zafft, business ethics, compliance and governance expert and author of The Right Way to Win: Making Business Ethics Work in the Real World, human behavior has a "why," whereas AI does not. "Cultures provide people with that 'why,'" he explains. "They provide people with information about what to do and what not to do. But, AI is bound by its algorithms and will never understand why it's doing what it's doing." That may mean it occasionally does things that are "contrary to the goals of the system." This is where human oversight, and having a content moderator monitor AI's actions, is vital.

"Content that may be considered culturally insensitive might be difficult for current systems to deal with because it's more subtle," adds Nigel Duffy, AI entrepreneur and global AI leader at Ernst & Young. "In addition to context, the challenge lies in the sheer volume of content you have to filter while balancing how conservative you want to be." In other words, brands need to ensure their content moderation policies are reflective of their corporate values and brand guidelines as well as their audience's needs.

In the future, there may be scenarios in which content moderators operate as non-player character (NPC) bots, or create avatar-based "digital twins," to patrol the virtual world. Their presence in the metaverse would enable them to prevent low-level cyber misdemeanors and flag user violations in real time.

Improving CX in the metaverse

In the virtual world, users can adopt different personas and travel to unfamiliar places, all without leaving their homes. There may be opportunities to improve CX as well. "There are so many different ways for brands to have a presence in the metaverse," says Eric Goldman — associate dean for research, professor of law and co-director of the High Tech Law Institute at Santa Clara University School of Law. Options range from branded virtual facilities to brand reps that roam the virtual world interacting with customers. "This is something that I expect many (companies) are going to do," Goldman says, but he notes that this approach takes work. If brands want to engage with customers in 3D, they need to be prepared for possibilities like false or negative user-generated content (UGC) and they must take the necessary measures to keep it in check.

Companies should consider the privacy implications of these types of interactions as well. As noted by Fintech Magazine, the 3D world has the potential to provide companies with "deeper capabilities to monitor how, when and where users spend their time." But this, in turn, will require "an overhaul of current regulations, such as the EU's General Data Protection Regulation (GDPR)," to ensure that user privacy remains protected.

Protecting human moderators in the virtual world

As digital first responders on the frontlines of the internet, content moderators may come across digital material that is disturbing or offensive. Just as human-supported content moderation remains critical, so too is ensuring these individuals have access to holistic wellness programs and resources that enable them to protect and maintain their health and well-being.

Companies must invest in the mental, physical and emotional health of their content moderation teams. Beyond equipping them with easily accessible resources and programs, brands must also ensure they fully integrate their team members' well-being into daily activities and conversations to proactively monitor for issues while keeping them motivated, supported and engaged.


What's next?

The next few years will present some big challenges in the CX space as companies work to understand how the future of digital technology will shape the way they interact with consumers. There will always be a place for humans in AI-powered content moderation — their ability to identify content violations in the context of culture, current events and brand guidelines is unparalleled — but there's more to consider as well. Companies must also remain hyper-focused on evolving regulations and legislation across the industry globally.

While it's still unclear how policies will ultimately evolve, issues surrounding social media liability, freedom of speech and censorship will continue to be weighed by lawmakers as they determine which approaches best protect society as a whole.

In some ways, the future of content moderation is already here. But as it evolves alongside new digital worlds like the metaverse, keeping the safety and well-being of both users and content moderators top of mind will help organizations better navigate the changes to come.

This article is part of a five-part series on content moderation. Check out our other posts on the evolving nature of digital content, the increasing sophistication of AI, wellness for content moderators, and content moderation legislation and regulations.
