
The evolving nature of digital content

Posted October 12, 2022 - Updated January 12, 2024
[Illustration: a megaphone emitting many types of media, symbolizing the evolving nature of user-generated content]

For companies across all industries, the ever-growing world of digital content poses a distinct problem: How can businesses preserve the integrity and reputations of their brands in the context of user-generated content (UGC)?

A study of customer shopping habits conducted by online technology company Stackla found that for 80% of customers, user-generated content “highly impacts their purchasing decisions,” while 88% say authenticity — the kind that comes from UGC — is important to them when they’re deciding which brands to support.

User-generated content benefits brands in that it can provide objective, unbiased information to potential buyers online. At the same time, not all UGC is positive: additional research shows that almost half of consumers will “lose all trust in a brand” if they’re exposed just once to toxic or fake online content, and roughly 40% will disengage from a brand’s community after such an exposure. Threatening posts in a community forum can reflect badly on a brand, as can digital ads placed next to graphic images or extremist content.


Advertisers have long faced these kinds of issues when marketing online. Eric Goldman, associate dean for research, professor of law and co-director of the High Tech Law Institute at Santa Clara University School of Law, notes that “the onus is on advertisers to supervise their partnerships” and “know the partners they’re working with.” For brands engaging with consumers in online forums and on social media, it’s just as important to know the environment and monitor content.

To protect themselves against negative UGC, businesses must first learn to navigate the evolving digital space. Understanding how technology, language and behavior all play a role in shaping digital content is a good place to start.

Technology

In large part, the rise of digital and user-generated content has been driven by increased access to the internet. According to Statista, more than 63% of the global population is now online, amounting to approximately five billion individuals, with 59% on social media. Mobile phones, faster internet access and the desire to spend more time online connecting with friends and family, shopping, researching and gaming have coalesced to create more UGC for brands to monitor.

Generative AI (GenAI) — a category of artificial intelligence that focuses on creating new and original content such as text, images, audio, video, code or synthetic data — is accelerating the pace of UGC even more than before. The widespread accessibility of GenAI tools has added a layer of complexity to content moderation, said David Rickard, partner, business process services at Everest Group, in a webinar titled The Evolved Trust and Safety Industry and What to Expect Next. “The fact that users are getting access to more advanced technologies can increase the quality of and the speed that content is being put out there in the market,” he explained.

GenAI empowers users and businesses with incredible creative possibilities, but the core issue lies not in AI-generated content itself, but in its potential misuse. The creation of fake and misleading content by malicious actors is a challenge that predates the widespread adoption of GenAI. However, GenAI’s ability to rapidly produce high-quality text, images, videos and even music at scale makes deceptive information easier to create and can amplify the potential for its rapid dissemination.

For instance, AI-generated text can be exploited to craft convincing news articles in just seconds, contributing to the spread of misinformation. It can also be harnessed to create sophisticated spam and phishing messages, deceiving users and compromising online security. Furthermore, the emergence of deepfake videos can make it appear as though individuals are saying or doing things they never actually did.

As GenAI adoption grows and users harness the technology with greater proficiency, brands have a paramount responsibility to fortify their content moderation strategies and ensure the integrity and safety of online spaces.

Language

As UGC continues to evolve, so does the way we communicate online. Take the growing use of algospeak: an ever-evolving collection of codewords, deliberate typos, emojis and substitute words that sound like, or mean roughly the same as, the term they stand in for. Born of a need to outwit content moderation powered by artificial intelligence (AI), algospeak has become part of everyday online vernacular. Users keep getting more creative in these efforts, posing new challenges for brands trying to detect such substitutions quickly and accurately.

“As discussions of major events are filtered through algorithmic content delivery systems, more users are bending their language,” wrote The Washington Post in a piece about how algospeak is reinventing online language in real time. “But as algospeak becomes more popular and replacement words morph into common slang, users are finding that they’re having to get ever more creative to evade the filters.” In other words, consumers are savvy enough to change their approach, which leaves brands constantly trying to keep up.
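To make that cat-and-mouse dynamic concrete, here is a minimal Python sketch of how a moderation pipeline might normalize algospeak before matching posts against a policy list. The substitution map, character swaps and banned terms below are illustrative placeholders, not a real ruleset; production systems pair far larger, continuously updated dictionaries with machine-learned classifiers.

```python
import re

# Illustrative map of algospeak codewords to the terms they stand in for.
# Real deployments maintain much larger, constantly refreshed dictionaries.
ALGOSPEAK_MAP = {
    "unalive": "kill",
    "seggs": "sex",
}

# Character swaps commonly used to dodge exact keyword matches ("l33t" spellings).
CHAR_SWAPS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "$": "s", "@": "a"})

BANNED_TERMS = {"kill", "sex"}  # placeholder policy list, not a real ruleset

def normalize(text: str) -> str:
    """Lowercase, undo character swaps, collapse repeated letters, expand codewords."""
    text = text.lower().translate(CHAR_SWAPS)
    text = re.sub(r"(.)\1{2,}", r"\1", text)  # "sooooo" -> "so"
    for coded, plain in ALGOSPEAK_MAP.items():
        text = text.replace(coded, plain)
    return text

def flag_for_review(post: str) -> bool:
    """True if any banned term survives normalization; such posts go to human review."""
    tokens = re.findall(r"[a-z]+", normalize(post))
    return any(token in BANNED_TERMS for token in tokens)

print(flag_for_review("they tried to un4live him"))  # True: "un4live" -> "unalive" -> "kill"
```

As the article notes, this kind of static matching decays quickly: as soon as a codeword enters the dictionary, users coin a new one, which is why normalization is only a first pass ahead of learned models and human review.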

Even the growing number of languages in use online can keep brands on their toes. As of early 2023, English, Russian and Spanish are the most popular languages for web content, followed by French, German and Japanese. The rise of hybrid languages like ‘Hinglish,’ a blend of Hindi and English, in everyday conversation adds to brands’ ongoing content moderation challenges. From a customer experience perspective, it’s more important than ever for companies to be able to review content in a variety of languages, and to make customer care available to everyone regardless of location.
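As a rough illustration, multilingual triage can start as simply as detecting each post’s language and filing it in a queue staffed by moderators who read that language. The Python sketch below assumes the third-party langdetect package; the queue structure is a stand-in for whatever review tooling a brand actually uses.

```python
from collections import defaultdict

from langdetect import detect  # third-party package: pip install langdetect

# One review queue per language, so posts reach moderators who read that language.
review_queues: dict[str, list[str]] = defaultdict(list)

def enqueue_for_review(post: str) -> str:
    """Detect a post's language and file it in the matching review queue."""
    try:
        lang = detect(post)  # returns an ISO 639-1 code such as "en", "es" or "hi"
    except Exception:
        # Very short or heavily mixed posts (e.g. Hinglish) often defeat detection.
        lang = "und"  # "undetermined" bucket for a human to triage
    review_queues[lang].append(post)
    return lang

print(enqueue_for_review("Great product, arrived quickly and works as described."))  # likely "en"
print(enqueue_for_review("¡Excelente servicio al cliente, muy recomendado!"))        # likely "es"
```

Note the fallback branch: hybrid languages are exactly the cases where automatic detection breaks down, so routing them to a human rather than guessing is the safer design.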

Behavior

DataReportal’s Digital 2023: October Global Statshot, published in partnership with We Are Social and Meltwater, shows that internet users aged 16 to 64 spent close to seven hours a day online, more than two of them on social media. Digital content has become a major part of consumers’ lives, and as a result, brands have had to adapt and work ever harder to keep their platforms free of inappropriate content. From CSAM (child sexual abuse material) to violence, personal threats and doxxing, it’s up to brands to provide a safe and positive online environment and a consistent customer experience.

AI designed to identify inappropriate behavior, along with user reporting tools, can help. Through diverse, accurate and constantly refreshed datasets, AI can help identify and remove banned content and activity — but because users are so quick to adapt, brands can’t rely on keyword filtering alone. Enlisting the help of human content moderators and incorporating a human-in-the-loop approach enables brands to review materials against set criteria and shut down bad behavior. Humans are better equipped to understand the nuances and context of posts, and flag banned content for review and removal. While advanced AI can learn to read contextual cues and even accommodate multiple languages, human oversight ensures brands are covering all their bases.
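One common way to structure that human-in-the-loop division of labor is confidence-based routing: the model acts on its own only when it is very sure, and everything in the ambiguous middle goes to a human queue. The Python sketch below illustrates the idea; the thresholds and names are invented for the example, not taken from any particular platform.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    REMOVE = "remove"
    HUMAN_REVIEW = "human_review"

@dataclass
class ModerationResult:
    post_id: str
    score: float      # model-estimated probability the post violates policy
    decision: Decision

# Illustrative thresholds; production values are tuned per policy area and
# revisited as the model and user behavior drift.
REMOVE_THRESHOLD = 0.95
APPROVE_THRESHOLD = 0.20

def route(post_id: str, score: float) -> ModerationResult:
    """Auto-act only when the classifier is confident; otherwise queue for a human."""
    if score >= REMOVE_THRESHOLD:
        decision = Decision.REMOVE
    elif score <= APPROVE_THRESHOLD:
        decision = Decision.APPROVE
    else:
        decision = Decision.HUMAN_REVIEW  # ambiguous middle band goes to moderators
    return ModerationResult(post_id, score, decision)

print(route("post-123", 0.57).decision)  # Decision.HUMAN_REVIEW
```

Widening or narrowing the middle band is the lever here: a wider band sends more nuanced, context-dependent posts to human moderators, at the cost of a larger review workload.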

That said, brands must also consider how all of that content can impact their frontline customer care teams. Content moderators are sometimes exposed to negative, aggressive and harmful content. Despite the intensive training they receive, the ongoing support from managers and the technologies employed to diminish the impact and frequency of the negative content, it can still affect content moderators’ well-being. Brands must support their teams by hiring an expert on workplace wellness to provide programs and resources designed to protect, prevent, educate and empower.

“There are a lot of terrible things online, but there are great things from consumers, too,” Goldman says of user-generated content. Content moderation is crucial to helping companies make the most of that influential content while also preserving their brand reputation. The more you understand about digital content and how it continues to evolve, the better equipped you’ll be to harness it and ensure a more positive customer experience.

This article is part of a four-part series on content moderation. Check out our other articles on the increasing sophistication of AI, wellness strategies for content moderators and content moderation regulations.

