The evolving nature of digital content
For companies across all industries, the ever-growing world of digital content poses a distinct problem: How can businesses preserve the integrity and reputations of their brands in the context of user-generated content (UGC)?
A study of customer shopping habits conducted by online technology company Stackla found that for 80% of customers, user-generated content “highly impacts their purchasing decisions,” while 88% say authenticity — the kind that comes from UGC — is important to them when they’re deciding which brands to support.
User-generated content benefits brands in that it can provide objective, unbiased information to potential buyers online. At the same time, not all UGC is positive, and additional research shows that almost half of consumers will “lose all trust in a brand” if they’re exposed just once to toxic or fake online content. Some 40% of consumers will disengage from a brand’s community after such an exposure as well. Threatening posts in a community forum can reflect badly on a brand, as can placing digital ads next to graphic images or extremist content.
Advertisers have long faced these kinds of issues when marketing online. Eric Goldman, associate dean for research, professor of law and co-director of the High Tech Law Institute at Santa Clara University School of Law, notes that “the onus is on advertisers to supervise their partnerships” and “know the partners they’re working with.” For brands engaging with consumers in online forums and on social media, it’s just as important to know the environment and monitor content.
To protect themselves against negative UGC, businesses must first learn to navigate the evolving digital space. Understanding how technology, language and behavior all play a role in shaping digital content is a good place to start.
In large part, the rise of digital and user-generated content has been driven by increased access to the internet. According to Statista, more than 63% of the global population is now online, amounting to approximately five billion individuals, with 59% on social media. Mobile phones, faster internet access and the desire to spend more time online connecting with friends and family, shopping, researching and gaming have coalesced to create more UGC for brands to monitor.
Virtual worlds such as the metaverse must now be a major consideration as well. Goldman says some of the questions surrounding how brands should engage with consumers in these spaces were first raised when online virtual world Second Life grew in popularity in the late 2000s. Still, as the reach and complexity of virtual and augmented reality continue to grow, these parallel digital worlds present a whole new universe for brands to monitor, as well as an opportunity to build greater connection and engagement with consumers.
Goldman adds that it would be “overkill” for brands to worry about every ordinary, organic interaction that takes place in the metaverse, noting, “You can’t supervise or control those conversations any better than you can in the offline world.” At the same time, brands must ensure they are present in that environment to watch for red flags. That includes monitoring metaverse activity to make sure any interactions there are happening in a “brand-positive” way and removing inappropriate content as needed.
As UGC continues to evolve, so does how we communicate online. For instance, we have seen the growing use of algospeak: an ever-evolving collection of codewords, deliberate typos, emojis and near-homophones that stand in for the intended terms. Born of a need to outwit content moderation powered by artificial intelligence (AI), algospeak has become part of everyday online vernacular. Many users have become increasingly creative in their evasions, posing new challenges for brands trying to detect these instances quickly and accurately.
“As discussions of major events are filtered through algorithmic content delivery systems, more users are bending their language,” wrote The Washington Post in a piece about how algospeak is reinventing online language in real time. “But as algospeak becomes more popular and replacement words morph into common slang, users are finding that they’re having to get ever more creative to evade the filters.” In other words, consumers are savvy enough to change their approach — which leaves brands constantly trying to keep up.
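The cat-and-mouse dynamic above can be seen in even the simplest filter. The sketch below, with an illustrative substitution map and toy banned list (not any platform's real rules), shows why a static keyword filter misses algospeak spellings unless text is normalized first — and why, as users keep inventing new substitutions, normalization alone never fully closes the gap.

```python
# Illustrative sketch: why static keyword filters struggle with algospeak.
# The substitution map and banned terms below are toy assumptions,
# not any real platform's moderation rules.

SUBSTITUTIONS = {
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "@": "a", "$": "s",
}

BANNED_TERMS = {"scam", "spam"}  # toy examples only


def normalize(text: str) -> str:
    """Lowercase and undo common character swaps before matching."""
    text = text.lower()
    return "".join(SUBSTITUTIONS.get(ch, ch) for ch in text)


def contains_banned(text: str) -> bool:
    """Match banned terms against the normalized text."""
    normalized = normalize(text)
    return any(term in normalized for term in BANNED_TERMS)


# A naive substring check misses the evasive spelling "sc4m";
# normalizing first catches it.
print("scam" in "this is a sc4m")        # False: naive match fails
print(contains_banned("this is a sc4m")) # True after normalization
```

Each new substitution users invent has to be added to the map, which is exactly the treadmill the Washington Post quote describes — and part of why the article later argues for human reviewers alongside automated filters.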
Even the growing range of traditional languages in use online can keep brands on their toes. As of early 2020, English, Chinese and Spanish were the most-spoken languages online, followed by Arabic, Indonesian/Malaysian and Portuguese. Moreover, the rise of hybrid languages like ‘Hinglish,’ a blend of Hindi and English, in everyday conversation is also contributing to brands’ ongoing content moderation challenges. From a customer experience perspective, this means it’s more important than ever for companies to be able to review content in a variety of languages, and to make customer care available to everyone regardless of location.
DataReportal’s Digital 2022: April Global Statshot, published in partnership with We Are Social and Hootsuite, shows that internet users aged 16 to 64 spent close to seven hours a day online in 2021, more than two of them on social media. Digital content has become a major part of consumers’ lives, and as a result, brands have had to adapt and work ever harder to keep their platforms free of inappropriate content. From CSAM (child sexual abuse material) to violence, personal threats and doxxing, it’s up to brands to provide a safe and positive online environment and a consistent customer experience.
AI designed to identify inappropriate behavior, along with user reporting tools, can help. Through diverse, accurate and constantly refreshed datasets, AI can help identify and remove banned content and activity — but because users are so quick to adapt, brands can’t rely on keyword filtering alone. Enlisting the help of human content moderators and incorporating a human-in-the-loop approach enables brands to review materials against set criteria and shut down bad behavior. Humans are better equipped to understand the nuances and context of posts, and flag banned content for review and removal. While advanced AI can learn to read contextual cues and even accommodate multiple languages, human oversight ensures brands are covering all their bases.
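One common way to combine the two, sketched below under assumed thresholds (the classifier and cutoff values are hypothetical, not a description of any vendor's system), is confidence-based routing: the model auto-handles only the cases it is very sure about, and everything in the gray zone goes to a human moderator who can weigh nuance and context.

```python
# Hedged sketch of human-in-the-loop routing for content moderation.
# The thresholds are illustrative assumptions; a real system would tune
# them against measured precision/recall and reviewer capacity.

from dataclasses import dataclass


@dataclass
class ModerationDecision:
    action: str   # "remove", "human_review" or "allow"
    score: float  # model's estimated probability the content violates policy


REMOVE_THRESHOLD = 0.95  # assumed: very confident -> auto-remove
REVIEW_THRESHOLD = 0.50  # assumed: uncertain -> route to a human


def route(score: float) -> ModerationDecision:
    """Auto-handle only high-confidence cases; humans get the gray area."""
    if score >= REMOVE_THRESHOLD:
        return ModerationDecision("remove", score)
    if score >= REVIEW_THRESHOLD:
        return ModerationDecision("human_review", score)
    return ModerationDecision("allow", score)


print(route(0.98).action)  # remove
print(route(0.70).action)  # human_review
print(route(0.10).action)  # allow
```

Lowering the review threshold sends more borderline content to humans, improving accuracy at the cost of reviewer workload — the trade-off behind the well-being concerns discussed next.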
That said, brands must also consider how all of that content can impact their frontline customer care teams. Content moderators are sometimes exposed to negative, aggressive and harmful content. Despite the intensive training they receive, the ongoing support from managers and the technologies employed to diminish the impact and frequency of the negative content, it can still affect content moderators’ well-being. Brands must support their teams by hiring an expert on workplace wellness to provide programs and resources designed to protect, prevent, educate and empower.
“There are a lot of terrible things online, but there are great things from consumers, too,” Goldman says of user-generated content. Content moderation is crucial to helping companies make the most of that influential content while also preserving their brand reputation. The more you understand about digital content and how it continues to evolve, the better equipped you’ll be to harness it and ensure a more positive customer experience.
This article is part of a five-part series on content moderation. Check out our other articles on the increasing sophistication of AI, wellness strategies for content moderators, content moderation regulations and what's next for content moderation.