
Taking account: Why brands need to invest in profile moderation

Posted January 31, 2023 - Updated December 27, 2023

In face-to-face interactions, getting a sense of another person is relatively straightforward. Over time, by observing verbal and non-verbal cues, you can start to gauge their authenticity and character.

But making such an assessment online isn't so simple. Whether you're swiping through a dating app or scrolling through your favorite social media feed, it's difficult to discern who you're interacting with and what their motivations are. In pursuit of a greater understanding, you might take a look at someone's online profile.

Though constrained by set fields and word counts, a profile is still a place where people input user-generated content (UGC). What a person enters gives clues about their authenticity, intentions and respect for guidelines. You might find imposters. You might find fraudsters. And you might find those who publicly display hate or other offensive material.

It is for these reasons that brands have a responsibility to take profile moderation seriously. By making an effort to ensure users are verified and adhering to community or platform guidelines, brands can prove that they have the trust and safety of their community members in mind. In a TELUS International survey, 70% of respondents said brands need to protect users from toxic content, and 78% said it's a brand's responsibility to provide positive and welcoming online experiences.

For brands, there's a high cost to missing the mark. More than four in 10 survey respondents said it would take only one incident of toxic or fake UGC for them to walk away from a brand's community. Worse, nearly half (45%) said a single incident would make them lose all trust in a brand.

These findings highlight the importance of online identity protection and social profile moderation. Here's how it works, and how brands can do it more effectively.

Key elements of profile moderation

While users are becoming more adept at detecting suspicious behavior, they should not be expected to carry that burden alone. Brands have an important responsibility in managing digital content on their platforms, including questionable profiles — both for the safety of their users, and to protect their overall reputation.

In October 2022, LinkedIn added an "about this profile" feature, which enables users to see when a profile was created and last updated, and indicates whether the user has verified a phone number or work email. The feature is part of a wider online identity protection effort to help community members make more informed decisions about the people with whom they are connecting. It taps into a central tenet of user profile safety: authentication.

For brands, authentication is the front line of community protection. It comes in several forms, such as verification badges or requiring users to verify their email or phone number before joining a community. It's especially important on platforms with age restrictions, like dating apps, where interactions may lead to in-person meetings.
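To make the idea concrete, here is a minimal sketch of what a pre-signup email-verification step might look like. The function names, token lifetime and confirmation URL are illustrative assumptions rather than any specific platform's implementation.

```python
import secrets
import time

# Minimal sketch of an email-verification step a platform might run before
# activating a new profile. The helper names, the 15-minute expiry and the
# confirmation URL are illustrative assumptions, not a vendor's actual API.

TOKEN_TTL_SECONDS = 15 * 60  # tokens expire after 15 minutes
PENDING_TOKENS = {}          # token -> (email, issued_at)


def send_email(address: str, body: str) -> None:
    """Stand-in for whatever transactional-email service the platform uses."""
    print(f"To {address}: {body}")


def start_verification(email: str) -> None:
    """Issue a single-use token and email it to the address on the new profile."""
    token = secrets.token_urlsafe(32)
    PENDING_TOKENS[token] = (email, time.time())
    send_email(email, f"Confirm your account: https://example.com/verify?token={token}")


def confirm_verification(token: str) -> bool:
    """Mark the profile as verified only if the token is known and unexpired."""
    record = PENDING_TOKENS.pop(token, None)
    if record is None:
        return False
    _email, issued_at = record
    return (time.time() - issued_at) <= TOKEN_TTL_SECONDS
```

Phone verification follows the same pattern, with the single-use token delivered as a short SMS code rather than a link.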

But the scope is wider still: Profile moderation can play an important role in the fight against fraud. By helping to weed out catfishers — those who attempt to deceive for personal gain — and other fraudsters, authentication techniques can thwart those looking to hook people with fake messages or phony profiles.

And while profile moderation is critical to combat subtle, deceptive threats, it is also necessary for handling more overt violations. All UGC should comply with community guidelines and government regulations, and profiles are no exception. That means brands have a responsibility to ensure members aren't showcasing violative behavior or content.

These checkpoints are critical; according to the TELUS International survey, 54% of respondents said toxic UGC is on the rise, and more than a third (36%) of respondents said they encounter inaccurate, fake or toxic UGC multiple times a day. Having a robust profile moderation plan in place enables brands to quickly flag and remove profile descriptions or tags that don't align with community guidelines.

Key considerations of profile moderation

Many brands are leveraging technology powered by artificial intelligence (AI) to authenticate community members and protect their users.

Bumble, an online dating app, uses AI to detect and blur unsolicited nude photos, protecting its users from "cyberflashing." LinkedIn uses a deep learning-based model to proactively check profile photo uploads and determine if images are AI-generated, which helps block fake accounts. In June 2022, the professional networking site said that its AI tool had led to a 19% increase in the removal of fake accounts before a user reported them as fake.

Despite the capabilities of AI, it is important to adopt a human-in-the-loop approach for the safety of your community members. While AI is becoming increasingly capable of screening mass volumes of UGC with relative success, human beings can pick up on what the technology might miss. After all, language is nuanced, which becomes especially clear in online communities that span regions and cultures. A word or gesture that is offensive in one culture may be innocuous in another. Understanding nuance and varying contexts, as well as mitigating bias in AI-powered content filters, requires a diverse team of content moderators and data annotators.
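In practice, a human-in-the-loop setup often comes down to a routing decision: act automatically only where the model is confident, and queue everything else for a person. The sketch below illustrates that pattern; the thresholds, field names and scoring model are assumptions for illustration, not the workflow of any platform mentioned above.

```python
from dataclasses import dataclass

# Sketch of one common human-in-the-loop routing pattern: act automatically only
# when the model is confident, and send everything in between to a human
# moderator. The thresholds and the notion of a "violation score" are
# illustrative assumptions.

APPROVE_BELOW = 0.10   # model is confident the profile content is benign
REMOVE_ABOVE = 0.95    # model is confident it violates guidelines


@dataclass
class ProfileReview:
    profile_id: str
    violation_score: float  # e.g. probability the profile photo or bio is violative


def route(review: ProfileReview) -> str:
    """Return the action for a profile based on the model's violation score."""
    if review.violation_score >= REMOVE_ABOVE:
        return "auto_remove"
    if review.violation_score <= APPROVE_BELOW:
        return "auto_approve"
    return "human_review"  # nuanced or borderline cases go to a moderator


if __name__ == "__main__":
    for pid, score in [("a1", 0.02), ("b2", 0.57), ("c3", 0.99)]:
        print(pid, route(ProfileReview(pid, score)))
```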

The significance of human moderators is underlined by the ways users attempt to circumvent automated forms of moderation. Algospeak, a portmanteau of "algorithm" and "speak," refers to code words, emojis or deliberate typos that social media users adopt in order to get past content moderation filters. A survey by TELUS International found 51% of respondents had seen algospeak on social media, moderated forums, gaming communities and brand websites. Because algospeak is designed to evade AI detection, it falls to human moderators to decipher the content and act accordingly. It is important to note, however, that algospeak isn't inherently negative; it can allow people in marginalized communities to speak about topics that might be deemed controversial.

Since community guidelines can vary from brand to brand, understanding algospeak is a key consideration for profile moderation. Brands must first understand how algospeak fits within their content guidelines and establish their tolerance levels. From there, a human-in-the-loop approach is essential to evaluate the instances that evade algorithmic detection and consider the context.
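As a simple illustration of where automation stops and human judgment begins, the sketch below normalizes a few well-known character substitutions before running an ordinary keyword check, then flags matches for human review instead of removing them outright. The substitution map and flagged terms are hypothetical placeholders; a real list would reflect a brand's own guidelines and tolerance levels.

```python
import re

# Sketch of a pre-filter that undoes a few common algospeak-style character
# substitutions before an ordinary keyword check, then flags matches for human
# review rather than auto-removing them. The substitution map and flagged terms
# are illustrative placeholders only.

SUBSTITUTIONS = {
    "$": "s",
    "3": "e",
    "0": "o",
    "1": "i",
    "@": "a",
}

FLAGGED_TERMS = {"scam", "escort"}  # hypothetical terms a brand chose to review


def normalize(text: str) -> str:
    """Lower-case the text and undo simple character substitutions."""
    text = text.lower()
    return "".join(SUBSTITUTIONS.get(ch, ch) for ch in text)


def needs_human_review(profile_bio: str) -> bool:
    """True if the normalized bio contains any term on the review list."""
    words = re.findall(r"[a-z]+", normalize(profile_bio))
    return any(word in FLAGGED_TERMS for word in words)


if __name__ == "__main__":
    print(needs_human_review("Crypto sc@m recovery expert"))    # True
    print(needs_human_review("Coffee lover and weekend hiker"))  # False
```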

Protecting the protectors

Brands face a challenging environment amid the rise of toxic UGC, misinformation, disinformation and hate speech. They have to intervene, but there can be a cost to that intervention.

A comprehensive wellness program that protects content moderators' mental and physical health is an essential component of any trust and safety operation. This can include setting up resilience plans, ensuring content moderators have access to mental health resources and working with experts to help team members recognize signs of distress.

In order to create communities that people want to be a part of, brands must make an effort to maintain a safe and welcoming environment. There is a responsibility to moderate UGC, and by extension, the profiles of community members. The task at hand is only likely to become more complex and voluminous over time, which underlines the value that can be gained through an effective partnership. If you're looking to adapt to developments in content moderation and scale your trust and safety operation to maintain customer loyalty, contact us today.

