
A look at content moderation regulations around the globe

Posted December 15, 2022 - Updated January 12, 2024

Over the years, social media has become a fixture in our everyday lives. From photos and videos to product reviews, the volume of user-generated content (UGC) continues to grow rapidly. So much so that research from Social Media Today has found that UGC on social platforms accounts for 39% of the media hours Americans consume each week, and according to Statista, one million hours of content are streamed by users worldwide in an internet minute.

In addition to posting more frequently, many individuals have also become accustomed to expressing their views on politics, social justice, climate change and other hot-button issues more openly in online forums. This, in turn, has made it more challenging for social platforms and brands to effectively monitor content that could be harmful, toxic or disturbing.

These factors have been part of the impetus for governments around the world to propose and establish legislation to better regulate social media in order to protect the safety and well-being of all users. These regulations vary from country to country and have real and significant implications, including a call for increased content moderation and hefty fines for non-compliance. Here we highlight a few that have been making recent headlines.


European Union: Digital Services Act

The Digital Services Act (DSA) aims to establish accountability standards for online platforms operating in the region. Its obligations include, but are not limited to, the immediate takedown of illegal content, the removal of illegal products and services from online marketplaces, mandatory risk assessments of algorithms, disclosure of the results of AI-driven automated decision-making, and reporting on content moderation practices and risk assessments for increased transparency. Breaches of the law can result in extensive fines or restrictions on operating in the EU. The deadline for all platforms to comply with the DSA fell in early 2024, but the act has applied to larger platforms (those with more than 45 million users) since August 2023, and formal investigations into DSA violations have already begun.

European regulation on terrorist content online

Regulation (EU) 2021/784 establishes rules for hosting service providers on the removal of terrorist content online.

Hosting service providers are not obligated to search their websites for terrorist content; instead, member states designate an authority to issue and impose "removal orders." Providers must then remove the flagged terrorist content as soon as possible, and no later than one hour after receiving an order.

United States: Communications Decency Act reform

In the U.S., there are calls from both sides of the aisle to reform the Communications Decency Act (CDA), which governs content moderation practices, and states are attempting to take matters into their own hands. A landmark law in its day, and the backbone of the internet and social media, the CDA was enacted to regulate internet content on the premise of a free and open internet. Section 230 provides immunity to platform operators for content that is published by individual users: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." Fast forward almost three decades, and that provision is what allows social media companies to define their terms of service and to develop and execute their content moderation practices, while preventing those same companies from being sued over content that is either posted by their users or removed by the company.

It's hard to imagine the internet and social media without the CDA. Companies would be unwilling to take on the risk of allowing users to post content on their platforms if they could be held liable for it. They would also not take on the risk of moderating content if those decisions could be challenged in court on a case-by-case basis.

To put this into perspective and underscore the importance of the CDA's immunity clause: according to cloud software company Domo, in every minute of the day in 2022, users shared 66,000 photos on Instagram, posted 1.7 million pieces of content to Facebook, uploaded 500 hours of video to YouTube and sent 2.43 million Snapchats. The scale of content moderation decisions is equally immense. Within a 30-minute timeframe, Facebook takes down over 615,000 pieces of content, YouTube removes more than 271,000 videos, channels and comments, and TikTok takes down nearly 19,000 videos.

Some believe that the best way to address concerns about current content moderation regulation in the U.S. would be through reform of the CDA's immunity clause, also known as Section 230. However, certain states have recently begun trying to circumvent federal immunity and/or impose greater obligations on hosting service providers at the state level, which, if upheld, would create a fractured legal landscape.

The argument around First Amendment rights

The Supreme Court of the United States (SCOTUS) will hear five cases during its current term regarding content moderation and whether First Amendment protections should extend to social media posts. As Lynn Greenky, professor emeritus of communication and rhetorical studies at Syracuse University, explains in an article published by The Conversation, “Courts have long held that public spaces, like parks and sidewalks, are public forums, which must remain open to free and robust conversation and debate, subject only to neutral rules unrelated to the content of the speech expressed.”

The cases before the Supreme Court raise the question of whether digital spaces should be considered the equivalent of public forums and, in turn, protected by First Amendment rights. They are poised to have SCOTUS decide whether public officials blocking users constitutes state action, whether, and to what extent, the Executive Branch may take steps to curb misinformation on platforms, and whether laws regulating a platform's content moderation decisions violate the First Amendment. The outcome of these cases will significantly shape the landscape of free speech in the digital age.

How much is too much regulation?

It's clear that governments globally are grappling with how best to regulate digital content, and some experts argue that a blanket approach isn't the right strategy or solution. "If we have G-rated movies, PG-13 movies, etc., should you also be able to have a variety of community standards?" asks Robert Zafft, business ethics, compliance and governance expert and author of The Right Way to Win: Making Business Ethics Work in the Real World.

As brands have different comfort levels and policies when it comes to consumer content, Zafft believes a more customized approach to banning users and terminating UGC channels is worth considering. He presents a scenario in which extreme content that's "beyond the pale" would always be banned, but individual online communities could choose, for example, whether or not to allow profanity. Zafft notes that this type of approach may require additional reputation management on the part of individual brands.

Another plausible approach would be to establish an independent third party to determine baseline guidelines for the removal of content, and require social media platforms to provide transparency into how they determine which "harmful but legal" posts to remove. "Choosing to manage content moderation alone is not the optimal approach," says Zafft. "What brands should consider doing is working alongside an experienced and reputable partner to oversee this important work."

There's no question that the need for content moderation is real. "Very few people have made a successful business by offending their customers," Zafft says. "Rather than sitting idly by while brands adopt poor content moderation strategies that negatively impact the customer experience, it's likely that the legislators will create an outer fence ring to guide companies and help them follow best practices."

On the horizon

While different countries are taking different approaches to regulating content moderation, it's broadly recognized that human content moderators and AI need to work hand in hand to moderate content at scale and comply with the various regulations that apply around the world.

While undoubtedly challenging, it will be crucial for governments to continue to forge ahead on legislation that balances the rights to freedom of expression and information with the safety of society at large and of the companies that facilitate the modern internet as we know it.

As legislation continues to evolve, it is prudent for platform operators to work with an experienced global content moderation services provider that is familiar with this broad landscape. Staying current and knowledgeable on laws in their regions and beyond can help platform operators ensure they're abiding by existing guidelines to protect both their customers and their brand, while equipping them with the foresight to pivot quickly to scale in this ever-changing industry. If you're looking for help navigating the world of content moderation, get in touch with our team of experts.

This article is part of a five-part series on content moderation. Check out our other articles on the evolving nature of digital content, the increasing sophistication of AI, and wellness strategies for content moderators.

