
A look at content moderation regulations around the globe

Posted December 15, 2022

Over the years, social media has become a fixture in everyday life. From sharing photos and videos to leaving product reviews, the volume of user-generated content (UGC) continues to grow rapidly: recent research from Social Media Today found that UGC on social platforms accounts for 39% of the media hours Americans consume each week, and according to Statista, users worldwide stream one million hours of content in a single internet minute.

In addition to posting more frequently, many individuals have become accustomed to expressing their views on politics, social justice, climate change and other hot-button issues more openly in online forums. This, in turn, has made it more challenging for social platforms and brands to effectively monitor content that may be harmful, toxic or disturbing.

These factors have been part of the impetus for governments around the world to propose and establish legislation to better regulate social media in order to protect the safety and well-being of all users. These regulations vary from country to country and have real and significant implications, including a call for increased content moderation and hefty fines for non-compliance.

Europe: shaping tech policy globally

The European Union (EU) has been a leader in drafting internet and social media laws, including regulations around content moderation: removing illegal and harmful content, creating an appeals and review process for content moderation decisions, imposing transparency obligations on platforms' terms of service and the moderation decisions taken under them, and regulating the algorithms used to scale content moderation practices.

Digital Services Act

In July 2022, the Council of the European Union approved the Digital Services Act (DSA) to establish accountability standards in the region for online platforms, with more extensive obligations on "very large" platform operators (i.e., those with more than 45 million monthly active users). These regulations include, but are not limited to: the immediate takedown of illegal content; the removal of illegal products and services from online marketplaces; mandatory risk assessments of algorithms; disclosure of the results of AI-driven automated decision-making; and requirements that platform operators report data on their content moderation practices and risk assessments for increased transparency. Notably, the law applies to all providers operating in the EU regardless of their place of establishment, meaning any individual residing in the EU is fully protected under the DSA's scope. The Act is being implemented in stages, and companies will have until January 1, 2024 to comply.

Network Enforcement Act

The DSA effectively replaces Germany's Network Enforcement Act (NetzDG), which came into effect in 2018 and obligated social media platforms with more than two million users in Germany to promptly remove illegal content (e.g. defamation, hate speech, incitement to commit crimes and slander). Prior to the DSA, NetzDG was described in a report by the Transatlantic Working Group as "arguably the most ambitious attempt by a Western state to hold social media platforms responsible for combating online speech deemed illegal under the country's domestic laws." Under the law, social networks had one day to delete "manifestly unlawful" content and seven days to investigate and delete content whose illegality was uncertain on its face. Non-compliance could result in fines of up to €50 million, or well over $51 million.

European Union: the obligation to remove online terrorist content within one hour

For those who felt one day to remove unlawful content was challenging, consider Regulation (EU) 2021/784, which provides rules for the removal of online terrorist content by hosting service providers and entered into force in the European Union on June 7, 2022.

Hosting service providers are not obligated to search their websites for terrorist content; instead, member states designate an authority to issue "removal orders." Providers must then remove terrorist content as soon as possible, and no later than one hour after receipt of an order.

United States: where legislation differs from state to state

In the U.S., regulations are currently being decided and implemented at the state level, which has created a fractured legal landscape and prompted legal challenges. According to the Computer & Communications Industry Association (CCIA), more than 250 bills have been introduced across all 50 states since 2020. Three laws that have passed bear highlighting:

1. A.B. 587

In California, A.B. 587 requires social media companies to publicly post their terms of service on their platforms and report data on their enforcement of these policies twice per year to the state's attorney general's office.

2. HB 20

Meanwhile in Texas, HB 20 bars social media platforms with more than 50 million users from censoring content based on viewpoint or geographic location. The platforms can take down content that includes unlawful expression or specific discriminatory threats of violence, but the lines differentiating the two can be blurry. As a result, the law was challenged; the Fifth Circuit Court of Appeals, however, allowed it to go into effect.

3. S.B. 7072

Similarly, in Florida, S.B. 7072 requires companies to use the same criteria across their platforms when deciding whether to take down a post or remove an account, and bars them from removing the account of any "journalistic enterprise" or political candidate within the state. The law allows social media companies to be sued by the state's attorney general for violating these hosting rules, as well as by private citizens who feel they were unjustly censored. Yet, unlike the Fifth Circuit, which upheld the Texas social media law, the Eleventh Circuit struck down the Florida law as unconstitutional, creating what is commonly referred to as a "circuit split": when two or more circuits of the U.S. Courts of Appeals reach different decisions on the same legal issue.

NetChoice, a coalition of digital platform providers (and the unsuccessful party in the constitutional challenge to the Texas social media law), petitioned the Supreme Court of the United States (SCOTUS) to review the case. Likewise, the State of Florida has asked SCOTUS to review the Eleventh Circuit's decision. SCOTUS has not indicated whether it will grant the requests for review; however, the existence of the circuit split makes it more likely that it will.

The Communications Decency Act: U.S. Congress's first attempt to federally regulate material on the internet

To prevent legislation inconsistencies, some believe that the best way to address concerns about current content moderation regulations in the U.S. would be to do so at the federal level, specifically through reform to Section 230 of the Communications Decency Act (CDA), which was established in 1996.

A landmark law in its day, and the backbone of the internet and social media, the CDA was enacted to regulate internet content on the premise of freedom of the internet. Section 230 provides immunity to platform operators for content published by individual users: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." Fast forward 25 years, and that provision is what allows social media companies to define their terms of service and to develop and execute their content moderation practices, while shielding those same companies from being sued over content that is either posted by their users or removed by the company.

It's hard to imagine the internet and social media without the CDA. Companies would be unwilling to take on the risk of allowing users to post content on their platforms if they could be held liable for it, and they would be equally unwilling to moderate content if each of those decisions could be challenged in court case by case. Under Section 230, operators of internet services are not publishers, and thus are not legally liable for the words of third parties who use their services.

To put this into perspective and underscore the importance of the CDA's immunity clause: according to Domo, in every minute of 2021, users shared 65,000 photos on Instagram and 240,000 photos on Facebook, sent 668,000 messages on Discord and two million Snapchats. The scale of content moderation decisions is equally immense: within a 30-minute timeframe, Facebook takes down over 615,000 pieces of content, YouTube removes more than 271,000 videos, channels and comments, and TikTok takes down nearly 19,000 videos.
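The scale of those per-30-minute figures is easier to grasp when extrapolated to a full day. A quick back-of-the-envelope calculation, using only the numbers cited above and assuming (purely for illustration) that the takedown rate holds steady around the clock, looks like this:

```python
# Per-30-minute takedown figures cited above.
takedowns_per_30_min = {
    "Facebook": 615_000,  # pieces of content
    "YouTube": 271_000,   # videos, channels and comments
    "TikTok": 19_000,     # videos
}

# A day contains 48 half-hour windows.
WINDOWS_PER_DAY = 24 * 60 // 30

# Extrapolate each platform's rate to a rough daily volume.
daily = {platform: count * WINDOWS_PER_DAY
         for platform, count in takedowns_per_30_min.items()}

for platform, count in daily.items():
    print(f"{platform}: ~{count:,} removals per day")
```

At that assumed constant rate, Facebook alone would make nearly 30 million removal decisions per day, a volume no court system could ever adjudicate case by case.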

How much is too much regulation?

It's clear that governments globally are grappling with how best to regulate digital content and it has some experts stating that a blanket approach wouldn't be the right strategy or solution. "If we have G-rated movies, PG-13 movies, etc., should you also be able to have a variety of community standards?" asks Robert Zafft, business ethics, compliance and governance expert and author of The Right Way to Win: Making Business Ethics Work in the Real World.

As brands have different comfort levels and policies when it comes to consumer content, Zafft believes a more customized approach to banning users and terminating UGC channels is worth considering. He presents a scenario in which extreme content that's "beyond the pale" would always be banned, but individual online communities could choose, for example, whether or not to allow profanity. Zafft notes that this type of approach may require additional reputation management on the part of individual brands.

Another plausible approach would be to establish an independent third party to determine baseline guidelines for the removal of content, and require social media platforms to provide transparency into how they determine which "harmful but legal" posts to remove. "Choosing to manage content moderation alone is not the optimal approach," says Zafft. "What brands should consider doing is working alongside an experienced and reputable partner to oversee this important work."

There's no question that the need for content moderation is real. "Very few people have made a successful business by offending their customers," Zafft says. "Rather than sitting idly by while brands adopt poor content moderation strategies that negatively impact the customer experience, it's likely that the legislators will create an outer fence ring to guide companies and help them follow best practices."

On the horizon

While different countries are taking different approaches to regulating content moderation, it's broadly recognized that human content moderators and AI need to work hand in hand to moderate content at scale and comply with the various applicable regulations around the world.

While undoubtedly challenging, it will be crucial for governments to continue forging ahead on legislation that strikes a proper balance between the rights to freedom of expression and information on the one hand, and the safety of society at large and of the companies that facilitate the modern internet on the other.

As legislation continues to evolve, it is prudent for platform operators to work with an experienced global content moderation services provider that is familiar with this broad landscape. Staying current on laws in their regions and beyond helps platform operators abide by existing guidelines to protect both their customers and their brand, while giving them the foresight to pivot quickly and scale in an ever-changing industry. If you're looking for help navigating the world of content moderation, get in touch with our team of experts.

This article is part of a five-part series on content moderation. Check out our other articles on the evolving nature of digital content, the increasing sophistication of AI, wellness strategies for content moderators and what's next for content moderation.
