Same Same But Different: Content Moderation at Facebook, Twitter, TikTok, and Reddit

Discover the similarities and differences in content moderation policies of Facebook, Twitter, TikTok, and Reddit in this in-depth comparison.
Oscar Beijbom
Oct 2022

*Note: while we did our best to use the latest sources, some of the references in this article are more than a year old at the time of writing.*

Content moderation on social media has had its share of controversy over the past few years.

As the debate over misinformation versus free speech continues, one thing remains clear: content moderation, at least to a certain extent, is essential for large social media platforms with millions (or in some cases, billions) of users.

The moderation policies across these platforms differ somewhat according to their purpose and user base. This article takes a close-up look at four of the biggest social media sites: Facebook, Twitter, TikTok, and Reddit.

While there are obvious similarities between the four on basic policies (not tolerating violent, graphic, or hateful content, for example), there are notable differences in their more specific guidelines, in which policies each platform emphasizes most, and in what role the user base plays in the process of content moderation.

TikTok, for example, is the most popular of these platforms for Gen Z and teens, and its content is all video-based – so its moderation policies are much more detailed than Reddit’s, whose focus is on community and conversation.

TikTok, Facebook, and Twitter all outsource moderation to third-party moderators, while Reddit uses a model that is largely self-moderated by volunteers acting as community leaders – as a result, Reddit does not (generally) struggle to find the time and resources to manage its content the way Facebook and Twitter do.

While TikTok does not (yet) deal with the same level of political and misinformation content issues that Twitter and Facebook do, some worry that TikTok will soon run into the same challenges. In fact, in a recent experiment run by misinformation researchers, TikTok failed to catch 90% of ads featuring false or misleading information about elections.

Because Facebook and Twitter deal with so much political content (including misinformation) and because of the way the content on these platforms influences their user bases, they often have to make post hoc changes to their policies, leaving administrators in a continual state of catch-up.

To more easily compare the four platforms, we evaluated all of them based on the same criteria:

  • Content guidelines: A rundown of what is and isn’t allowed
  • Violation policies: How the platform deals with violation of its content and community rules
  • Community input: How involved the users are in the process of moderating any problematic content (for example, flagging and reporting)

In addition to these criteria, we’ve also included an overview of each platform that puts it into context with the others, covering anything that sets it apart – culture, niche, and user base demographics.

Facebook


With just shy of 3 billion users, Facebook is the largest social media platform – at least, for now.

In 2018, after the Guardian revealed Facebook’s long-secret guidelines, Facebook finally released its own version of its policies to the public. Today, the social media giant employs over 15,000 third-party content moderators in addition to using AI to catch, flag, and in some cases remove problematic content.

Facebook is gaining a reputation for being for “old people”, as its fastest-growing demographic is users age 65 and older – but Millennials continue to be the largest audience.

Content guidelines

In a 2019 case study comparing Facebook to Reddit and YouTube, Facebook was shown to have “by far the largest content moderation operation.”

Facebook has a list of “Community Standards” detailing the values they seek to foster along with the types of content they prohibit or monitor for, broken down into categories including “violence and criminal behavior,” “safety,” “objectionable content,” and “integrity and authenticity.”

Each section comes with a “policy rationale” explaining the guidelines in-depth, and problematic content is divided into two categories:

  • Content that’s not allowed
  • Content that requires additional information or context, content that needs to come with a warning, and/or content that should only be allowed for viewers age 18 or older

According to their official page for moderation policies, Facebook’s standards are based on “feedback from people and the advice of experts in fields like technology, public safety, and human rights.”

Violation policies

Facebook uses a system of prioritization to address content violations in order of severity.

Most content violations on Facebook result in a “first strike,” i.e., a warning with no further restrictions. Additional strikes result in the user being blocked from accessing certain parts of the platform for a length of time that depends on the number of strikes. Users have come to describe this phenomenon as being in “Facebook jail.”

Depending on the severity and nature of the violations, users who receive more than 5 strikes may have their accounts disabled. Facebook also disables the accounts of dangerous people, such as convicted criminals or those misrepresenting their identity. Public figures who incite civil unrest or engage in other harmful activity can also have their accounts suspended.

In addition to sending warning messages to first-strike offenders, Facebook also sends users a message each time their account is suspended or another disciplinary action is taken.
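To make the escalation logic concrete, here is a minimal sketch of how a strike-based enforcement ladder could be implemented. The thresholds, restriction lengths, and function names are illustrative assumptions, not Facebook’s actual values or code.

```python
# Hypothetical strike-based enforcement ladder (thresholds are illustrative,
# not Facebook's actual values).
from dataclasses import dataclass

# Restriction length (in days) keyed by strike count; a first strike is a
# warning only, and repeat strikes lock users out for longer periods.
RESTRICTION_DAYS = {1: 0, 2: 1, 3: 3, 4: 7, 5: 30}
DISABLE_THRESHOLD = 6  # "more than 5 strikes" may disable the account

@dataclass
class Account:
    user_id: str
    strikes: int = 0

def apply_strike(account: Account) -> str:
    """Record a violation and return the enforcement action to take."""
    account.strikes += 1
    if account.strikes >= DISABLE_THRESHOLD:
        return "disable_account"
    days = RESTRICTION_DAYS[account.strikes]
    if days == 0:
        return "warning_only"  # first strike: warn, no restriction
    return f"restrict_features_for_{days}_days"

if __name__ == "__main__":
    acct = Account(user_id="u123")
    for _ in range(6):
        print(apply_strike(acct))
```

Keeping the thresholds in a lookup table makes it easy to tune the ladder as policies change without touching the enforcement logic itself.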

Community input

Facebook allows its users to report content by providing a “Report” link at the top right corner of the post in question. Users are prompted to describe what it is about the post that they find concerning. The platform provides additional resources and information to users seeking to report child abuse, criminal activity, and other serious issues.

Twitter


Although only the 15th most popular social media platform, Twitter continues to be very influential. 10% of Twitter users are responsible for 92% of Tweets, and many users turn to Twitter for their news or to follow their favorite influencers.

In an effort to improve the quality of public Tweets, Twitter recently launched its Twitter Circle feature, which allows users to Tweet to smaller, private groups. Users aged 24 to 35 are Twitter’s largest age group.

Content guidelines

Twitter’s guidelines, known as “the Twitter Rules”, are short and straightforward.

Violence, hateful conduct, abuse, and other distressing content are all categorized under “Safety,” with several shorter sections for “Privacy,” “Authenticity,” and “Third Party Advertising.”

Violation policies

Twitter takes a range of actions against inappropriate content, accounting for both very mild instances and more blatant violations.

In the case of misleading information, Twitter will simply flag the content to inform users. Twitter may also limit the visibility of certain Tweets either by downranking them or removing them from top search results.

In more serious cases, Twitter will require the user to delete the objectionable Tweet before they can post any new Tweets. Twitter sends an email notifying the user of the Tweet(s) in violation and which policies were violated. The user has the chance to appeal if they believe the review board made an error.

Twitter’s most severe enforcement action is to permanently suspend the user’s account and ban them from creating a new one. This has included several high-profile politicians as well as members of terrorist organizations such as ISIL and al-Qaeda, and is reserved for the most serious instances of harassment, abuse, and promotion of violence.

Community input

Similar to Facebook, Twitter users are able to report questionable content by selecting the option “Report Tweet” at the top of the Tweet in question. If the Tweet falls into the “harmful” or “abusive” category, the user will be asked to give more information, possibly along with additional posts from the same account, so that moderators have context to work with.

TikTok


Clocking in with over a billion users, TikTok is the current fastest-growing social media platform. It’s especially popular with Gen Z – nearly half of Gen Z uses TikTok (along with Instagram) rather than Google for search. It’s also by far the youngest of these platforms, having only been around since 2016.

After being called to its first congressional hearing last year, TikTok updated its policies to add extra protection for teens, including LGBTQ youth at risk of bullying and harassment.

Content guidelines

At the top of TikTok’s community guidelines is Minor Safety, which includes the prohibition of any explicit content involving minors as well as grooming and other predatory behaviors. The guidelines also prohibit content that encourages suicide, self-harm, eating disorders, or dangerous challenges and behaviors, along with violent extremism, hateful content, nudity and adult content, and violent or graphic content.

Unsurprisingly, given TikTok’s popularity with young users, its community guidelines are the longest and most detailed of the four platforms.

Violation policies

TikTok uses AI to either automatically remove content that violates its guidelines or defer it to a Safety Team for additional review.

Similar to Facebook and Twitter, TikTok sends a warning to the user after the first violation and suspends the account after multiple violations. Users are informed by the platform when their account has been suspended.

Violation of “zero tolerance” policies (such as child sex abuse content) results in an automatic ban.
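As a rough illustration of this kind of automated triage – not TikTok’s actual system – a classifier score can be used to decide whether a piece of content is removed automatically, queued for human review, or allowed. The labels, thresholds, and function names below are assumptions for illustration only.

```python
# Hypothetical moderation triage: route content based on a classifier's
# confidence. Thresholds and labels are illustrative, not TikTok's real values.
from typing import NamedTuple

class Prediction(NamedTuple):
    label: str         # e.g. "hateful_content", "ok"
    confidence: float  # classifier confidence in [0, 1]

ZERO_TOLERANCE = {"child_safety"}   # remove and ban, no escalation ladder
AUTO_REMOVE_THRESHOLD = 0.95        # confident violations are removed automatically
HUMAN_REVIEW_THRESHOLD = 0.60       # uncertain cases go to the Safety Team

def triage(pred: Prediction) -> str:
    if pred.label in ZERO_TOLERANCE and pred.confidence >= HUMAN_REVIEW_THRESHOLD:
        return "remove_and_ban_account"
    if pred.label == "ok":
        return "allow"
    if pred.confidence >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"
    if pred.confidence >= HUMAN_REVIEW_THRESHOLD:
        return "queue_for_safety_team"
    return "allow"

print(triage(Prediction("hateful_content", 0.97)))  # auto_remove
print(triage(Prediction("hateful_content", 0.70)))  # queue_for_safety_team
```

The key design choice is the pair of thresholds: high-confidence violations are removed without human involvement, while borderline cases are deferred to reviewers.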

Community input

TikTok users can report harmful content by clicking the “Report” option at the top corner of the video posted, similar to the format of Facebook and Twitter. Users can then select the reason for the content being inappropriate.

Reddit


Reddit in many ways is the outlier of these four platforms when it comes to user experience and culture as well as policies. Unlike the others mentioned in this article, Reddit relies primarily on its community of volunteers for moderation.

Because its user base is organized into communities (subreddits) whose members are often deeply involved in them, Reddit has a strongly developed culture that the other three platforms lack – or perhaps more accurately, the platform is made up of (literally) millions of communities, each focused on a niche topic.

Reddit’s user base also has the narrowest demographics: over half of its users are male, and the majority of its users are white and college-educated.

Content guidelines

Because Reddit is a giant community divided into subreddits, the platform’s general rules are short and sweet.

“Remember the human” is the platform’s #1 rule, referring to its prohibition of bullying, harassment, violence, and hate. Other general rules cover privacy, authenticity, and abiding by the rules of each particular subreddit.

Beyond that, the rules vary by community, depending on what each subreddit’s moderators and members have agreed on.

Violation policies

Reddit delegates its content moderation to subreddit moderators. Moderators follow a basic Code of Conduct in addition to any rules specific to their own subreddit. Because Reddit expects and trusts its user base to be rule-abiding and self-moderated, administrators only get involved when mods fail to enforce rules.

Community input

Visibility in the Reddit community relies on upvoting (rather than thumbs up-ing) user comments to maintain quality and keep the conversation relevant.
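For a sense of how vote-driven visibility can work in practice, here is a simplified score-plus-time-decay ranking in the spirit of the “hot” ranking from Reddit’s formerly open-sourced codebase; the constants are taken from that old code and the current production algorithm may well differ.

```python
# Simplified "hot" ranking in the spirit of Reddit's formerly open-sourced
# algorithm: net votes count logarithmically, newer posts get a time bonus.
# Constants come from the old open-source code and may not match production.
from datetime import datetime, timezone
from math import log10

EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)  # reference point

def hot(ups: int, downs: int, posted: datetime) -> float:
    score = ups - downs
    order = log10(max(abs(score), 1))           # diminishing returns on votes
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = (posted - EPOCH).total_seconds()  # newer posts score higher
    return round(sign * order + seconds / 45000, 7)

now = datetime.now(timezone.utc)
print(hot(ups=120, downs=10, posted=now))
```

The logarithm means the first few hundred votes matter far more than the next few thousand, while the time term steadily pushes newer posts above older ones.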

Posts have a “report” button that users can click if they find a post problematic, or they can contact the Reddit help center. Mods within each subreddit tend to be very proactive at moderating and deleting comments that are irrelevant or inappropriate.

Conclusion

Social media platforms, by nature, have many similarities in their content moderation policies (and the way those policies are enforced).

At the same time, the four platforms mentioned in this article experience different challenges and emphasize their policies differently based on their users and the way the platform itself is set up.

Here is a summary of the “big picture” differences we found between Facebook, Twitter, TikTok and Reddit, when it comes to content moderation:

  • TikTok is especially popular with youth and Gen Z, so its moderation policies are geared towards especially young and vulnerable demographics. It’s also the youngest of the four platforms, so both the platform and the challenges it faces have not evolved as much yet.
  • Facebook and Twitter continue to be widely used across varying demographics, but over time they have come to experience challenges in balancing free speech against the rampant spread of misinformation.
  • Reddit, unlike the other three, is mostly self-moderated by a community of volunteers. Because it’s designed for meaningful discussion rather than as a platform for influencers, users more often tend to respect community standards and keep each other in check.

Want to build your own classifier in just minutes?

You get 1,000 classifications a month for free. Pay only when you scale. No credit card required.
