The Long Tail of Content Moderation Use-Cases

Beyond spam, NSFW imagery, and violence, there is a long tail of situations that require content moderation across gaming, dating, e-commerce, and children's platforms.
Oscar Beijbom
Sep 2022

As Dylan Jhaveri of Mux.com put it, “You either die an MVP or live long enough to build content moderation.”

Content moderation is the process of reviewing content to determine whether or not it’s appropriate for your platform. Spam, NSFW, and violence are some of the most obvious examples – but there are many more.

In this blog post we’ll explore these less obvious, “long tail” types of content moderation situations.

Gaming


Profanity

While it’s typical for most gaming sites to allow for “normal” profanity, you will want measures in place to block certain inappropriate language – such as racist, sexist, or otherwise demeaning comments.

A generic flag-and-remove system can prevent this, but it may also remove certain phrases that are on-brand. Keep in mind that gamers are very creative about getting around the rules, constantly coming up with new ways to spell banned words.

A content moderation service that allows for advanced and customizable options will solve your inappropriate language issues while keeping your platform on-brand and enjoyable for players.
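
One common evasion tactic is creative spelling: swapping letters for look-alike characters or stretching words out. Below is a minimal, illustrative sketch (in Python, with made-up word lists) of normalizing those tricks before checking a blocklist, while keeping an allowlist for phrases that are on-brand for your game.

```python
import re

# Hypothetical, illustrative word lists -- a real deployment would use a
# maintained blocklist plus your own brand's allowlist.
BLOCKLIST = {"slurword", "insultword"}
ALLOWLIST = {"kill steal", "camping"}  # on-brand gaming phrases to leave alone

# Common character substitutions players use to dodge filters.
LEET_MAP = str.maketrans("013457@$!", "oieastasi")

def normalize(token: str) -> str:
    """Undo simple evasions: leetspeak and repeated letters ('nooob' -> 'nob')."""
    token = token.lower().translate(LEET_MAP)
    return re.sub(r"(.)\1+", r"\1", token)

def flag_message(message: str) -> bool:
    """Return True if the message should be sent to review."""
    if message.lower() in ALLOWLIST:
        return False
    return any(normalize(tok) in BLOCKLIST for tok in re.findall(r"\w+", message))

print(flag_message("you are a 1n5ultw0rd"))  # True -- normalizes to 'insultword'
```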

Harassment/cyberbullying

It’s not uncommon for players to harass and verbally abuse others through in-game chats. Filters for profanity as well as abusive or threatening content are critical to keep this from happening in the first place.

Self-promotional content

Users may take advantage of their access to user-generated content (UGC) to share irrelevant content that is self-promotional or even propagandistic. If you’re using AI for content moderation, be sure to train it to detect this type of content.

Fake profiles

It’s important to have a vetting process in place for profile particulars, so that all user profiles are genuine and all profile images are appropriate. This will go a long way to eliminating fraudulent activity and spamming on your gaming site.

Copyright violations

These days it’s easy for users to take screen-grabs of copyrighted materials and share them without permission. Copyright violations are very damaging to your brand, so be sure to have measures in place to block these attempts.

Dating


Profile particulars

Screening the profiles of your platform’s users upfront will save you effort later on, since you’re effectively preventing fraudulent accounts. Have parameters in place both for the sharing of personal contact information and for what sort of imagery you allow in the photos your users post.

Depending on your platform’s brand, you may have either looser or stricter requirements on things like acceptable levels of nudity. In either case you still need a robust solution, especially if your platform is very active and gets a lot of traffic.
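
As a starting point, a simple first pass can flag profile text that shares personal contact details before any image checks run. The sketch below is illustrative only; the patterns and the policy they encode would need tuning for your platform.

```python
import re

# Illustrative patterns for contact details that dating platforms often
# disallow in public profile text; adjust these to your own policy.
CONTACT_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "handle": re.compile(r"(?:@|(?:ig|insta|snap|telegram)[:\s]+)\w{3,}", re.I),
}

def screen_profile_text(bio: str) -> list[str]:
    """Return the kinds of contact info found in a profile bio, if any."""
    return [kind for kind, pattern in CONTACT_PATTERNS.items() if pattern.search(bio)]

hits = screen_profile_text("Text me at +1 (555) 010-2323 or find me on insta: sunnydays")
if hits:
    print(f"Send to review, contact info detected: {hits}")  # ['phone', 'handle']
```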

Irrelevant/off-brand content

Some users may try to post content that is self-promotional, or otherwise distracting or irrelevant – most likely from a fake account. Other users may post content that is off-brand, offensive or degrading.

This kind of content can drive away other users and cause serious damage to your brand. To prevent this from happening, have strong systems in place for suspending user accounts as soon as they attempt to post this type of content.

Scams/phishing

Scams abound on the Internet and dating sites are no exception – in fact, they are a frequent target thanks to the number of users they have.

Phishing is when cyber criminals send emails that appear to come from a legitimate source in order to trick the recipient into sharing credit card numbers or other private information. This happens when fake users are able to create profiles and obtain the emails and other contact info of other users. Users on dating websites can be especially susceptible to phishing, since they are generally open to sharing personal information in order to connect with others.

To prevent this from happening in the first place, you need a strong vetting system upfront when users submit their profile information. Be sure to have measures in place to verify these are real users and not fraudsters looking to take advantage of your platform.

Children’s Platforms


Cyberbullying

According to cyberbullying.org, about 46% of teens have experienced cyberbullying at some point in their lives. Online bullies typically post demeaning, hateful or threatening messages. They may even encourage suicide or other harmful behavior.

Sadly, a lot of cyberbullying goes unreported by other users. This is just one more reason it’s vital to have content moderation practices in place for this type of situation.

Solicitations

On the other end of the spectrum from cyberbullying are users who attempt to befriend and groom minors in order to exploit them – sexually or otherwise. They may pose as underage users who seem harmless at first. A common red flag to look for is users suggesting they take conversations offline.

Scams/Phishing

Adults aren’t the only ones who are bombarded with scams – children are even more vulnerable to misleading offers, and there are plenty of scams and phishing emails/texts designed specifically for unsuspecting minors.

Cyber criminals use children’s platforms to gather emails and other contact information in order to send scam/phishing emails and texts. Make sure you have secure measures in place to keep cyber criminals from gathering this type of personal information in the first place.

PII

Children aren’t as careful as adults about protecting their PII (personally identifiable information) – whether it’s their home address, social security number, or the name of the school they attend.

Without realizing it, they may post photos or other content that reveals this information – making them vulnerable to predators. Content moderation on children’s platforms should include features to keep users from making their PII visible.
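
One way to act on this is to mask anything that looks like PII before a post becomes visible, rather than only flagging it afterwards. The sketch below uses a few illustrative, US-style patterns; a production system would pair rules like these with an entity-recognition model for things like addresses and school names.

```python
import re

# Illustrative PII patterns (US-style formats); real systems need broader coverage.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                                      # SSN-like numbers
    re.compile(r"\b\d{1,5}\s+\w+\s+(street|st|ave|avenue|road|rd)\b", re.I),   # street addresses
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),                                    # email addresses
]

def redact_pii(text: str) -> str:
    """Mask anything that looks like PII before the post is shown to others."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[removed]", text)
    return text

print(redact_pii("I live at 42 Maple Street, email me at kid@example.com"))
# -> "I live at [removed], email me at [removed]"
```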

E-Commerce/Online Marketplaces


Disintermediation

Few things will hurt your business more than sellers who, after using your marketplace to find buyers, direct those buyers off your site to bypass your matchmaking fees.

This is disintermediation, and the best way to prevent it is to have measures in place that monitor and flag any time a user posts a link to their own website or other means of contact. You can then follow up to make sure the user is not trying to get customers to go outside your platform to buy their product.
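
A simple version of that monitoring is to flag any message or listing that links to a domain outside your marketplace, or that hints at paying off-platform. The sketch below is illustrative; the allowlisted domain and the keyword list are placeholders for your own.

```python
import re

# Domains the marketplace itself uses; links anywhere else get flagged.
# 'example-marketplace.com' is a placeholder for your own domain(s).
ALLOWED_DOMAINS = {"example-marketplace.com"}

URL_RE = re.compile(r"https?://([\w.-]+)|(?:www\.)([\w.-]+)", re.I)
OFF_PLATFORM_HINTS = re.compile(r"\b(venmo|paypal|zelle|dm me|text me|whatsapp)\b", re.I)

def flags_disintermediation(message: str) -> bool:
    """Flag messages that link off-platform or suggest paying outside it."""
    for match in URL_RE.finditer(message):
        domain = (match.group(1) or match.group(2)).lower()
        if not any(domain.endswith(allowed) for allowed in ALLOWED_DOMAINS):
            return True
    return bool(OFF_PLATFORM_HINTS.search(message))

print(flags_disintermediation("Cheaper if you order at https://myshop.example.net"))  # True
```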

Contextually appropriate imagery

When it comes to e-commerce sites, imagery issues like nudity can suddenly become a bit tricky.

This is because in certain contexts – think swimwear or other personal products, for example – it may not actually be inappropriate to reveal a certain amount of skin or other visual content that would normally be suggestive.

You will need a sophisticated AI service to help you differentiate between contextually appropriate and contextually inappropriate imagery.
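
Whatever service you use, the decision usually ends up combining an image score with the listing’s category. The sketch below shows that decision rule in isolation; `get_suggestiveness_score` is a stand-in for whichever image-moderation model you plug in, and the thresholds are illustrative.

```python
# A hypothetical decision rule combining an image model's suggestiveness score
# with the listing category. Categories and thresholds are illustrative only.
SKIN_TOLERANT_CATEGORIES = {"swimwear", "underwear", "fitness"}

def review_listing_image(image_bytes: bytes, category: str,
                         get_suggestiveness_score) -> str:
    score = get_suggestiveness_score(image_bytes)  # assume 0.0 (safe) .. 1.0 (explicit)
    threshold = 0.9 if category in SKIN_TOLERANT_CATEGORIES else 0.5
    if score >= 0.97:
        return "reject"        # explicit content is rejected regardless of category
    if score >= threshold:
        return "human_review"  # borderline for this category: route to a moderator
    return "approve"
```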

Illegal products

The last thing you need as an e-commerce platform is illegal listings (think weapons, drugs, other suspect materials).

Make it clear on your platform what type of listings are not allowed, holding your users accountable. You also need a vigilant system in place to catch any users who violate this rule by posting an illegal listing.

Cultural context

Although not common, you may find yourself in a situation where the cultural or religious context of one customer demographic clashes with “mainstream” values. For example, in 2017 IKEA issued a catalog with products geared toward an ultra-conservative Jewish community. The catalog featured no women, and as a result IKEA experienced a backlash on social media.

When moderating content, you need to consider sensitive situations like these and where your boundary should be when it comes to a demographic whose culture and values may be at odds with the majority.

Blogs and Communities

[Image: a landscape photo with no man-made structures, which qualifies it as acceptable in the r/EarthPorn subreddit]

Off-topic content

Certain communities have very specific content guidelines in order to keep the conversations focused and the user-base engaged. Reddit has many such communities. For example, the “EarthPorn” subreddit doesn’t allow images with man-made structures; and “CatsStandingUp” requires all images to contain cats that are, well, standing up.

Making sure your content is on-topic is less about having blocklists for suggestive language and more about understanding your audience. Since online communities can have highly specific criteria, you may need the help of human moderators, a custom AI solution, or both.
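
One common pattern for combining the two is confidence-based routing: let the AI act automatically when it is confident, and queue everything else for a human moderator. Here is a minimal sketch of that idea, with a placeholder prediction type and an illustrative threshold.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str         # e.g. "on_topic" / "off_topic"
    confidence: float  # model's confidence in that label, 0..1

def route_post(post_id: str, prediction: Prediction,
               auto_threshold: float = 0.95) -> str:
    """Auto-apply confident decisions; queue everything else for a human."""
    if prediction.confidence >= auto_threshold:
        return "auto_remove" if prediction.label == "off_topic" else "auto_approve"
    return "human_review"

# A borderline prediction goes to the moderation queue.
print(route_post("post-123", Prediction(label="off_topic", confidence=0.62)))  # human_review
```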

Trolling/abusive speech

Trolling and abusive speech are not limited to profanity. Users who are looking to start arguments, provoke negative reactions, and cause disturbances will quickly create a toxic atmosphere in your community if they’re not moderated. In the worst case, they will drive away other users permanently.

You’ll need a flagging system in place to detect these instances from the very beginning.

Misinformation

This one can be a slippery slope, but nevertheless it’s become a real issue in the content moderation world in recent years. One clear recent example is the COVID-19 pandemic, and the debate that emerged over whether questionable content should be allowed on platforms like Facebook and Twitter.

Whatever the topic, both well-meaning and nefarious users who post misleading or false information on important issues can quickly cause both harm and confusion within your community. You may choose not to block or delete every instance of misinformation, but flagging it will at least help keep your user base safer and more accountable.

In Summary

The type of UGC problems you experience will depend on the type of platform you run. Each platform has its own challenges, with some (such as dating and children’s platforms) having especially sensitive requirements.

While human content moderation is usually accurate, it’s very hard to scale with human effort alone. The most realistic way to scale and outsource your content moderation is a custom AI solution that can be trained on the needs of your specific platform and audience.
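
To make the idea concrete, here is a toy example of training a text classifier on examples labeled for one platform’s policy, using scikit-learn. The handful of hand-written examples is purely illustrative; a real moderation model needs far more labeled data, plus image models for visual content.

```python
# A toy illustration of the "train a classifier on your own platform's
# examples" idea. The tiny dataset below is made up for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "check out my channel for free coins",                    # self-promo
    "buy followers cheap, dm me",                              # self-promo
    "great match last night, that final round was intense",    # ok
    "anyone want to team up for ranked this weekend?",         # ok
]
labels = ["self_promo", "self_promo", "ok", "ok"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["dm me for free coins and followers"]))  # likely ['self_promo']
```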

Want to build your own classifier in just minutes?

You get 1,000 classifications a month for free. Pay only when you scale. No credit card required.
