Content Moderation Pretrained Classifiers
Last year, Nyckel announced the launch of our pretrained classifiers. These let anyone launch their own machine learning model in just minutes.
Today we’re proud to announce our suite of content moderation pretrained classifiers.
With these out-of-the-box tools, you can moderate your images and text in real time. The classifiers available include:
- NSFW (Image) - Identify if content is NSFW or not (across 5 categories).
- Pornography (Image) - Identify if content is pornographic or not (yes/no).
- Weapons (Image) - Identify if the image contains a gun or knife.
- Offensive Language (Text) - Identify if a given text is profane, offensive, or hate speech.
- Smoking Use (Image) - Identify if someone is smoking in the image.
- Alcohol Use (Image) - Identify if someone is drinking in the image.
- Image Quality (Image) - Check image quality (blurry, normal, too dark, too bright).
- Blurry Images (Image) - Check whether an image is blurry or not.
These classifiers exist as public-facing pages that anyone can interact with. We also offer API access to the models for free.
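To give a feel for what API access looks like, here is a minimal Python sketch that sends an image URL to one of the classifiers. The endpoint shape, function ID, access token handling, and response format shown here are placeholder assumptions for illustration; consult the Nyckel API documentation for the exact details.

```python
import requests

# Hypothetical function ID and endpoint shape -- check the API docs for the
# exact URL and authentication flow for the pretrained classifiers.
API_URL = "https://www.nyckel.com/v1/functions/<your-function-id>/invoke"
ACCESS_TOKEN = "<your-access-token>"

def classify_image(image_url: str) -> dict:
    """Send an image URL to a pretrained classifier and return the raw JSON result."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"data": image_url},  # assumed request payload shape
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

result = classify_image("https://example.com/user-upload.jpg")
print(result)  # e.g. a predicted label and a confidence score
```

In practice you would swap in the function ID for whichever classifier you want to call; the rest of the pattern stays the same.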
Not-Safe-For-Work (NSFW) Images
Our NSFW identifier classifies whether an image is NSFW or not, breaking it down into five categories:
- NSFW (Porn) - Images containing porn or partial nudity.
- NSFW (Pornographic Drawings) - Illustrations / drawings of pornographic nature.
- SFW (Normal Image) - Images with nothing to worry about.
- SFW (Drawings) - Drawings or illustrations that are SFW.
- SFW (Mildly Suggestive) - Images that may be considered risque for some sites, such as kid-friendly ones, but which don't qualify as nudity or pornography.
This is a good classifier should you want a nuanced view of the NSFW/SFW spectrum. For example, kid-friendly sites may want to block images that are otherwise SFW but perhaps not suitable for children.
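As a concrete illustration, a kid-friendly site could map the five labels to a simple allow/block decision along these lines. The exact label strings returned by the API are an assumption based on the category names above.

```python
# Sketch of a moderation policy for a kid-friendly site, assuming the
# classifier returns one of the five label names listed above.
BLOCKED_LABELS = {
    "NSFW (Porn)",
    "NSFW (Pornographic Drawings)",
    "SFW (Mildly Suggestive)",  # SFW in general, but not for a kid-friendly site
}

def allow_image(predicted_label: str) -> bool:
    """Return True if the image should be shown on a kid-friendly site."""
    return predicted_label not in BLOCKED_LABELS

print(allow_image("SFW (Drawings)"))           # True
print(allow_image("SFW (Mildly Suggestive)"))  # False
```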
Pornography (Images)
Our porn identifier tells you whether an image is pornographic or not. This is a good classifier if you just want a binary yes/no answer rather than the more granular NSFW breakdown above.
Weapons Identifier (Images)
Our weapons identifier detects whether an image contains a gun or knife.
This is a useful tool for auto-blocking images that contain weapons (specifically, knives or guns).
Smoking Use Identifier (Image)
Our smoking use identifier categorizes whether an image contains someone smoking.
If images containing cigarettes or smoking use are not allowed on your platform, this tool will make it easy to flag such photos.
Alcohol Use Identifier (Image)
Our alcohol use classifier categorizes whether an image contains someone drinking.
If images of alcohol use or alcohol are not allowed on your platform, this tool will flag such offending photos.
Offensive Language (Text)
Our offensive language classifier ingests text and tells you whether it's offensive or not. Specifically, it looks for text that is profane, offensive, or hate speech.
This is a good text classifier for easily flagging any content that could be considered offensive. It flags not just curse words or erotica, but hate speech as well.
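As a rough sketch, pre-moderating user comments with this classifier could look something like the following. The endpoint, the raw text being passed in the same "data" field as image URLs, and the returned label names are all assumptions for illustration.

```python
import requests

# Hypothetical endpoint and response shape -- see the API docs for the
# offensive language classifier's exact URL, auth, and label names.
TEXT_MODERATION_URL = "https://www.nyckel.com/v1/functions/<offensive-language-function-id>/invoke"
ACCESS_TOKEN = "<your-access-token>"

def moderate_comment(comment: str) -> str:
    """Classify a comment and decide whether to publish it or hold it for review."""
    response = requests.post(
        TEXT_MODERATION_URL,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"data": comment},  # assumed: raw text goes in the "data" field
        timeout=10,
    )
    response.raise_for_status()
    label = response.json().get("labelName", "")  # assumed response field
    return "published" if label == "Not Offensive" else "held_for_review"

print(moderate_comment("Have a great day!"))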
Image Quality
Our image quality classifier tells you whether an image is blurry, normal, too dark, or too bright.
Maintaining high-quality user-generated images on your site can be tough. This tool will tell you if a given image is blurry or abnormally dark / bright, which you can use to auto-flag images as needed.
Blurry Images
Our blurry images classifier gives you a simple yes/no on whether an image is blurry. If you only care about blurriness and don't need the full image quality breakdown above, this tool is for you.
Wrap Up
If you’re interested in API access to these pretrained classifiers, please sign up for a free account and reach out to us at feedback@nyckel.com.