Taimi Reduces Content Moderation Time by 3x, Experiences 10x Cost Reduction With Nyckel
Taimi is an LGBTQ+ dating app, with over 16 million users on the platform. With a community as large as Taimi’s, content moderation is critical to ensure a positive user experience.
- Taimi uses Nyckel text classification to moderate content across its platforms
- 3x reduction in content moderation time
- 4x increase in auto-moderation coverage
- 10x cost reduction compared to manual curation
- 96% accuracy for auto-moderation
Taimi is a dating app for lesbian, gay, bi, trans, queer, and others who don’t identify as cisgender and heterosexual. The company built the app around the concept of dating fluidity, and users choose it because it offers them a queer-centric bio, authentic connections, a place to be themselves, and a strong commitment to harassment prevention.
Taimi’s moderation team was up against a lot: thousands of images and pieces of text to review every day, with moderation tooling that needed to keep pace.
Taimi needed a content moderation solution that was faster and higher quality than keyword-based moderation, one that would also take into account the context of content on the platform. The team recognized they needed machine learning (ML) to support their complex moderation needs.
Taimi’s Moderation Manager, Vladislav Yavorskyi, discovered Nyckel’s content moderation solutions via an internet search for ML and AI tools. He was immediately pleased with the simplicity and usability of Nyckel’s interface, and the ready-to-use free plan meant his team could get started right away.
Taimi’s moderation isn’t as simple as classifying content as acceptable or unacceptable. The team uses more than 10 categories during the moderation process, including contextual considerations. For example, the team weighs whether the content contains external links or social media.
Taimi’s first approach with Nyckel was to use a single model across half a dozen moderation categories. This was successful, but Vladislav’s team needed to auto-moderate more content, and with greater accuracy. They decided to train an individual model for each category, and set about training the first of these narrower, single-purpose models.
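A per-category setup like this amounts to running each piece of content through every category model and collecting the flags. The sketch below illustrates the pattern with two stand-in rule functions in place of trained models; the category names and logic are illustrative, not Taimi’s actual configuration.

```python
# Per-category moderation sketch: each category has its own model, and
# content is auto-approved only if every model clears it.
# The two stub "models" below stand in for trained classifiers.

def contains_external_link(text):
    # Stand-in for a model that flags external links.
    return "http://" in text or "https://" in text

def contains_contact_info(text):
    # Stand-in for a model that flags shared contact details or handles.
    return "@" in text

CATEGORY_MODELS = {
    "external-links": contains_external_link,
    "contact-info": contains_contact_info,
}

def moderate(text):
    """Run text through every per-category model and collect any flags."""
    flags = [name for name, model in CATEGORY_MODELS.items() if model(text)]
    return {"approved": not flags, "flags": flags}
```

One advantage of this structure over a single multi-category model is that each model can be retrained or measured in isolation when one category underperforms.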
The first of the new models the team tested moderated 45% of Taimi’s content with 95% accuracy, and the numbers improved with every training cycle. The team moved forward with implementing these models across each category.
Today, most of Taimi’s content is moderated automatically. The moderation time — the time from when the content goes live to when the content has been moderated — has been reduced to a few seconds. And to monitor the model’s continued performance as user behavior changes, the moderation team checks a subsample of auto-moderated content.
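The spot-check step can be as simple as drawing a random subsample of auto-moderated items for human review. Here is a minimal sketch; the 5% default rate is illustrative, as the case study doesn’t state Taimi’s actual sampling rate.

```python
import random

def qa_sample(auto_moderated, rate=0.05, seed=None):
    """Return a random subsample of auto-moderated items for manual review."""
    if not auto_moderated:
        return []
    rng = random.Random(seed)  # seed only for reproducible tests
    k = max(1, round(len(auto_moderated) * rate))
    return rng.sample(auto_moderated, k)
```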
Using Nyckel’s Text Classification API, Taimi now automates 60% of all content moderation with an accuracy of 96%. Average moderation time is down 3x and auto-moderation coverage is up 4x. Apart from continuing to improve the current model, Vladislav plans to apply Nyckel’s machine learning to the problem of scammers and spammers in the future.
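Integrating a classifier like this comes down to a single HTTP call per piece of content. The sketch below assembles and sends an invoke request using only the standard library; the URL shape and JSON payload follow Nyckel’s public invoke endpoint as documented, while the function ID and bearer token are placeholders.

```python
import json
import urllib.request

# Nyckel invoke endpoint; {function_id} identifies a trained function.
NYCKEL_INVOKE_URL = "https://www.nyckel.com/v1/functions/{function_id}/invoke"

def build_invoke_request(function_id, text, token):
    """Assemble the URL, headers, and JSON payload for one invoke call."""
    url = NYCKEL_INVOKE_URL.format(function_id=function_id)
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    payload = json.dumps({"data": text})
    return url, headers, payload

def classify(function_id, text, token):
    """POST the text to the moderation model and return the parsed response."""
    url, headers, payload = build_invoke_request(function_id, text, token)
    req = urllib.request.Request(
        url, data=payload.encode(), headers=headers, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # label and confidence for the input text
```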
For more reading on content moderation, check out these articles: