The term “content moderation” is largely self-explanatory: it is the practice of filtering out inappropriate user-submitted content, traditionally done by website admins. Since the Internet boom of the early 2000s, content moderation has become crucial to keeping online platforms safe and healthy.
To avoid censorship, content moderation should be carried out by a team of responsible individuals who make decisions as objectively as possible.
Moderation work has traditionally been carried out by teams of content moderators (volunteer or paid), but in recent years some of the biggest media houses worldwide have turned to AI. The reason is the ever-growing volume of user-generated content on these sites, which has become impossible for human moderators to handle. Instead of continuously expanding their moderation teams, which is very costly, these media houses needed another solution, and AI can offer one.
Moderation with AI has some advantages that immediately catch businesses’ attention. It can handle massive volumes of content while maintaining consistent quality. More importantly, it works 24/7 and keeps running autonomously when human moderators are unavailable due to illness or holidays.
However, can AI be trusted to moderate content produced by complex, often metaphorical human beings?
Yes, it can. But businesses first need to do their research on which AI tool to pick.
1. First and foremost, it needs to be emphasised that AI is rocket science.
Naturally, no one would trust a diagnosis made by a person with no medical education or experience. It’s the same with building AI-based tools, which requires an immense amount of education, experience, and brainwork. Hence, only a few companies in the world can make it work. Many businesses claim to use AI moderation when in fact they rely on traditional rules (not AI), and many others’ AI is not of sufficient quality to work in practice.
As a result, the first question in your research should be: is the company providing your AI moderation solution an expert in AI?
Typically, a layman AI moderation provider using traditional approaches asks customers for a large amount of manual work, such as rule definitions and word lists. Even then, building a new moderation tool the traditional way takes several months, and the result may still not be effective enough.
A quality AI learns from data, takes only a few weeks to put into production, and doesn’t require constant human intervention.
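To make the distinction concrete, here is a deliberately minimal sketch (not any vendor’s implementation) contrasting the two approaches: a hand-maintained word list versus a model that learns which words past human moderators tended to reject. The word list and example comments are hypothetical.

```python
from collections import Counter

REJECT_WORDS = {"idiot", "spam"}  # hypothetical hand-maintained word list


def rule_based_moderate(comment: str) -> str:
    """Traditional approach: reject if any listed word appears."""
    tokens = comment.lower().split()
    return "reject" if any(t in REJECT_WORDS for t in tokens) else "approve"


def train(labeled_comments):
    """Learning approach: count how often each word appears in comments
    that human moderators approved vs. rejected."""
    counts = {"approve": Counter(), "reject": Counter()}
    for text, decision in labeled_comments:
        counts[decision].update(text.lower().split())
    return counts


def learned_moderate(model, comment: str) -> str:
    """Score a new comment by which class its words were seen in more often."""
    score = 0
    for token in comment.lower().split():
        score += model["reject"][token] - model["approve"][token]
    return "reject" if score > 0 else "approve"


# Train on past human decisions, then moderate new comments automatically.
history = [
    ("you are an idiot", "reject"),
    ("great article thanks", "approve"),
    ("total idiot move", "reject"),
    ("thanks for sharing", "approve"),
]
model = train(history)
```

The word-list filter must be curated by hand forever; the learned model improves simply by being retrained on more of the moderators’ own past decisions, which is the point the paragraph above makes.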
2. Businesses need to examine their own moderation data.
True AI learns the moderation policy from previous human decisions, so for it to perform effectively, on top of a high-quality algorithm built by the provider, the business should have a consistent set of training data, including the following:
- A sufficient time span of data: The longer the timeframe the data covers, the more content the AI can learn from, and the better its performance. The ideal length depends on the case and will be determined by the AI moderation provider.
- A well-stocked set of improper content: Store all content you have removed so it can later be used as training data for the AI.
- Context: Whether it is the preceding comment, a news article, a category, or other information about where the comment was submitted, context is important for a quality AI to learn: it tells the AI in which context you can say what.
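The three ingredients above could be captured in a single training record. The schema below is purely hypothetical (no vendor’s actual format), but it shows how the comment text, the past human decision, the timestamp, and the surrounding context travel together:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class ModerationExample:
    """One hypothetical training record for a learning-based moderator."""
    text: str                       # the user-generated comment itself
    decision: str                   # past human decision: "approve" / "reject"
    submitted_at: datetime          # lets the provider judge the data's time span
    parent_comment: Optional[str]   # context: the comment this one replies to
    article_title: Optional[str]    # context: where it was posted
    category: Optional[str]         # context: e.g. "sports", "politics"


example = ModerationExample(
    text="That referee should be fired",
    decision="approve",
    submitted_at=datetime(2020, 3, 14, 9, 30),
    parent_comment=None,
    article_title="Controversial call decides the final",
    category="sports",
)
```

A comment like the one above might be acceptable under a sports article but not elsewhere, which is exactly why the context fields are worth storing alongside each decision.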
An often-heard myth is that AI cannot be trusted because it cannot understand the full context of the content. This again depends on how well built the AI algorithms are. In practice, businesses can check how the algorithms perform by examining the results during a testing period.
In fact, a few AI moderation tools have proven to understand context and meaning, with an accuracy rate even better than humans’. For example, Utopia AI Moderator not only understands the context but also the meaning, despite misspellings or social media slang. Most importantly, the tool works with all the world’s languages, including, for example, Canadian French, Brazilian Portuguese, kids’ Polish, Singaporean English, and all other minor and major dialects of each language.
3. Finally, one large advantage that sets AI moderation apart from human moderation is measurement.
Typically, human moderators can’t easily provide statistics about users, their behaviour, or the effectiveness of moderation. They will most likely report whatever they happen to remember, from their own perspective.
Meanwhile, AI moderation decisions are available at any time for measurement. They provide real-time data that can be visualized in charts, tables, and numbers. As a result, AI gives businesses better insight into the moderation process as well as the big picture of user behavior.
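As a small illustration of the kind of measurement a machine-readable decision log makes possible, the sketch below computes rejection rates per content category. The log format and entries are hypothetical:

```python
from collections import defaultdict

# Hypothetical decision log: (category, decision) pairs emitted by the AI moderator.
decision_log = [
    ("sports", "approve"), ("sports", "reject"),
    ("politics", "reject"), ("politics", "reject"), ("politics", "approve"),
]


def rejection_rates(log):
    """Share of rejected comments per category, straight from the log."""
    totals, rejects = defaultdict(int), defaultdict(int)
    for category, decision in log:
        totals[category] += 1
        if decision == "reject":
            rejects[category] += 1
    return {c: rejects[c] / totals[c] for c in totals}


rates = rejection_rates(decision_log)
```

A human team could only estimate such figures from memory; with a logged AI decision for every comment, they fall out of a few lines of aggregation and can feed dashboards in real time.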
4. Of course, there are limitations to AI moderation.
In the end, AI is not human; it can only do what it is trained to do. For instance, if during moderation an AI tool comes across a comment saying “I need help, call the police”, it wouldn’t call the police unless it had specifically been trained to do so. That’s why AI moderation tools work alongside humans, empowering them rather than replacing them.
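The human-in-the-loop pattern just described can be sketched very simply: the AI’s decision stands for routine content, but comments matching certain escalation patterns are routed to a human queue instead of being handled automatically. The phrase list below is a made-up example, not a real product’s rule set:

```python
# Hypothetical phrases that should always reach a human, regardless of the
# AI's approve/reject decision.
ESCALATION_PHRASES = ("call the police", "need help")


def route(comment: str, ai_decision: str):
    """Return ("human_review", comment) for escalations, else ("auto", decision)."""
    lowered = comment.lower()
    if any(phrase in lowered for phrase in ESCALATION_PHRASES):
        return ("human_review", comment)   # a person must see this one
    return ("auto", ai_decision)           # the AI's decision stands
```

The point is that the AI narrows the stream humans must look at; it does not remove them from the loop.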
AI can’t understand sarcasm, but neither can all humans. In practice, human moderators often treat sarcasm as if they didn’t understand it, because many readers won’t either. If the content violates the site’s publishing policy, it is removed. A quality AI learns to behave the same way, so it doesn’t really matter whether it comprehends the sarcasm or not.
On another note, there are rising concerns about biased AI decisions, which is one of the main reasons AI isn’t yet trusted. However, biased AI behavior can be prevented by skilled, professional data scientists. In fact, it is part of their daily work to choose algorithms, data, and parametrizations that keep the AI as objective and neutral as possible.