Weekly Gaming Q&A Series – Week 8: Why is human-based moderation so hard to get right?

January 7, 2022

Humans often fall short when it comes to moderation because we come equipped with our own inherent biases. What is offensive to one person may be perfectly acceptable to another, even within the same organisation. Similarly, a myriad of factors leads to inconsistency in a moderator’s daily decision-making; something as simple as their mood on a given day can sway moderation decisions. Companies often hand moderators sprawling rule books and parameters to curb bias, but that strips away the very thing humans are great at: understanding context.

When moderation is done purely by humans, moderators are expected to make thousands of decisions about toxic, and sometimes mentally scarring, content as consistently and quickly as an automated system. We’ve covered the huge number of moderators employed by Facebook before, so simply throwing more people at the problem clearly isn’t a workable solution, not to mention the risk of exposing more people to disturbing content.

It’s important to note that whatever kind of moderation a company uses, humans should always be a vital part of the process. There’s a key distinction between ‘human-based’ and ‘human-powered’ moderation: human-based moderation means humans are involved from start to finish, whereas human-powered moderation is automated, with humans setting the moderation policy and ethical guidelines. So the real question is when to involve humans in the process: before the toxic content appears, during, or after?
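To make the distinction concrete, here is a minimal sketch of the two setups. The names here (Policy, model.classify) are hypothetical stand-ins for whatever a real system would use; this is not a description of any particular product’s API.

```python
# A minimal sketch of the two moderation setups described above.
# All names here are illustrative, not a real moderation API.

from dataclasses import dataclass


@dataclass
class Policy:
    """Human-defined rules: people set these once, up front."""
    toxicity_threshold: float      # scores at or above this are rejected
    blocked_categories: set[str]   # e.g. {"hate_speech", "harassment"}


def human_based_review(message: str) -> bool:
    """'Human-based': a person reads and decides every single message."""
    answer = input(f"Approve this message? [y/n] {message!r} ")
    return answer.strip().lower() == "y"


def human_powered_review(message: str, policy: Policy, model) -> bool:
    """'Human-powered': an automated model decides each message, guided
    by the policy and ethical guidelines humans defined beforehand."""
    score, category = model.classify(message)  # hypothetical model interface
    return score < policy.toxicity_threshold and category not in policy.blocked_categories
```

In the first flow a human pays the cost of every decision; in the second, human effort is concentrated into defining the policy, and the per-message work is automated.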

At Utopia, our approach to moderation involves humans as early in the process as possible: they define the ethics of the AI system and, therefore, what kind of content is and isn’t acceptable. Their unique insight and past moderation decisions help create an AI model that’s bespoke to a specific community, protecting users, moderators and the brand alike. During the training and setup phase, our AI moderator flags inconsistencies in past moderation decisions, rooting out bias from the outset while making sure that neither users nor moderators are exposed to toxic content.
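To make that idea concrete, here is a minimal sketch of one way inconsistencies in past decisions could be surfaced, assuming those decisions are available as simple (message, label) pairs. The function name and the exact-text grouping are purely illustrative; this is not Utopia’s actual model or training pipeline.

```python
# Illustrative sketch: flag messages whose (near-)identical text received
# conflicting labels from different moderators -- one simple signal of
# inconsistency that a training/setup phase could surface.

from collections import defaultdict


def find_inconsistent_decisions(decisions: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Group past decisions by normalised text and return every text
    that ended up with more than one distinct label."""
    labels_by_text: dict[str, set[str]] = defaultdict(set)
    for message, label in decisions:
        labels_by_text[message.strip().lower()].add(label)
    return {text: labels for text, labels in labels_by_text.items() if len(labels) > 1}


past_decisions = [
    ("You played terribly tonight", "approved"),
    ("you played terribly tonight", "rejected"),  # same text, opposite outcome
    ("Great match, well played!", "approved"),
]
print(find_inconsistent_decisions(past_decisions))
# flags 'you played terribly tonight' with both 'approved' and 'rejected'
```

A production system would match semantically similar messages rather than exact text, but even this naive grouping shows how conflicting labels for the same content can be caught before they feed bias into a model.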
