Weekly Gaming Q&A Series – Week 1: What are the different kinds of moderation?

November 4, 2021

Content moderation is becoming a hotly discussed subject around the world, and Frances Haugen’s high-profile appearance before the UK Parliament has only fuelled the debate further. Platforms are facing mounting questions about their moderation practices and their users’ safety.

So, each week we’ll tackle some of our most frequently asked questions around content moderation to shed some light on the matter.

What are the different kinds of moderation?

Moderation has traditionally been done manually, by teams of human moderators reviewing content themselves.

Aside from being slow and outdated, this approach is prone to bias, less effective at scale and can be very costly – and not only in a monetary sense, as the detrimental effects on moderators’ mental health have been well documented. At large volumes, manual moderation becomes impossible.

Take Facebook: setting aside the recent allegations around its moderation policies and practices for a moment, Mark Zuckerberg has previously admitted that the 15,000 moderators on the company’s payroll could be making the wrong decision 10% of the time. Across billions of Facebook users, that’s a lot of errors.
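To put that in rough perspective, here is a back-of-the-envelope sketch. The 15,000 moderators and the 10% error rate come from the paragraph above; the number of decisions each moderator makes per day is purely our own assumption for illustration, not a Facebook figure.

```python
# Back-of-the-envelope estimate of daily moderation errors.
# 15,000 moderators and a 10% error rate are cited above;
# 200 decisions per moderator per day is an assumed workload.
moderators = 15_000
decisions_per_moderator_per_day = 200  # illustrative assumption
error_rate = 0.10

daily_decisions = moderators * decisions_per_moderator_per_day
daily_errors = daily_decisions * error_rate

print(f"{daily_decisions:,} decisions/day -> ~{daily_errors:,.0f} wrong calls/day")
# 3,000,000 decisions/day -> ~300,000 wrong calls/day
```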

The limitations of human moderation prompted another wave of software innovation to pick up the slack.

However, most of these systems were built around filtering content using rules and extensive lists of banned words. In reality, they are large dictionaries of words and phrases that need to be updated regularly and manually, in a constant race to stay one step ahead of users looking for workarounds – such as simply swapping or misspelling words to confuse the filter.
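To see why such filters are so easy to sidestep, consider this minimal sketch of a banned-word filter; the word list and messages are invented for the example, and real deployments use far larger, manually maintained lists.

```python
# Minimal banned-word filter of the kind described above.
BANNED_WORDS = {"idiot", "stupid"}

def passes_filter(message: str) -> bool:
    """Return True if no banned word appears as a token in the message."""
    tokens = message.lower().split()
    return not any(token.strip(".,!?") in BANNED_WORDS for token in tokens)

print(passes_filter("you are an idiot"))      # False – caught by the list
print(passes_filter("you are an id1ot"))      # True  – one swapped character evades it
print(passes_filter("you are an i d i o t"))  # True  – spacing defeats tokenisation
```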

The current cutting edge in moderation is context-aware advanced AI, trained and built around each community’s dynamics as its expert human moderators understand them. The system is therefore bespoke to each community it’s deployed in and better able to deal with the quirks or slang of a specific user group. Advanced AI is much more difficult to fool: it analyses individual messages and entire discussions in any language, so it can moderate intent rather than just recognising words and phrases, and semantic meaning isn’t lost in translation.
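As a rough illustration of the difference, the sketch below scores a message together with its surrounding discussion rather than in isolation. The classify_with_context function and its toy scoring are hypothetical stand-ins for a trained, community-specific model; only the interface shape reflects the idea described above.

```python
from dataclasses import dataclass

@dataclass
class Message:
    user: str
    text: str

def classify_with_context(history: list[Message], new: Message) -> float:
    """Hypothetical stand-in for a trained, context-aware toxicity model.

    A real system would feed the whole discussion (in any language) to a
    model trained on the community's own moderation decisions; here a toy
    heuristic fakes a score so the example is runnable.
    """
    context = " ".join(m.text for m in history) + " " + new.text
    hostile_cues = ("get lost", "nobody wants you")  # toy stand-in for learned patterns
    return 0.9 if any(cue in context.lower() for cue in hostile_cues) else 0.1

history = [Message("A", "Nobody wants you here."), Message("B", "Please stop.")]
new_msg = Message("A", "You heard me.")

# "You heard me." is harmless on its own, but read against the thread it
# continues a hostile exchange – the context is what carries the intent.
print(classify_with_context(history, new_msg))  # 0.9
print(classify_with_context([], new_msg))       # 0.1
```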

Looking ahead, the next technological milestone for improving online safety will undoubtedly be voice chat. In voice chat, speech needs to be delivered from one party to the other in less than a second; any longer, and the delay interrupts the discussion. Moderation can only happen once a sentence is close to its end, but by that stage the recipient has already heard the message, no matter how beautiful or toxic it was.
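A quick timeline sketch makes the constraint concrete. All the latency figures below are illustrative assumptions, not measurements of any real voice-chat or moderation system.

```python
# Illustrative timeline for moderating live voice chat.
# Every figure here is an assumption for the sake of the example.
DELIVERY_BUDGET_S = 1.0   # speech must reach the listener within ~1 s
SENTENCE_LENGTH_S = 3.0   # the speaker talks for 3 s
ASR_LAG_S = 0.5           # assumed speech-to-text lag after the sentence ends
CLASSIFY_LAG_S = 0.2      # assumed toxicity-classification time

heard_by_listener = DELIVERY_BUDGET_S                      # first audio arrives ~1 s in
verdict_ready = SENTENCE_LENGTH_S + ASR_LAG_S + CLASSIFY_LAG_S

print(f"listener starts hearing the sentence at ~{heard_by_listener:.1f}s")
print(f"moderation verdict is ready at ~{verdict_ready:.1f}s")
print(f"gap during which toxic speech has already been heard: "
      f"{verdict_ready - heard_by_listener:.1f}s")
```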
