Moderation is a moving target, and the policies keep changing, so this week we want to explore another popular subject.
How has moderation evolved over the years?
Nowadays, there are so many different platforms, games, apps and other services that all offer consumers different ways to communicate – and all of that content needs to be assessed.
Many online services rely on very basic word filters and rules on their platforms. In the past, these caught some of the bad content, but all a user needed to do to bypass the filters and confuse the system was misspell words. With the introduction of mobile chat apps, emoji, other languages, upside-down text, other Unicode characters and voice-to-text, these challenges have multiplied tenfold. On top of that, word-based filters come nowhere close to understanding entire sentences or messages, as they lack contextual understanding. It’s pretty clear that tools built years ago are not prepared to deal with modern online communities.
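To make the weakness concrete, here is a minimal sketch of the kind of naive word-list filter described above, together with the trivial evasions that defeat it. The blocklist and function name are hypothetical illustrations, not any real platform's implementation.

```python
# Hypothetical blocklist for illustration only.
BLOCKED_WORDS = {"scam", "idiot"}

def naive_filter(message: str) -> bool:
    """Return True if the message should be blocked.

    A simple word-list check: lowercase the text, split on
    whitespace, strip punctuation, and compare each token
    against the blocklist.
    """
    tokens = message.lower().split()
    return any(token.strip(".,!?") in BLOCKED_WORDS for token in tokens)

# The filter catches an exact match...
print(naive_filter("What a scam!"))    # True

# ...but the evasions described above slip straight through:
print(naive_filter("What a sc4m!"))    # False - simple misspelling
print(naive_filter("What a ѕcam!"))    # False - Cyrillic 'ѕ' look-alike
print(naive_filter("You got scammed")) # False - inflected form
```

Because the filter only compares isolated tokens against a fixed list, it has no notion of context either: a harmless message quoting a blocked word is flagged, while genuinely abusive phrasing that avoids the exact tokens passes untouched.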
The ongoing backlash we’ve seen against social media platforms has exposed how old-fashioned approaches to moderation need to be rethought. Many platforms use third-party moderators who apply a single, generic moderation policy to every community. There are far better solutions.
But in terms of the practical challenges, scale has become an absolute game-changer. Facebook is home to nearly a third of the world’s population; in the gaming sphere, Minecraft has 600 million player accounts and Fortnite has 350 million registered players.
In the past, tackling this volume of data and chat logs simply wasn’t a concern, but the sheer scale platforms now have to facilitate can quickly become unmanageable. The pandemic only acted as a catalyst: player numbers for some games rose by as much as a third, drastically shifting the moderation challenge overnight.