What to do about moderation in Meta’s metaverse?

March 16, 2022

One of the things fuelling the current hype around the idea of the metaverse (and it is just an idea; a functioning metaverse is years away from being possible) is that there is no single definition of what the metaverse actually is. Many envision it as an exciting virtual reality (VR) world filled with endless possibilities, akin to Ready Player One – but right now, pretty much any 3D world with some level of personalisation, interaction and asset ownership lays claim to being a precursor of the metaverse.

Despite Mark Zuckerberg’s recent declaration that “open standards, privacy and safety need to be built into the metaverse from day one” (a line from the meme-worthy promotional video for his new vision), the current reality is closer to the anything-goes chat rooms of the early web.

There’s an array of technical challenges that still need to be solved before a working metaverse can be stitched together. But first and foremost, companies like Meta need to create environments that are safe and enjoyable for users, and moderation is vital to making this a reality.

Moderating in virtual reality

VR technology is key to Meta’s vision: the company has invested billions in building its own VR ecosystem since acquiring Oculus. But VR brings its own moderation challenges. Players communicate through voice chat and gestures, which are difficult to monitor, so behaviour we would consider unsafe or toxic becomes far harder to filter. New approaches and technologies will need to be developed if future VR worlds are to be safe spaces for players.

Early beta tests of Meta’s flagship metaverse project, Horizon Worlds, have already produced a multitude of reports of users experiencing sexual harassment and violence. The Center for Countering Digital Hate found over 100 instances of sexual harassment, racism or explicit content in an 11.5-hour play session – roughly one every seven minutes.

The metaverse is built on the idea of pushing the limits of traditional gaming to bring players new possibilities. But with so much scope for users to play, work, create and everything in between, there will inevitably be people looking to exploit others. Safeguarding cannot be left to passers-by, as it is with the user-based reporting at the heart of moderation on platforms like YouTube or Twitch. These virtual worlds need something akin to an emergency number: a way to alert an authority who can step in and intervene immediately.
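
To make the idea concrete, here is a minimal sketch of what such an escalation path could look like in code. Everything in it – the PanicReport structure, the dispatcher and the moderator queue – is a hypothetical illustration rather than any real platform’s API; the point is simply that an in-world alarm should bypass the ordinary report backlog and reach a human moderator straight away.

```python
# Hypothetical sketch of an in-world "emergency number": a panic signal that
# skips the normal report backlog and goes straight to an on-duty moderator.
import queue
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class PanicReport:
    reporter_id: str            # avatar who triggered the in-world alarm
    suspect_id: Optional[str]   # nearby avatar, if one was selected
    world_id: str
    created_at: datetime


class EmergencyDispatcher:
    """Routes panic reports directly to a human moderator, skipping the backlog."""

    def __init__(self, moderator_queue: "queue.Queue[PanicReport]") -> None:
        self.moderator_queue = moderator_queue

    def raise_alarm(self, report: PanicReport) -> None:
        # Unlike an ordinary user report, this is expected to trigger a live
        # intervention: a moderator joins the world and deals with the situation.
        self.moderator_queue.put_nowait(report)


if __name__ == "__main__":
    q: "queue.Queue[PanicReport]" = queue.Queue()
    EmergencyDispatcher(q).raise_alarm(PanicReport(
        reporter_id="avatar-123",
        suspect_id="avatar-456",
        world_id="plaza-7",
        created_at=datetime.now(timezone.utc),
    ))
    print(q.get_nowait())
```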

There’s a litany of other measures Meta could implement that would make an immediate difference. For example, a teleport function that lets users vanish from view and move elsewhere when someone begins to misbehave is one way of dealing with issues around proximity. Equally, it should be possible to automatically track avatars’ actions and ban users who violate community standards or engage in abusive behaviour.
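
As a rough illustration of that tracking-and-banning idea, the sketch below keeps a simple per-user strike count and suspends an account once it crosses a threshold. The threshold, the violation labels and the “strike ledger” itself are assumptions made for the example; a real system would weigh different offences very differently.

```python
# Illustrative strike ledger: count recorded violations per user and
# automatically ban an account once it passes an assumed threshold.
from collections import defaultdict

BAN_THRESHOLD = 3  # strikes before an automatic ban (assumed value)


class StrikeLedger:
    def __init__(self) -> None:
        self.strikes = defaultdict(int)  # user_id -> number of recorded violations
        self.banned = set()              # user_ids that have been suspended

    def record_violation(self, user_id: str, violation: str) -> bool:
        """Record one violation and return True if the user is now banned."""
        if user_id not in self.banned:
            self.strikes[user_id] += 1
            if self.strikes[user_id] >= BAN_THRESHOLD:
                self.banned.add(user_id)
        return user_id in self.banned


ledger = StrikeLedger()
for event in ("slur_in_voice_chat", "obscene_gesture", "harassing_follow"):
    banned = ledger.record_violation("avatar-456", event)
print(banned)  # True after the third recorded violation
```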

Many of these safety concerns are entirely solvable with technology that’s readily available today. That said, the moderation puzzle isn’t only about prevention. Platforms must also think carefully about how they penalise abusive players and introduce more substantive punishments for the worst offenders – rather than simply banning an account, only for the offender to create another under a new alias.

Anonymity is often cited as one of the main causes of online toxicity. De-anonymising accounts would bring a new level of accountability to online users and ensure toxic and abusive users can’t hide behind a username. Implementing existing technology, such as facial recognition or fingerprint authentication, would add another layer of security.

Reducing anonymity would also mean children couldn’t create accounts to access spaces intended only for adults, and platforms wouldn’t have to dilute their offering to cater for such broad demographics.

Avoiding moderation mistakes in the future

Standards of moderation vary hugely across the industry at the moment, but our own research has shown that 70% of gamers have experienced toxicity, so clearly more can be done. Roblox, another metaverse frontrunner, has a staggering 2 billion player accounts and is largely aimed at children, yet it has repeatedly made headlines for the disturbing user-generated content that surfaces on the platform.

If the metaverse is going to move beyond the hype and become the next big gaming revolution, the entire industry needs to take a safety-first approach: solving these challenges, avoiding the mistakes of the past and rooting out the toxic behaviours that have become ingrained in gaming culture. The key lies in being able to effectively moderate every way players communicate – chat, voice, gestures and user-generated content – at huge scale.
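
Structurally, that multi-channel moderation might look something like the sketch below: a single entry point that routes each kind of signal – chat text, voice transcripts, gesture events, descriptions of user-created content – to its own classifier. The classifier functions here are trivial placeholder rules standing in for whatever models a platform would actually deploy.

```python
# Hypothetical multi-channel moderation router: every communication channel is
# funnelled through one entry point, with a dedicated classifier per channel.
from typing import Callable, Dict


def flag_text(payload: str) -> bool:
    return "slur" in payload.lower()        # placeholder rule, not a real model


def flag_gesture(payload: str) -> bool:
    return payload == "obscene_gesture"     # placeholder rule, not a real model


CLASSIFIERS: Dict[str, Callable[[str], bool]] = {
    "chat": flag_text,
    "voice_transcript": flag_text,  # voice is moderated via its transcription here
    "gesture": flag_gesture,
    "asset_description": flag_text,
}


def moderate(channel: str, payload: str) -> bool:
    """Return True if this payload should be flagged for human review."""
    classifier = CLASSIFIERS.get(channel)
    return bool(classifier and classifier(payload))


print(moderate("gesture", "obscene_gesture"))  # True
print(moderate("chat", "nice build!"))         # False
```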

It remains to be seen whether Meta’s new safety-first posturing has substance, or whether it’s simply an attempt to alter perceptions after the safety concerns unearthed by whistleblower Frances Haugen. But with a seemingly endless series of high-profile controversies behind the company, any new venture it undertakes will face intense scrutiny – so Meta’s journey to this new digital world will no doubt be a bumpy ride.
