How much time do we need to fix the internet (and social media)?

March 2, 2019

Picture a rowdy Saturday night in a busy capital city. A man stands shouting in a square, inciting violence against another group of people. The police arrive within a couple of minutes and intervene. An all too common occurrence, but nothing extraordinary.

As a society, we know how to deal with such situations in the real world. We have legislation that provides boundaries for everyone’s actions. We have given law enforcement authorities the power to control unruly behaviours.

In the digital era, things have become more complicated. A digital town square can be located anywhere on the planet, anywhere on the internet where people spend their time. It is sometimes unclear which country’s legislation applies to a given issue and location, and it may also be unclear whether the owner is even capable of controlling the space.

Some of these modern city squares are better supervised than others. For example, a traditional news media outlet with a strong Editor-in-Chief will usually react in time if such shouting takes place on its site. The only challenge is detecting the shouting.

Yet there are widely used services that do not intervene particularly well in ongoing altercations. Usually their head office is located in a different country altogether, and the employees tasked with moderating the groups do not necessarily even understand the language concerned.

In extreme cases, this has led a government to temporarily shut down all social media services, as happened in Sri Lanka last month.

In truth, there is no need to close services or shut down discussions. There is a digital tool that understands any language or dialect in the world and can be used transparently to moderate any type of discussion anywhere on the internet. It’s called Utopia AI Moderator.

It’s good news not only for the safety of society, but also for freedom of speech and for democracy.

Society needs the trusted ones

Publicly inciting, or attempting to incite, violence against other people is an uncontroversial offence. It is unacceptable everywhere in the civilised world. A service owner or Editor-in-Chief needs only to supervise the discussion, detect such content and take it down immediately. Unfortunately, supervision may not be easy because the volume of messages is huge.

A modern AI tool, like Utopia AI Moderator, is a great help here. It can be used as an intelligent monitoring mechanism. Whoever runs the platform defines the rules and the moderation policy. The machine learns these and acts as the controller of the “city square”. It keeps the square calm and calls in a human moderator to help if there is something it does not understand. The machine never sleeps and is never biased.
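To make the division of labour concrete, here is a minimal sketch in Python of the general pattern described above: an automated classifier handles the clear-cut messages and routes anything it is unsure about to a human moderator. The class names, labels and threshold value are hypothetical placeholders chosen for illustration; they are not Utopia AI Moderator’s actual API.

```python
# Minimal sketch of the pattern described above: an automated classifier handles
# clear-cut messages and routes uncertain ones to a human moderator.
# All names, labels and the threshold value are hypothetical, not Utopia's API.
from dataclasses import dataclass


@dataclass
class Decision:
    label: str         # e.g. "publish" or "reject", as defined by the platform's policy
    confidence: float  # the model's certainty in that label, between 0.0 and 1.0


class KeywordStandInModel:
    """Trivial stand-in for a trained moderation model, so the sketch runs."""

    def classify(self, message: str) -> Decision:
        if "violence" in message.lower():
            return Decision(label="reject", confidence=0.97)
        return Decision(label="publish", confidence=0.60)


def moderate(message: str, model, threshold: float = 0.9) -> str:
    """Apply confident decisions automatically; escalate uncertain ones."""
    decision = model.classify(message)
    if decision.confidence < threshold:
        return "escalate_to_human"   # the machine calls a human moderator for help
    return decision.label            # confident decisions are handled automatically


if __name__ == "__main__":
    model = KeywordStandInModel()
    print(moderate("See you at the market on Saturday", model))  # escalate_to_human
    print(moderate("I am inciting violence tonight", model))     # reject
```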

Not all cases are as simple as the previous example. Publishing a picture of a religious icon or a swimsuit, for example, may be fine in some regions but a total no-go in other places and cultures.

This is not a problem that machines can fix by themselves. Utopia AI Moderator is trained on human decisions: every language and platform gets its own unique model, and that model can fully capture culture-specific features. The only requirement is that the people who provide the training examples to the AI understand those nuances.
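For a sense of what “trained on human decisions” means in practice, the sketch below fits a per-platform text classifier on messages paired with the decisions human moderators made on them. The data and the components used here (generic TF-IDF features with logistic regression from scikit-learn) are placeholders for illustration only, not Utopia’s own modelling approach or any real moderation policy.

```python
# Minimal sketch of "trained on human decisions": a per-platform text classifier
# fitted on messages together with the decisions human moderators made on them.
# The data and the scikit-learn components are generic placeholders, not
# Utopia's proprietary modelling approach.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: each message paired with the human decision on it.
messages = [
    "Have a great weekend, everyone!",
    "Let's go and hurt those people tonight",
    "Lovely photo from the festival",
    "They deserve to be attacked",
]
human_decisions = ["publish", "reject", "publish", "reject"]

# A separate model would be trained for every language and platform,
# so it reflects that community's own rules and cultural nuances.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, human_decisions)

print(model.predict(["See you at the square on Saturday"]))
```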

It is often said that, in the wrong hands, AI tools could be used against people and democracy. Utopia provides its tools only to parties that are trusted to follow the laws and rules set by democratically elected politicians and society. To make this understanding explicit, Utopia’s contracts state that the agreement is terminated with immediate effect if either party breaches the UN Declaration of Human Rights.

Tech is ready – the time for businesses is now

The internet and new digital networking platforms have grown so fast that the real world has struggled to keep up. Social media services connect people globally. Borders are not what they used to be. If malicious content can’t be stopped in time, it can cause great harm.

Technology is now ready to handle the moderation and curation of an exponentially growing amount of user-generated content. It is time for platforms, services and businesses to adopt it.

Building a unique AI model with Utopia AI Moderator takes only two weeks. It is infinitely scalable, no matter which language, dialect or moderation policy one wants to adopt, and the model is kept up to date as the world, the language and the policies change.

Our approach is based on statistical modelling and computational linguistics rather than Natural Language Processing (NLP). We chose it because our AI practitioners started with Finnish, one of the most complex languages in the world and extremely hard for NLP to solve.

Utopia’s AI scientists found that their approach is superior to others at understanding the language of social media.

Sinhalese, Tamil, German, English, Hindi, Chinese, Arabic… in any language or dialect, our machine learns to understand not only the text but also the context.

Digital city squares will once again be safe places to be.

Want to learn more?

Check out our case studies or contact us if you have questions or want a demo.

