AI moderation and freedom of expression
Imagine an IT company with access to all user-generated content on the internet. Then imagine that company using the most fashionable AI to moderate all of that content before it goes live.
This scenario might seem a bit problematic to many, and for good reason.
Modern machine-learning-based AI tools are powerful enough to mimic human decision-making. The decisions of such AI moderation systems depend heavily on their training data. If the training data directs the AI to root out, for example, all voices criticising governments, those voices will go silent. The vital question is how humans define the moderation policy.
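The point that the training data, not the model, decides what gets silenced can be illustrated with a deliberately simplified sketch. This is not Utopia's technology; it is a toy word-counting classifier, and all the names and example data below are hypothetical:

```python
from collections import Counter

def train(examples):
    """Count word frequencies per label from (text, label) pairs."""
    counts = {"allow": Counter(), "block": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def moderate(counts, text):
    """Score a message by which label's vocabulary it matches more."""
    words = text.lower().split()
    allow_score = sum(counts["allow"][w] for w in words)
    block_score = sum(counts["block"][w] for w in words)
    return "block" if block_score > allow_score else "allow"

# The model has no opinion of its own: the labels decide.
# Here the (hypothetical) annotators labelled government criticism as "block".
biased_data = [
    ("the government wastes money", "block"),
    ("criticism of the ministry", "block"),
    ("lovely weather today", "allow"),
    ("great match last night", "allow"),
]
model = train(biased_data)
print(moderate(model, "the ministry wastes money"))  # prints "block"
```

Relabel the same four training texts and the identical code will wave the same message through, which is exactly why the humans who define the moderation policy, not the algorithm, carry the responsibility.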
Utopia Analytics provides a real-time moderation service for numerous heavyweight online platforms and communities around the world. Utopia AI is based purely on cutting-edge machine learning, and Utopia's text analytics products can understand the semantic meaning of text in any language.
In this light, it is easy to understand why Utopia has chosen never to define the moderation policy for any of its customers' online services. That is and will remain the customers' privilege: every company defines for itself how users should behave on its platform. Of course, companies need to follow local legislation, in line with international law. Ultimately, it is up to the local courts to rule on what type of content is acceptable and what is not.
If there is any doubt that a customer or potential customer would use moderation to violate human rights, Utopia will not provide the moderation service. Our contracts state that if a party breaches the Universal Declaration of Human Rights, the cooperation will be terminated.
We believe it is crucial for earthlings to have an internet with many different views and a diversity of voices, as long as the messaging does not hurt anybody. We also believe that no single person or company should be granted the power to decide globally what is acceptable to say and what is not.