Ethical AI part 6: No prejudice

October 23, 2020
Our Utopia AI models moderate each message or comment solely on its content and context, no matter who created it.

Utopia has been asked to build AI moderation models that would favour comments from a certain gender "because their comments are better". Utopia has also been asked to build AI moderation models that would judge a writer's fresh comment based on that writer's earlier bad behaviour.

Business-wise, both requests are understandable. The higher the traffic, the greater the number of impressions and clicks for the ads. The worse the writer's reputation, the higher the risk of unacceptable content.

Unbiased moderation of chat messages and news comments is important to Utopia. As a text analytics company, Utopia is committed to the United Nations' Universal Declaration of Human Rights, which prohibits discrimination of any kind and guarantees everyone's freedom of expression.

Of course, every company and online service provider has the right and the responsibility to decide what kind of comments are accepted on its service and in its community. But with Utopia AI onboard, the publishing decision must be made with respect for freedom of speech, and without prejudice.

Traditional moderation tools do not understand the semantic meaning of text, so user modelling, i.e. relying on the user's past behaviour, is one way to increase the quality of such tools. In contrast, Utopia AI is powerful enough to moderate each message or comment solely on its content and context, no matter who created it. Utopia is not willing to build AI models that do user modelling or that enable prejudiced or discriminatory moderation of social media communication.
