Ethical AI Part 9: Safety
We build tools to maintain security and wellbeing in the digital world.
In the physical world, many of us have the privilege of taking safety for granted. We assume the existence of a security structure that ensures everything runs smoothly. If we attend a music festival, there are rules and instructions for everyone to follow, and on top of that, people who make sure the rules aren’t being broken.
We need to achieve a similar level of safety in the digital world. We should be able to participate in online gatherings and discussions without fear of being insulted, humiliated, robbed or attacked.
Many digital services today have huge numbers of users. Whether they are buying or selling, sharing opinions or swapping images, playing or just chatting, traffic on many services is so high that looking after the community's wellbeing is extremely labor-intensive. In fact, it is impossible to handle manually. There is an urgent need for advanced digital tools to maintain each service's security, both for the brand and for the users.
Utopia's purpose is to build tools that maintain security and wellbeing in the digital world, and that free humans from mundane tasks, allowing us to focus on the things that truly require the attention of our human brains. From online marketplaces to news-site comment sections, social media services, and dating platforms, each service provider must decide on its own community guidelines for acceptable behavior. Utopia AI learns that policy and then uses this knowledge to help protect users as well as brands.
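As a rough illustration of the idea of "learning a policy", content moderation is often framed as supervised text classification: a model is trained on examples already labeled under a service's own guidelines, then applied to new items. The sketch below is a minimal pure-Python Naive Bayes classifier over hypothetical labeled examples; the example texts, labels, and function names are illustrative assumptions, not Utopia's actual method or data.

```python
from collections import Counter
import math

# Hypothetical labeled examples reflecting one service's community
# guidelines. These are invented for illustration only.
TRAINING = [
    ("you are an idiot and nobody wants you here", "violation"),
    ("get lost loser", "violation"),
    ("i will find where you live", "violation"),
    ("great photo thanks for sharing", "ok"),
    ("does anyone know when tickets go on sale", "ok"),
    ("selling two festival passes message me", "ok"),
]

def train(examples):
    """Count word frequencies per label (multinomial Naive Bayes)."""
    word_counts = {"violation": Counter(), "ok": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the most likely label, using add-one smoothing."""
    vocab = {w for counts in word_counts.values() for w in counts}
    scores = {}
    for label, counts in word_counts.items():
        total = sum(counts.values())
        # Log prior for the label plus log likelihood of each word.
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.lower().split():
            score += math.log((counts[word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(TRAINING)
print(classify("nobody wants you here loser", word_counts, label_counts))  # → violation
```

Because the model is trained on examples labeled under one service's guidelines, the same code yields a different moderation policy for a different service simply by swapping the training data, which matches the idea that each provider defines its own rules.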
Naturally, the policy needs to be uniform and must take account of people's rights. Utopia will not agree to build AI models that encode prejudice. Utopia AI analyzes every item solely according to its content and context, regardless of who wrote it. Utopia's powerful tools rest on a solid foundation.