30 March 2020
All of the major social media companies and their parent corporations have issued a joint statement on their COVID-19 response efforts. They have also invited other companies to join them as they work to keep their communities healthy and safe.
In the statement, the companies stressed their joint effort to combat fraud and misinformation about the virus, elevate authoritative content on their platforms, and share critical updates in coordination with governments.
The Finnish text analytics company Utopia Analytics is aware of the struggle that social media giants now face with content moderation. This is why Utopia Analytics is offering its Utopia AI Moderator service to one of these giants at cost for as long as the crisis lasts. It takes two weeks for Utopia to build a unique production-ready AI model.
“Online traffic has increased with the crisis,” states Dr. Mari-Sanna Paukkeri, CEO of Utopia Analytics. “In a precarious situation, people want to communicate, and they have the time to do so. We are aware of how big social media companies are struggling with content moderation right now. Therefore, we’re offering them help.”
COVID-19 or not, national and international reports show that online hate speech is a growing problem all over the world. For example, the Council on Foreign Relations has stated that, at their most extreme, rumors and invective disseminated online have contributed to violence ranging from lynching to ethnic cleansing.
The fact is that the technology to make the internet safer already exists. Advanced, machine-learning-based moderation tools have been on the market for years. One of them is Utopia AI Moderator, which learns each online service’s unique moderation policy and is the only product that can analyze the meaning of text in any language of the world. It can detect hate speech, toxic content, or any other type of unwanted content before it gets published.
Utopia AI Moderator is used by newsrooms, social media services, discussion forums, and other online services worldwide. It moderates hundreds, even thousands, of messages every second, in real time. Utopia’s statistics show that typically 18–25 percent of news comments violate the online service’s terms and therefore should not be published. (The share of improper content depends, for instance, on the moderation policy.)
Think about a news site that receives 1 million news comments every month, where the rate of improper comments is 20 percent. This means that almost 7,000 comments need to be rejected daily. Finding these improper comments manually requires a great deal of human work. During a peak hour of traffic, an improper comment that needs to be detected and rejected can easily appear every five seconds.
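The scenario above can be checked with a quick back-of-the-envelope calculation. This is a sketch using the illustrative figures from the example, not data from Utopia Analytics:

```python
# Back-of-the-envelope moderation load for the example news site.
# All figures are illustrative assumptions taken from the scenario above.

monthly_comments = 1_000_000   # comments received per month
improper_rate = 0.20           # share that violates the service's terms
days_per_month = 30

improper_per_month = monthly_comments * improper_rate          # 200,000
improper_per_day = improper_per_month / days_per_month         # ~6,667, i.e. "almost 7,000"

# At peak traffic, if one improper comment arrives every 5 seconds:
peak_improper_per_hour = 3600 / 5                              # 720 per hour

print(f"Improper comments per day: {improper_per_day:.0f}")
print(f"Improper comments in a peak hour: {peak_improper_per_hour:.0f}")
```

At that peak rate, a human moderator would need to read, judge, and reject a comment roughly every five seconds without pause, which is the workload the automated approach is meant to absorb.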
“Actually,” Paukkeri says, “humans quite easily grow tired while trying to understand what they read. On a tight schedule, you might not be able to give a second glance at a comment. However, since machines don’t have feelings, they always process the text in the same way. Think of a production line: we assume a milk bottle or a car is always the same quality, and making them on a production line is the only way to achieve that quality. The same applies to moderation work.”
In Finland, many businesses, including newsrooms, are using AI-based moderation tools. For example, Iltalehti.fi, owned by Alma Media, has increased both commenting and the time readers spend on the site by automating most of its moderation. As the “Land of Engineers,” Finland seems to be ahead of the curve here. A recent London School of Economics and Political Science report, based on a survey covering 32 countries, found that less than 40 percent of news organisations have a dedicated AI strategy.
“Around the world,” Paukkeri says, “the media has been persistent in raising this issue over and over again. It’s been a difficult problem to solve, but there are solutions, and they will ultimately be implemented.”
Tackling online hate speech takes more than responsible Nordic media leaders, especially when much of the world now communicates on social media, with nearly a third of the global population active on Facebook alone.
“AI is difficult to train and maintain,” Paukkeri notes. “Many of the products are brand new. Even though you might have a bad experience with certain tools that don’t perform well, there are also people skilled enough to build AI tools that really work. Ask about the experiences of the people and online services that are already using the tools, and be open-minded. Once you’re in production, you’ll see how your users start to learn what’s okay and what isn’t, since you give them the feedback in real time with an advanced AI tool.”
Paukkeri adds: “Mark Zuckerberg has said they need to wait 5 to 10 years until AI is ready. That’s not true.”
Utopia Analytics’ offer is being presented to all the major social media companies. A production-ready AI model can be built in two weeks, and Utopia Analytics will keep it up-to-date.