Facebook refused help to end hate speech in Sri Lanka
Finnish online moderation firm says Facebook bosses put protecting their business before people's safety
18 June 2019
A company that uses artificial intelligence to moderate online hate comments within milliseconds offered its help to Facebook in Sri Lanka but was turned away, a UK House of Commons committee on Disinformation will be told today.
Utopia Analytics, a world leader in moderating online comments, will show the committee a series of emails in which it told the social media company that hate speech was an avoidable problem it could help eradicate. In rejecting the offer of help, Facebook appears to have decided that protecting its business was more important than preventing the damage done to people's lives via its platform.
Hate speech posted online in the run-up to the atrocities of 21 April 2019, which killed 253 people, has been cited as a contributing factor in the attacks.
Mari-Sanna Paukkeri, CEO of Utopia Analytics, said:
“In March 2018 we showed Facebook that we could get rid of the majority of the hate speech from their site within milliseconds of it appearing. Facebook have repeatedly claimed that this technology does not exist but despite what they may say, we have been using it successfully for over 3 years in many countries and with many businesses.”
According to Paukkeri, Facebook appears to have chosen its business over safety:
“It is a shame that Facebook decided that their internal considerations were more important than getting rid of the inflammatory rhetoric that was posted on their site. On the other hand, the dangerous and hate-filled language used was said to have been a contributing factor to the attacks so we will never know if taking it down would have made a difference.”