September 1, 2021
Make catfishing hard again
Ly Hoang, Marketing Manager
Two months ago, I received a late-night call from a friend in serious distress. She was being blackmailed.
It had all started a week earlier. On a dating app, my friend started talking with this decent-looking expat who worked in the city. They got along well and agreed to meet up. The guy didn’t, however, show up for the first date. Nor did he show up for the following two. Despite his convincing explanations, she was almost certain by then that he was a catfish.
Her theory was confirmed later that day. As she was browsing the dating app, she came across two other profiles using this man’s exact same pictures, but with different names, ages, locations and bios. She immediately blocked him on all social media.
Unfortunately, it wasn’t the end of the story.
The next evening, my friend received a message from her catfish under a cloned social media account.
He had collected all of her family and friends’ contact information and threatened to forward all of her sensitive photos and videos if she didn’t do what he wanted.
She called the police, but they couldn’t help because the situation was deemed “not urgent.” She then proceeded to contact the dating app helpline. The helpline couldn’t do anything unless there was an official warrant to reveal the real location of the account. The police declined to issue the warrant.
The next morning, the blackmailing account disappeared, together with the man’s dating app account and all the duplicate ones. Suddenly, everything went completely silent. No new messages, nothing.
She was only starting to recover from weeks of anxiety, sleeplessness and stress when her phone exploded with messages. Five weeks after the threat, the blackmailer finally did what he said he would do. He sent the photos and videos to her family and relatives, along with a new threat that this was not the end of it.
And everything went silent again.
Despite the mental health support she is now receiving, the traumatic experience left my friend depressed and living in fear of never knowing when her tormentor will strike again. Even with all the evidence gathered, the police still did not open a case, despite knowing that he had targeted other people the same way.
What can dating apps do?
Most of us now know someone who uses dating apps. Online dating anecdotes can be entertaining, and sometimes they mark the start of lifelong relationships. Stories such as my friend's, however, have become all too common.
Around 10% of all online dating profiles can be considered fake. According to the Online Dating Dangerous Liaisons report, 58% of dating app users are concerned about getting catfished, and 62% worry about potential physical and psychological violence and abuse.
Scammers, on top of causing serious reputational damage to dating apps, also create a big barrier for people who would like to use such services. In a report by Kaspersky, 38% of respondents said they had never used dating apps because they were afraid of being scammed. Of all the different types of scams on dating apps, users most often encountered catfishing (51%), malicious links or attachments (21%), or found that their identity had been stolen (17%).
With catfishing being such a major obstacle to an enjoyable and safe dating experience, preventing it should be of the utmost importance to dating apps.
In fact, my friend’s ordeal might have been prevented from the start, had the dating app possessed a faster and more accurate screening process for duplicate profiles.
One solution is to automate the whole moderation process with advanced AI.
Developed by PhD-level experts in AI, text analytics and computational linguistics, Utopia AI Moderator is the market-leading solution for tackling content that violates community guidelines, and for targeting potential catfishing-related activities in real time. Utopia’s tools offer a duplicate-detection feature that discovers identical profiles and suspicious patterns in behaviour. They also analyze and describe what is off-kilter or dubious about the content, and this information can be sent to users automatically.
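To make the idea of duplicate detection concrete: Utopia's internals are not public, but one common technique for spotting reused profile photos is perceptual hashing, where near-identical images produce near-identical fingerprints even after resizing or re-compression. The sketch below is purely illustrative and assumes photos have already been downscaled to tiny 8×8 grayscale grids; it is not Utopia AI Moderator's actual method.

```python
# Illustrative sketch only -- not Utopia AI Moderator's actual implementation.
# Average-hash (aHash): a simple perceptual hash for flagging reused photos.
# Assumes each image is already downscaled to an 8x8 grid of grayscale values.

def average_hash(pixels):
    """Return a 64-bit fingerprint: 1 where a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Count differing bits between two fingerprints."""
    return sum(a != b for a, b in zip(h1, h2))

def likely_duplicate(img_a, img_b, max_distance=5):
    """Flag two photos as probable duplicates if their hashes nearly match."""
    return hamming(average_hash(img_a), average_hash(img_b)) <= max_distance

# A photo, the same photo slightly brightened (as re-uploads often are),
# and an unrelated checkerboard pattern:
photo = [[r * 8 + c for c in range(8)] for r in range(8)]
reupload = [[p + 1 for p in row] for row in photo]
unrelated = [[255 if (r + c) % 2 == 0 else 0 for c in range(8)] for r in range(8)]

print(likely_duplicate(photo, reupload))   # the re-upload is caught
print(likely_duplicate(photo, unrelated))  # the unrelated image is not
```

A production system would combine a hash like this with behavioural signals (sign-up timing, message templates, location inconsistencies) so that a match triggers review across all accounts sharing the photo, not just the newest one.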
In addition, other equally alarming concerns – such as inappropriate behaviour, hate speech, discrimination, sexual harassment, and scams – are monitored and handled by Utopia AI around the clock. This will lift a lot of time-sensitive responsibilities from human content moderators, while enhancing the users’ dating experience by creating a much safer environment.