Dating apps: Trust, safety and security best practices
Dating has gone digital — customers are swiping right for online dating platforms that make them feel safe throughout the entire experience. That means it's up to the companies behind the platforms to maintain trust and protect customer privacy. Those who take this to heart will have users falling 'head over heels' for their brand.
Online dating existed long before mobile apps arrived on the scene, but it was Tinder's 2012 launch that led to explosive growth in the market. There are now over 1,500 different dating apps and websites from which to choose, and more arriving on the scene seemingly every day. According to recent estimates from Business of Apps, usage of dating apps surged from 185 million users in 2015 to 270 million in 2020, and revenue grew to $3.1 billion in 2020 — nearly double the amount from just five years prior. And there's no sign of things slowing down.
All dating app experiences begin with strict user guidelines on what is, and isn't, appropriate or acceptable while using the technology. These rules of engagement are constantly updated, and can be enhanced through partnerships with companies that have a rich understanding of customer experience (CX), technology and trust and safety. Dynamic partners can help these companies to develop and implement in-app technology like chatbots and AI-driven content moderation tools to help keep users safe.
These features are intrinsic to an app's security and its perception in the marketplace. Here's a look at some of the best practices in the industry.
Start with profile verification
One of the best ways to keep dating app users safe is to prevent those with ill-intent from entering online communities in the first place.
Dating app brands are becoming more and more vigilant on this front, vetting all profiles and verifying their authenticity before granting users access to the community. This gatekeeping approach helps to deny entry to scammers and others who set out to exploit the apps and their users.
However, even with a robust profile verification system in place, there's still a need for security features that can address the noncompliant users who slip through the cracks. That's why it's important to empower app users with reactive security features, like report functions. When a user flags a suspicious or malicious profile, human content moderators will conduct an investigation into the profile in an effort to keep the community safe.
Safeguard the experience with block and report features
For those who progress beyond browsing profiles to messaging, block and report features remain essential throughout the entire dating journey. There are two key tenets here.
- Users should be able to flag when a message they've received goes against community guidelines, thereby prompting further investigation by the company behind the app. At the conclusion of the investigation, the support team should provide an update to the user who made the initial report, helping to instill confidence and trust in the platform.
- Users should be able to bar another user from messaging them so as to deny them the ability to send unwanted content. Consent is key, and that applies to messaging too.
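The two tenets above can be sketched as a thin safety layer between users and message delivery. This is an illustrative toy model only; the class and method names are hypothetical and not any app's real API.

```python
from dataclasses import dataclass, field

@dataclass
class SafetyLayer:
    """Hypothetical in-app block/report layer (illustrative only)."""
    blocked: dict = field(default_factory=dict)   # user_id -> set of blocked user_ids
    reports: list = field(default_factory=list)   # queue for human moderators

    def block(self, user_id: str, target_id: str) -> None:
        """Consent first: bar target_id from messaging user_id."""
        self.blocked.setdefault(user_id, set()).add(target_id)

    def can_deliver(self, sender_id: str, recipient_id: str) -> bool:
        """Drop messages from anyone the recipient has blocked."""
        return sender_id not in self.blocked.get(recipient_id, set())

    def report(self, reporter_id: str, message_id: str, reason: str) -> dict:
        """Queue a flagged message for human review; the ticket lets the
        support team update the reporter once the investigation closes."""
        ticket = {"reporter": reporter_id, "message": message_id,
                  "reason": reason, "status": "under_review"}
        self.reports.append(ticket)
        return ticket
```

The key design choice is that blocking is enforced at delivery time, so a blocked user simply cannot reach the person who blocked them, while reports land in a queue for human moderators rather than being auto-resolved.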
In addition to these self-reporting functions, other features are being introduced that make use of artificial intelligence (AI) to prevent noncompliant behavior and to mitigate its effects.
Implement intelligent tech
To earn trust, effort must be put into preventing damaging messages from being sent in the first place. Take, for example, Tinder's Are You Sure? feature, which prompts users to think twice before they hit send on a new message. The feature uses AI to detect drafted content that could be harmful or noncompliant and asks the would-be sender to pause and confirm whether they'd like to continue with the send. After the initial rollout and beta testing, Tinder says its algorithms detected a 10% drop in inappropriate messages among those users.
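A pre-send check like this can be sketched as a scoring step that gates the send action. This is not Tinder's actual model; a real system would use a trained language classifier, and the toy lexicon scorer below merely stands in for it.

```python
# Illustrative pre-send check in the spirit of an "Are You Sure?"-style
# prompt -- NOT Tinder's real algorithm. A trained model would replace
# the toy lexicon scorer below.

FLAGGED_TERMS = {"idiot", "ugly", "loser"}   # hypothetical lexicon

def harm_score(draft: str) -> float:
    """Fraction of words in the draft that match the flagged lexicon."""
    words = draft.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in FLAGGED_TERMS for w in words) / len(words)

def pre_send_check(draft: str, threshold: float = 0.1) -> str:
    """Return 'confirm' to make the sender pause, else 'send'."""
    return "confirm" if harm_score(draft) >= threshold else "send"
```

Note that the gate is a nudge, not a block: a "confirm" result asks the sender to pause and reconsider, which matches the feature's goal of prevention rather than punishment.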
Another use of intelligent technology comes in the form of the Screenshot Block feature from the popular dating app, Badoo. For Android users, the feature prevents private conversations and photos from being captured, saved and shared, with the goal of increasing privacy for the app's users. Apple users attempting to take a screenshot will receive an automated warning message.
If, on the other hand, a user decides to willfully break guidelines and send something inappropriate, brands can also implement AI-backed tools that automatically detect and subsequently block or blur unwanted images. In-app safety features like these confidently signal to users that trust is a community non-negotiable. And as the saying goes — relationships are built on trust.
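The block-or-blur decision described above can be expressed as a simple policy over a classifier's confidence score. This is a sketch, not any app's real pipeline; `nsfw_score` stands in for the output of a trained image classifier, and the thresholds are hypothetical.

```python
# Illustrative auto-moderation hook for inbound images -- a sketch only.
# `nsfw_score` would come from a trained image classifier in practice.

def moderate_image(nsfw_score: float,
                   block_at: float = 0.9, blur_at: float = 0.6) -> str:
    """Map classifier confidence to an action: block, blur, or show."""
    if nsfw_score >= block_at:
        return "block"      # high confidence: never deliver the image
    if nsfw_score >= blur_at:
        return "blur"       # uncertain: deliver blurred, let the user opt in
    return "show"
```

Using two thresholds gives the recipient agency in the gray zone: clearly violating images never arrive, while borderline ones arrive blurred and stay hidden unless the user chooses to reveal them.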
Staying safe — online and offline
Let's not forget that for many users, the endgame is to go on engaging dates. That means at some point the experience transitions from in-app to in-person. With this in mind, leading brands are taking steps to protect users who make the leap to meeting in person.
One example is Tinder's partnership with Noonlight. The platform's Timeline feature triggers emergency services if users feel unsafe, uneasy or need help while on a date. Users save information about the person they're meeting and the location of the date beforehand. Then, during the date, they can discreetly hit the "panic button" in the app, and emergency services in the user's area are alerted immediately.
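The save-then-trigger flow can be sketched as two steps: store the date details ahead of time, then forward them to a dispatcher when the panic button is pressed. This is an illustrative model only, not Noonlight's real API; `dispatch` stands in for whatever call alerts local emergency services.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DatePlan:
    """Details the user saves before the date (hypothetical schema)."""
    user_id: str
    match_name: str
    location: str

class PanicButton:
    """Illustrative panic-button flow -- not Noonlight's actual API.
    `dispatch` stands in for the call that alerts emergency services."""

    def __init__(self, dispatch: Callable[[dict], None]):
        self.plans: dict[str, DatePlan] = {}
        self.dispatch = dispatch

    def save_plan(self, plan: DatePlan) -> None:
        """Step 1: store who the user is meeting and where, beforehand."""
        self.plans[plan.user_id] = plan

    def trigger(self, user_id: str) -> None:
        """Step 2: forward the pre-saved details so responders know
        who the user is with and where to find them."""
        plan = self.plans[user_id]
        self.dispatch({"user": user_id, "with": plan.match_name,
                       "location": plan.location})
```

The point of saving the plan up front is that the alert carries actionable context (companion and location) even if the user can only manage a single discreet tap during the date.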
Dating, online or otherwise, should be a pleasant experience for all parties. But for there to be fun, there needs to be trust. For dating apps to earn and maintain user trust, brands must keep their customers safe and secure at every turn — from left swipe to right.