Deception and detection: Generative AI and fraud
"Two things are beyond contestation," asserts Jeff Puritt, president and chief executive officer at TELUS International, in a recent Fast Company article about generative AI (GenAI).
"First, there are real, powerful GenAI applications for digital customer experience," Puritt explains. "Second, just as human creativity is seemingly limitless, so is humanity's ability to circumvent rules and restrictions. With each great leap forward in ingenuity, so too comes a great leap forward in potential for harm — and GenAI is no exception."
That duality comes into focus within the context of fraud.
On one hand, fraudsters can use generative AI to create convincing, deceptive content across modalities with relative ease; one-third (33%) of cybersecurity experts identified "an increase in the volume and velocity of attacks" as a top threat related to generative AI in a Deep Instinct survey. On the other hand, brands can use generative AI to augment and enhance their holistic fraud prevention and detection strategies.
Given the emergence of generative AI, it is not a leap to suggest that brands must adapt to prevent and detect fraud today — and with customers in the middle, the stakes are high.
Generative AI is changing the fraud landscape
There are two broad concerns regarding GenAI and fraud. First, GenAI makes bad actors more efficient, enabling them to engage in a greater volume of fraudulent activity. Second, GenAI makes bad actors more convincing, increasing their chances of successfully defrauding an individual or brand. These two concerns are common among the following forms of fraud, which have all evolved with the use of GenAI.
Social engineering techniques are enhanced by GenAI
Generative AI applications are known to create convincing content — text, images, audio, video, code, data — you name it. Social engineering techniques are used by fraudsters to manipulate unsuspecting victims into sharing personal or financial information. Taken together, it is not difficult to see how the combination of GenAI and social engineering techniques presents a troubling prospect.
While there may have been a day when a bad actor would need to take the time to write a script to perpetrate fraud, that day is gone. Today, there are GenAI models like WormGPT, purpose-built to help users quickly generate convincing scripts and messages for phishing. From there, fraudsters can use automation to personalize the scripts with stolen information and make phishing attempts at scale. Convincing messages aren't limited to the written word, either: AI-generated deepfakes can deliver spoken messages that sound indistinguishable from voices recipients know and trust.
Meanwhile, triangulation fraud, which is common in ecommerce, has also become easier to execute with GenAI. In triangulation fraud, bad actors set up a fake ecommerce storefront in an attempt to get visitors to make purchases, entering their personal and financial information in the process. The fraudsters then place the real order with the unsuspecting victim's information so the purchase appears legitimate, before either selling the compromised information, making additional purchases or using it to commit further fraud. While this technique isn't new, with GenAI bad actors can generate text, images and video to make fake websites look more credible in a fraction of the time it would have taken before.
Access to synthetic data facilitates synthetic identity fraud
Synthetic identity fraud, or synthetic account fraud, is a technique deployed by fraudsters that combines real, stolen data with fake data to present seemingly complete and authentic customer profiles. If these profiles pass detection, they are used to apply for loans and credit cards, and to perpetrate further fraud. Although it's not a new tactic, algorithmically-generated data — called synthetic data — has reduced the effort required of bad actors.
By leveraging GenAI, bad actors can create more of these synthetic identities and go further in making them seem legitimate. For example, GenAI can be used to create a steady stream of social media posts for any number of synthetic identities. Bad actors are even using modern AI to forge important documents like driver's licenses and passports in an attempt to pass identity verification checks.
According to Deduce, cases of synthetic identity fraud have risen by 17% over the last two years, with generative AI exerting a growing influence. Meanwhile, Deloitte has labeled synthetic identity fraud the "fastest growing form of financial crime in the United States."
Brands can and must adapt to detect and prevent fraud
While generative AI offers potential to those keen on deception, there's a similar potential for brands performing fraud detection and prevention. The technology, especially when it is embedded within a multifaceted trust and safety strategy, offers real reasons for optimism.
Deploy generative AI for fraud detection
To keep up with the breadth and depth of fraud threats, many businesses have incorporated machine learning algorithms into their defensive strategies. These algorithms have been trained to identify, and subsequently prevent or flag, instances of fraud. In the process, they have protected countless brands and their customers. And, with the arrival of generative AI, there is every reason to believe that these algorithms are going to get much more effective.
One reason for this is GenAI's aforementioned ability to produce synthetic data, which can be used to train more effective fraud detection algorithms. Historically, the datasets used to train these systems did not contain enough fraud data for optimal training. High-quality synthetic data solves that imbalance, offering a way to enlarge and enrich a training dataset. With bigger, better datasets, fraud detection algorithms can become more effective at pattern recognition over time, which means they will be increasingly capable of rooting out fraudulent activity before it damages your customer experience or brand reputation.
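To make the rebalancing idea above concrete, here is a minimal, hypothetical sketch in Python. Real pipelines would rely on a trained generative model or an established technique such as SMOTE; this toy version only illustrates interpolating between known fraud records to enlarge the scarce minority class. All names and values are invented for illustration.

```python
import random

def synthesize_fraud_examples(fraud_rows, n_new, seed=0):
    """Create n_new synthetic rows by blending pairs of real fraud rows."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(fraud_rows, 2)      # pick two real fraud records
        t = rng.random()                       # blend factor in [0, 1)
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic

# Toy dataset: many legitimate transactions, very few fraudulent ones.
# Features here are (transaction amount, risk signal) — purely illustrative.
legit = [(50.0, 1.0), (20.0, 2.0), (75.0, 1.0)] * 100   # 300 rows
fraud = [(900.0, 9.0), (850.0, 8.0), (990.0, 10.0)]     # 3 rows

# Augment the minority class so both classes are the same size.
augmented_fraud = fraud + synthesize_fraud_examples(fraud, n_new=297)
print(len(legit), len(augmented_fraud))  # 300 300
```

With the classes balanced, a downstream detection model sees enough fraudulent examples to learn meaningful patterns rather than simply predicting "legitimate" for everything.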
Generative AI will also aid trust and safety teams in conducting fraud investigations. Consider banking, for example. It is not uncommon for banks to use augmented intelligence to determine if a flagged transaction can be handled automatically or if it requires human judgment. That essential technology predates GenAI, and helps teams manage the volume of transactions calling for review so that they can apply their energy and expertise to where it is needed. But today, through the application of generative AI, there is potential for these automated systems to do even more in collaboration with their human counterparts. Now, generative AI can enhance fraud analytics, effortlessly analyzing, summarizing and communicating information in human language. This means GenAI can now tell a team member why a transaction warrants review, and alert trust and safety teams to trends and other critical information that might inform future tactical or strategic enhancements.
Maintain "humanity in the loop"
Alongside generative AI, the role of humans remains critical in fraud prevention and detection.
For leaders responsible for developing and honing detection models, a human-in-the-loop approach can help to ensure your model does not pick up the wrong patterns and exhibit bias. For example, if your training data has led your model to associate fraud with a particular area code, acting on that pattern could have repercussions for customers in the real world. It is for this reason that model validation takes on heightened importance.
In simple terms, model validation is a process in which humans evaluate the outputs of a machine learning model to determine if it is performing as expected. In a recent Forbes article, Michael Ringman, chief information officer at TELUS International, explained the importance of model validation and continuous monitoring. "Fine-tuning the generative model through testing and validation processes can identify and fix potential shortcomings or biases that could lead to hallucinatory outputs. By continually monitoring the model's performance and analyzing its generated content, we can detect and address any emerging patterns and promptly intervene and refine the model’s parameters and training processes."
Meanwhile, for those looking to establish or enhance a comprehensive fraud prevention and detection program, it's important not to underestimate the value of human judgment in an effective defensive strategy. While algorithms might excel at learning patterns, humans shine in their ability to understand context — and when edge cases arise, context is key. Trust and safety professionals are called upon to review cases flagged by algorithms in which there is a degree of uncertainty. Their involvement helps to minimize the risk of a machine learning model taking the wrong action, for example by falsely identifying fraud in some cases, or by overlooking it in others. The most complete defense against fraud involves humans and technology, which underlines what is to be gained from a knowledgeable partner with experience in AI Data Solutions, as well as scalable trust and safety programs.
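The routing described above — automating confident decisions and escalating uncertain ones — can be sketched as a simple rule over a model's fraud score. This is a hypothetical illustration; the thresholds and function names are assumptions, not any particular production system.

```python
def route_transaction(fraud_score, auto_clear=0.10, auto_block=0.90):
    """Route a scored transaction: confident scores are handled automatically,
    uncertain mid-range scores go to a human trust and safety reviewer."""
    if fraud_score <= auto_clear:
        return "clear"         # model is confident the transaction is fine
    if fraud_score >= auto_block:
        return "block"         # model is confident the transaction is fraud
    return "human_review"      # uncertain: context matters, escalate

print([route_transaction(s) for s in (0.05, 0.50, 0.97)])
# ['clear', 'human_review', 'block']
```

Tuning the two thresholds is itself a human judgment call: widening the review band sends more edge cases to people at the cost of reviewer workload, while narrowing it trades human oversight for speed.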
As the fraud landscape changes, humans, and human judgment, will remain an important part of a holistic fraud prevention and detection strategy. It will be human collaboration between governments, businesses and researchers that evolves policies, guardrails and techniques that minimize the risk of fraud in a world empowered by generative AI.
Navigate the new fraud landscape with a trusted partner
Fraud is changing due to the emergence of generative AI. That is a reality brands must accept and adapt to in order to protect their reputations and customers.
The good news is that the same technology emboldening today's fraudsters has the potential to significantly improve the methods deployed by those looking to stop them. Now is the time to explore the ways generative AI can boost your defensive posture and bring out the best in your trust and safety team members.
If you're ready to rise to the challenge, the right partner can make all the difference. TELUS International's expertise spans fraud prevention and detection and artificial intelligence — reach out today.