What do artificial intelligence and machine learning have to do with payment security? Well, the Federal Trade Commission received around 426,000 reports of credit card fraud in 2023. While that figure is 5% lower than the 2022 total, it’s still 53% higher than in 2019, which points to a troubling longer-term upward trend.
Is there any way to stop this rampant uptick in payment fraud? There have been interesting developments in identity verification, such as biometrics. Mastercard is now making cards that use thumbprints to verify transactions, for example.
But what about card-not-present (CNP) transactions, where the cardholder is not physically present during the transaction? And what about stolen “legacy” cards (traditional chip and contactless plastic cards), which consumers still use widely? These scenarios pose unique challenges for artificial intelligence and machine learning in fraud detection. However, these technologies are designed to adapt to changing fraud patterns, and that adaptability gives businesses the confidence to stay one step ahead of fraudsters.
Machine learning and AI may be the key to thwarting fraud and criminal activity in eCommerce and even in-person transactions. These security technologies analyze massive amounts of data to predict what legitimate behavior looks like and call out transactions that don’t fit.
What is AI, and What is Machine Learning?
Machine learning (ML) is a subcategory of artificial intelligence (AI). ML is specifically concerned with analyzing patterns to draw conclusions and perform tasks without specific instructions. The history of artificial intelligence and machine learning arguably began with interest in studying the human mind. AI is essentially a replication of human processes.
However, true “machine learning” did not appear until the 1950s, when scientists at IBM developed (what we would today consider) primitive programs for analyzing the best moves to make during a game of checkers. Fast-forward to today, where different markets use AI for a lot more than hopping around a black-and-white checkerboard.
Unless you’re living under a rock (and maybe you should be, given everything we’ve seen over the last few years), you’ve probably seen that AI is “taking over” social media, the internet, and everything digital on our mobile devices. AI-generated artwork, music, and writing (excluding our informative, bespoke, hand-crafted blog) are shocking the world with their verisimilitude to real life and their uncanny ability to test the limits of imagination. Have you seen that AI version of Harry Potter as a mafioso movie?
These fascinating developments occur in the “expressive” space among private, individual consumers. Undoubtedly, governments around the world are working tirelessly to employ artificial intelligence to create robot dogs and drones to militarize local police forces (we’ll save that political conversation for your Thanksgiving dinner table).
But what are the applications for AI in the B2B and B2C spaces? We’ll get there, yes, indeed…but first, let’s take a slight detour. Let’s look at how criminals will use AI and machine learning to navigate the “payment landscape.”
It’s Alive… It’s ALIVE!
Mary Shelley’s Frankenstein is a cornerstone of Romantic literature. It explores the idea of bringing the dead back to life through a little bit of patchwork. What’s that got to do with fraud detection and payment security? Everything.
Take the following story as an example. In 2020, Philadelphia attorney Gary Schildhorn took a phone call from his son. His son said he had gotten into a car accident while inebriated and injured a pregnant woman, and that he would need to post 10% of his $90,000 bail ($9,000) to get out of jail. The son directed Gary to an attorney, who directed Gary to another attorney.
This attorney asked Gary to send the $9,000 to his “credit union” using a specific ATM. Something did not seem right to Gary, so he called his daughter-in-law. Turns out that Gary’s son was actually fine. None of this had happened at all.
What happened was that scammers used AI to mimic the voice of Gary’s son, manipulating him into a chain of events that could have easily cost him $9,000. The “credit union ATM” they wanted him to use was a crypto terminal, and if he had sent them that “bail money,” he would have never seen it again.
This story occurred in 2020, so it’s already old news. Since then, criminals have continued to perfect the craft of using AI and machine learning to make leaps and bounds in developing detailed forms of identity theft. How exactly does this work?
Everybody is posting videos and live streams of themselves to their social media accounts (via Instagram Reels, TikTok, Facebook Live, and the like). A criminal can take a dozen of these videos, feed them through a voice-cloning program like ElevenLabs, and leverage the resulting clone into a financial crime.
What’s that got to do with Frankenstein? These criminals need other pieces of information, which they can glean from what is publicly available on social media: a birthday here, a vague-enough address there, a list of social connections... and suddenly they have a “Frankenstein identity.”
The Frightening Power of AI
These types of scams could not exist without using AI and machine learning tools. For instance, machine learning helps synthesize voice samples from 12 TikTok videos into a convincing call to your dad, grandma, or bank.
What’s frightening is that AI tools have reached a point of “democratized” accessibility: criminals can now create convincing fake videos and even place a convincing video call.
Another frightening thing about AI and machine learning is that they can essentially teach themselves to acquire new skills far more rapidly than people can. They can get better and better at what they do, potentially outstripping consumers’ ability to keep abreast of the latest scams.
Fortunately, criminals are not the only ones using AI and machine learning algorithms. Businesses increasingly draw on large amounts of data to direct the flow of supply chains, make informed decisions, and prevent fraud.
Consulting firm PwC estimates that 46% of businesses have experienced some form of fraud over the past two years. Around 17% of organizations already use AI and machine learning to detect fraud and prevent data breaches. Of organizations that aren’t (yet), 26% plan on implementing AI and ML strategies within the next two years.
Rule-Based Versus Real-Time Paradigms
In the past, fraud prevention followed a rule-based paradigm that was somewhat “set in stone.” Consumers may have encountered it while traveling, when a routine gas station purchase was declined as a “fraudulent transaction.”
The “rule” or parameter in place would expect all in-person transactions to occur within a certain radius of the cardholder’s ZIP code. Certain purchase categories, such as gas stations, may have been blacklisted as inherently suspicious.
Two problems with the rule-based paradigm are that (1) it’s inflexible and (2) it’s vulnerable to criminals who can outthink the rules.
Let’s address the first point: rule-based “predictive analysis” relies on static decisions entered by human programmers, who attempt risk management based on their own limited powers of pattern recognition (e.g., because it’s unlikely for a customer to use their credit card at a gas station 100 miles from home, we will always flag such a purchase).
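To make that concrete, here is a minimal sketch of what such a static rule might look like in code. Every field name, coordinate, and threshold below is an illustrative assumption, not any card issuer’s actual logic.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical, hard-coded rule parameters (illustrative only).
HOME_COORDS = (39.95, -75.16)         # assumed cardholder home location
MAX_DISTANCE_MILES = 100              # "no in-person purchases beyond this radius"
BLOCKED_CATEGORIES = {"gas_station"}  # categories treated as inherently risky


def miles_between(a, b):
    """Great-circle (haversine) distance between two (lat, lon) pairs, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3956 * 2 * asin(sqrt(h))


def rule_based_flag(transaction):
    """Return True whenever the static rules say 'decline'."""
    too_far = miles_between(HOME_COORDS, transaction["coords"]) > MAX_DISTANCE_MILES
    risky_category = transaction["category"] in BLOCKED_CATEGORIES
    return too_far or risky_category


# A perfectly legitimate fill-up on a road trip still gets declined:
print(rule_based_flag({"coords": (40.44, -79.99), "category": "gas_station"}))  # True
```

The rule fires on an ordinary road-trip purchase, which is exactly the kind of false decline described above.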
This problem alone can create negative customer experiences. A more flexible paradigm that uses AI algorithms and deep learning can “dive deeper” into patterns of behavior to develop more nuanced fraud detection techniques.
An Example of AI Detective Work
The other problem is that criminals, who work constantly to defeat fraud detection techniques, can outthink preset rules. Making money is a powerful motivator, and as mentioned, they have machine learning models of their own at their disposal.
So, let’s look at a few examples of how AI and ML could prevent payment fraud. Card companies have significant amounts of information with which to perform data analysis. They can see all of a customer’s previous purchases, including what they buy, when, and where.
Sometimes these security measures come from a partnership between the parties involved. For instance, an e-commerce platform can track the product pages a customer moves through. Based on the cardholder’s age, location, and other factors, including that browsing history, AI might stop the purchase as suspicious.
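As a purely illustrative sketch, a combined risk score might look something like the snippet below. The signal names, weights, and threshold are assumptions made up for this example; a real system would learn them from data rather than hard-code them.

```python
# Hypothetical risk signals and weights (illustrative assumptions only).
RISK_WEIGHTS = {
    "ships_far_from_billing": 0.35,  # delivery address far from the card's billing ZIP
    "rushed_browsing": 0.25,         # jumped straight to a high-value item and checkout
    "new_device": 0.20,              # first purchase from this device or browser
    "odd_hour_for_customer": 0.20,   # outside this customer's usual shopping hours
}
DECLINE_THRESHOLD = 0.6


def risk_score(signals):
    """Sum the weights of whichever risk signals are present (True)."""
    return sum(weight for name, weight in RISK_WEIGHTS.items() if signals.get(name))


def review_order(signals):
    """Approve the order, or route it for decline/manual review."""
    return "decline_or_review" if risk_score(signals) >= DECLINE_THRESHOLD else "approve"


print(review_order({"ships_far_from_billing": True, "rushed_browsing": True, "new_device": True}))
# -> "decline_or_review" (score 0.80 >= 0.60)
```

In production, the weights would come from a trained model rather than a hand-tuned table, but the shape of the decision is the same: many weak signals combined into one score and checked against a threshold.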
What are Anomaly Detection and Predictive Analytics?
This type of real-time fraud prevention is based on detecting “anomalies,” or events that fall outside the norm of expected behaviors. The list of “expected behaviors” is built on previous events and demographic information.
If an anomalous event occurs outside the scope of these expected behaviors, then AI can stop transactions associated with this event (e.g., sending money or making a purchase). The inputs of what constitutes “expected behaviors” are (as mentioned) things like previous purchase history. However, they may also include demographic information such as age, location, gender, occupation, and others.
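For a rough idea of how this might look under the hood, here is a small sketch using scikit-learn’s IsolationForest, a common off-the-shelf anomaly detector. The features and numbers are invented for illustration; real issuers draw on far richer data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic purchase history for one cardholder (illustrative only).
# Each row: [amount_usd, hour_of_day, miles_from_home]
rng = np.random.default_rng(0)
history = np.column_stack([
    rng.normal(45, 15, 500),  # typical purchase amounts around $45
    rng.normal(14, 3, 500),   # mostly afternoon purchases
    rng.normal(5, 3, 500),    # usually close to home
])

# Learn what "normal" looks like for this cardholder.
model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# Two incoming transactions: a routine one, and a large 3 a.m. purchase 250 miles away.
incoming = np.array([
    [52.0, 15.0, 4.0],
    [900.0, 3.0, 250.0],
])

for row, label in zip(incoming, model.predict(incoming)):  # 1 = normal, -1 = anomaly
    verdict = "flag for review" if label == -1 else "approve"
    print(row, "->", verdict)
```

Notice that the model never needs labeled fraud cases; it simply learns this cardholder’s normal pattern and flags whatever falls far outside it.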
Don’t rack your brain over what these “demographic parameters” might be: organizations will not give them out, because criminals could use the list to commit financial fraud. For instance, if criminals knew that AI would flag a certain transaction as anomalous, they could map out how to build up a pattern of “acceptable” transactions until they are finally ready to go big and go home.
Are AI and ML Better Than the Human Brain?
We won’t opine on that question in general, since it spans a wide range of topics, from self-driving cars to creating high-level art. However, as it applies to analyzing input data to provide security measures, the answer seems to be a resounding yes.
How long would it take a data scientist to browse through seven years of data points (e.g., 98 statements containing tens of thousands of credit card transactions)? How many variables could they juggle in their head at one time?
It could take a team of data scientists days or weeks to look through this data and develop a set of parameters for fraud prevention. There would have to be meetings, discussions, and delegations to different team members tasked with focusing on specific points.
AI and ML could accomplish this analysis in a few minutes, if not seconds. They can also juggle multiple pathways for consideration all at once. In contrast, a human can (on average) only think about four things at once, if that (according to scientists at the University of Oregon).
The truth is that when it comes to fraud detection, humans cannot keep up with AI and ML. And since criminals are also using ML and AI to commit financial crimes, to the point of evading image recognition and laboriously building up “acceptable” behavior patterns from stolen data, AI and ML become all the more necessary as defensive weapons.
Do You Need To Develop Your Own AI Models?
So, what does this all mean for your business? Do you have to learn computer programming to develop AI models and protect your business from criminals? What would that even mean in practice? Would you have to build a robot like the maid from The Jetsons?
No, you don’t (unless you want someone to cook, clean, and deliver sassy advice). Most of these fraud prevention and cybersecurity tools are cloud-based offerings you can subscribe to, many of which your payment processor is probably already using.
Because your processor handles your transactions on the back end, it is well positioned to work with banks and card networks and leverage AI and ML to flag suspicious transactions. From your end, it can be hard to tell whether a transaction is suspicious over the phone, online, or even in person, but AI and ML can catch it right away.
Even as AI and ML tools are deployed by banks, card networks, and payment processors, it is good to keep consumers in the loop regarding what those technologies entail. They should be made aware of certain types of fraud they can be subject to, such as identity theft. Many banks, card issuers, and even insurance companies now take it upon themselves to offer fraud monitoring and protection tools for their customers.
AI and ML are exciting trends in payment processing. If you have any other questions about how they work and/or fit into the landscape of taking payments, give us a call or drop us a line. We’d love to help you use these tools to secure your transactions and your cash flow.
Frequently Asked Questions About AI and Machine Learning in Fraud
What is artificial intelligence (AI)?
Artificial intelligence (AI) is how machines and computers mimic human intelligence and behavior, such as learning, reasoning, and problem-solving.
What is machine learning (ML)?
Machine learning (ML) is a subcategory of artificial intelligence (AI). It analyzes patterns and data to perform tasks without explicit instructions or a set of rules.
How do AI and machine learning detect payment fraud?
Artificial intelligence (AI) and machine learning (ML) use algorithms to analyze large amounts of data and detect anomalies, which helps identify suspicious transactions that could indicate fraud.
What are the benefits of using AI and machine learning for fraud prevention?
AI and machine learning are efficient: they can quickly and accurately process large amounts of data, reducing manual effort and error. AI-powered systems can also monitor transactions in real time, enabling immediate flagging of suspicious activities. Lastly, AI and machine learning programs can adapt to changing fraud patterns and evolve over time, ensuring continuous protection against evolving threats.