
How Generative AI and synthetic data can be used to train fraud models and improve detection rates

The possible applications of generative AI have been explored by many in recent weeks. One topic that remains largely unexplored, however, is how fraud analysts can use data created by generative AI to augment and improve their fraud detection strategies, and what the implications are of using synthetic data to train fraud models and improve detection rates.

 

It is well known in data science circles that the quality of the data presented to a machine learning model makes or breaks the result, and this is particularly true for fraud detection. Machine learning tools for fraud detection rely on a solid fraud signal, yet fraudulent records typically account for less than 0.5% of the data, making any model challenging to train effectively. In an ideal data science exercise, the data used to train an AI model would contain a 50/50 mix of fraud and non-fraud samples, but this is tricky to achieve and unrealistic for most real-world portfolios. While there are many methods for dealing with this class imbalance, such as clustering, filtering, or over-sampling, they do not entirely make up for an extreme imbalance between genuine and fraudulent records.
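To make the imbalance concrete, the sketch below is illustrative only; the column names and the 0.5% fraud rate are assumptions, not taken from any real dataset. It shows naive random over-sampling of the fraud class so that a downstream classifier sees a roughly balanced training set; more sophisticated techniques such as SMOTE follow the same principle.

```python
# Illustrative sketch (not the author's code): naive random over-sampling of the
# minority (fraud) class so a classifier sees a more balanced training set.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Toy transaction table: roughly 0.5% of rows are fraud, mirroring the imbalance above.
n = 100_000
transactions = pd.DataFrame({
    "amount": rng.gamma(shape=2.0, scale=50.0, size=n),
    "hour_of_day": rng.integers(0, 24, size=n),
    "is_fraud": rng.random(n) < 0.005,
})

fraud = transactions[transactions["is_fraud"]]
genuine = transactions[~transactions["is_fraud"]]

# Re-sample fraud rows (with replacement) until the classes are roughly 50/50.
fraud_oversampled = fraud.sample(n=len(genuine), replace=True, random_state=42)
balanced = pd.concat([genuine, fraud_oversampled]).sample(frac=1.0, random_state=42)

print(balanced["is_fraud"].mean())  # ~0.5 instead of ~0.005
```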

 

Generative AI, the application of transformer deep neural networks such as those behind OpenAI's ChatGPT, is designed to produce sequences of data as output and is trained on sequential data, such as sentences or payment histories. This differs from other AI and ML methods, which produce a single classification (fraud/not fraud) for each input and can be trained on records presented in any order. A generative model's output can, in principle, continue indefinitely, whereas classification methods produce a single outcome per input.
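The distinction can be sketched in a few lines of Python (the model interfaces here are assumed placeholders, not any specific library): a classifier maps one record to one outcome, while a generative model extends a sequence step by step and could, in principle, keep going.

```python
# Conceptual contrast, with placeholder model interfaces rather than real APIs.
from typing import Callable, List

def classify(record: dict, model: Callable[[dict], float]) -> str:
    """One input, one outcome: fraud / not fraud."""
    return "fraud" if model(record) > 0.5 else "not fraud"

def generate_sequence(seed: List[dict],
                      next_step: Callable[[List[dict]], dict],
                      length: int) -> List[dict]:
    """Autoregressive loop: each new payment is conditioned on the history so far,
    and generation could in principle continue indefinitely."""
    history = list(seed)
    for _ in range(length):
        history.append(next_step(history))
    return history
```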

 

As a result, generative AI is an ideal tool for synthetically generating data based on actual data, and the evolution of this technology will have critical applications in the fraud detection domain, where, as highlighted above, viable fraud samples are scarce and difficult for an ML model to learn from effectively.

 

With generative AI, a model can learn existing patterns and generate new, synthetic samples that look like 'real' fraud samples, boosting the fraud signal available to the core fraud detection ML tools.

 

A typical fraud signal is a combination of genuine and fraudulent data. The genuine data usually comes first in the sequence of events and reflects the actual behavioural activity of a cardholder, with fraudulent payments mixed in once a card or other payment method is compromised. Generative AI can produce similar payment sequences, simulating a fraud attack on a card; these sequences augment the training data and help the fraud detection ML tools perform better.
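A hedged sketch of that augmentation step might look like the following, where generative_model.sample() is a hypothetical placeholder for a trained sequence model rather than a real API. Each sample is assumed to be a list of payment records: genuine spending first, fraudulent payments after the simulated compromise.

```python
# Sketch of the augmentation idea described above. generative_model.sample() is a
# placeholder for a hypothetical trained sequence model, not a real library call.
import pandas as pd

def augment_with_synthetic_fraud(real_data: pd.DataFrame,
                                 generative_model,
                                 n_sequences: int) -> pd.DataFrame:
    synthetic_rows = []
    for _ in range(n_sequences):
        # Each sample is assumed to be a list of payment dicts: genuine spending
        # first, then fraudulent payments once the card is 'compromised'.
        sequence = generative_model.sample()
        synthetic_rows.extend(sequence)

    synthetic = pd.DataFrame(synthetic_rows)
    synthetic["is_synthetic"] = True   # keep provenance for later governance checks
    real = real_data.copy()
    real["is_synthetic"] = False
    return pd.concat([real, synthetic], ignore_index=True)
```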

 

One of the biggest criticisms of OpenAI's ChatGPT is that today's models can produce inaccurate or 'hallucinated' outputs, a flaw many in the payments and fraud space are rightly concerned about, as they do not want their public-facing tools, such as customer service chatbots, presenting false or made-up information. However, this 'flaw' can be turned to our advantage when generating synthetic fraud data: artificial variation in the synthesised output can produce novel fraud patterns, bolstering the detection performance of the end fraud defence model.
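One common way such variation is controlled in generative models is the sampling 'temperature'. The minimal sketch below, with an invented four-event vocabulary and made-up scores, shows how raising the temperature flattens the next-event distribution and yields more varied synthetic sequences.

```python
# Minimal sketch of temperature sampling: the same next-event distribution can
# yield different synthetic patterns on each draw. The event vocabulary and
# scores are invented for illustration.
import numpy as np

def sample_next_event(logits: np.ndarray, temperature: float,
                      rng: np.random.Generator) -> int:
    """Higher temperature flattens the distribution, producing more varied output."""
    scaled = logits / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.2, -1.0])      # scores for 4 possible payment events
low_var = [sample_next_event(logits, 0.2, rng) for _ in range(10)]   # near-deterministic
high_var = [sample_next_event(logits, 1.5, rng) for _ in range(10)]  # more varied patterns
```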

 

As many will know, simply repeating identical examples of the same fraud signal does little to improve detection, since most ML methods need only a few instances of each pattern to learn from. The variation in the generative model's outputs adds robustness to the end fraud model, enabling it to detect the fraud patterns present in the data and to spot similar attacks that a traditional process might miss.

 

This type of capability may sound alarming to cardholders and fraud managers, who are right to ask how a fraud model trained on made-up data can improve fraud detection and what the benefits of doing so might be. What they may not realise is that before any model is used on live payments, it undergoes rigorous evaluation to ensure it performs as expected. If a model does not meet the required high standard, it is discarded, and replacement models are trained until a suitable one is found. This is a standard process followed for all ML models, as even models trained entirely on authentic data can deliver sub-standard results at the evaluation stage.
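That evaluation gate can be expressed as a simple check like the one below. The metric (average precision) and the 0.80 threshold are illustrative assumptions, and in practice the hold-out set should consist of real, not synthetic, payments.

```python
# Sketch of a pre-deployment evaluation gate (metric and threshold are assumptions):
# score the candidate model on held-out real payment data and only promote it if it
# clears a minimum bar; otherwise discard it and train a replacement.
from sklearn.metrics import average_precision_score

def passes_evaluation(model, X_holdout, y_holdout,
                      min_avg_precision: float = 0.80) -> bool:
    scores = model.predict_proba(X_holdout)[:, 1]
    return average_precision_score(y_holdout, scores) >= min_avg_precision

# if not passes_evaluation(candidate_model, X_holdout, y_holdout):
#     retrain with different synthetic data or hyper-parameters and evaluate again
```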

 

Generative AI is a fascinating tool with many applications across industries, but today's iterations, however clever, have their limits. Fortunately, the traits viewed as severe flaws in some sectors are essential features in others, provided strict regulation and governance are in place. Future use of generative AI requires a thorough review of how models trained on partially generated data are used, and governance processes need to be strengthened accordingly to ensure the required behaviour and performance of these tools are consistently met.

 

 

