Blog article

AI is getting smarter, but it still needs human guidance when it comes to fraud detection

Generative AI (such as OpenAI's ChatGPT) has sparked an explosion of interest as numerous sectors begin to understand its unique capabilities for generating content from simple prompts. There is much discussion about whether this new generation of AI will replace many skilled jobs, and at a basic level it seems capable of doing so: it can generate images in place of artists, write code in place of developers, and produce text for a wide range of scenarios that might otherwise fall to skilled content creators. Look a little deeper, though, and it is implausible that AI will replace humans entirely. The human role of interpreting, analysing, understanding, and compensating for data and events outside the realms of the model remains central to many processes. As a result, there is a place for both humans and machines in any business process; instead of man versus machine, the approach should be man AND machine.

In payments, AI is used heavily in decision-making processes with financial consequences, and those decisions are monitored to ensure the right call is made while reducing the manual effort involved. Letting the machine make all the decisions may now be technically possible, but it is crucial that a human reviews the decisions the AI is making, guides it, and ensures it continues to deliver what it is meant to. AI models live or die by the data they are trained on: up to 80% of the time it takes to develop an AI model is devoted to ensuring the data is of a high standard and packed with helpful information, and smaller, more information-dense data sets often outperform larger, untreated ones. AI remains bound by the limitations of that data, as well as by the 'ground truths' it is fed in a classification modelling scenario. A typical fraud model will only use what is known and confirmed to be fraud, as this ensures the model can detect that fraud correctly in the future.
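To make the "confirmed fraud only" point concrete, here is a minimal sketch (with hypothetical field names and statuses, not any real system's schema) of how a training set might be assembled from confirmed labels, silently dropping everything still under investigation:

```python
# Illustrative sketch: a fraud classifier is trained only on records whose
# ground truth is confirmed, as described above. Field names are invented.

def build_training_set(transactions):
    """Keep only transactions with a confirmed fraud/genuine label.

    Records still under review ('suspected') carry no reliable ground
    truth yet, so they are excluded from training.
    """
    features, labels = [], []
    for txn in transactions:
        if txn["status"] == "confirmed_fraud":
            features.append(txn["features"])
            labels.append(1)
        elif txn["status"] == "confirmed_genuine":
            features.append(txn["features"])
            labels.append(0)
        # 'suspected' and attempted-fraud records are skipped entirely

    return features, labels

txns = [
    {"features": [120.0, 1], "status": "confirmed_fraud"},
    {"features": [15.5, 0], "status": "confirmed_genuine"},
    {"features": [980.0, 1], "status": "suspected"},
]
X, y = build_training_set(txns)
print(len(X), y)  # -> 2 [1, 0]
```

Note how the suspected transaction, which may well be a genuine fraud attempt, contributes nothing to the model; this is exactly the gap the next paragraph discusses.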

Fraud strategy development covers both detection at the first occurrence of fraud and detection at the first fraudulent attempt (at the pre-auth stage). Usually, only the actual fraudulent transactions are marked as fraudulent and used in the model, with fraudulent attempts discarded. Fraud managers tend not to include this data, or other data that is suspected but not confirmed to be fraud, for fear of introducing future false positives into their models. Yet excluding it can produce inaccurate predictions that hurt overall performance: specific fraud trends or types are missed, and false positives increase. Because any classification of confirmed fraud is still ultimately made by an analyst, time delays and human error must be taken into account; if data has been incorrectly marked, the ability to unpick the stitches is vital. Without human intervention, a model can drift and introduce further inaccuracies.
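The "unpicking the stitches" step can be sketched as a simple relabelling pass with an audit trail, so that analyst corrections are applied before retraining and every change is recorded. This is a hypothetical illustration, not a real product's API:

```python
# Hypothetical sketch: applying analyst corrections to previously
# assigned fraud labels before the next retraining run.

def apply_corrections(labels, corrections):
    """Return a corrected copy of the labels plus an audit trail.

    labels: dict of transaction id -> label ('fraud' / 'genuine')
    corrections: dict of transaction id -> analyst-corrected label
    """
    fixed = dict(labels)  # never mutate the original record set
    audit = []
    for txn_id, new_label in corrections.items():
        old = fixed.get(txn_id)
        if old is not None and old != new_label:
            audit.append((txn_id, old, new_label))
            fixed[txn_id] = new_label
    return fixed, audit

labels = {"t1": "fraud", "t2": "genuine", "t3": "fraud"}
fixed, audit = apply_corrections(labels, {"t3": "genuine"})
print(fixed["t3"], audit)  # -> genuine [('t3', 'fraud', 'genuine')]
```

Keeping the audit trail is the point: it lets the team see exactly which ground truths changed between model versions, rather than letting corrections vanish silently into the training data.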

When it comes to ML-driven strategy optimization, such as when using a tool like AutoPilotML, it is essential to review the changes it suggests to the fraud strategy thoroughly. Allowing the machine to automate the fraud strategy fully is ill-advised; analysts must ensure the suggestions fit the observed fraud trends. The combined team of man and machine is still required, today and into the future, not just for fraud and payments but for a wide range of applications. Any model will make decisions defined by the data it is trained on, so if we want to improve a model's performance, human experts must be involved to bridge the gap between a good and an excellent AI model.

When AI is engaged in automation systems, it is even more critical that human experts are involved to review and guide the machine, adjusting and making manual decisions to enable a continued, high-performance output.


Comments: (1)

Ketharaman Swaminathan - GTM360 Marketing Solutions - Pune, 11 August 2023, 13:11

I published a paper on Controlling Credit Card Fraud Through Predictive Analytics back in 2008. In the subsequent 15 years, AI has made rapid strides. Accordingly, it might be able to challenge conventional wisdom in many areas e.g. "AI models live or die by the data they are trained on." 

Today, AI models use not only the data YOU provide (first party data) but have access to huge amounts of third party data sourced from e.g. Data Brokers. Accordingly, even if first party data has an error, it's conceivable that AI might be able to correct it. For example, if first party data mentions an address as, say, "75 Meridian Place, Off Marsh Wall, SW17 6FF, London, UK", AI can correct the postal code to "E14 9FF". (For all I know, LLMs might be powerful enough to be able to make this correction even without third party data.)

To the extent that AI has growing resilience to bad quality data, I see the old quip GIGO becoming obsolete in the foreseeable future.

I agree that, at the highest level, it's always about Man + Technology, whether it's AI or any other technology, but, historically, technology has always brought about a change to the nature of the "Man" required to do the job. In line with that, if AI goes mainstream in payment fraud detection and prevention, Prompt Engineers may replace Fraud Case Managers!
