Generative AI (such as OpenAI's ChatGPT) has sparked an explosion of interest as numerous sectors have come to appreciate its unique capability to generate content from simple prompts. There is much discussion about whether this new generation of AI will replace many skilled jobs, and at a basic level it seems capable of doing so: it can generate images in place of artists, write code in place of developers, and produce text for a wide range of scenarios that might otherwise require skilled content creators. Look a little deeper, however, and it is implausible that AI will replace humans entirely. The human role of interpreting, analysing, understanding, and compensating for data and events outside the realms of the model remains central to many processes. There is therefore a role for both humans and machines in any business process; instead of man versus machine, the approach should be man AND machine.
In payments, AI is used heavily in decision-making processes that carry financial consequences, and those decisions are monitored to ensure the right call is made while reducing the manual effort involved. Letting the machine make all the decisions may now be possible, but it is crucial that a human reviews the decisions the AI is making, guides it, and ensures it continues to deliver what it is meant to.
AI models live or die by the data they are trained on. Up to 80% of the time it takes to develop an AI model is devoted to ensuring the data is of a high standard and packed with helpful information; smaller, more information-dense data sets often outperform larger, untreated ones. AI is still bound by the limitations of its data, as well as the 'ground truths' it is fed in a classification modelling scenario. A typical fraud model will use only what is known and confirmed to be fraud, as this ensures the model can detect that fraud correctly in the future.
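As an illustration of what "confirmed fraud only" means in practice, the sketch below filters a training set down to confirmed outcomes, with an option to fold in unconfirmed signals. This is a minimal, hypothetical example; the field names and statuses are illustrative, not any real fraud platform's schema.

```python
# Sketch: building a training set from confirmed labels only.
# Field names ("status", "amount") and status values are hypothetical.

transactions = [
    {"id": 1, "amount": 120.0, "status": "confirmed_fraud"},
    {"id": 2, "amount": 35.5,  "status": "legitimate"},
    {"id": 3, "amount": 980.0, "status": "suspected_fraud"},  # unconfirmed
    {"id": 4, "amount": 12.0,  "status": "fraud_attempt"},    # blocked pre-auth
    {"id": 5, "amount": 250.0, "status": "legitimate"},
]

def build_training_set(rows, include_unconfirmed=False):
    """Keep confirmed outcomes; optionally include unconfirmed signals."""
    confirmed = {"confirmed_fraud": 1, "legitimate": 0}
    unconfirmed = {"suspected_fraud": 1, "fraud_attempt": 1}
    labelled = []
    for row in rows:
        if row["status"] in confirmed:
            labelled.append((row["id"], confirmed[row["status"]]))
        elif include_unconfirmed and row["status"] in unconfirmed:
            labelled.append((row["id"], unconfirmed[row["status"]]))
    return labelled

strict = build_training_set(transactions)
broad = build_training_set(transactions, include_unconfirmed=True)
```

In the strict set, the suspected-fraud and pre-auth attempt rows simply vanish, which is exactly how a model ends up blind to those patterns.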
Fraud strategy development incorporates both detection at the first occurrence of fraud and detection at the first fraudulent attempt (at the pre-auth stage). Usually, only the actual fraudulent transactions are marked as fraudulent and used in the model, with fraudulent attempts discarded. Fraud managers tend not to include this data, or other data that is suspected but not yet confirmed to be fraud, for fear of introducing future false positives into their models. Yet excluding it can itself produce inaccurate predictions that hurt overall performance: the model misses specific fraud trends or types, and false positives can actually increase. Since any classification of confirmed fraud is still ultimately made by an analyst, it is essential to account for time delays and human error; if data has been incorrectly marked, the ability to unpick the stitches is vital. Without human intervention, a model can drift and introduce further inaccuracies.
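One lightweight way to keep mislabelled cases reversible is to record every relabelling as an append-only event rather than overwriting a field, so an analyst can unwind a mistaken classification later. The sketch below illustrates the idea; the function names and event structure are illustrative, not a real labelling API.

```python
# Sketch: append-only label history so mislabelled cases can be unwound.
# Helper names and the (label, author) event shape are illustrative.

label_history = {}  # transaction_id -> list of (label, author) events

def set_label(txn_id, label, author):
    """Record a new label without destroying previous ones."""
    label_history.setdefault(txn_id, []).append((label, author))

def current_label(txn_id):
    """The most recent label wins; None if never labelled."""
    events = label_history.get(txn_id)
    return events[-1][0] if events else None

def revert_label(txn_id):
    """Unpick the latest change, restoring the previous label."""
    events = label_history.get(txn_id, [])
    if events:
        events.pop()
    return current_label(txn_id)

set_label(42, "fraud", author="model")
set_label(42, "legitimate", author="analyst_a")  # analyst correction
revert_label(42)  # the correction itself turned out to be wrong
```

Because nothing is overwritten, the full chain of who labelled what remains available for audit, which also helps diagnose the time delays and human errors discussed above.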
When it comes to ML-driven strategy optimization, such as when using a tool like AutoPilotML, it is essential to thoroughly review the changes it suggests to the fraud strategy. Allowing the machine to automate the fraud strategy fully is ill-advised; analysts must ensure the suggestions fit the observed fraud trends.
The combined team of man and machine is still required, today and into the future, not just for fraud and payments but for a wide range of applications. Any model will make decisions defined by the data it is trained on. As a result, if we want to improve a model's performance, human experts must be involved to bridge the gap between a good and an excellent AI model.
When AI is engaged in automation systems, it is even more critical that human experts review and guide the machine, adjusting and making manual decisions to enable continued, high-performance output.
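In practice, keeping a human in the loop can be as simple as gating machine-suggested changes behind explicit analyst approval before they go live. The sketch below shows one such gate; it does not assume AutoPilotML's actual workflow, and all names and structures are illustrative.

```python
# Sketch of a human-in-the-loop approval gate for machine-suggested
# rule changes. Names and structures are illustrative only.

from dataclasses import dataclass, field

@dataclass
class SuggestedRule:
    description: str
    approved: bool = False

@dataclass
class FraudStrategy:
    active_rules: list = field(default_factory=list)
    pending: list = field(default_factory=list)

    def suggest(self, rule):
        """The machine proposes a change; it is queued, not applied."""
        self.pending.append(rule)

    def review(self, rule, approve):
        """An analyst approves or rejects each suggestion."""
        self.pending.remove(rule)
        if approve:
            rule.approved = True
            self.active_rules.append(rule)

strategy = FraudStrategy()
r1 = SuggestedRule("Decline card-not-present txns over threshold from new devices")
r2 = SuggestedRule("Block all transactions from one country")  # too broad
strategy.suggest(r1)
strategy.suggest(r2)
strategy.review(r1, approve=True)    # fits observed fraud trends
strategy.review(r2, approve=False)   # rejected: false-positive risk
```

Nothing the machine proposes reaches the active strategy without a review step, which is precisely the division of labour the article argues for: the machine generates candidates at scale, while the human judges whether they match observed fraud trends.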