Speaking at the Temenos Community Forum in Vienna earlier this year, Finextra interviewed Adam Gable, product director of financial crime, treasury, and risk at Temenos, and Hani Hagras, chief science officer at Temenos, to discuss how instant payments have been a catalyst for speed and innovation in financial crime mitigation.
Gable explains that instant payments by definition demand real-time processing, so from a financial crime perspective, security checks need to be both fast and accurate. If the process doesn’t run smoothly, the result is an interrupted customer experience, lost customer goodwill, and damage to brand reputation. Examples include a check that lands in a queue for manual review, or an inaccurate check that leads to a customer being defrauded. Banks therefore need to be on top of their security protocols.
“As checks are occurring, a proportion will always need investigation. This takes time, which can damage customer experience,” Gable stated. “There is also a cost factor for banks, who, when migrating to instant payments, need to think in advance about the systems and processes they have in place, ready for this move.”
Hagras also described how AI will help mitigate crime in instant transactions, leveraging automation to provide a seamless user experience, coupled with a solid audit trail that makes it easier for banks to scale up. AI can also combat financial crime in sanctions screening and anti-money laundering processes by automatically detecting where fraud is occurring, while allowing false positives to be cleared automatically without interrupting the customer experience.
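The screening pattern Hagras describes can be sketched in a few lines. This is a toy illustration under assumed thresholds, not Temenos's actual product logic: a risk score routes each instant payment so that likely false positives are released automatically without interrupting the customer, and only genuinely ambiguous cases are queued for manual investigation.

```python
# Hypothetical thresholds for illustration only.
CLEAR_THRESHOLD = 0.2   # below this, auto-release the payment
BLOCK_THRESHOLD = 0.9   # at or above this, auto-block as likely fraud

def screen(risk_score: float) -> str:
    """Route an instant payment based on its model risk score."""
    if risk_score < CLEAR_THRESHOLD:
        return "release"        # likely false positive: no interruption
    if risk_score >= BLOCK_THRESHOLD:
        return "block"          # high-confidence fraud
    return "manual_review"      # ambiguous: human investigation

print(screen(0.05))  # release
print(screen(0.95))  # block
print(screen(0.50))  # manual_review
```

In practice the thresholds would be tuned against fraud-loss and customer-friction targets; the point is that the automatic-release path is what keeps most transactions instant.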
“Fraud is a continuous battle,” Hagras remarked.
He continued: “AI will be responsible for ensuring that they generate models which can follow all the different kinds of regulatory requirements, enabling banks and financial institutions to use AI with trust. The key to doing this is to use explainable AI. Generated models are easily understood, analysed, and most importantly, augmented and audited by business users and the wider authorities.”
Opaque-box models are not accepted by many regulators because they do not sufficiently explain the model’s decision-making process when generating an output. Hagras concluded that explainable AI is a key focus for Temenos, and that meeting current and future AI ‘explainability’ regulations will be essential.
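The contrast with an opaque-box model can be made concrete. Below is a minimal sketch of an explainable additive risk score, with entirely hypothetical rules and weights (not a real regulatory model): every rule that fires is reported alongside the decision, so a business user or auditor can see exactly why a payment was flagged rather than receiving a bare score.

```python
# Hypothetical rules: (feature name, weight, human-readable reason).
RULES = [
    ("new_beneficiary", 0.3, "payee added in the last 24 hours"),
    ("high_amount",     0.4, "amount above the customer's usual range"),
    ("sanctioned_geo",  0.6, "counterparty in a screened jurisdiction"),
]

def explainable_score(features: set) -> tuple:
    """Return (score, explanation) for a transaction's feature set."""
    fired = [(name, w, reason) for name, w, reason in RULES if name in features]
    score = sum(w for _, w, _ in fired)
    explanation = [f"{name}: +{w} ({reason})" for name, w, reason in fired]
    return score, explanation

score, why = explainable_score({"new_beneficiary", "high_amount"})
print(round(score, 2))   # 0.7
for line in why:
    print(line)
```

Because the score is a transparent sum of named contributions, each decision can be analysed, audited, and augmented after the fact, which is precisely the property explainability regulations look for.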