Avoiding artificial stupidity

While most financial applications of artificial intelligence have been in the customer service space, banks are also working to improve other areas through the implementation of innovative technology.

Prag Sharma, head of emerging technology at Citibank’s TTS Global Innovation Lab, highlights in conversation with Finextra the main areas, beyond chatbots, that financial institutions are working to improve with artificial intelligence.

Sharma explains that Citibank is also working to improve operational efficiency with artificial intelligence, because there are several predictions the bank needs to make, for example predicting customer behaviour from a transactions or liquidity perspective, or detecting outliers in payments data. He adds that natural language processing is beginning to be used to handle the millions of documents that are usually processed manually.

“We’re a bank so compliance is a key area of focus. We’re looking at regtech and how that is going to affect us in the future, where we can make it easier for ourselves to have processes in place that continuously monitor various activities, with natural language processing as the key technology enabler.”

Gulru Atak, global head of innovation at Citibank’s TTS Global Innovation Lab, says that the bank also has a platform, Citi Payments Outlier Detection, that leverages machine learning to detect outliers in corporate customer payments.

On this point, Sharma says: “It’s a good example of Applied AI if you’re looking at what banks are interested in today, which is using AI to look at payments transactions and find anomalies. We could have bought something off the shelf and applied it, but we as an organisation looked at it and tried to figure out whether this would add serious value and truly understand the underlying algorithms, without having to rely on third parties because we understand our data better than others.”
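Neither Sharma nor Atak details the algorithms behind Citi Payments Outlier Detection, but the general technique they describe can be sketched with an off-the-shelf anomaly detector. Everything below - the features, the synthetic data and the choice of an isolation forest - is an illustrative assumption, not the bank’s implementation:

```python
# A minimal sketch of payments outlier detection. The features (amount,
# hour of day) and the IsolationForest model are illustrative assumptions,
# not how Citi's platform actually works.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical history of corporate payments: amount and hour of day.
normal = np.column_stack([
    rng.lognormal(mean=9, sigma=0.5, size=1000),   # typical amounts
    rng.normal(loc=11, scale=2, size=1000),        # business hours
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# A payment that is unusually large and sent at 3am gets flagged.
suspect = np.array([[250_000.0, 3.0]])
print(model.predict(suspect))   # -1 => outlier, 1 => inlier
```

The appeal of this family of models, in line with Sharma’s point about understanding the underlying algorithms, is that the bank can inspect and tune what “normal” means for its own data rather than relying on a third-party black box.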

Financial crime

As International Banker explained in an article last year, while chatbots appear to be the most visible use case of artificial intelligence and developments are being made in algorithmic trading, AI is also making considerable inroads in the compliance and security space. Money laundering continues to be a problem in global financial services, and banks such as HSBC are exploring their options to combat this issue.

The article goes on to point out that in April 2018, HSBC partnered with big data startup Quantexa and piloted AI software to combat money laundering, following the bank’s partnership with Ayasdi to automate anti-money laundering investigations that had previously been handled by thousands of human employees.

“The aim of the initiative is to improve efficiency in this area, especially given that the overwhelming majority of money-laundering investigations at banks do not find suspicious activity, which means that engaging in such tasks can be incredibly wasteful. In the pilot with Ayasdi, however, HSBC reportedly managed to reduce the number of investigations by 20 percent without reducing the number of cases referred for more scrutiny.”

But how do companies make the most of the tools and techniques that AI offers now, while also preparing for what the future might look like? PwC highlighted that despite hundreds of millions being invested in technology that fights financial crime, many financial institutions are still struggling, continuing to rely on legacy infrastructure to keep up with new and evolving threats.

“These models tend to be based on black and white rules and parameters; for example, if a transaction is over $10,000 or a person uses a credit card overseas, then it gets flagged. The problem with simplistic approaches like this is that they tend to throw up an enormous number of false positives. And in an environment of increased regulation, increasing competition and increased cost pressures, it doesn’t make sense to have your team trawling through thousands of alerts that don’t represent real financial crime tasks.”
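A toy version of such a rules engine makes the problem concrete. The thresholds and transactions below are invented for illustration; note that the blunt rules flag two almost certainly benign payments while missing one structured just under the limit:

```python
# Toy version of the black-and-white screening rules PwC describes:
# hard thresholds flag every large or overseas transaction, with no
# customer history or context. All values are invented.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    home_country: str = "US"

def flag(tx: Transaction) -> bool:
    # Rule 1: over $10,000. Rule 2: card used outside home country.
    return tx.amount > 10_000 or tx.country != tx.home_country

txs = [
    Transaction(12_500, "US"),  # payroll run: flagged, almost certainly benign
    Transaction(85.0, "FR"),    # coffee on a business trip: flagged
    Transaction(9_900, "US"),   # structured just under the limit: NOT flagged
]
print([flag(t) for t in txs])   # [True, True, False]
```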

PwC explained that financial services companies are aware that AI is a faster, cheaper and smarter way of tackling financial crime, but there is a lot of confusion around how organisations should harness this technology - “just because a certain technique is feasible doesn’t mean that a company is in a position to apply it immediately.”

To remedy issues with financial crime, PwC suggested using AI to scan enormous amounts of data and identify patterns, behaviours and anomalies, because the technology can do this faster than humans can. “It can analyse voice records and detect changes in emotion and motivation that can give clues about fraudulent activities. It can investigate linkages between customers and employees and alert organisations to suspect dealings.”
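The linkage analysis PwC mentions is often framed as a graph problem. As a minimal sketch, assuming parties and shared attributes are modelled as nodes and edges (the entities below are invented), a library such as networkx can surface connections between customers and employees:

```python
# Hedged sketch of customer-employee linkage analysis as a graph.
# Nodes are parties and attributes; an edge means "shares this
# attribute". All names and identifiers here are invented examples.
import networkx as nx

G = nx.Graph()
G.add_edge("customer:acme_ltd", "phone:+44-555-0100")
G.add_edge("employee:j_smith", "phone:+44-555-0100")  # shared phone number
G.add_edge("customer:acme_ltd", "account:GB29NWBK")
G.add_edge("employee:j_smith", "account:GB11BARC")

# Any path between a customer and an employee is worth a closer look.
if nx.has_path(G, "customer:acme_ltd", "employee:j_smith"):
    print(nx.shortest_path(G, "customer:acme_ltd", "employee:j_smith"))
```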

KPMG delved deeper into this problem in its 2018 report, ‘The role of Artificial Intelligence in combating financial crime,’ which explored how robotic process automation (RPA), machine learning, and cognitive AI can be adopted or combined to solve issues with financial crime today.

However, KPMG advised that to make “a reasoned decision as to what type, or mix of types, of intelligent automation a company should implement, financial crime stakeholders first need to design an intelligent automation strategy.

“This strategy depends on what investment the institution is willing to make and the benefits sought, including a weighting of the risk potentially involved, and the level of efficiency and agility desired. Therefore, the intelligent automation strategy should be aligned with the size and scope of the institution and its risk tolerance.”

KPMG also pointed to specific areas in financial crime compliance where intelligent automation could be used to reduce costs and increase efficiency and effectiveness. For transaction monitoring, the first suggestion was for institutions to build on alerts and cases that have previously occurred, and on any existing machine learning models, to establish a domain knowledge base that the cognitive platform can rely on.

“It is not just about monitoring the risks the institution already knows and has identified. Instead, it looks at patterns that exist in the data to identify if those patterns have been seen previously.” The second suggestion from KPMG was to use machines to “automate aspects of the review process and deployed to build statistical models that incorporate gathered data and calculate a likelihood of occurrence (closure or escalation).”
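That second suggestion amounts to supervised learning on historical alert dispositions. A minimal sketch, with invented features and synthetic data rather than anything from the KPMG report:

```python
# Sketch of a statistical alert-triage model: train on how past alerts
# were dispositioned, then score new alerts with a likelihood of
# escalation. Features and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Historical alerts: [amount_zscore, prior_alerts, high_risk_country]
X = rng.normal(size=(500, 3))
# Disposition: 1 = escalated to investigation, 0 = closed as false positive.
y = (X @ np.array([1.5, 0.8, 1.2]) + rng.normal(size=500) > 1.0).astype(int)

triage = LogisticRegression().fit(X, y)

new_alert = np.array([[2.1, 0.5, 1.0]])
print(f"P(escalation) = {triage.predict_proba(new_alert)[0, 1]:.2f}")
```

In practice such a score would rank the alert queue rather than close alerts outright, so investigators spend their time on the cases most likely to need escalation.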

The third point was to employ bots to scan the internet and public due diligence sites “to collect relevant data from internal and other acceptable sources,” which would save analysts valuable time. For Know Your Customer (KYC), the report identified areas such as applying judgement to these domain areas using RPA and machine learning. This allows financial crime officers to make KYC a priority because the information they obtain better reflects actual risks.
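The bot described in the third point can be as simple as a scheduled script that queries a screening source and collects hits. The endpoint and response shape below are placeholders, not a real due diligence API; a real deployment would use approved data providers with proper authentication:

```python
# Rough sketch of a due-diligence gathering bot. The URL and JSON
# structure are hypothetical placeholders for illustration only.
import requests

def adverse_media_hits(entity_name: str) -> list[dict]:
    resp = requests.get(
        "https://example.com/due-diligence/search",  # placeholder endpoint
        params={"q": entity_name},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

for hit in adverse_media_hits("Acme Trading Ltd"):
    print(hit.get("source"), hit.get("headline"))
```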

In addition to this, machine learning can also automate the extraction of data from unstructured documents, while RPA can enable institutions to be provided with a more reliable and more efficient customer-risk rating process, and in turn, more of a real-time risk assessment. RPA also has the potential of reducing, or even eliminating, the need to contact customers repeatedly.
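As a hedged sketch of that extraction step, off-the-shelf named entity recognition can pull structured KYC fields out of free text. The spaCy model and sample document below are assumptions for illustration, not the tooling any institution in this article uses:

```python
# Illustrative extraction of structured data from an unstructured
# document using spaCy's pretrained English pipeline.
# Setup: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

doc = nlp("Acme Trading Ltd was incorporated in London on 4 March 2015 "
          "and is directed by Jane Doe.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. ORG, GPE, DATE, PERSON
```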

Alongside this, in a speech given at Chatham House in November 2018, Rob Grupetta, head of the financial crime department at the Financial Conduct Authority, pointed to how “the spotlight is squarely on machine learning,” which has been “largely driven by the availability of ever larger datasets and benchmarks, cheaper and faster hardware, and advances in algorithms and their user-friendly interfaces being made available online.”

Grupetta continued: “But financial crime doesn’t lend itself easily to statistical analysis - the rules of the game aren’t fixed, the goal posts keep moving, perpetrators change, so do their motives and the methods they use to wreak havoc. Simply turning an algorithm loose without thinking isn’t a suitable approach to tackling highly complex, dynamic and uncertain problems in financial crime.

“That’s not to say we can’t use algorithms and models alongside our existing approach to help us be more consistent and effective in targeting financial crime risks. Consider building a risk model using algorithms: using a set of risk factors and outcomes, we could come up with a kind of mathematical caricature of how the outcomes might have been generated, so we can make future predictions about them in a systematic way.

“For example, in a money laundering context, the risk factors could be a firm’s products, types of customers and countries it deals with, and the outcomes could be detected instances of money laundering. Unfortunately, it’s quite difficult to acquire robust figures on money laundering as industry-wide data is hard to come by, and criminals aren’t exactly in the habit of publicising their successes. Crimes like money laundering - a secret activity that is designed to convert illicit funds into seemingly legitimate gains - are particularly hard to measure.” To resolve this issue, he explained that the FCA had introduced a financial crime data return in 2016 to provide an industry-wide view of the key risks banks face, allowing supervisory resources to be targeted at the firms most exposed to inherent risk.

“We are moving away from a rule-based, prescriptive world to a more data-driven, predictive place where we are using data to help us objectively assess the inherent financial crime risk posed by firms. And we have already started experimenting with supervised learning models to supervise the way we supervise firms - ‘supervised supervision’, as we call it.”
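As a toy illustration of the “mathematical caricature” Grupetta describes, the sketch below maps categorical risk factors to detected outcomes and computes per-segment base rates. All figures are synthetic, precisely because, as he notes, robust industry-wide data is hard to come by:

```python
# The simplest caricature of a risk model: categorical risk factors
# (product type, country risk band) against detected money-laundering
# outcomes. All data is synthetic and for illustration only.
import pandas as pd

firms = pd.DataFrame({
    "product":  ["retail", "private", "correspondent", "retail",
                 "private", "correspondent", "retail", "private"],
    "country":  ["low", "high", "high", "low", "medium", "high", "low", "low"],
    "detected": [0, 1, 1, 0, 0, 1, 0, 0],
})

# Base rate of detected cases per risk segment: a crude prior for
# deciding which firms deserve supervisory attention first.
print(firms.groupby(["product", "country"])["detected"].mean())
```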

Substituting humans

However, while AI technology can reduce the number of times a customer needs to be contacted, as in the example highlighted above, fears are also mounting around the substitutability of bank employees, as the International Banker article also discussed. While many statistics and news articles have been bandied about, Lex Sokolin, global director at fintech research firm Autonomous Next, revealed that AI adoption across financial services could save US companies up to $1 trillion in productivity gains and lower overall employment costs by 2030.

The article also pointed to ex-Citigroup head Vikram Pandit’s expectation that AI could render 30% of banking jobs obsolete in the next five years, asserting that AI and robotics “reduce the need for staff in roles such as back office functions”. Japan’s Mizuho Group plans to replace 19,000 employees with AI-related functionality by 2027, and recently departed Deutsche Bank CEO John Cryan once considered replacing almost 100,000 of the bank’s personnel with robots.

However, conflicting data suggests that AI may also produce a rise in banking jobs: a recent study from Accenture found that banks which effectively use AI could see a 14% net gain in jobs by 2022, in addition to a 34% increase in revenues. Accenture also found that robots will take over the most mundane tasks, leaving banking employees free to focus on more interesting and complex work, improving work-life balance and career prospects.

In conversation with Finextra Research, Paul Hollands, chief operating officer for data and analytics at NatWest, highlights that this could be a problem, because there is a skills gap and “there is a change in the skills required in all organisations as the ability to use machine learning, to use robotics and artificial intelligence increases.”

Hollands goes on to discuss how employers have a responsibility to ensure that the people within the organisation also have the core skills to help them grow. Oaknorth’s Amir Nooralia takes a similar attitude, saying that it is not about “machine replacing man (or woman), but rather machine enhancing human. Think Iron Man suit boosting a human rather than an all-knowing robot.”

Like the healthcare sector, which will continue to require a human’s emotional response, “when it comes to finance, it is very personal and there are situations that will require empathy and emotional intelligence - e.g. a customer who might be experiencing anxiety or mental stress as a result of debt. It’s not like travel where the process involves getting from A to B, or retail which is purely transactional, so the human element is less important.”

Nooralia then goes on to reference a recent Darktrace whitepaper, ‘The Next Paradigm Shift: AI-Driven Cyber-Attacks’, in which the organisation predicts that in the future, “malware bolstered through AI will be able to self-propagate and use every vulnerability on offer to compromise a network.”

In the whitepaper, Darktrace states that “instead of guessing during which times normal business operations are conducted, [AI-driven malware] will learn it. Rather than guessing if an environment is using mostly Windows machines or Linux machines, or if Twitter or Instagram would be a better channel, it will be able to gain an understanding of what communication is dominant in the target’s network and blend in with it.”

Nooralia adds: “A human will always be guessing and will never be able to learn as quickly as a machine can, so it is inevitable that the machine will be better in comparison.”

Finextra's The Future of Artificial Intelligence 2019 report explores how the financial services industry can leverage tried and tested AI experiments from other industries to reshape transaction services.
