
AI - make it explainable, ethical and used by all

Artificial Intelligence was everywhere at Sibos 2018. Not in the form of sentient bots roaming the halls (although Accenture did have a humanoid robot on their stand), but in the content of plenaries, panels and pitches.

 

Some refused to use the overarching term AI to describe what’s being done today, instead preferring to refer to machine learning (ML) and robotic process automation (RPA), with the more generalised “true” AI still being somewhere in the future in terms of capability.

 

There are many use cases for technology that sits on the AI spectrum, and more are continually being proposed. The main ones currently include:

 

  • Churn prediction
  • Customer service optimisation, particularly around voice platforms
  • Process optimisation
  • AML screening and controls
  • Cross-sell, upsell and customer resurrection
  • Marketing personalisation

 

Across these use cases in financial services, four main considerations are emerging as key to AI strategy development. I call these the four Es of AI.

 

Explainable

When the statistical building blocks of machine learning and deep learning, such as classification and clustering, feed into manual decision processes, there is not normally a need for a detailed explanation of how the models work.

 

But as the models and associated automation become more complex along the AI spectrum and begin to actually make decisions – for example around credit approvals – there will be a need for explanation. These explanations will have different audiences at different times, from internal risk and compliance functions through to regulators and even the customer. Therefore the level of detail of the explanation will also vary.
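
One common starting point is a model that is interpretable by construction. Below is a minimal sketch, assuming scikit-learn and entirely hypothetical credit features and data, of how the weights of a simple logistic regression can be read as an explanation of what pushes an application towards approval or decline.

```python
# A minimal sketch: reading the weights of an interpretable model as an
# explanation of a credit decision. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

feature_names = ["income", "debt_ratio", "years_at_address", "missed_payments"]

# Hypothetical historical applications and their approve (1) / decline (0) outcomes.
X = np.array([
    [55000, 0.20, 6, 0],
    [28000, 0.55, 1, 3],
    [72000, 0.10, 9, 0],
    [31000, 0.48, 2, 2],
    [46000, 0.35, 4, 1],
    [23000, 0.60, 1, 4],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Each coefficient shows how strongly a (standardised) feature pushes a decision
# towards approval (positive) or decline (negative).
for name, coef in zip(feature_names, model.named_steps["logisticregression"].coef_[0]):
    print(f"{name:>18}: {coef:+.3f}")
```

More complex models need dedicated explanation techniques, but the principle is the same: being able to point to the factors behind a decision, at a level of detail appropriate to the audience.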

 

At the moment, there is a feeling that some regulators are not as advanced in their understanding of AI as technology companies and financial institutions. In the context of AML controls for correspondent banking, for example, some regulators will look at overall business volumes and the number of suspicious activity reports (SARs) filed. Any ratio between these that is outside the industry norm will draw regulatory investigation. Moreover, there is an opportunity for organisations to use AI to get really good at reducing false positives – for example, one partnership between a bank and an IT vendor claims to have reduced the proportion of alerts that are false from 95% to 50%. These organisations need to be able to explain to regulators how this is possible, but several banks have complained that when this occurs they have had to start the process with a high-school level introduction to AI.
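
To illustrate the general idea (this is not the cited partnership’s method), here is a minimal sketch of classifier-based alert triage: a model is trained on the outcomes of past rule-based alerts and used to score new ones, so that only higher-risk alerts are escalated to investigators. The features, data and escalation threshold are hypothetical.

```python
# A minimal sketch of classifier-based alert triage for AML screening.
# Features, data and the escalation threshold are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features for past rule-based alerts: transaction amount, number of
# prior alerts on the account, and whether a high-risk jurisdiction was involved.
X_alerts = np.array([
    [12000,  0, 0],
    [98000,  3, 1],
    [4500,   1, 0],
    [150000, 5, 1],
    [8000,   0, 0],
    [60000,  2, 1],
])
# Outcome of each past alert: 1 = confirmed suspicious (SAR filed), 0 = false positive.
y_outcome = np.array([0, 1, 0, 1, 0, 1])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_alerts, y_outcome)

# Score new alerts and escalate only those above a risk threshold; both the model
# and the threshold are things the institution must be able to explain to a regulator.
new_alerts = np.array([[110000, 4, 1], [5000, 0, 0]])
risk = clf.predict_proba(new_alerts)[:, 1]
for score in risk:
    print(f"risk={score:.2f}, escalate={score >= 0.5}")
```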

 

Ethical

As the behaviour and decision-making factors involved in ML models become more transparent, an ethical filter on decisions becomes crucial. This is particularly true in the consumer space, where decisions that an ML model suggests based on pure economics might not align with the softer strategies of a bank, such as financial inclusion.

 

Furthermore, it’s important to understand what data is not included in a model, as this could provide a bigger picture of the customer. This needs to be accompanied by a human review process and amendment by staff who are familiar with the workings of AI. Financial institutions must also consider the possibility that bias and discrimination are inherent in past data.
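
A very basic first check for that kind of inherited bias is simply to compare historical outcome rates across customer groups before the data is used for training. A minimal sketch, with hypothetical column names and figures:

```python
# A minimal sketch of a basic bias check on historical decision data before it is
# used for training. Column names and figures are hypothetical.
import pandas as pd

history = pd.DataFrame({
    "customer_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved":       [1,   1,   0,   1,   0,   0,   1,   0],
})

# A large gap in approval rates between groups may indicate bias that a model
# trained on this data would learn and reproduce.
print(history.groupby("customer_group")["approved"].mean())
```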

 

Embedded

It’s well accepted that the future of work will require people to continuously evolve their skills. Whilst there is a fear that jobs will be lost, several research studies (including one from Accenture) have shown that the financial services industry will actually grow in employee numbers.

 

AI is set to extend our human senses and capabilities and improve our decision making. But to be successful, financial institutions will need to invest in developing their workforce, with understanding and skills in AI at the top of the priority list.

 

A number of major banks are working with universities to develop practical courses about AI in financial services that can attract potential new talent to the organisations and provide training for current employees. Recommendations from experts about how AI can be embedded in everyday work practice include:

 

  • Treat AI as a colleague
  • Leave the administration to AI
  • Focus on judgement work

 

Elemental  

As the practices of analytics, business intelligence and AI have evolved, it has always been true that garbage data leads to garbage results. Data quality and accessibility remain an issue for all industries, with some predicting that data cleansing will be the blue-collar job of the future. Arguably, financial institutions have the best view of customer data in the world, so there is a real opportunity for them if they can de-silo data and enable an enterprise-wide approach to data and analytics.

 

Data also has gravity, and financial regulation often requires that it remain located in particular places and be inaccessible from others. For this reason, the machine learning models of today and the AI of the future need to be able to work across multiple data sources, with models run against each data set separately and their outputs combined to yield results similar to what could be achieved with a single merged data set.
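
One simple way to illustrate the principle (more sophisticated approaches, such as federated learning, exist) is to train a separate model against each data source and combine their outputs at scoring time. A minimal sketch with hypothetical, randomly generated data:

```python
# A minimal sketch of combining models trained on separate data sources by
# averaging their outputs at scoring time, rather than merging the raw data.
# All data here is hypothetical and randomly generated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two data silos that cannot be merged, e.g. because of data residency rules.
X_region_a, y_region_a = rng.normal(size=(50, 3)), rng.integers(0, 2, 50)
X_region_b, y_region_b = rng.normal(size=(50, 3)), rng.integers(0, 2, 50)

# Each model is trained only against its own local data.
model_a = LogisticRegression().fit(X_region_a, y_region_a)
model_b = LogisticRegression().fit(X_region_b, y_region_b)

# Score a new case locally with each model, then combine the scores.
new_case = rng.normal(size=(1, 3))
combined_score = np.mean([
    model_a.predict_proba(new_case)[0, 1],
    model_b.predict_proba(new_case)[0, 1],
])
print(f"combined risk score: {combined_score:.3f}")
```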

 

 
