
Boosting the Impact of AI in Banks With Inclusive AI

Making impactful change with data and AI is a realistic opportunity. It requires the practical application of common sense, combined with strong engineering and data science talent and supported by the right dedicated technology platform.

It also requires data. Not just any data: relevant data, quality data, and accessible data, offering the right depth of coverage. This is an area where banks could have a unique differentiator, as they, and financial services generally, are inherently tied to data and its uses: pricing credit, setting insurance premiums, valuing assets, trades, and portfolios, and more.

However, despite this intimate connection to data, banks often suffer from the absence of a strong data culture essential to rapid AI scaling. As an illustration, when Dataiku ran a survey of 400 data professionals across the U.S. and Europe in 2020, only 52% of respondents said that their organisations have processes in place to ensure data projects are built using quality, trusted data. 

There are multiple underlying reasons for this. Firstly, as heavily regulated players, banks are particularly cautious about opening data access throughout their organisations. Secondly, the traditional product-centric approach to information systems, combined with a history of M&A in an industry marked by gradual market consolidation, has fragmented data within banks, acting as a further barrier to unified data access.

Lastly, the focus of IT investments on regulatory projects has in many cases delayed the company-wide rollout of data access and data science development tools, preventing business experts from becoming active players in the AI transformation banks aim to achieve.

Despite all this, some banks have decided to overcome these challenges and bet on significantly accelerating the democratisation of data access and data science in their organisations, with tangible returns. In financial services, several banks have made the move to creating spaces where data is accessible: opening sandboxes where people can learn and experiment, and putting governance in place that creates a positive incentive for testing. By doing so, they activate what we could call the “self-service analytics/AI win-win cycle.”


What is at stake? 

In short: organisations can accelerate scaling through a collaborative approach that bridges the gap between data scientists and business experts in a timely way. It all comes back to the virtues of collaboration: business people, without becoming experts themselves, are better able to iterate effectively with data scientists.

In the same environment, data scientists get closer to the business reality, so that they truly understand core value chains and can support their enhancement. Monitoring market liquidity effectively, for instance, is not a pure data science game. It is only by combining the unique experience of long-tenured traders with that of data scientists that banks will develop relevant models.

We may come to know this as ‘Inclusive AI’: the idea that the more people become involved in AI processes, the better the outcome (both internally and externally), thanks to the diversity of skills, points of view, and use cases.

This also implies the tight involvement of risk management throughout the process. The heaviness of model risk management is often cited as a barrier to time-to-market. What if this process could be streamlined by introducing far more parallelism, in a shared environment that respects segregation of roles while supporting interaction?

In practical terms, this means not restricting the use of data or AI systems to specific teams or roles, but rather equipping and empowering everyone at the company to make day-to-day decisions, as well as larger process changes, with data at the core.

Often, that is easier said than done, as it is not always easy to get everyone to buy into this approach. What some describe as a bet can be seen by others as a risk, or as many risks: the risk of wrong data usage, of wrong models being developed, or of ill-optimised models being pushed haphazardly to IT for operationalisation. While we encourage democratisation, we believe it should be done in a governed manner, with strong gating systems guaranteeing the adequacy and efficiency of all operationalised analytics and ML projects.

What is clear, though, is that this approach has the unique benefit of engaging all key stakeholders, starting with the business experts themselves, who are essential to developing AI projects capable of being embedded in critical business processes.


Sophie Dionnet

VP Strategy, Dataiku

This post is from a series of posts in the group: Artificial Intelligence and Financial Services