Banks Beware Agentic AI Black Boxes in Modernisation Plans

Imagine using a GenAI application to spin up a new ad for, say, a sandwich. The image is stunning and the headline cracking, but you realise too late that the bread depicted isn’t the seeded slice it should be, and the copy mentions cream cheese that isn’t there. The ad must be pulled and fixed. There’s egg on someone’s face for not spotting the errors, but the damage is limited.

In marketing, being only 98% correct may be fine (though probably not!), and learning from mistakes is expected. In financial services, however, even a tiny error in how a mortgage application or a credit card dispute is processed can be seriously damaging, incurring financial losses, reputational damage and customer dissatisfaction.

So, as the industry begins to explore agentic AI and its ability to act far more autonomously, anxiety about the technology generating even minor inaccuracies is leading some organisations to pause its application beyond basic chatbot use cases. Many banking leaders are quite rightly thinking through the rule-based and policy-based frameworks their teams already work within. Yes, staff can make judgements on things like approving a mortgage or business loan, but always within set parameters or an escalation framework. The same is needed for agentic success.

Some agentic AI enthusiasts proudly claim to be targeting 95%+ agent accuracy. But when is 95% accuracy acceptable? If you pitched a CIO an enterprise cloud solution promising 95% uptime, you would be kicked out of the room. On the same basis, would you trust a bank that got 5% of your transactions wrong?

For agentic AI to spread in the sector, organisations should be wary of the first flush of agentic AI technology, much of which is not yet suitable for regulated industries. The logical way forward is agentic AI that aligns with the regulatory regimes governing the industry.

Perhaps the best way to achieve this is to harness the powerful new reasoning capability of agentic AI to the predictability of workflow software that governs how complex, regulated processes should proceed within clear guardrails. That would connect agents to the rules, policies and escalation frameworks that already work as guardrails against errors. Humans make mistakes too, of course, but the point is to minimise them with reasonable checks and balances.
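To make the pattern concrete, here is a minimal, purely illustrative sketch in Python. The thresholds, the LoanRequest fields and the agent_proposal stub are all hypothetical and stand in for no particular product; the point is only that the agent’s reasoning runs inside deterministic policy rules, with escalation to a human as the default whenever it strays outside its parameters.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    APPROVE = "approve"
    DECLINE = "decline"
    ESCALATE = "escalate"  # hand off to a human reviewer

@dataclass
class LoanRequest:
    applicant_id: str
    amount: float
    credit_score: int

AUTONOMY_LIMIT = 25_000   # hypothetical: above this, the agent may not act alone
MIN_CREDIT_SCORE = 620    # hypothetical: deterministic policy floor

def agent_proposal(request: LoanRequest) -> Action:
    # Stand-in for the agent's reasoning step (an LLM call in practice).
    return Action.APPROVE if request.credit_score >= 700 else Action.DECLINE

def decide(request: LoanRequest) -> Action:
    # Guardrail 1: hard policy rules run before any agent reasoning.
    if request.credit_score < MIN_CREDIT_SCORE:
        return Action.DECLINE
    # Guardrail 2: the agent only acts autonomously within set parameters;
    # anything larger follows the same escalation path a human officer would.
    if request.amount > AUTONOMY_LIMIT:
        return Action.ESCALATE
    # Guardrail 3: the agent's proposal is checked, not blindly executed.
    proposal = agent_proposal(request)
    return proposal if proposal in (Action.APPROVE, Action.DECLINE) else Action.ESCALATE

print(decide(LoanRequest("A-1001", 12_000, 715)))   # Action.APPROVE
print(decide(LoanRequest("A-1002", 80_000, 780)))   # Action.ESCALATE
```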

There has always been concern about opaque, black-box AI solutions that cannot be opened up to remedy errors or adapt to new rules and regulations. The best statistical and predictive AI can be highly transparent about how it works and how it reached its conclusions, and the same must apply to agentic AI. To succeed in banking, agentic AI solutions must be transparent, so organisations know that every agent is predictable, audited, and optimised for business success.
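As a rough illustration of what “audited” could mean in practice, the sketch below (hypothetical field names, no particular framework assumed) records the agent’s proposal, the guardrails that fired and the final outcome, so every decision can be reconstructed later.

```python
import json
from datetime import datetime, timezone

def audit_record(request_id: str, agent_proposal: str,
                 final_decision: str, rules_fired: list[str]) -> str:
    # Append-only JSON record: the agent's proposal, the guardrails that
    # constrained it, and the final outcome all remain reconstructable.
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "request_id": request_id,
        "agent_proposal": agent_proposal,
        "rules_fired": rules_fired,
        "final_decision": final_decision,
    })

print(audit_record("A-1002", "approve", "escalate",
                   ["amount_above_autonomy_limit"]))
```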
