
How To Avoid Nightmares When Onboarding AI-Powered Regtech

The rapidly expanding realm of artificial intelligence (AI) presents a powerful tool for meeting the requirements of regulatory oversight. But like any powerful tool, using it incorrectly can lead to disaster. I have seen firms allocate large budgets to onboard extremely advanced technology to solve a particular challenge, yet their efforts are not always successful. In my experience, many of the challenges businesses are trying to solve are interconnected, but firms often target a single area with sophisticated technology without understanding the technology itself or its broader impact, which can lead to more failures.

As a regtech expert, part of my role is to help banking and finance clients undergo digital transformations to address regulatory challenges. I believe AI products and strategies for managing regulatory stipulations must be introduced methodically and with the recognition that AI is not a magic bullet. There’s a lot of hype, but it’s a new technology that, as with any product rollout, can have new problems embedded within it. So proceed with caution.

But given the amount of information banks are required to review and produce as part of the examination process, AI offers a vital opportunity not only for efficiency and automation but also for the forensic tools of big data analysis. There’s a reason AI is a focus of compliance departments.

Analyze data for pre-existing biases.

AI is based on identifying patterns within a set of data and producing outputs based on that information. But if there is a bias within the original data, not only will the flawed information be regurgitated, but the AI might amplify it and make it more pronounced. Great care has to be taken, using human intelligence, to analyze data sets for preexisting biases and errors. Potential biases include the programmers’ own biases regarding ethnicity, gender or economic groups, while errors might include incomplete or inaccurate data.
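
A simple scan of the training data can surface such skews before any model is built. The sketch below is illustrative only, assuming a pandas DataFrame with hypothetical "gender" and "approved" columns standing in for a protected attribute and a historical outcome:

```python
import pandas as pd

def scan_for_bias(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Report group sizes and historical outcome rates per group."""
    report = df.groupby(group_col).agg(
        count=(outcome_col, "size"),
        outcome_rate=(outcome_col, "mean"),
    )
    report["share_of_data"] = report["count"] / len(df)
    # Large gaps in outcome_rate between groups, or a tiny share_of_data
    # for one group, are signals that the historical data needs human
    # review before any model is trained on it.
    return report

# Hypothetical usage on a loan-decision data set:
# df = pd.read_csv("historical_decisions.csv")
# print(scan_for_bias(df, group_col="gender", outcome_col="approved"))
```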

Guard against legal and compliance risk.

The same cautions hold true for the underlying programming of the AI code. This new tech is innovative, but ultimately it’s still just a computer program, and faulty coding will create faulty algorithms, which will create faulty results. In my experience, a helpful way to guard against such risks is to install quality checks and balances in the system to catch anomalies before they propagate. It is also important for boards to be educated and trained to understand AI’s capabilities and nature.
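
To make those checks and balances concrete, here is a minimal sketch of an automated quality gate on model outputs, assuming scores are probabilities between 0 and 1; all thresholds here are illustrative:

```python
from statistics import mean, stdev

def quality_gate(scores: list[float], z_threshold: float = 4.0) -> list[int]:
    """Return indices of model scores that fail basic sanity checks."""
    # Scores outside the valid probability range are flagged outright.
    flagged = {i for i, s in enumerate(scores) if not 0.0 <= s <= 1.0}
    # Scores far from the batch mean are flagged as statistical anomalies.
    if len(scores) >= 2:
        mu, sigma = mean(scores), stdev(scores)
        if sigma > 0:
            flagged |= {i for i, s in enumerate(scores)
                        if abs(s - mu) / sigma > z_threshold}
    return sorted(flagged)

# Flagged items go to human review instead of automated action:
# for i in quality_gate(batch_scores):
#     route_to_reviewer(batch_scores[i])  # hypothetical review hook
```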

Know how your AI makes decisions.

It is extremely important to understand how an AI-powered system is making decisions. Deep learning sometimes involves complex mathematical calculations that are difficult for an end user to interpret. As a result, explainable AI is gaining significant momentum.

The concept of explainable AI, also known as glass box AI, initially emerged from the U.S. Department of Defense in an effort to ensure that military robotic equipment makes sound decisions. Black box AI, on the other hand, consists of unsupervised machine learning capabilities that are based on training data sets. The AI program evolves on its own, and an output is produced without an explanation of why that decision was made. This could lead to a number of situations, such as predicting the wrong health condition for a patient or recommending a risky investment portfolio. Explainable AI, by contrast, is programmed with an explanation interface that shows how it reached a particular decision.

AI decision making can be made more transparent by implementing various practices, such as algorithmic auditing. Auditing algorithms, and knowing the variables used within them to reach a specific decision, allows for a clearer understanding of the decision-making process.
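
One concrete auditing technique, offered as an illustration rather than the only approach, is permutation importance from scikit-learn: it measures how much each input variable drives a fitted model’s decisions, giving auditors a documented view of what the model actually relies on. The model and data below are stand-ins:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Illustrative data and model standing in for a real compliance model.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for feature, importance in enumerate(result.importances_mean):
    # An auditor can record these figures to document which variables
    # the model depends on when reaching a decision.
    print(f"feature_{feature}: importance {importance:.3f}")
```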

Build controls to avoid infiltration of malicious code.

As with all things banking, cybersecurity is of paramount concern. Hackers targeting AI systems is a growing threat, and I believe part of the reason is an assumption that AI will run itself and that robust oversight is therefore less important. Various controls and security measures should be in place for protection against threats. A thorough risk management strategy should be defined for potential misuse and vulnerabilities. Consider training the workforce developing these AI applications to follow defined security protocols.
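
One example of such a control, sketched below with illustrative paths and hashes, is verifying a model artifact’s checksum against a registered known-good value before loading it, so a tampered or maliciously replaced model file is rejected:

```python
import hashlib

def verify_model_artifact(path: str, expected_sha256: str) -> None:
    """Raise if the model file on disk does not match its registered hash."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected_sha256:
        raise RuntimeError(f"Integrity check failed for {path}; refusing to load.")

# Hypothetical usage with a hash recorded in a model registry:
# verify_model_artifact("models/aml_scoring.bin", expected_sha256="<registered hash>")
```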

Adopt a controlled experimentation approach.

I’ve observed that a sound underlying strategy when introducing AI to compliance operations is to move slowly: build the data and code gradually and methodically. The paradox is that AI offers great leaps forward in the speed of analysis, yet it also demands a slow approach. Controlled experimentation is the watchword, and doing things the “old, slow way” while getting the “new, fast way” fully functional is a positive tradeoff.
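
A minimal sketch of that parallel-run idea follows: the legacy rule-based check remains the decision of record while the new AI check runs in shadow mode, and every disagreement is logged for review. Both checker functions here are hypothetical placeholders:

```python
import logging

logger = logging.getLogger("shadow_run")

def legacy_check(transaction: dict) -> bool:
    # Placeholder for the existing rule-based compliance decision.
    return transaction.get("amount", 0) <= 10_000

def ai_check(transaction: dict) -> bool:
    # Placeholder for the new model-based compliance decision.
    return transaction.get("risk_score", 0.0) < 0.8

def decide(transaction: dict) -> bool:
    old_result = legacy_check(transaction)  # still the decision of record
    new_result = ai_check(transaction)      # evaluated but not yet trusted
    if new_result != old_result:
        # Disagreements are the raw material for fine-tuning the new system.
        logger.warning("Disagreement on %s: legacy=%s ai=%s",
                       transaction.get("id"), old_result, new_result)
    return old_result
```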

The buildup should be phased, with simpler aspects of the process being activated first. You should then analyze what you’ve built so far and move on to more complex or critical functions only after the initial areas have been reviewed and fine-tuned. This should be a methodical process with a significant upfront investment in resources to realize savings down the road.

Finally, backend maintenance will have to be constant and fully supported by the institution. Changes in regulations and the constant refinement of the AI process will need to be continuously appraised. Financial crimes are evolving and becoming more sophisticated, so your AI needs to be powered by current data rather than historical data alone to combat them.
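
One way to keep that appraisal honest, sketched here with an illustrative significance threshold, is a recurring drift check that compares live data against the training data and flags when retraining should be reviewed:

```python
from scipy.stats import ks_2samp

def drift_alert(training_values, live_values, alpha: float = 0.05) -> bool:
    """Return True if live values appear to have drifted from training data."""
    statistic, p_value = ks_2samp(training_values, live_values)
    return p_value < alpha

# Hypothetical scheduled check on a transaction-amount feature:
# if drift_alert(train_df["amount"], recent_df["amount"]):
#     flag_for_retraining_review()  # placeholder maintenance hook
```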



Breana Patel

CEO | Thought leader in Bank Risk & Regulations

Bonova Advisory | Risk & Regulatory Advisory

