
Artificial Intelligence: The Next Step in Financial Crime Compliance Evolution

Financial services compliance departments are constantly turning to technology to find efficiencies and satisfy increasingly tough regulatory examinations. It started with simple robotic process automation, which can provide great operational efficiencies and help standardize processes. Never ones to rest on their laurels, compliance departments have begun looking to Artificial Intelligence (AI) as the next technological step to enhance and improve their programs. PayPal has cut its fraud false alerts in half by using an AI monitoring system that can identify benign explanations for seemingly suspicious behavior. HSBC recently announced a partnership to use AI in its Anti-Money Laundering (AML) program. Despite adoption by some large players, there is still a lot of hesitancy and concern about the use of AI in financial crime compliance.



AI is computer software that can make decisions normally made by a human. In essence, it can analyze large amounts of data and use the patterns and connections within that data to reach conclusions about it.

Just like people, AI needs to learn in order to make decisions. It can do this in two ways: supervised or unsupervised learning. Supervised learning is the most common method, whereby the data, the goal, and the expected output are all provided to the software, allowing it to learn how to reach the expected result. Supervised learning also lets the AI use a feedback loop to further refine its performance: if it identifies potential fraud that turns out to be legitimate, it can incorporate that feedback into future evaluations.

Unsupervised learning provides the software with only the data and the goal, but no expected output. This is more complex and allows the AI to identify previously unknown results. As the software receives more data, it continues to refine its algorithm, becoming increasingly efficient at its task.
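To make the distinction concrete, here is a deliberately simplified sketch of the same fraud-detection task framed both ways. The transaction amounts and the specific decision rules are hypothetical illustrations, not anyone's production method: the supervised version learns a threshold from labeled history, while the unsupervised version flags outliers without ever seeing a label.

```python
import statistics

# Labeled history: (transaction amount, was_fraud) pairs -- hypothetical data.
labeled = [(20, False), (35, False), (50, False), (900, True), (1200, True)]

# Supervised: use the labels to learn a simple decision rule -- here, a
# threshold halfway between the largest legitimate amount and the
# smallest fraudulent one.
max_legit = max(amt for amt, fraud in labeled if not fraud)
min_fraud = min(amt for amt, fraud in labeled if fraud)
threshold = (max_legit + min_fraud) / 2

def supervised_flag(amount):
    return amount > threshold

# Unsupervised: no labels at all -- flag amounts that sit far from the
# bulk of the data, using the median absolute deviation as a robust
# measure of "normal" spread.
amounts = [amt for amt, _ in labeled]
med = statistics.median(amounts)
mad = statistics.median(abs(a - med) for a in amounts)

def unsupervised_flag(amount):
    return abs(amount - med) > 10 * mad
```

Real systems learn from many features at once rather than a single amount, but the split is the same: supervised learning needs the expected output, unsupervised learning finds the unusual on its own.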



While there are varied uses in this space, one of the most relevant is monitoring transactions for potential criminal activity. Instead of relying on rule-based monitoring that looks for very specific red-flag activity, AI software can use a large amount of data to filter out false alerts and identify complex criminal conduct. It can rule out false positives by identifying innocuous reasons for certain activity (investigation that would normally have to be done by an analyst) or spot connections and patterns too complex to be picked up by straightforward rule-based monitoring. It can do this because AI software acts fluidly, identifying connections between data points that a human cannot. Its ability to analyze transactions for financial crime is limited only by the data available to it. Some specific uses are:

Fraud Identification: Identifying complex fraud patterns and cutting down on the number of false alerts by adding other data (geolocation tagging, IP addresses, phone numbers, usage patterns, etc.). See PayPal's success in the first paragraph.

AML Transaction Monitoring and Sanctions Screening: Similar to fraud identification, it can greatly reduce the number of false alerts by taking into account more data. It can also identify complex criminal activity occurring across products, lines of business, and customers.

Know Your Customer: Detecting linkages between accounts, customers, and related parties to fully understand the risk a party poses to the bank. Through analysis of unstructured data, it can also surface relevant negative news that would otherwise be difficult to find.

Anti-Bribery, Insider Trading, and Corruption: It can be used to identify insider trading or bribery by analyzing multiple sources of information, including emails, phone calls, messaging, expense reports, etc.
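The contrast between a single-factor red-flag rule and context-aware filtering can be sketched in a few lines. Everything here is hypothetical for illustration: the $10,000 threshold, the transaction fields, and the "benign context" conditions stand in for the kind of checks an analyst would otherwise work through by hand.

```python
def rule_based_alert(txn):
    # Classic red-flag rule: any transfer over a fixed threshold alerts.
    return txn["amount"] > 10_000

def contextual_alert(txn):
    # Same threshold, but benign context (a known device, the customer's
    # home country, an amount in line with their usual pattern) clears it.
    if txn["amount"] <= 10_000:
        return False
    benign = (txn["device_known"]
              and txn["country"] == txn["home_country"]
              and txn["amount"] <= 3 * txn["typical_amount"])
    return not benign

# A slightly-larger-than-usual payroll run from a known device at home:
payroll_run = {"amount": 12_000, "device_known": True,
               "country": "US", "home_country": "US",
               "typical_amount": 11_000}
```

The single-factor rule alerts on this transaction; the contextual check clears it. An AI model does the same thing at scale, learning which combinations of signals indicate a benign explanation rather than having them hand-coded.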



Seems amazing, right? You might be wondering why everyone isn't immediately implementing these solutions throughout their financial crime compliance programs. While there have been some early adopters, there is still a lot of hesitation to use AI in the financial crime compliance space due to the highly regulated nature of the field. There is no doubt that AI will bring a huge lift in the future, but here are some of the concerns that need to be ironed out before we see large-scale adoption:

“Black box” image of AI decisioning

By using more data than a human could synthesize, AI may select patterns and results that wouldn't necessarily make sense to a person. As a result, AI providers need to ensure that AI-derived decisions are supported by an auditable rationale that is clear to a person. Clear documentation of how the AI reaches its results will be necessary.

Algorithmic Bias

Because AI software's behavior is based on the data it is provided, the impact of misinformation or biased information can be very large. This occurs when unintentional bias within the source data and training makes its way into the algorithms the AI uses to perform its task. No one wants to end up with an AI transaction monitoring system that flags transactions based on racial or nationality bias.
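One of the most basic bias checks is simply comparing the system's alert rate across customer groups. This sketch uses made-up group labels and decisions; a large gap between groups doesn't prove bias on its own, but it is a signal to go back and review the training data and features.

```python
from collections import defaultdict

def alert_rates(decisions):
    # decisions: list of (group, was_flagged) pairs.
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, was_flagged in decisions:
        total[group] += 1
        flagged[group] += was_flagged
    # Share of each group's transactions that triggered an alert.
    return {g: flagged[g] / total[g] for g in total}

# Hypothetical monitoring output for two customer groups:
decisions = [("A", True), ("A", False), ("A", False), ("A", False),
             ("B", True), ("B", True), ("B", True), ("B", False)]
rates = alert_rates(decisions)  # group A: 0.25, group B: 0.75
```

A production fairness review would go further (controlling for legitimate risk factors, testing outcomes as well as alerts), but even this simple rate comparison catches the most glaring problems.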

Lack of regulatory acceptance

Currently, there appears to be a lack of regulatory acceptance, mostly due to the first two concerns described above. That being said, in the United States, the Securities and Exchange Commission and the Financial Industry Regulatory Authority are both working on limited uses of AI within their own organizations. This is a strong step toward their being able to understand and test it.



Now you know how AI can help your program and some of the concerns you need to be mindful of, but what now? Here are a few next steps you can take to successfully implement AI into your Financial Crime Compliance Program:

  • If you don’t have them already, start bringing data scientists onto your team. They are the experts you will need as you begin your foray into this area.
  • Introduce AI slowly into a single process to gain confidence and buy-in from others.  Fraud may be a good starting point, similar to PayPal’s approach.
  • Think about implementing an AI system in parallel with a rules-based system, then identify which produced fewer false positives without missing reportable activity. This is a good way to demonstrate its effectiveness.
  • If you have implemented AI:
    • Make sure you keep it up to date with new scenarios, data, etc., as the baseline information becomes stale (e.g., new money laundering or fraud techniques).
    • Test for unconscious and unintended bias.
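The parallel-run comparison suggested above can be reduced to a simple scorecard: run both systems over the same period, then measure false positives while confirming neither system missed activity that was later confirmed reportable. The transaction IDs and alert sets below are hypothetical.

```python
def scorecard(alerts, confirmed):
    # alerts, confirmed: sets of transaction IDs.
    false_positives = alerts - confirmed   # alerted, but never confirmed
    missed = confirmed - alerts            # confirmed, but never alerted
    return {"false_positives": len(false_positives), "missed": len(missed)}

confirmed = {"t3", "t7"}                        # later confirmed reportable
rule_alerts = {"t1", "t2", "t3", "t5", "t7"}    # legacy rules-based output
ai_alerts = {"t3", "t4", "t7"}                  # parallel AI output

rule_score = scorecard(rule_alerts, confirmed)  # 3 false positives, 0 missed
ai_score = scorecard(ai_alerts, confirmed)      # 1 false positive, 0 missed
```

The critical column is "missed": a system that cuts false positives by suppressing genuine reportable activity is a regulatory liability, not an efficiency gain, so the comparison only counts if both systems hold missed at zero.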

Lastly, knowledge is power. Keep researching and make sure you understand the reality of what AI can bring to the table for you and your program.





This post is from a series of posts in the group:

Financial Services Regulation
