Everyone’s talking about artificial intelligence and machine learning (AI and ML). But what’s often overlooked is the degree to which they depend on good data in order to be effective. It’s a symbiosis that is particularly relevant to compliance,
where AI and ML have huge potential to reduce costs and risk.
We recently published a whitepaper aimed at clarifying these “cognitive technologies” and illustrating how they can be applied in different business settings.
Let’s dig a little deeper and discuss the tight interdependence of data and ML in the context of anti-money laundering and regulatory compliance. As their name suggests, ML algorithms need to learn – a process that begins by “training” them on specific datasets.
You can’t just use any data for this. It needs to contain the correct answer or actual outcome, otherwise known as the “target”. As the algorithms are run over and over, they find patterns in the data that map the input data to the target. The result is a
“machine learning model” that captures these patterns and relationships, and can then be used to generate expected outcomes for new datasets. To recap, a truly accurate, trained model requires excellent data as well as highly advanced algorithms. Hold that thought for now.
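To make the training loop concrete, here is a minimal sketch in Python of the process described above: labeled examples (inputs paired with a known target) are fed repeatedly to a simple learning algorithm, which adjusts itself until it captures the pattern and can score new data. The feature names and numbers are invented for illustration; a real AML model would use far richer data and algorithms.

```python
# A toy supervised-learning example: a perceptron trained on labeled data.
# Features and labels below are made up purely to illustrate the idea of
# "training on data that contains the target" - not a real AML model.

def train_perceptron(rows, targets, epochs=50, lr=0.1):
    """Repeatedly adjust weights so inputs map to the known target."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(rows, targets):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - pred                      # compare to the target
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(model, x):
    """Apply the trained model to a new, unlabeled example."""
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Training data: [amount_zscore, country_risk]; target = 1 if an analyst
# previously confirmed the transaction as suspicious (the "actual outcome").
X = [[0.1, 0.2], [0.3, 0.1], [2.5, 0.9], [1.8, 0.8], [0.2, 0.3], [2.9, 0.7]]
y = [0, 0, 1, 1, 0, 1]

model = train_perceptron(X, y)
```

Once trained, `predict(model, [2.7, 0.9])` scores a transaction the model has never seen, which is exactly the point: the value of the model comes from the labeled data it learned from.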
Take a look at the financial services industry. Here, the continuing rise in both financial crime and government regulations means financial institutions need to know a great deal about the money they’re taking in and paying out. All of this screening and
probing creates mountains of false positives, which are burying compliance departments around the globe. Naturally, firms are rushing to see whether they can apply AI and ML to relieve the pressure on their analysts and drive efficiency – while still maintaining
a high level of protection.
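One way to picture that relief is ML-assisted alert triage: a trained scoring model ranks screening hits so that low-risk matches are auto-closed and analysts only review the rest. The sketch below is purely hypothetical – the signal names, weights, and threshold are invented, standing in for what a properly trained model would supply.

```python
# Hypothetical alert triage: a pretrained scoring function suppresses
# likely false positives so analysts review fewer, higher-quality alerts.
# The weights and threshold here are illustrative, not from a real model.

def alert_score(name_similarity, list_risk, weights=(0.7, 0.3)):
    """Combine screening signals into a single risk score."""
    return weights[0] * name_similarity + weights[1] * list_risk

def triage(alerts, threshold=0.6):
    """Split alerts into analyst review vs. auto-close buckets."""
    escalate, auto_close = [], []
    for a in alerts:
        if alert_score(a["similarity"], a["risk"]) >= threshold:
            escalate.append(a["id"])
        else:
            auto_close.append(a["id"])
    return escalate, auto_close

alerts = [
    {"id": "A1", "similarity": 0.95, "risk": 0.9},  # strong watchlist hit
    {"id": "A2", "similarity": 0.40, "risk": 0.1},  # likely false positive
    {"id": "A3", "similarity": 0.70, "risk": 0.8},
]
escalated, closed = triage(alerts)
```

In practice the scoring function is where the trained model lives, and the auto-close threshold is a risk-appetite decision – set too aggressively, it trades protection for efficiency.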
However, many risk and compliance executives encounter solution providers that offer either the data or the algorithms, but not both. Data providers can deliver a tremendous amount of raw material – on watchlists, sanctions, adverse media and politically exposed persons – to help provide deeper insight into potential customers and partners. However, without a true ML model, all of this data is just that: more data. And more data can create more workload, not less. Conversely, technology companies with powerful
AI engines offer platforms that promise to dramatically reduce false positives, but their algorithms arrive completely untrained. For customers, this means a tremendous amount of work just to validate that the resulting model is accurate.
So neither data companies nor AI engine providers alone can fully solve the KYC/AML screening challenge. Each brings half of the solution. But if you can apply advanced, purpose-built algorithms to great data, you have the answer.