
Troubles with black box AI or ML models

One of the difficulties of stepping into a red-hot technology space is that you are never quite sure what to expect. While you are grappling with unexpected technical curve balls, your own stakeholders may turn on you after seeing results that contradict their expectations or understanding. The current, nascent implementations of Artificial Intelligence and Machine Learning models are not smooth sailing by any stretch of the imagination either.

As excited as I am to work with new technology and see futuristic visions reach the sunshine of reality, I also know that the labor required to get them into a production system is no less than breaking through a mountain. And what if you then discover that the rock underneath is not conducive to building a smooth road? All that effort gets dumped at breakneck speed, and with it goes the dream of bringing something new to life and getting it implemented.

Imagine the conversation below between Mark (IT Project Manager of a Machine Learning project) and Shawn (Head of the Sales Team for whom the models were built).

Mark: Hey Shawn, good news: the models we built are now deployed to production.

Shawn: Great, let's get fresh feeds in and generate predictions of our risk exposures for next quarter.

Mark: Sure, the data feeds are already in, and the models will have the predictions ready by evening.

Shawn: That's very good, as we must get these included in our quarterly risk portfolio too.

Five hours later, the complexion of the conversation changed to something like this…

Shawn: Mark, I saw the reports. How on earth does my risk portfolio show a bump of 3%, to Medium risk at 12% instead of a stable 9%? Can you give me the top five reasons for this risk assessment?

Mark: I am afraid I may not be able to give you the top five features, as there is no way to know the role each feature plays in the results that come out.

Shawn: No, Mark, that will be difficult to digest, as I may be asked for the reasons behind this sudden bump in my risk exposures. What justification can I give?

A conversation like this leads to awkward silences and uncomfortable moments. What ends up happening is that Shawn loses trust in the results coming out of the models. All the hard work that went into producing results from the jazzy machine learning models now ends up being spent justifying their legitimacy.

A lot of ML and AI projects go through this uneasy phase until people come to terms with the underlying black-box nature of the model. Justifying the results is the biggest challenge for any ML/AI project manager. How do you build that trust?
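To make the "top five reasons" question concrete: one common technique teams reach for is permutation feature importance, which shuffles one input feature at a time and measures how much the model's score degrades. Below is a minimal sketch using scikit-learn; the dataset, feature names, and model are entirely synthetic stand-ins for Shawn's risk data, not anything from a real project.

```python
# Hedged sketch: permutation importance on a synthetic "risk" dataset.
# All names (risk_factor_*) and data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a risk dataset
X, y = make_classification(n_samples=1000, n_features=8,
                           n_informative=4, random_state=0)
feature_names = [f"risk_factor_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in held-out accuracy
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# The "top 5 reasons" a stakeholder might ask for, ranked by importance
top5 = sorted(zip(feature_names, result.importances_mean),
              key=lambda p: p[1], reverse=True)[:5]
for name, score in top5:
    print(f"{name}: {score:.3f}")
```

Techniques like this give a ranked answer to "which inputs drove the prediction", though only at the level of the whole model, not a single customer's score, which is part of why explainability remains an open challenge.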

To avoid this scenario, people are exploring Explainable AI. Unfortunately, designing Explainable AI that is interpretable enough for humans to understand how it works, at a basic if not a specific level, has its own drawbacks. I'll cover those in my next piece. Keep an eye out for the details.


Shailendra Malik

SVP - Tech Delivery (Data Platform)

DBS Bank


This post is from a series of posts in the group:

Trends in Financial Services

A community to discuss the future of financial services and any other interesting trends, strategies, ideas, views.

