
The Costly Consequences of Unethical AI Whisperers


Yes… I am talking about AI applications – our myriad AI applications, and the upcoming ones, whispering to humans about what to do and how to do it… not about the whisperers who interact with the AI chatbots.

According to IDC predictions, the global AI market could reach over $500 billion by 2024 – a more than 50% increase from 2021. That tells us we have moved past business experiments: AI is now accepted as an integral part of enterprise strategy for companies of all sizes. It is a necessary tool for turning data into insights that spark action and better decisions. No one is debating AI's benefits in decreasing business risk and amplifying ROI through innovation. But, as always, there is a… BUT: unbiased AI is easier said than done.

Because these AI models are critical to the business, they need to operate reliably, with visibility and accountability. Failure here has dire consequences: it can hurt any company's cash flow and may even lead to legal trouble. The only way to avoid this is automation and transparency that can answer one question: "Can you prove this AI application/workload is built ethically?" In other words: how do you govern it, and can you prove that it is being governed continuously?

This is where companies like IBM have invested in AI governance: orchestrating the overall process of directing, managing, and monitoring an organization's AI activities. The primary job is to ensure that every business unit stays proactive and infuses the governance framework into its initiatives, strengthening its ability to meet ethical principles and regulations. Regulated industries like banking and financial services, in particular, are legally required to provide evidence that satisfies regulators.

The influence of AI is growing exponentially in the financial services sector due to the tremendous pressure of digital transformation. And, as said, it is easier said than done, because:

1.   Confidently operationalize the AI apps:

In some cases, models are built without clarity or cataloging, and, needless to say, monitoring of the end-to-end life cycle slips away in the midst of everything. While banks struggle with legacy applications, automating the processes that create transparency and explainability becomes harder, and the AI turns into a black box: no one knows why or how decisions were made. New apps tangled up with legacy apps never see the light of day, despite the huge ROI associated with them, because of quality concerns and unperceived risks.

That brings us to the second point – managing reputational risk.

2.   Manage reputational risk along with the overall risk

I asked #ChatGPT and #Bard – who is Padma Chukka? #ChatGPT refused to answer, even when I rephrased the question multiple ways. Bard, however, gave me a detailed answer, including my LinkedIn profile… but the data came from various sites where my old profile still lives on in speaker bios. Since that moment, I have yet to open Bard again. That quickly, I was turned off – aka, reputational risk. If I can abandon a simple chatbot the moment I realize its data might be inconsistent, how could I not make sure of the same before deciding to buy an AI-infused application to conduct critical business? Reputational risk is an essential factor that companies sometimes forget. If you quantify reputational risk, you can see the tremendous impact on the business of not being proactive.

To add to the complexity, the third one is…

3.   How can a company respond to changing AI regulations?

To avoid reputational risk, a successful and responsible AI team must be aware of every local and global regulation – and new ones drop like TikTok videos, with a moment's notice. Noncompliance may ultimately cost an organization millions of dollars in fines under rules like the proposed EU AI Act: up to 30 million euros or 6% of the company's global revenue – OUCH.

Well, not everything has to be rosy from the get-go… as long as we know how to give the situation a makeover from scary to rosy.

No surprise… it's always people, process, and technology. First, create a cross-functional governing body to educate, direct, and monitor the initiatives against the stated objectives. Then benchmark the current AI technology and processes, understand the gaps, and remediate to future-proof. Next, fall back on a set of automated governance workflows aligned with compliance requirements. Finally, set up a monitoring system that alerts owners whenever a metric closes in on its acceptable threshold (see the sketch after the list below). On the technology side, a well-architected, well-executed, and well-connected AI stack requires multiple building blocks, and it should have some or all of these capabilities:

·     Data integrity across diverse deployments

·     Open, flexible, existing tools that adhere to AI governance

·     Self-service access with privacy controls – and a way to track it

·     Designed with automation and AI governance in mind

·     Connectable and customizable for multiple stakeholders through configurable workflows
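
To make that final monitoring step concrete, here is a minimal sketch of the kind of threshold alert described above. The metric names, threshold values, and the notify_owner function are hypothetical illustrations, not any particular product's API:

```python
# Minimal sketch of a governance monitoring alert.
# Metric names, thresholds, and notify_owner() are hypothetical
# illustrations, not any specific vendor's API.

THRESHOLDS = {
    "fairness_disparate_impact": 0.80,  # alert if the ratio drops below 0.80
    "quality_accuracy": 0.90,           # alert if accuracy drops below 0.90
}

def notify_owner(model_id: str, metric: str, value: float, limit: float) -> None:
    # Placeholder for an email/Slack/ticketing integration.
    print(f"[ALERT] {model_id}: {metric}={value:.3f} breached limit {limit:.2f}")

def check_model(model_id: str, metrics: dict) -> None:
    """Compare each monitored metric against its acceptable threshold."""
    for metric, limit in THRESHOLDS.items():
        value = metrics.get(metric)
        if value is not None and value < limit:
            notify_owner(model_id, metric, value, limit)

# Example: a nightly job feeding in the latest evaluation results.
check_model("credit-risk-scoring-v3", {
    "fairness_disparate_impact": 0.72,  # breach -> owner gets alerted
    "quality_accuracy": 0.94,           # OK
})
```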

Once we have given the app its makeover from scary to rosy… the next question is: how do you prove it?

First, fall back on the company's AI principles – build with them – and yet you still need to "show" that you are compliant, especially in regulated environments like financial services. Since financial services firms must already comply with NIST 800-53, they can look to the NIST AI Risk Management Framework (AI RMF), which organizes its guidance into four functions – Govern, Map, Measure, and Manage. Use that as a guiding framework to stress-test the applications, identify the gaps, and then remediate and monitor.
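
As one illustration of that stress test, here is a minimal sketch of a gap checklist keyed to the four AI RMF functions. The function names come from NIST; the individual check questions are hypothetical examples, not official NIST controls:

```python
# Minimal sketch of an AI RMF gap assessment.
# The four functions (Govern, Map, Measure, Manage) come from NIST AI RMF;
# the individual check questions are hypothetical examples, not NIST text.

CHECKLIST = {
    "Govern": ["Is there a cross-functional AI governing body?",
               "Are AI principles documented and enforced?"],
    "Map":     ["Is every model cataloged with an owner and intended use?"],
    "Measure": ["Are fairness, quality, and drift metrics defined?"],
    "Manage":  ["Is there an alerting and remediation workflow?"],
}

def assess(answers: dict) -> list:
    """Return the list of unmet checks, i.e. the gaps to remediate."""
    return [q for qs in CHECKLIST.values() for q in qs
            if not answers.get(q, False)]

gaps = assess({"Is there a cross-functional AI governing body?": True})
for gap in gaps:
    print("GAP:", gap)
```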

IBM's tooling can validate your models before you put them into production and then monitor them for fairness, quality, and drift. It can also provide documentation explaining the model's behavior and predictions to satisfy regulators' and auditors' requirements. These explanations provide visibility, ease the audit pain, increase transparency, and improve the ability to determine possible risks.
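
For a flavor of what drift monitoring means in practice, here is a generic Population Stability Index (PSI) calculation – a common drift score, shown purely as an illustration and not as IBM's implementation:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index: a common drift score comparing the
    distribution of a model input (or score) at training time vs. today."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(600, 50, 10_000)  # scores seen at training time
live_scores = rng.normal(630, 60, 10_000)   # scores seen in production
print(f"PSI = {psi(train_scores, live_scores):.3f}")
# Rule of thumb: a PSI above ~0.25 signals significant drift worth a look.
```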

 Listen to those AI whispers with confidence!

#Financialservices #responsibleai #ethicalai #NISTAIRMF
