Everyone's talking about AI in finance: faster onboarding, automated trading, intelligent insights. But there's a ticking time bomb that most institutions are ignoring: governance.
Here's the reality: AI isn't some experimental toy anymore. Banks, broker-dealers, insurers, and fintechs are using it right now for fraud detection, onboarding, surveillance, and wealth management. The problem? Regulators have woken up. And they're moving fast.
If you think you can bolt on compliance later, you're in for a surprise.
Picture this: A bank's AI credit scoring system gets caught systematically rejecting small business loans in certain postal codes. Not because anyone programmed it to discriminate; the AI "learned" the pattern on its own. The investigation alone could run into millions, and that's before the headlines hit.
This isn't science fiction. It's exactly the scenario regulators are preparing for.
The EU AI Act entered into force in August 2024, with most obligations applying from August 2026; key provisions, such as the bans on prohibited AI practices, began applying in early 2025. Credit scoring? High-risk. Biometric identification? High-risk. The requirements are brutal: full transparency, data quality controls, human oversight, the works.
In the U.S., former SEC Chair Gary Gensler raised early alarms about AI-driven conflicts of interest and misleading AI claims. A proposed rule would require firms to identify and mitigate AI-induced conflicts of interest.
Since early 2025, the SEC's posture has softened. Paul Atkins was confirmed as Chair, and Acting Chair Mark Uyeda emphasized a more collaborative, technology-neutral approach during the SEC’s March 2025 AI Roundtable.
The UK? Same story. The FCA and Bank of England want to see inside your black boxes, especially if they're making decisions about retail banking or trading.
I keep hearing the same thing: "We'll add governance once we prove the ROI."
Here's what actually happens:
Retroactive compliance is a nightmare. Trying to document why your AI made a decision six months ago when you never tracked model versions is expensive.
"Trust us" doesn't work anymore. When regulators ask why your AI flagged that transaction or rejected that loan application, shrugging isn't an option. Cynthia Rudin was right, we need to stop hiding behind black boxes in high-stakes decisions.
Your old model risk framework is useless. Traditional model risk management was built for static models, not AI that retrains itself daily. I've watched teams realize this the hard way when their trading algorithms evolved beyond their control frameworks.
Great pilot, dead project. Nothing kills innovation faster than hitting 95% accuracy in testing, then realizing you can't deploy because you can't document how it works.
Forget the buzzwords. Here's what works:
Not all AI is created equal. Your chatbot? Low risk. Your credit decisioning algorithm? That's a different story. Map out which systems could land you in regulatory hot water (a quick inventory sketch follows this list):
Anything touching credit or lending
Trading algorithms that move markets
Customer-facing investment advice
AML and KYC automation
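As a minimal sketch, that inventory can be as simple as a structured list that forces every system into a risk tier and names an owner. The system names and tiers below are hypothetical illustrations, not a legal classification:

```python
from dataclasses import dataclass

# Hypothetical inventory entries -- illustrative only, not classification advice.
@dataclass
class AISystem:
    name: str
    use_case: str
    risk_tier: str   # e.g. "high", "limited", "minimal"
    owner: str       # the accountable business owner

inventory = [
    AISystem("credit-scoring-v3", "SME loan decisioning", "high", "Retail Credit"),
    AISystem("aml-alert-ranker", "AML transaction triage", "high", "Financial Crime"),
    AISystem("faq-chatbot", "Customer FAQ responses", "minimal", "Digital Channels"),
]

# Everything in the high tier gets the full documentation and oversight treatment.
high_risk = [s.name for s in inventory if s.risk_tier == "high"]
print(high_risk)
```

Even a spreadsheet works; the point is that the high-risk list exists and someone owns it.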
Track your training data, model changes, retraining cycles, and deployment history. This isn't bureaucracy; it's being prudent when someone asks why your AI did something weird last quarter (Holzinger et al., 2022).
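A lightweight way to start, sketched below with hypothetical field names and a plain JSONL registry rather than any particular MLOps product, is to append an audit record every time a model is trained, retrained, or deployed:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_model_event(model_name, version, training_data_path, event,
                    registry="model_registry.jsonl"):
    """Append an audit record for a training, retraining, or deployment event."""
    data_hash = hashlib.sha256(Path(training_data_path).read_bytes()).hexdigest()
    record = {
        "model": model_name,
        "version": version,
        "event": event,  # e.g. "trained", "deployed", "retired"
        "training_data_sha256": data_hash,  # ties the model to an exact data snapshot
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(registry, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a retraining run so the "why did it do that?" question is answerable later.
# log_model_event("credit-scoring", "3.2.0", "data/train_2025q2.csv", "trained")
```

Purpose-built model registries do this better, but even this level of record-keeping beats reconstructing history from old emails.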
Use simple models where transparency matters. Save the fancy neural networks for lower-stakes stuff. If you're determining someone's mortgage rate, a decision tree beats a black box every time.
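To make the point concrete, here's a toy example with synthetic data and made-up feature names: a shallow scikit-learn decision tree whose logic can be printed as plain if/else rules that a reviewer, or a regulator, can read end to end:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Synthetic applicant data: income (thousands), debt-to-income ratio, years of credit history.
X = rng.uniform([20, 0.0, 0], [150, 0.8, 30], size=(500, 3))
y = ((X[:, 0] > 60) & (X[:, 1] < 0.4)).astype(int)  # toy approval rule, not real lending policy

# Depth is capped deliberately: the goal is a model you can read, not maximal accuracy.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

print(export_text(tree, feature_names=["income_k", "dti_ratio", "credit_years"]))
```

The printed rules are the model; there's nothing hidden to explain after the fact.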
Run fairness audits regularly. Document what you find. Fix what's broken. The alternative? Explaining to regulators why your AI accidentally redlined entire neighborhoods.
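A fairness audit can start with something as basic as comparing approval rates across groups. The sketch below uses a hypothetical decision log and the common four-fifths rule of thumb as a screening threshold, not a legal standard:

```python
import pandas as pd

# Hypothetical decision log: one row per application, with the model's outcome.
decisions = pd.DataFrame({
    "postal_area": ["A", "A", "B", "B", "B", "A", "B", "A"],
    "approved":    [1,   1,   0,   1,   0,   1,   0,   1],
})

rates = decisions.groupby("postal_area")["approved"].mean()
parity_ratio = rates.min() / rates.max()

print(rates)
print(f"Demographic parity ratio: {parity_ratio:.2f}")
if parity_ratio < 0.8:  # four-fifths screening heuristic: investigate, document, remediate
    print("Flag for review: approval rates diverge materially across areas.")
```

Keep the output alongside the date it was run; the audit trail matters as much as the number.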
If your AI vendor can't explain their model, you've got a problem. Their black box becomes your liability.
The best governance isn't just tech and risk people talking to each other. Bring in legal, compliance, business, even ethics experts if you're serious about getting ahead of problems.
Companies that get governance right don't just avoid fines. They win.
They ship faster because they don't hit compliance walls. They earn trust because they can explain their decisions. They shape the rules instead of scrambling to follow them.
Think about what happened with cybersecurity. The banks that invested early didn't just avoid breaches, they became the trusted ones. The same thing's happening with AI.
Look, AI regulation isn't coming; it's here. The EU AI Act, SEC rules, FCA guidance: they're all landing now or next year.
You've got two choices: Build governance now while you can do it right, or scramble to retrofit it later when regulators come knocking.
Here's a suggested to-do list:
List every AI system you're running and flag the high-risk ones
Check if your model risk framework can handle self-learning systems
Figure out who owns AI governance; if the answer is nobody, you've got a problem
Start documenting your riskiest AI use cases right now
Call your regulator. Seriously, they want to hear your approach before they have to enforce it
The institutions that do this now won't just survive the regulatory wave. They'll surf it.
Raktim Singh, Senior Industry Principal at Infosys