AI risk modelling: Managing compliance and ethical considerations


As financial institutions increasingly integrate AI, they encounter the challenge of unlocking its potential while ensuring compliance with regulatory requirements and ethical standards.

In this post, we'll review regulatory expectations, share strategies for building transparent and fair AI models, and cover the importance of robust governance and validation practices.

 

A quick recap: The regulator’s stance on AI risk models

The UK's regulatory framework does not impose specific barriers on the use of AI in risk modelling. However, regulators such as the PRA and FCA emphasise that AI solutions must meet the same standards as traditional models, including transparency, accuracy, accountability and control.

Challenges of AI complexity

  • AI models are more sophisticated than traditional techniques like logistic regression

  • This complexity makes it harder to interpret and explain decision-making processes

  • There are greater social and regulatory concerns about bias with AI-based models

  • Regulators expect the same level of oversight, understanding and control as with simpler models

Key compliance requirements

To satisfy regulatory requirements, organisations need to:

  • Demonstrate a thorough understanding of their AI models

  • Establish robust governance frameworks

  • Retain complete oversight of AI-driven decisions

  • Provide clear explanations and justifications of model outcomes to both regulators and stakeholders

In essence, organisations must show they grasp how their AI models function, have solid governance in place, and maintain full control over the results generated by AI. Here's how they balance these demands with the push for innovation.

 

Steps for building compliant AI risk models

It’s one challenge to create AI models. It’s another to ensure compliance. That’s why implementing AI in risk modelling requires a comprehensive approach. Here are some of the key steps organisations should take:

Step 1: Maintain in-house expertise

Organisations cannot delegate responsibility for AI models. Instead, they must:

  • Develop core expertise in AI model building, implementation, and monitoring

  • Understand models' strengths and weaknesses within their operational environment

  • Be prepared for additional resource overhead when adopting AI-based approaches

Step 2: Create cross-disciplinary collaboration

  • Ensure business and regulatory experts work closely with technical experts

  • Create common management structures with shared objectives and responsibilities

  • Where possible, employ cross-disciplinary experts with both business and technical expertise

Step 3: Develop AI-specific standards

  • Create codes of practice detailing legal and ethical requirements for AI solutions

  • Establish rules and constraints that must be adhered to

  • These may be similar to, or modified versions of, existing model development standards

Step 4: Integrate AI into risk frameworks

  • Include model risk alongside other types of risk in risk appetite statements

Step 5: Conduct independent model validation

  • Ensure models are independently reviewed by experienced developers

  • Recognise that regulators and auditors require evidence of independent validation before models are deemed fit for purpose (a sketch of typical validation checks follows this list)

  • Address the challenge of finding validators with the necessary blend of AI skills, industry knowledge, and regulatory expertise
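As a flavour of what independent validation evidence can look like, here is a minimal sketch in Python. The use of scikit-learn, the metric choices (AUC/Gini plus a Population Stability Index), and all the variable names are illustrative assumptions; the post does not prescribe specific tooling or thresholds.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between development-time and recent scores."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf          # catch out-of-range scores
    e = np.histogram(expected, cuts)[0] / len(expected)
    a = np.histogram(actual, cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

def validation_report(dev_scores, recent_scores, recent_labels) -> dict:
    """Headline discrimination and stability metrics on a holdout sample.

    Argument names are illustrative: 'recent_labels' are observed
    0/1 outcomes for the recent sample.
    """
    auc = roc_auc_score(recent_labels, recent_scores)
    return {
        "auc": auc,
        "gini": 2 * auc - 1,                     # Gini coefficient = 2*AUC - 1
        "psi": psi(np.asarray(dev_scores), np.asarray(recent_scores)),
    }
```

In practice an independent validator would go much further (sensitivity analysis, benchmarking against challenger models, documentation review), but discrimination and stability checks of this kind are a common starting point for the evidence regulators ask to see.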

On top of this, we recommend implementing these best practices:

💡Ensure good practice in variable selection: Use only high-quality data with clear provenance, stability, and guaranteed future availability, and include only data that is fully understood.

💡Apply robust variable reduction: Reduce data inputs to improve model explainability and remove highly correlated variables to avoid ambiguities.

💡Include business-sensible data relationships: Prioritise data items that display sensible relationships at a univariate level.

💡Implement appropriate model interrogation methods: Use tools that can explain model outputs at both portfolio and individual case levels (a short sketch covering this and the variable-reduction step follows).
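To make the variable-reduction and interrogation points concrete, here is a minimal sketch in Python. The synthetic data, the 0.85 correlation threshold, and the use of pandas and scikit-learn are illustrative assumptions rather than recommendations from this post.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

def drop_correlated(df: pd.DataFrame, threshold: float = 0.85) -> pd.DataFrame:
    """Drop one variable from each highly correlated pair to avoid ambiguity."""
    corr = df.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)

# Synthetic data standing in for a real credit portfolio.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(40_000, 10_000, 5_000),
    "debt": rng.normal(10_000, 4_000, 5_000),
})
X["income_monthly"] = X["income"] / 12 + rng.normal(0, 50, 5_000)  # near-duplicate
y = (X["debt"] / X["income"] + rng.normal(0, 0.1, 5_000) > 0.3).astype(int)

X_reduced = drop_correlated(X)             # 'income_monthly' should be removed
X_tr, X_te, y_tr, y_te = train_test_split(X_reduced, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# Portfolio-level interrogation: which inputs drive predictions overall?
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, mean in zip(X_reduced.columns, imp.importances_mean):
    print(f"{name}: {mean:.3f}")
```

Portfolio-level importances answer the "what drives the model overall" question; for individual-case explanations, attribution tools such as SHAP can decompose a single applicant's score into per-variable contributions that a reviewer or customer can sanity-check.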

We’ve covered some best practices; now let’s delve a little deeper into ethics.

 

Spotlight: Tackling potential bias

The key to mitigating bias is to design the model correctly from the outset, rather than trying to review or adjust it post-creation. That said, it's essential to check for bias after the model is built and correct it if necessary.

Of course, it is almost impossible to exclude all bias, given the inherent bias found in society. Therefore, suitable outcome analysis should be performed to ensure that any residual bias is not unreasonable or unfair. In particular, this means ensuring that model outputs are accurate for given groups, even if that means some groups are treated differently.
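One simple form of outcome analysis is to compare predicted and observed rates per group. Here is a minimal sketch in Python/pandas; the column names ('outcome', 'predicted') and the grouping column are illustrative assumptions, not from the original post.

```python
import pandas as pd

def group_outcome_check(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compare predicted vs observed outcome rates for each group.

    Expects an 'outcome' column of observed 0/1 results and a
    'predicted' column of model probabilities (illustrative names).
    """
    summary = df.groupby(group_col).agg(
        n=("outcome", "size"),
        observed_rate=("outcome", "mean"),
        predicted_rate=("predicted", "mean"),
    )
    summary["calibration_gap"] = summary["predicted_rate"] - summary["observed_rate"]
    return summary
```

A large calibration gap for one group indicates the model is systematically over- or under-predicting for that group, which is exactly the kind of unfair inaccuracy this check is meant to surface.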

For example: The gender pay gap is a feature of society that should not exist, but unfortunately does. All other things being equal, men are, on average, granted more credit than women because they earn more. The best solution is to address the pay gap directly. However, while progress is gradually being made, it will be many years before the gap is fully closed. Financial institutions therefore need to find a way to manage this problem while treating everyone fairly.

So, in this example, one argument is that "treating fairly" means ensuring that on average, men and women with the same salary receive the same amount of credit—but there are several other approaches that could also be adopted.
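Under that interpretation of "treating fairly", conditional parity can be checked directly by comparing the average credit granted to men and women within matched salary bands. The sketch below assumes pandas and illustrative column names ('salary', 'gender', 'credit_limit').

```python
import pandas as pd

def conditional_credit_parity(df: pd.DataFrame) -> pd.DataFrame:
    """Average credit granted by gender within salary quintiles.

    Column names are illustrative, not taken from the original post.
    """
    banded = df.assign(salary_band=pd.qcut(df["salary"], q=5, duplicates="drop"))
    return banded.pivot_table(
        index="salary_band", columns="gender",
        values="credit_limit", aggfunc="mean", observed=True,
    )  # near-equal values across each row indicate parity at matched salary
```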

It is also very important that the data samples used to build AI solutions are as representative as possible: if certain groups are underrepresented in those samples, the resulting models will be less accurate for those groups. Where under-representation exists, the data sample can be adjusted (weighted) to provide more equal representation.
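Re-weighting can be as simple as inverse-frequency weights per group, passed to the training routine. The sketch below assumes pandas and a scikit-learn-style fit(X, y, sample_weight=...) interface; neither is specified in the original post.

```python
import pandas as pd

def representation_weights(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Inverse-frequency sample weights so each group contributes equally."""
    freq = df[group_col].value_counts(normalize=True)
    weights = 1.0 / df[group_col].map(freq)
    return weights / weights.mean()  # normalise so the average weight is 1.0

# Most scikit-learn estimators accept these via fit(X, y, sample_weight=weights).
```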

 

The future of AI in risk modelling 

At present, most large language models (like ChatGPT) lack the capabilities required for specialised tasks such as building or validating risk models. However, these technologies are advancing quickly. It’s entirely possible that in the coming years, specialised versions could be developed, capable of handling key aspects of both model creation and validation.

Challenges with generative AI and LLMs

A further critical challenge lies in the data and modelling approaches used for generative AI and large language model (LLM) applications. Neither the data nor the way it is modelled is typically accessible to end users, creating significant hurdles:

  1. Regulatory compliance: Firms struggle to meet their regulatory obligations for model risk, particularly when assessing the appropriateness, completeness, and quality of the underlying data.

  2. Bias identification: The lack of transparency makes it nearly impossible to identify the root causes of bias in model outcomes.

  3. Operational risk: Any organisation that embeds externally supplied models within business-critical systems needs an extremely high level of assurance over the ongoing performance, availability and cost of those models.

These issues underscore the importance of careful consideration and regulatory guidance as AI technologies continue to advance in credit risk.

 

Key takeaways: Balancing ethics, compliance and innovation in AI risk modelling

As AI continues to transform risk assessment in financial services, organisations must strike a delicate balance between innovation and compliance. The journey towards AI-driven risk modelling is complex, but with the right approach, it can yield significant benefits.

Key takeaways:

  • Regulatory alignment: UK regulators expect AI-based models to meet the same stringent standards as traditional approaches.

  • In-house expertise: Organisations cannot outsource responsibility for AI models. Developing and maintaining internal expertise and model understanding is crucial for compliance and effective model management.

  • Cross-disciplinary collaboration: Successful AI implementation requires seamless cooperation between business, regulatory, and technical experts.

  • Ethical considerations: Addressing bias in AI models requires proactive design, careful data selection, and continuous monitoring.

  • Transparency and explainability: Despite their complexity, AI models must be interpretable and their decisions explainable to both regulators and stakeholders.

  • Future challenges: As AI technologies like large language models evolve, new challenges in data transparency and regulatory compliance will emerge.

  • Continuous adaptation: The rapidly evolving nature of AI technology and the regulatory landscape demands that organisations remain agile and continuously refine their approaches.

As we look to the future, the key to success will be remaining vigilant, adaptive, and committed to responsible AI implementation.

