Artificial intelligence (AI) and machine learning (ML) promise a smarter, more automated future for everyone. But the algorithms that underpin these technologies are at risk of bias, a substantial threat that could undermine their entire purpose.
What exactly is AI bias, why is it important for financial services (FS) firms to avoid it and how can you detect and treat bias in the algorithms your organization uses?
What is AI bias, and why is it important?
While ML and AI are technologies often dissociated from human thinking, they are always based on algorithms created by humans. And like anything created by humans, these algorithms are prone to incorporating the biases of their creators.
Because AI algorithms learn from data, any historical partiality in your organization’s data can quickly create biased AI that bases decisions on unfair datasets.
These biases can take a number of forms. In this case, we are talking specifically about the unfair treatment of individuals who are part of a protected group, such as a particular race or gender.
For example, AI has been widely used to grade standardized tests in the US, and recent studies suggest it can yield unfavorable results for certain demographic groups. It also plays a deciding role in hiring decisions, with up to 72 percent of resumes in the US never being viewed by a human. And, famously, Google’s photo recognition AI led to Black people being misidentified as primates.
Non-discrimination is an important goal for any algorithm, but the need for fair, bias-free AI and machine learning goes further for businesses. Biased algorithms can result in AI that makes costly mistakes, reduces customer satisfaction and ultimately damages a brand’s reputation.
Fairness is just one of four key pillars that support ethical AI. In this article, we’ll examine AI fairness, why it’s important in FS and how organizations can build fairness into their algorithms.
Fairness: why it’s essential for financial services
FS firms are using AI for a wide range of operations and customer journeys. It is already being used to decide mortgage, savings and student loan rates; the outcomes of credit card and loan applications; and insurance policy terms. It also shapes other outcomes, such as credit card fraud prediction, and powers virtual assistants that help customers improve their financial health.
This is just the start.
If the algorithms used in these financial decisions are subject to bias, they could negatively impact the way millions of consumers and businesses borrow, save and manage their money.
It’s important to remember that this isn’t just a hypothetical risk. As recently as 2017, data from the Home Mortgage Disclosure Act showed that:
- 10.1 percent of Asian applicants were denied a conventional loan. By comparison, just 7.9 percent of white applicants were denied.
- 19.3 percent of Black borrowers and 13.5 percent of Hispanic borrowers were turned down for a conventional loan.
While it’s important not to draw conclusions based solely on these figures, it’s clear that the loan denial rates for some ethnic groups are far higher than the average denial rate of 9.6 percent.
Crucially, these figures represent decisions made without the use of AI. However, if AI were introduced into these financial institutions and learned from these mortgage application decisions, it would be likely to unintentionally replicate the same disparities.
Whether or not bias was at play here, it’s important for financial institutions to ensure any AI underpinning crucial loan decisions such as these is fair and free from prejudice.
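As a rough illustration, the disparity in the figures above can be quantified as a simple ratio of each group’s denial rate to the overall rate. This sketch uses the 2017 HMDA numbers quoted above; the variable names are our own:

```python
# 2017 HMDA denial rates for conventional loans, as quoted above
denial_rates = {
    "white": 0.079,
    "asian": 0.101,
    "black": 0.193,
    "hispanic": 0.135,
}
overall_rate = 0.096  # overall denial rate cited above

# A simple disparity check: each group's denial rate relative to the overall
# rate. Ratios well above 1.0 flag groups that warrant closer investigation.
for group, rate in denial_rates.items():
    print(f"{group}: denied {rate / overall_rate:.2f}x as often as average")
```

As noted above, such ratios alone do not prove discrimination, but they are a cheap first screen before deeper analysis.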
How do you ensure fairness in financial services AI?
Bias can creep into any AI algorithm. The best way to avoid this is to proactively look for and identify bias in your AI, eradicate it, and then alter your approach to ensure future algorithms are fairer.
Posing the following questions can help you check for systematic bias in your data:
- Are any particular groups suffering from systematic data error or ignorance?
- Have you intentionally or unintentionally ignored any group?
- Are all groups represented proportionally? For example, for the protected feature of race, are all races represented, or merely one or two?
- Do you have enough features to explain minority groups?
- Are you sure you aren’t using or creating tainted features (for example, proxies for protected attributes)?
- Have you checked for features that encode stereotypes?
- Are your models apt for the underlying use case?
- Is your model accuracy similar for all groups?
- Are you sure that your predictions are not skewed towards certain groups?
- Are you optimizing all required metrics and not just those that suit the business?
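Several of these questions can be checked programmatically. Below is a minimal, library-free sketch; the records and field names (`race`, `income`) are invented for illustration:

```python
from collections import Counter

# Hypothetical application records; field names and values are illustrative
records = [
    {"race": "A", "income": 50}, {"race": "A", "income": 60},
    {"race": "B", "income": None}, {"race": "B", "income": 45},
    {"race": "B", "income": 70}, {"race": "B", "income": None},
    {"race": "C", "income": 55}, {"race": "A", "income": 65},
    {"race": "B", "income": 40}, {"race": "A", "income": 52},
]

# 1. Is every group represented, and in what proportion?
counts = Counter(r["race"] for r in records)
total = sum(counts.values())
representation = {g: n / total for g, n in counts.items()}

# 2. Does any group suffer disproportionately from missing data
#    ("systematic ignorance" of a group)?
missing = {
    g: sum(1 for r in records if r["race"] == g and r["income"] is None) / n
    for g, n in counts.items()
}

print(representation)  # group B is half the sample; group C is only 10%
print(missing)         # group B is missing income for 40% of its records
```

Checks like these belong in your data validation pipeline, so under-representation or group-specific data gaps are caught before any model is trained.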
Five steps to detect bias
By carefully examining how different demographic groups are assigned to protected or sensitive classes, and ensuring these groups have equal predictive values and equality across false positive and false negative rates, you can better detect bias in your AI. These five steps can help:
- Ensure all data groups have an equal probability of being assigned to the favorable outcome for a protected/sensitive class.
- Ensure all groups of a protected/sensitive class have equal positive predictive value.
- Ensure all groups of a protected/sensitive class have predictive equality for false positive and false negative rates.
- Maintain an equalized odds ratio, opportunity ratio and treatment equality.
- Minimize the average odds difference and error rate difference.
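The five checks above all reduce to a handful of per-group rates. The sketch below is a minimal, library-free illustration; the labels, predictions and group names are invented:

```python
def group_metrics(y_true, y_pred, groups):
    """Per-group rates behind the five checks above (toy, library-free)."""
    out = {}
    for g in set(groups):
        yt = [t for t, gr in zip(y_true, groups) if gr == g]
        yp = [p for p, gr in zip(y_pred, groups) if gr == g]
        tp = sum(1 for t, p in zip(yt, yp) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(yt, yp) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(yt, yp) if t == 1 and p == 0)
        tn = sum(1 for t, p in zip(yt, yp) if t == 0 and p == 0)
        out[g] = {
            "favorable_rate": (tp + fp) / len(yp),       # step 1: chance of favorable outcome
            "ppv": tp / (tp + fp) if tp + fp else None,  # step 2: positive predictive value
            "fpr": fp / (fp + tn) if fp + tn else None,  # step 3: false positive rate
            "fnr": fn / (fn + tp) if fn + tp else None,  # step 3: false negative rate
        }
    return out

def average_odds_difference(metrics, g1, g2):
    """Step 5: mean of the FPR gap and TPR gap between two groups (0 = parity)."""
    tpr1, tpr2 = 1 - metrics[g1]["fnr"], 1 - metrics[g2]["fnr"]
    fpr_gap = metrics[g1]["fpr"] - metrics[g2]["fpr"]
    return (fpr_gap + (tpr1 - tpr2)) / 2

# Invented toy data: true labels, model predictions and a protected attribute
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 1, 0, 0, 0]
groups = ["m", "m", "m", "m", "f", "f", "f", "f"]

metrics = group_metrics(y_true, y_pred, groups)
print(metrics["m"]["favorable_rate"], metrics["f"]["favorable_rate"])  # 0.75 vs 0.25: unequal
print(average_odds_difference(metrics, "m", "f"))                      # 0.5: far from parity
```

Libraries such as Fairlearn and AIF360 implement these metrics (and many more) in production-ready form; the point here is only to show what each step measures.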
Publicis Sapient partners with a wide variety of clients to ensure proper bias detection in their AI. For more information on the five steps outlined above, please get in touch.
Eradicating bias before, during and after modeling
Monitoring for bias before, during and after modeling can increase the fairness of your algorithms.
- Before modelling: Ensure fairness in input data and equal proportional representation across all groups.
- During modeling: Ensure model performance is fair across all groups for one or more protected or sensitive features. For instance, a model’s AUC score, precision or true positive rate should be similar for male and female groups, not higher for one and lower for the other.
- After modeling: Ensure predictions and model outputs maintain equal false positive and false negative rates (or any other model performance metrics) across all groups for one or more protected/sensitive features.
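A "during modeling" check like the one described above can be wired in as a simple gate in your training pipeline. This is a hedged sketch; the function names, tolerance and toy data are our own, and it assumes every group has at least one positive example:

```python
def true_positive_rate(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    positives = sum(y_true)
    return tp / positives if positives else None

def fairness_gate(y_true, y_pred, groups, tolerance=0.05):
    """Fail the model if per-group true positive rates diverge by more than
    `tolerance`. Assumes each group has at least one positive example."""
    rates = {}
    for g in set(groups):
        yt = [t for t, gr in zip(y_true, groups) if gr == g]
        yp = [p for p, gr in zip(y_pred, groups) if gr == g]
        rates[g] = true_positive_rate(yt, yp)
    gap = max(rates.values()) - min(rates.values())
    return gap <= tolerance, rates

# Invented toy data: the model recovers all of group m's positives but only
# half of group f's, so the gate fails.
passed, rates = fairness_gate(
    y_true=[1, 1, 0, 1, 1, 0],
    y_pred=[1, 1, 0, 1, 0, 0],
    groups=["m", "m", "m", "f", "f", "f"],
)
print(passed, rates)  # False, with a 0.5 TPR gap between the groups
```

The same pattern works for any per-group metric (AUC, precision and so on): compute it per group, then gate on the largest gap.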
Once you’ve detected bias in any and all stages of the data science lifecycle, it’s worth looking at how to change your approach to algorithms to help ensure fairness in your overall modeling approach.
You can do this in five ways:
- Use statistical calibration: Leverage various statistical techniques to resample or reweigh data to reduce bias
- Use a regularizer: Add a fairness regularizer (a mathematical constraint to ensure fairness in the model) to existing ML algorithms
- Use surrogate models: Wrap a fair algorithm around baseline ML algorithms already in use
- Use fair machine learning models: Adopt completely new ML algorithms that ensure fair outcomes
- Calibrate the threshold: Calibrate the prediction probability threshold to maintain fair outcomes for all groups with protected and sensitive features
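To make the last approach concrete, here is a toy sketch of per-group threshold calibration. The scores, groups and target rate are invented, and production methods (such as equalized-odds post-processing) are considerably more involved:

```python
def calibrate_thresholds(scores, groups, target_rate):
    """For each group, pick the score cutoff whose favorable-outcome rate
    comes closest to target_rate, so approval rates match across groups."""
    thresholds = {}
    for g in set(groups):
        group_scores = sorted(s for s, gr in zip(scores, groups) if gr == g)
        best_cut, best_gap = None, None
        # Candidate cutoffs: each observed score, plus one above the maximum
        for cut in group_scores + [group_scores[-1] + 0.01]:
            rate = sum(1 for s in group_scores if s >= cut) / len(group_scores)
            gap = abs(rate - target_rate)
            if best_gap is None or gap < best_gap:
                best_cut, best_gap = cut, gap
        thresholds[g] = best_cut
    return thresholds

# Group B's scores run lower overall, so it gets a lower cutoff to reach the
# same 50 percent approval rate as group A.
scores = [0.9, 0.8, 0.3, 0.2, 0.6, 0.5, 0.4, 0.1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
thresholds = calibrate_thresholds(scores, groups, target_rate=0.5)
print(thresholds)
```

Whether equalizing approval rates is the right fairness criterion is a policy decision; the code only shows the mechanics of applying different cutoffs per group.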
Advancing ethics along with AI
Financial services firms have a responsibility to do right by their customers—regardless of who they are. That means, as more firms use AI to improve their services, they will also need to ensure their algorithms are fair, ethical and transparent.
In this blog, we’ve covered ways to ensure AI fairness at a very high level, but there’s much more that goes into fully removing bias from algorithms.
Please do leave your thoughts on this issue in the comments section.
If you found this post interesting, it would be great if you hit the ‘like’ button, or feel free to share with your colleagues.
By Sray Agarwal, AI and ML specialist at Publicis Sapient, co-authored by Rashed Haq and Rodney Coutinho. Sray is spearheading ML and AI initiatives for various clients and is an evangelist for Safe AI.