Five Predictions for the Use of AI in Fintech

Adam Lieberman

Chief AI Officer, Finastra

In the year ahead, organisations across financial services will be turning to the technologies that can deliver the most value in a short amount of time. Inevitably, AI and machine learning, and a focus on harnessing data, will be key to bolstering business strategies and enabling new areas of growth. There are, however, immediate challenges that must be overcome following the disruption brought about by the coronavirus crisis, as well as key problems data scientists must seek to address to ensure AI continues to deliver on its promise. Here are my five predictions for the year ahead:

Retraining models for the post-pandemic world: The pandemic has had a catastrophic effect on many businesses and individuals. Now, more than ever, access to finance is vital for so many. The viability of credit firms depends on the ability to lend. Many today employ models that automate decisions, but no financial model could have predicted the black swan event brought about by the Covid-19 pandemic. Existing models will require retraining and updating to reflect the current economic context, thereby ensuring sustainable lending.
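As a toy sketch of what that retraining might look like (the data, feature meanings and reweighting scheme are all hypothetical; assumes scikit-learn), a lender could refit an existing credit model on a blend that emphasises post-pandemic observations:

```python
# Hypothetical sketch: refreshing a credit-decision model after a regime shift.
# Assumes scikit-learn; the data and feature meanings are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Pre-pandemic training data (stand-in for a real loan book).
X_old = rng.normal(size=(1000, 3))            # e.g. income, utilisation, tenure
y_old = (X_old[:, 0] + X_old[:, 1] > 0).astype(int)

# Post-pandemic observations, where the underlying relationship has shifted.
X_new = rng.normal(size=(300, 3))
y_new = (X_new[:, 0] - X_new[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_old, y_old)
stale_acc = model.score(X_new, y_new)         # accuracy in the new regime

# Retrain on a blend that weights the recent window more heavily.
X_blend = np.vstack([X_old, X_new, X_new, X_new])
y_blend = np.concatenate([y_old, y_new, y_new, y_new])
model.fit(X_blend, y_blend)
fresh_acc = model.score(X_new, y_new)
```

The stale model, trained entirely on the old regime, scores poorly on the new one; refitting on data that reflects the current context recovers much of the lost accuracy.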

Increased usage of synthetic data: Financial services firms have one eternal problem when it comes to data: there’s never enough. AI and machine learning models depend on large amounts of good quality data, but so much financial data is not readily usable, owing to data sensitivities and regulations. The solution that I am seeing take off at the moment is the use of synthetic data. Third-party organisations that specialise in creating synthetic data from real datasets are the beginning of a new data boom that will positively affect the maturity of models across the financial services ecosystem, improving data-driven decision-making and accelerating innovation.
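One simple way to see the idea (a toy sketch, not a production synthesiser, and with hypothetical column meanings; assumes scikit-learn) is to fit a generative model to a sensitive dataset and then sample new, artificial rows from it:

```python
# Toy illustration of synthetic data: fit a density model to real records,
# then sample artificial rows that mimic their statistics. Column meanings
# are hypothetical; real synthesisers also address privacy guarantees.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for a real transactions table: (amount, balance).
real = np.column_stack([
    rng.lognormal(mean=3.0, sigma=0.5, size=2000),   # transaction amount
    rng.normal(loc=5000, scale=1500, size=2000),     # account balance
])

gm = GaussianMixture(n_components=5, random_state=0).fit(real)
synthetic, _ = gm.sample(2000)   # new rows, not copies of any real record

# The synthetic table should track the real one's broad statistics.
print(real.mean(axis=0))
print(synthetic.mean(axis=0))
```

Specialist vendors go much further, handling mixed data types, correlations across tables and formal privacy guarantees, but the core trade is the same: statistical fidelity without exposing real customer records.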

Focus on eliminating bias in AI: Combating inherent bias in AI and machine learning algorithms is becoming a key focus within the data science community. Although models may have been built with the best intentions, there is rightly increasing scrutiny on the ways in which they discriminate against people of different backgrounds. For example, might a model trained entirely on data from one particular gender or ethnicity behave in a biased way when queried with data from other genders or ethnicities? The answer is likely yes.

As a focus on fairness in AI decision-making becomes more prevalent, the need for greater transparency, and best practices around fairness, will increase. “Black box” models – those built without insight into the teams, data and methodologies used to build them – will become pariahs, with businesses shunning them in favour of more inclusive models.  
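A minimal fairness check along these lines (an illustrative sketch; the group labels and decision rule are synthetic stand-ins) is to compare a model's approval rates across demographic groups, a quantity often called the demographic parity difference:

```python
# Illustrative demographic-parity check: compare approval rates per group.
# The group labels and decisions here are synthetic stand-ins for the
# protected attribute and a model's output on a portfolio of applicants.
import numpy as np

rng = np.random.default_rng(7)

groups = rng.choice(["A", "B"], size=10000)
# A deliberately biased decision rule: group B is approved less often.
approved = np.where(groups == "A",
                    rng.random(10000) < 0.70,
                    rng.random(10000) < 0.50)

rates = {g: approved[groups == g].mean() for g in ("A", "B")}
parity_gap = abs(rates["A"] - rates["B"])   # demographic parity difference

print(rates, parity_gap)
```

A gap this large would flag the model for review; transparency about the data and methodology behind it is what makes such an audit possible in the first place.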

Rise of the application: Not all machine learning engineers are software engineers. They don’t necessarily know JavaScript, HTML and CSS, which makes it difficult for them to build applications around their models without the help of a larger team of engineers and UI/UX designers. Platforms that facilitate the application development process in conjunction with the machine learning models are beginning to emerge. We are seeing a shift in thinking about how to take machine learning models to production and package them in applications, giving end users the best and most fluid experience possible.
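To make the idea concrete (a stdlib-only sketch; the scoring function and field names are hypothetical), packaging a model for end users typically starts with a thin JSON-in/JSON-out layer that a web front end can call:

```python
# Minimal sketch of wrapping a model behind a JSON request/response layer,
# the kind of glue an application platform generates for ML engineers.
# The "model" here is a trivial stand-in scoring function.
import json

def score(features: dict) -> float:
    """Hypothetical stand-in for a trained model's predict()."""
    return 0.5 * features["income"] / 1000 - 0.1 * features["open_loans"]

def handle_request(body: str) -> str:
    """JSON-in/JSON-out endpoint a UI could call over HTTP."""
    features = json.loads(body)
    decision = {"score": score(features), "approved": score(features) > 0}
    return json.dumps(decision)

response = handle_request('{"income": 4000, "open_loans": 3}')
print(response)
```

The emerging platforms automate exactly this kind of plumbing, plus the front end on top of it, so that the model builder never has to leave their own toolchain.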

Importance of GPT-3 in quest for General AI: Any discussion of AI moving forward has to include GPT-3. The model gives us a glimpse into the possibility of General AI, but perhaps the most obvious use for it is the automation of low-level software development tasks. This does not mean that jobs will become redundant, but rather that we will be able to expedite low-level tasks. Spinning up a rudimentary webpage in a matter of seconds via a text request, for example, won’t eliminate the need for web developers, but it will allow them to begin their development journey at a more advanced stage.