Long reads

Generative AI: Why data protection needs to be emphasised

Pilar Arzuaga

Senior Associate, MWE

In recent years, the financial services sector has witnessed a significant technological revolution. Both fintechs and traditional financial institutions are shifting their focus from using artificial intelligence (AI) primarily for cost reduction to leveraging its capabilities for revenue generation. However, they adopt distinct AI strategies to accomplish this objective. Traditional financial institutions predominantly utilise AI to improve their existing products and services, while fintechs often employ it to develop innovative value propositions.

One of the most promising advancements in this field is generative AI, such as OpenAI’s ChatGPT, which uses deep learning algorithms to generate new data samples by recognising patterns within existing data. The technology has shown tremendous potential to revolutionise various aspects of finance, including risk management, fraud detection, trading strategies, and customer experience.

However, as financial institutions harness the power of generative AI, it is vital to emphasise data protection considerations. In this article, we delve into the world of generative AI and explore how it is reshaping the financial landscape while safeguarding sensitive data.

What does generative AI mean for the financial sector?

The advancements of generative AI can bring about positive outcomes for consumers, firms, financial markets, and the overall economy. However, the adoption of this technology also introduces new challenges and magnifies existing risks. Consequently, there is an ongoing debate on how to regulate it to ensure it serves the best interests of all stakeholders.

The following paragraphs address the data protection implications of the use of generative AI in the financial sector and examine how existing legal requirements and guidance apply to this technology. They also highlight the need for financial institutions to manage and mitigate the risks and harms associated with the use of the technology, safeguarding consumers, firms, and, more broadly, the stability of the financial system.

Benefits and risks

The potential benefits and risks of generative AI in financial services can occur at different levels within the system (data, models, and governance). These drivers of risk can lead to various outcomes depending on how generative AI is used.

Mitigating risk drivers can unlock the benefits of generative AI, such as accurate outputs. However, the benefits and risks of generative AI depend on the specific context and purpose of its use in areas like consumer protection, safety and soundness, and financial stability.

Consumer protection and data protection

Consumer protection is a significant concern when it comes to the use of generative AI in financial services, according to the Financial Conduct Authority (FCA). Data is a crucial element of generative AI: from sourcing and creating datasets for training, testing, and validation, through to the continuous analysis of data once a system is deployed, safe and responsible generative AI adoption in UK financial services is underpinned by high-quality data. This also means that data security and related issues, such as data protection and individuals’ privacy, are ever more important to ensuring safe and responsible generative AI adoption.

Generative AI enhances customer experiences by analysing vast amounts of data to deliver personalised product recommendations, targeted marketing campaigns, and customised financial advice instantly, for example live chatbots generating instant, accurate responses to customer inquiries. It can benefit consumers by catering to their specific needs, delivering faster resolution times, and identifying vulnerable demographics.

However, there is a risk of exploiting consumer biases and vulnerabilities, such as misuse of customers’ personal data, if generative AI is not used responsibly.

The recent interim decision of the Italian data protection authority, the Garante, explored several GDPR compliance concerns around the use of generative AI such as OpenAI’s ChatGPT, assessing transparency, whether users and data subjects are provided with the required information, and accuracy.

Where financial institutions use generative AI to process personal data, they have obligations under UK data protection law to handle the data with care and in accordance with the Information Commissioner's Office’s (ICO) guidance.

Certain practices may also breach the FCA Principles or the FCA Consumer Duty, for instance where a firm fails to present how it will use customer data in a way that is clear, fair, and not misleading, or uses that data in ways to which customers have not consented and which are potentially to their detriment.

In summary, while generative AI helps financial institutions to detect and prevent fraud, its impact on consumer protection depends on how it is used and for what purpose. Data protection must remain a priority with robust data security measures, encryption techniques, and anonymisation practices to protect sensitive financial information from unauthorised access. Implementing appropriate access controls and monitoring systems helps mitigate data breaches and maintain the integrity of customer data.
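The anonymisation practices mentioned above can be illustrated with a minimal sketch. This is a hypothetical example, not a recommendation of any particular scheme: it shows keyed pseudonymisation of customer identifiers before records reach a generative model, with the field names and key handling invented for illustration.

```python
import hmac
import hashlib

# Hypothetical illustration: keyed pseudonymisation of customer identifiers
# before records are used to train or prompt a generative model. In practice
# the secret key would be held separately (e.g. in a secure key store), so
# pseudonyms cannot be reversed without it. Field names are invented.

SECRET_KEY = b"replace-with-key-from-secure-store"

def pseudonymise(value: str) -> str:
    """Deterministic keyed hash: the same input yields the same pseudonym,
    so records stay linkable for analysis without exposing the raw value."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"customer_id": "C-102938", "name": "Jane Doe", "balance": 1250.50}
safe_record = {
    "customer_id": pseudonymise(record["customer_id"]),
    "name": pseudonymise(record["name"]),   # direct identifiers are tokenised
    "balance": record["balance"],           # non-identifying fields pass through
}
print(safe_record)
```

Because the hash is keyed, linkage across datasets remains possible for the key holder, while the published record no longer contains the raw identifiers.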

What is next?

While some major financial institutions swiftly imposed limitations on the use of OpenAI's ChatGPT by their employees in February 2023, others are eagerly embracing the potential of this technology. For example, certain organisations are leveraging OpenAI-powered chatbots to assist financial advisors, utilising internal research and data repositories. Discussions are underway for enterprise-wide licenses of ChatGPT, with potential applications including software development and information analysis.

Financial institutions must employ privacy-preserving techniques, such as differential privacy, when creating realistic and sophisticated synthetic financial data, alongside robust encryption to secure data during storage, transmission, and processing.
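Differential privacy, one of the privacy-preserving techniques referred to above, can be sketched with the classic Laplace mechanism. The numbers and query below are purely illustrative: a noisy count is released so that the presence or absence of any single customer does not materially change the published figure.

```python
import random
import math

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(true_count: int, epsilon: float) -> float:
    """A counting query has sensitivity 1 (adding or removing one customer
    changes the count by at most 1), so noise of scale 1/epsilon gives
    epsilon-differential privacy for the released figure."""
    return true_count + laplace_noise(scale=1.0 / epsilon)

# Illustrative release: a noisy count of customers flagged for review.
noisy = dp_count(true_count=412, epsilon=0.5)
print(round(noisy))
```

Smaller values of epsilon add more noise and therefore stronger privacy, at the cost of accuracy; choosing that trade-off is a policy decision as much as a technical one.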

Generative AI holds immense potential for the financial services industry, enabling more accurate credit assessments, personalised customer experiences, advanced fraud detection, and improved investment management.

While embracing these technologies, addressing their challenges, and investing in research and talent, financial institutions should ensure robust data management systems and privacy-by-design principles are in place to help mitigate the inherent risks of the technology, maintain customer trust and regulatory compliance, and stay competitive within the sector.
