
Large Language Models are not a solution for precise data extraction in banking.

In recent times, large language models (LLMs) have revolutionized the field of natural language processing, demonstrating impressive capabilities in understanding and generating human-like text. However, when it comes to sensitive and complex operations like those within the banking sector, there are valid concerns about relying on LLMs for the extraction of exact data from documents. While LLMs have their merits, the intricacies of banking operations demand a level of accuracy and precision that these models can struggle to provide consistently. This has given rise to concerns about their reliability and accuracy, one particular phenomenon being AI hallucinations, especially in the context of data extraction. In this article I have tried to explain these challenges, while remaining open to being challenged and proven wrong. However, until LLMs are tested and shown to be accurate beyond doubt at extracting exact data, the burden of proof rests on those who advocate their use at scale.


To be able to appreciate this concern, one must keep in mind that LLMs work on the principle of generating the next string of text from a model that has learnt the language and the logic of composing answers when prompted to do so. That is not equivalent to extracting the exact data. Lack of precision is by design in large language models: they work by predicting the most probable next word in a sequence based on patterns learned from training data. In the context of banking, where accuracy is paramount, even a minor deviation from the exact data could lead to substantial financial and legal consequences. LLMs' inherent tendency to prioritize fluency and coherence over exactness could lead to incorrect data extraction, causing severe operational errors.
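To make this distinction concrete, here is a minimal, illustrative Python sketch (the document text, candidate values, and probabilities are all invented for illustration) contrasting deterministic extraction, which is grounded in the document, with probabilistic generation, which merely emits a likely-looking string:

```python
import random
import re

# Deterministic extraction: the value either exists in the document or it doesn't.
def extract_price(document):
    match = re.search(r"Price:\s*([\d.]+)", document)
    return match.group(1) if match else None

# Toy "generative" step: sample a likely continuation from learned statistics.
# Nothing ties the sampled value back to the actual document.
def generate_price(learned_distribution):
    tokens, weights = zip(*learned_distribution.items())
    return random.choices(tokens, weights=weights, k=1)[0]

doc = "Trade confirmation. Price: 101.25 per unit."
print(extract_price(doc))  # 101.25 -- grounded in the document
# May or may not match the document; fluent, but not guaranteed exact:
print(generate_price({"101.25": 0.6, "101.52": 0.3, "110.25": 0.1}))
```

The first function can only return a string that is literally present in the document; the second can confidently emit "101.52" even though the note says "101.25", which is precisely the risk discussed above.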

Metaphorically - Using Generative AI for precise data extraction is like sending a creative artist to paint a meticulously detailed map. While the artist might produce a masterpiece full of imagination and flair, relying on them for accurate cartography could lead to distorted landscapes and missing landmarks. Similarly, generative AI, with its creative prowess, might generate text that is eloquent and captivating, but when tasked with extracting exact data, its tendency to extrapolate and interpret could result in inaccuracies and misrepresentations. Just as an artist might struggle with the precision of cartography, AI's penchant for creativity can pose a risk when applied to tasks demanding factual and precise information.


It is useful to remind ourselves that, when dealing with logic, there are three distinct notions: "possible," "plausible," and "probable".

  1. Possible: Something is considered possible if it can exist or occur within the realm of logical or physical constraints. It implies that there is no inherent contradiction or violation of established principles.

  2. Plausible: Plausibility refers to the degree of believability or reasonableness of a statement or idea. If something is plausible, it's likely to be accepted as true or valid based on the available information, but it may not necessarily be proven or confirmed.

  3. Probable: Probability signifies the likelihood or chance that an event will occur or be true. It involves assessing the relative likelihood of different outcomes based on evidence or reasoning. An event that is probable is likely to occur, but it doesn't guarantee certainty.

In the context of large language models, these terms help define the strength of statements, predictions, or responses generated by the model, indicating the level of confidence or credibility associated with them. Let me attempt to apply these three terms to data extraction use cases involving forms of AI, including large language models (LLMs):

  1. Possible: In data extraction using AI, 'possible' refers to information that can theoretically be extracted from a given text or dataset without violating any rules or constraints.

  2. Plausible: Data extraction may involve making educated guesses based on 'plausible' context. This means that the AI might suggest certain data points that seem reasonable, even if they are not explicitly stated.

  3. Probable: When dealing with data extraction using AI, 'probable' relates to the likelihood that certain data points are accurately extracted based on patterns observed in the training data, for instance when the model has consistently extracted specific information from similar contexts in the past.
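One way to operationalize the three notions above in an extraction pipeline is to attach a confidence score to every extracted field and let only 'probable' values pass straight through, routing merely 'plausible' ones to a human reviewer. The thresholds and field names below are hypothetical, a sketch rather than a production design:

```python
# Hypothetical confidence gating for AI-extracted fields.
PROBABLE_THRESHOLD = 0.95   # auto-accept only near-certain extractions
PLAUSIBLE_THRESHOLD = 0.70  # believable but unverified -> human review

def route_extraction(field, value, confidence):
    if confidence >= PROBABLE_THRESHOLD:
        return f"ACCEPT {field}={value}"
    if confidence >= PLAUSIBLE_THRESHOLD:
        return f"REVIEW {field}={value} (plausible, needs a human check)"
    return f"REJECT {field}={value} (merely possible, do not use)"

print(route_extraction("instrument_id", "XS1234567890", 0.99))  # ACCEPT ...
print(route_extraction("price", "101.25", 0.82))                # REVIEW ...
print(route_extraction("beneficiary_id", "BEN-0042", 0.40))     # REJECT ...
```

The point of such gating is that "plausible" alone is never sufficient in a banking flow: anything short of "probable" requires a human in the loop.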

While large language models have showcased their prowess in various language-related tasks, they might not be the ideal solution for complex banking operations that necessitate the extraction of exact data from documents. The potential risks and consequences, including errors in precision, regulatory violations, legal liabilities, data security breaches, and inconsistency, outweigh the benefits. These risks originate from an anomaly popularly known as AI hallucinations: LLMs might generate plausible or probable information that doesn't actually exist in the documents, leading to inaccurate data extraction in a domain where precision is essential for financial calculations, compliance, and decision-making.

AI hallucinations refer to instances where language models generate outputs that seem plausible but are ultimately incorrect or nonsensical. These outputs are typically the result of the model's overreliance on patterns it learned during training, even when those patterns do not fit the context or are statistically improbable. This poses the following significant challenges to the reliability and trustworthiness of LLMs for data extraction in banking processes.

  • Complexity of Banking Documents: Banking documents often contain dense, highly specialized information, legal jargon, and intricate numerical data. Extracting specific information accurately requires not only picking out the exact data but also comprehending the domain-specific nuances. LLMs, while impressive in their language comprehension, might struggle to grasp the full depth of complex financial documents, leading to misinterpretations that can adversely impact crucial decisions.

  • Regulatory Compliance and Legal Ramifications: Banking operations are subject to strict regulatory frameworks designed to ensure transparency, security, and fairness. Accurate data extraction is crucial for compliance with regulations such as Anti-Money Laundering (AML) and Know Your Customer (KYC). Relying on LLMs for this task could result in incomplete or inaccurate extractions, exposing financial institutions to regulatory fines and legal liabilities.

  • Inconsistency and Reliability: LLMs generate outputs based on probabilistic patterns, which means they can sometimes provide inconsistent results. In the context of banking operations, where accuracy and consistency are non-negotiable, relying on LLMs introduces an element of unpredictability that can erode trust in the system.

  • Dependency on Training Data: LLMs are trained on vast datasets from the internet, which may not perfectly mirror the intricate data structures and language used in banking documents. The mismatch between training data and the domain-specific content of banking documents can lead to suboptimal performance and errors.
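The inconsistency point above can be illustrated without a real model: any system that samples from a probability distribution, rather than reading the document, can return different answers to the identical query. A toy Python simulation (the distribution over candidate answers is invented for illustration):

```python
import random

# Toy "learned" distribution over candidate answers for the same prompt.
learned = {"101.25": 0.7, "101.52": 0.2, "110.25": 0.1}

def sample_answer(rng):
    tokens, weights = zip(*learned.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Same "prompt", twenty independent runs: the answers need not agree.
answers = {sample_answer(random.Random(seed)) for seed in range(20)}
print(answers)  # typically more than one distinct answer
```

A deterministic parser asked the same question twenty times returns the same answer twenty times; a sampler does not, and in a booking or settlement flow that variance is itself a defect.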

Metaphorically again - Using generative AI for exact data extraction is like employing a novelist with creative writing ability to compile a scientific encyclopedia. While the novelist might excel at creating compelling narratives and rich characters, the meticulous arrangement of empirical facts could become muddled in the world of fiction. Similarly, generative AI's talent for imaginative language might result in engaging prose, but when tasked with extracting precise data, its propensity to elaborate and interpret could introduce inaccuracies and misrepresentations. Just as a novelist might inadvertently inject fictional elements into an encyclopedia, AI's creative tendencies pose a risk when dealing with tasks that demand rigorously accurate and unambiguous information.


In summary - Imagine a Trade Entry Module that has so far relied on a user extracting the exact instrument data (like instrument id), market data (like price), counterparty data (like beneficiary id) and trade data (like qty) from a contract note and manually entering them into the system. Inefficient, no doubt, because it's manual. Now imagine automating the flow with a language model that has memorized thousands of contract notes fed to it from the past and present, and which, at the point of prompt, doesn't go back to extract the exact data but relies on its learned patterns to create the string of instrument id, price, beneficiary id, etc. This may well be the future, when AI maturity can be trusted beyond doubt. However, as I said, until LLMs are thoroughly tested and proven to be accurate, the responsibility to provide evidence and establish their credibility falls upon those advocating their use.


LLMs might generate plausible-sounding information that doesn't actually exist in the document. This can lead to inaccurate data extraction, which is critical in banking where precision is essential for financial calculations, compliance, and decision-making.

Instead, a more prudent approach would involve leveraging specialized tools and systems designed specifically for banking operations, ensuring the highest level of accuracy, reliability, and compliance.
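As a sketch of what such a specialized tool can look like, consider a deterministic parser over a known contract-note layout: it either returns each field value exactly as it appears in the document or fails loudly, and it never invents a value. The field names and note format here are hypothetical:

```python
import re

# Hypothetical contract-note layout; each field must match exactly.
FIELD_PATTERNS = {
    "instrument_id": re.compile(r"Instrument ID:\s*(\S+)"),
    "price": re.compile(r"Price:\s*([\d.]+)"),
    "beneficiary_id": re.compile(r"Beneficiary:\s*(\S+)"),
    "quantity": re.compile(r"Qty:\s*(\d+)"),
}

def extract_fields(note):
    extracted = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = pattern.search(note)
        if match is None:
            # Fail loudly instead of guessing a plausible value.
            raise ValueError(f"Required field missing: {field}")
        extracted[field] = match.group(1)
    return extracted

note = ("Instrument ID: XS1234567890\n"
        "Price: 101.25\n"
        "Beneficiary: BEN-0042\n"
        "Qty: 500")
print(extract_fields(note))
```

The design choice that matters is the error path: a missing field halts the flow for human attention, whereas a generative model would happily fill the gap with something fluent.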



Comments: (1)

Ketharaman Swaminathan - GTM360 Marketing Solutions - Pune - 26 October, 2023, 12:43

While data extracted from any given system in banking is arguably precise, data in the banking industry goes through several systems across several companies, and what is received by the receiver most often does not match what is sent by the sender.

Ergo bills and statements are often indecipherable. More in the posts titled Bills And Statements Are Hard To Decipher and Taking Readability Of Bills And Statements To Next Level. (hyperlinks to posts on my company website removed to comply with Finextra Community Rules but these posts should appear on top of Google Search results when searched by their title + "GTM360").

I once saw this narration for a credit entry on my bank statement: EBA/EBA/EQPEAKMGN//20220124184357. Looks very coded and precise and all, but even my bank couldn't explain this transaction or even tell me the name of the payor who transferred this money to my account. 

In my second post, I actually argued that AI / ML may solve this problem!

Prasoon Mukherjee

Director | Head of Securities Services | GSC-India

Societe Generale Bank

This post is from a series of posts in the group:

Post-Trade Forum

The Post Trade Forum's aim is to propagate debate and discussion between senior practitioners in Post Trade Operations in the global securities market; to bring about increased awareness and knowledge across both buy-side and sell-side financial institutions in financial products and be a focal point for firms and practitioners to air views.
