
Digital banking through effective data supply chain management

The data supply chain starts with data creation and ends when a value-added service or product is created from the raw data. Along the way, data changes hands and is enriched through interactions with customers, business, operations, and IT. The ever-maturing digital world is creating challenges for organizations because of the high velocity and volume of data and its lack of traceability. Deriving insight from data is becoming harder, and making predictions from it harder still.

Data silos and multiple systems of record are still prominent in many large organizations, and they cause significant data quality problems. The variety of data is also increasing by the day with the rapid adoption of mobile, social media, wearables, and IoT. Data is no longer just structured: it includes images, video, voice, social media streams, blogs, documents, XML, PDFs, emails, and more.

Financial services organizations are taking notice of this shift and driving initiatives across payments, business process management, digital marketing, sales, risk management, trading, portfolio and investment management, market and credit risk, fraud, liquidity management, surveillance, and other areas.

Focusing on data management processes and governance will help FS organizations personalize, target, forecast, and build financial models more effectively. Traceability of data and models will help them withstand regulatory audits and supervisory scrutiny. Being able to show data lineage all the way back to source systems confirms that the organization is working from a single source of truth and eliminates the possibility of using conflicting, inconsistent data. A single view of data enables effective personalization and targeting of existing and new customers, and supports a consistent user experience and customer service across channels.
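Tracing lineage back to source systems, as described above, can be sketched as a walk over a dependency graph. The dataset and system names below are hypothetical, purely for illustration:

```python
# Minimal data-lineage sketch: each derived dataset records its direct
# inputs, so any report can be traced back to its source systems.
lineage = {
    "marketing_segments": ["customer_360"],           # derived view
    "customer_360": ["crm_contacts", "core_banking"], # built from two systems
    "crm_contacts": [],                               # source system
    "core_banking": [],                               # source system
}

def trace_to_sources(dataset, graph):
    """Walk the lineage graph and return the underlying source systems."""
    parents = graph.get(dataset, [])
    if not parents:                  # no inputs -> this is a source system
        return {dataset}
    sources = set()
    for parent in parents:
        sources |= trace_to_sources(parent, graph)
    return sources

print(sorted(trace_to_sources("marketing_segments", lineage)))
```

Any dataset whose lineage resolves to a single, agreed set of source systems is, by construction, working off one source of truth.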

So, in this ever-changing digital paradigm, the entire data supply chain is going through major disruption. Below is one way to look at the different stages of the data supply chain, each of which incrementally converts data into business assets.

Stage-1: Create systems of record

Most organizations have ways to manage data once it enters the organization's ecosystem, in buckets such as:

  • Client & Product Portfolio - leads, customer relationships, complaints, interactions, product & portfolio management
  • Product & Support - current, savings and deposit accounts, personal & home loans, credit cards, equity, derivatives, money market, commodities, etc.
  • Corporate Functions - HR, payroll, legal, etc.

Many of these are custom-written legacy applications that exist across lines of business, products, and geographies. But the industry is gradually moving towards consolidation and standardization.

Stage-2: Standardize & improve data quality

ETL tools are used to transform raw data into standard formats. Most leading ETL vendors are integrating with Hadoop to offload large volumes of data to MDM hubs and warehouses. ETL tools also provide change data capture (CDC) capabilities to support real-time, operational BI requirements. ETL is also used extensively by legacy systems to feed archive repositories and data warehouses for purposes such as reporting and compliance.

Organizations will continue to invest in this space to improve data quality and enforce standardization. Product vendors will continue to mature their products to enable real-time handling of structured, semi-structured, and unstructured data.
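The core of this stage - transforming raw records into a standard format while flagging quality issues rather than silently dropping them - can be sketched as below. The field names, date formats, and account-number pattern are illustrative assumptions, not any vendor's schema:

```python
import re
from datetime import datetime

def standardize(record):
    """Normalize one raw customer record into a standard format and
    attach data-quality flags instead of silently discarding bad values."""
    out, issues = {}, []
    out["name"] = record.get("name", "").strip().title()
    # normalize several commonly seen date formats to ISO 8601
    raw_date = record.get("open_date", "")
    for fmt in ("%d/%m/%Y", "%Y-%m-%d", "%d-%b-%Y"):
        try:
            out["open_date"] = datetime.strptime(raw_date, fmt).date().isoformat()
            break
        except ValueError:
            continue
    else:
        issues.append("unparseable open_date: %r" % raw_date)
    # simple validity check on a (hypothetical) account-id pattern
    if re.fullmatch(r"[A-Z]{2}\d{8}", record.get("account_id", "")):
        out["account_id"] = record["account_id"]
    else:
        issues.append("invalid account_id")
    out["quality_issues"] = issues
    return out

clean = standardize({"name": " jane DOE ", "open_date": "05/03/2014",
                     "account_id": "GB12345678"})
```

Real ETL products add much more (lookups, reference data, CDC), but the pattern - normalize, validate, flag - is the same.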

Stage-3: Create a single view of truth

Traditionally, MDM products enable the creation of a single view of customers, products, employees, and so on. Adoption of MDM is sometimes driven by regulatory reasons, but more often by cost optimization or revenue and profitability enhancement. MDM comes with capabilities such as data modelling, information quality management, workflow, data governance, and stewardship. A lot of work is in progress on enriching MDM with information from big-data sources such as social media activity streams.

According to Gartner, a growing number of firms are looking to expand their MDM programs, perhaps by adding content, additional application-specific data, or other data domains. MDM is often seen as the starting point for a broader enterprise information management (EIM) program within an organization.
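The "golden record" at the heart of a single customer view is typically built with survivorship rules. A minimal sketch, assuming duplicate records from two hypothetical systems and a newest-non-empty-value-wins rule:

```python
def golden_record(records):
    """Merge duplicate customer records held in different systems into a
    single view: the most recently updated non-empty value survives."""
    merged = {}
    # sort oldest-first so that later updates overwrite earlier ones
    for rec in sorted(records, key=lambda r: r["updated"]):
        for field, value in rec.items():
            if field != "updated" and value:   # survivorship: non-empty, newest
                merged[field] = value
    return merged

# the same customer as seen by two (hypothetical) systems of record
crm   = {"email": "j.doe@example.com",    "phone": "",         "updated": "2014-01-10"}
loans = {"email": "jane.doe@example.com", "phone": "555-0101", "updated": "2014-06-02"}
view = golden_record([crm, loans])
```

Production MDM adds probabilistic matching, stewardship workflow, and governance on top, but survivorship logic of this shape is what produces the single view.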

Stage-4: Data lake - decision support system

Two distinct data domains have emerged in the last few years. Structured-data-driven decision support systems are known as data marts (warehouses); unstructured-data-driven decision support systems are known as big data (Hadoop clusters).

We are seeing a convergence of the traditional data warehouse and big-data Hadoop platforms. Vendors are beginning to process combinations of unstructured (unfamiliar schemas), structured (familiar and well-understood schemas), and semi-structured (such as XML) data with a variety of solutions such as batch computation (MapReduce), interactive SQL, and text search. Vendors are also beginning to use alternative file systems, such as HDFS, as the storage platform.
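The batch-computation model mentioned above (MapReduce) boils down to a map phase that emits key-value pairs, a shuffle that groups them by key, and a reduce phase that aggregates each group. A toy single-machine sketch over hypothetical payment events:

```python
from collections import defaultdict
from functools import reduce

# semi-structured payment events as they might land in a data lake
events = [
    {"channel": "mobile", "amount": 20.0},
    {"channel": "branch", "amount": 100.0},
    {"channel": "mobile", "amount": 35.5},
]

def mapper(event):
    # emit a (key, value) pair, as a Hadoop mapper would
    return (event["channel"], event["amount"])

def shuffle(pairs):
    # group values by key between the map and reduce phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# reduce phase: total spend per channel
totals = {key: reduce(lambda a, b: a + b, values)
          for key, values in shuffle(map(mapper, events)).items()}
```

Hadoop distributes exactly this pattern across a cluster, with HDFS holding the inputs and intermediate data.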

Stage-5: Data discovery, insights, real-time recommendations, and predictive analysis

With advances in CEP, in-memory computing, distributed caching, reporting, natural language processing, machine learning, cognitive intelligence, and video, voice, image, and text search, together with advanced statistical modelling, we are beginning to drive real-time data discovery and correlation management, insight and real-time decision making, and forecasting and prediction.
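A CEP engine, at its simplest, evaluates rules over a sliding window of an event stream. A toy sketch of a real-time card-velocity rule (the rule, thresholds, and card identifiers are invented for illustration):

```python
from collections import deque

class VelocityRule:
    """Toy CEP-style rule: flag a card that makes more than `limit`
    transactions inside a sliding time window of `window` seconds."""
    def __init__(self, limit=3, window=60):
        self.limit, self.window = limit, window
        self.seen = {}                      # card -> deque of event timestamps

    def on_event(self, card, ts):
        q = self.seen.setdefault(card, deque())
        q.append(ts)
        while q and ts - q[0] > self.window:  # evict events outside the window
            q.popleft()
        return len(q) > self.limit            # True -> raise a real-time alert

rule = VelocityRule(limit=3, window=60)
alerts = [rule.on_event("card-1", t) for t in (0, 10, 20, 30, 200)]
```

The fourth event trips the rule (four transactions within 60 seconds); by the fifth, the window has emptied and the card is clean again - the essence of real-time decisioning on a stream.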

More and more BI vendors provide embedded advanced analytics, using standards such as the Predictive Model Markup Language (PMML) and R-based models to create advanced analytic visualizations. They also support hybrid, columnar, and array-based data sources, such as MapReduce and NoSQL databases (graph databases, for example). Support can include direct Hadoop Distributed File System (HDFS) queries or access to MapReduce through Hive. Vendors are also looking to provide voice-based data discovery (Q&A) capabilities.

Many vendors provide predictive analysis and recommendation engines. For example, Opera Solutions creates predictive models that focus on specific business outcomes. Teradata's Aster provides a big-data predictive analytics capability that allows developers to use SQL and MapReduce together to perform sophisticated analysis on very large data sets. Pega, Alteryx, and Pentaho also include embedded predictive analytics features.
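Stripped of the tooling, the simplest predictive model is an ordinary least-squares trend line. A self-contained sketch (the spend figures are made up) forecasting next month from history:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x: the simplest predictive
    model, computed with nothing but the standard library."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# hypothetical monthly card spend; forecast month 5 from months 1-4
months = [1, 2, 3, 4]
spend  = [100.0, 110.0, 120.0, 130.0]
a, b = fit_line(months, spend)
forecast = a + b * 5
```

Vendor platforms layer richer models, PMML interchange, and big-data execution on top, but the fit-then-score loop is the same shape.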

Each of the above stages is evolving fast, helping organizations convert data into business assets, just as raw material is converted into value-added services or products in a progressive supply chain ecosystem.



Abhishek Chatterjee

Managing Partner

Gartner Inc.




This post is from a series of posts in the group:

Financial Services Regulation

