
Refining regulation: taking a top-down approach to data management

Given that for many Global Systemically Important Banks (GSIBs) the initial deadline for BCBS239 compliance passed on 1st January 2016, it’s important that they and other financial institutions adopt a top-down approach to both achieve and maintain compliance. Banks need to ensure their data governance frameworks can be applied to any regulatory requirement while also being specialised for specific use cases such as BCBS239.

This is not an easy task, however. Banks face a host of obstacles to efficient data governance. Most have a high degree of organisational and operational complexity to navigate, including siloed operations and a large number of data-generating applications. On top of that, there are often few agreed definitions for key data entities (KDEs) across a bank, leading to limited visibility of the data pipeline across the enterprise. As a result, data quality checks are frequently performed in silos with manual intervention, or are tackled as one-off projects where companies throw time, money and human resources at the problem but fail to establish an effective long-term protocol.

In turn, this creates an overall picture of inefficiency. When KDEs – the types of data that are essential to the business – are poorly defined and randomly spread across the enterprise, it becomes far more challenging to effectively manage them and comply with regulations.

Five key design principles

To overcome this problem, it’s useful to examine five key design principles of data management.

Many data governance programmes struggle to reconcile their theoretical definitions of KDEs with the way they are actually realised in applications and systems. By starting with umbrella policy definitions and then implementing them all the way down to the real layers of data, organisations can gain greater insight into the location, security and risk of their sensitive data and improve the way their data management programmes perform.

1. Use a top-down approach to discover and document the information landscape across the enterprise

A top-down approach starts by looking at what data and information are logically generated and consumed across the enterprise and clarifying what is critical at the enterprise and line-of-business level. Information from major data sources, key applications and high-level business data models should all be captured. Once consensus is reached on which data entities are critical, they can easily be linked to their actual realisations – like servers, email and live applications.
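By way of illustration, the output of this discovery step could be captured as a lightweight inventory that links each logically defined entity to its actual realisations. This is only a sketch: the class names, systems and entities below are assumptions, not a particular vendor model.

```python
# A minimal sketch of an information-landscape inventory. All class, system
# and entity names here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Realisation:
    """A concrete place where a data entity actually lives."""
    system: str      # e.g. a server, application or feed
    location: str    # e.g. "table: clients, column: client_id"


@dataclass
class DataEntity:
    """A logically defined data entity captured during top-down discovery."""
    name: str
    critical_at: str                                   # "enterprise" or "line-of-business"
    realisations: list[Realisation] = field(default_factory=list)


landscape = [
    DataEntity("Client ID", "enterprise",
               [Realisation("crm-app", "table: clients, column: client_id")]),
    DataEntity("Trade Notional", "line-of-business",
               [Realisation("trading-db", "table: trades, column: notional")]),
]

# Once consensus is reached on what is critical, the logical entities can be
# traced straight to the systems that realise them.
for entity in landscape:
    print(entity.name, "->", [r.system for r in entity.realisations])
```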

2. Deepen KDE definitions and standardise flows

Once KDEs have been identified and tied to actual instances, more detail can be added to how they are defined. Extra definitions could include the following (a brief sketch in code follows the list):

  • Type: for example, enterprise critical and line-of-business critical
  • Scope: for example, business unit, geography and product type
  • Domain: for example, risk, finance and product
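As a rough illustration, these extra attributes could be recorded against each KDE in a single, agreed structure. The enum values, field names and example record below are assumptions for illustration only.

```python
# A minimal sketch of a deepened KDE definition. Field names, enum values and
# the example record are assumptions, not a standard model.
from dataclasses import dataclass
from enum import Enum


class KDEType(Enum):
    ENTERPRISE_CRITICAL = "enterprise critical"
    LOB_CRITICAL = "line-of-business critical"


@dataclass
class KDEDefinition:
    name: str
    kde_type: KDEType
    scope: dict      # e.g. business unit, geography, product type
    domain: str      # e.g. "risk", "finance", "product"


client_id = KDEDefinition(
    name="Client ID",
    kde_type=KDEType.ENTERPRISE_CRITICAL,
    scope={"business_unit": "Retail", "geography": "EMEA", "product_type": "Current accounts"},
    domain="finance",
)
print(client_id)
```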

As well as providing a granular definition, it is important to consider how KDEs flow across the information landscape. It’s crucial to standardise this flow in order to ensure consistency in how KDE attributes are defined – if KDEs flow through hundreds of different channels across the enterprise it will be much harder to track and secure them.
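One way to picture a standardised flow is as a sequence of hops recorded in one agreed format, so any KDE can be traced end to end. The hop structure, systems and channel names below are assumptions for illustration.

```python
# A minimal sketch of a standardised KDE flow: every hop is captured in one
# agreed shape so the same KDE can be traced end to end. Names are assumed.
from dataclasses import dataclass


@dataclass(frozen=True)
class FlowHop:
    kde: str          # the KDE carried on this hop
    source: str       # upstream system
    target: str       # downstream system
    channel: str      # the standardised channel, e.g. "nightly batch feed"


flows = [
    FlowHop("Client ID", "crm-app", "risk-warehouse", "nightly batch feed"),
    FlowHop("Client ID", "risk-warehouse", "regulatory-reporting", "message bus"),
]

# With one flow model it is straightforward to answer "where does this KDE travel?"
print([f"{hop.source} -> {hop.target}" for hop in flows if hop.kde == "Client ID"])
```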

3. Discover, define and standardise business and quality rules associated with KDEs

Once KDEs are defined and their flow can be monitored, it is important to address their ‘business and quality rules’. In essence, these are protocols used to define how KDEs are consumed.

Good insight into KDE flow is particularly helpful for the definition of business and quality rules, as it will help banks understand where they need to apply checks on completeness, consistency or integrity across system boundaries.

Rules will range from the simple (a specific data field must have a value greater than 0) to the complex (the client ID field cannot be empty and the range of the value in that field has a specific meaning). As such, standardisation is equally important in this area in order to ensure consistency and to enable reuse.
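As a rough sketch, both kinds of rule can be expressed as small, named, reusable checks. The record layout, rule names and the assumed meaning of the client ID prefix below are invented for illustration.

```python
# A minimal sketch of standardised, reusable quality rules, from the simple
# check to the more complex one described above. The record layout and the
# assumed meaning of the client ID prefix are illustrative only.
def notional_positive(record: dict) -> bool:
    """Simple rule: a specific numeric field must have a value greater than 0."""
    return record.get("notional", 0) > 0


def client_id_valid(record: dict) -> bool:
    """Complex rule: client ID cannot be empty, and its prefix must carry a
    known meaning (assumed here to be the booking region)."""
    client_id = record.get("client_id")
    if not client_id:
        return False
    return client_id[:2] in {"EU", "US", "AP"}


RULES = {"notional_positive": notional_positive, "client_id_valid": client_id_valid}

record = {"client_id": "EU-10042", "notional": 1_500_000}
print({name: rule(record) for name, rule in RULES.items()})
```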

4. Link business models to physical models

The business model built so far should be an effective representation of all enterprise systems and how KDEs flow through them. In order to govern actual data, however, the logical world of the business model should be anchored to the physical, technical world. Metadata describing physical data stores and technical data flows can be easily imported to help banks identify the systems and endpoints most at risk of attack and data loss.
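For example, an imported set of physical metadata records could be indexed against the logical KDE names, giving a direct answer to where each KDE physically lives. The metadata rows below are invented examples of what such an import might contain.

```python
# A minimal sketch of anchoring logical KDEs to physical metadata. The rows
# below stand in for an import from a metadata repository; all names are
# invented for illustration.
physical_metadata = [
    {"system": "core-banking-db", "table": "accounts",  "column": "acct_id",   "kde": "Account ID"},
    {"system": "risk-warehouse",  "table": "exposures", "column": "cpty_id",   "kde": "Counterparty ID"},
    {"system": "crm-app",         "table": "clients",   "column": "client_id", "kde": "Client ID"},
]

# Build a logical-to-physical index so each KDE can be traced to concrete stores.
kde_to_physical: dict[str, list[str]] = {}
for row in physical_metadata:
    location = f"{row['system']}.{row['table']}.{row['column']}"
    kde_to_physical.setdefault(row["kde"], []).append(location)

print(kde_to_physical["Client ID"])  # where this KDE physically lives
```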

5. Set data quality controls at key architecture points

Banks can then use all of the above insight to apply data quality controls at key strategic points in their architecture. This approach of lineage tracking and measurement exposes what information is actually flowing around an organisation, as well as detailing its accuracy, completeness, integrity and timeliness.
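As a simple sketch, a control at one such point could measure the completeness and timeliness of the records flowing through it. The field names, batch contents and thresholds below are assumptions for illustration.

```python
# A minimal sketch of a quality checkpoint at one architecture point,
# measuring completeness and timeliness of records passing through.
# Field names and thresholds are assumed for illustration.
from datetime import datetime, timedelta, timezone


def checkpoint_metrics(records: list[dict], required_fields: list[str],
                       max_age: timedelta) -> dict:
    now = datetime.now(timezone.utc)
    complete = sum(all(r.get(f) is not None for f in required_fields) for r in records)
    timely = sum((now - r["received_at"]) <= max_age for r in records)
    total = len(records) or 1
    return {"completeness": complete / total, "timeliness": timely / total}


batch = [
    {"client_id": "EU-10042", "balance": 250.0,
     "received_at": datetime.now(timezone.utc) - timedelta(minutes=5)},
    {"client_id": None, "balance": 99.0,
     "received_at": datetime.now(timezone.utc) - timedelta(hours=3)},
]

print(checkpoint_metrics(batch, ["client_id", "balance"], timedelta(hours=1)))
```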

The benefits of a top-down approach

As a result of this approach to data management, business users have much deeper insight into what’s really happening to data. Metadata becomes available for analysis and the current state of data is constantly monitored, with alerts generated when metrics exceed specific limits. If the outflow of account information exceeds normal use rates, for example, the system could flag it up to controllers as a potential data leak. It also creates a repeatable foundation to support the next regulatory compliance requirement.
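To illustrate the kind of alert described above, a monitoring check might compare the measured outflow of account records against an expected baseline and raise a flag when it is exceeded. The baseline and tolerance below are assumed values, not prescribed limits.

```python
# A minimal sketch of metric-based alerting: if the outflow of account
# records exceeds the tolerated multiple of its baseline, flag a potential
# data leak. The baseline and tolerance are assumed values.
from typing import Optional


def check_outflow(records_sent: int, baseline_per_hour: int,
                  tolerance: float = 1.5) -> Optional[str]:
    if records_sent > baseline_per_hour * tolerance:
        return (f"Potential data leak: {records_sent} account records sent "
                f"vs an expected ~{baseline_per_hour} per hour")
    return None


alert = check_outflow(records_sent=12_000, baseline_per_hour=4_000)
if alert:
    print(alert)  # in a real system this would go to controllers via an alerting channel
```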

As more organisations look to simplify data governance processes, a top-down approach is the optimal way to move forward, enabling banks to rapidly demonstrate compliance with minimal time, cost and effort.
