Managing sensitive data risk and compliance in non-production environments

When protecting confidential customer data in the financial services sector, one of the most significant risk areas is the exposure of sensitive information governed by regulations such as GDPR and PCI DSS. Much of this data, such as a customer's Personally Identifiable Information (PII), ends up in non-production environments: development, testing, analytics and AI/ML. Many businesses do not apply the same protections to non-production environments as they do to production, and this is a grave risk. Fortunately, there are steps that financial services organisations can take. But first, why is so much sensitive data sprawling from production into non-production environments?

Explosion of sensitive data in non-production environments 

Businesses are rapidly prototyping, experimenting with, and developing AI/ML models and applications, and they need consumer data to feed these projects. Add in factors such as digital transformation, increased digital interactions with customers, greater use of data to support decision-making, and cloud adoption, and the result is extensive software development that creates data sprawl from production into non-production environments.

Failure to safeguard this data can lead to compliance and audit issues, data corruption or alteration, and data breaches or theft. However, protecting sensitive data in non-production environments can be difficult. Tracking and complying with ever-changing and growing regulations is part of the problem. Developers and testers also need access to realistic data to do their jobs. One approach might be to hide certain fields, but then the data is no longer production-like, and testing fails. The complex relationships between interdependent data sets must also be maintained: a mismatch leads to teams working with unrealistic data, which in turn leads to more defects in production.

Concerns around slowing development

In some organisations there is also a perception that protecting sensitive data in non-production environments will hinder development speed, because manually anonymising and replicating production databases in non-production environments can take weeks. Furthermore, as data estates grow in size and complexity, there may come a point where attempting to protect huge data sets with sub-optimal methods brings software development to a halt. Sensitive data can also be hard to find, hidden across databases, formats, applications and other sources. For all these reasons, it can be tempting to allow data compliance exceptions; this is a dangerous strategy because it opens the door to data breaches, theft, non-compliance, failed audits and other problems.

Solutions to the problem

So, what can financial services organisations realistically do to protect sensitive data in non-production environments without compromising development velocity and quality? A variety of tools and processes can be used. For instance, unlike dynamic data masking, static data masking provides irreversible data anonymisation while still delivering production-like data, using libraries of prebuilt, customisable algorithms to ensure data security and referential integrity across data sources, both on-premises and in the cloud. This allows processes such as software testing to proceed, safe in the knowledge that data is kept private and compliant. Depending on the tool, this can happen automatically, speeding up development without creating additional workload for teams.
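To illustrate the referential-integrity point, a deterministic masking function maps each original value to the same masked value everywhere it appears, so joins between tables still line up after masking. The sketch below is a minimal Python example under stated assumptions: the salt, table names and field choices are hypothetical, and production masking tools use far richer, format-preserving algorithms.

```python
import hashlib

# Hypothetical salt; in practice this would be managed outside source control.
SECRET_SALT = b"rotate-me-and-keep-me-secret"

def mask_value(value: str, prefix: str = "") -> str:
    """Deterministically mask a value: the same input always produces the
    same masked output, so relationships between tables are preserved."""
    digest = hashlib.sha256(SECRET_SALT + value.encode()).hexdigest()[:10]
    return f"{prefix}{digest}"

# Two illustrative tables sharing a customer identifier.
customers = [{"customer_id": "C-1001", "email": "a.smith@example.com"}]
payments = [{"customer_id": "C-1001", "amount": 42.50}]

masked_customers = [
    {"customer_id": mask_value(row["customer_id"], "C-"),
     # Keep an email-like shape so test data stays production-like.
     "email": mask_value(row["email"]) + "@masked.example"}
    for row in customers
]
masked_payments = [
    {"customer_id": mask_value(row["customer_id"], "C-"),
     "amount": row["amount"]}  # non-sensitive fields pass through unchanged
    for row in payments
]

# Referential integrity survives: the masked IDs still join correctly.
assert masked_customers[0]["customer_id"] == masked_payments[0]["customer_id"]
```

Because the masking is a one-way hash of the original value, it is irreversible without the salt, yet testers can still join the two tables exactly as they would in production.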

Other protective measures include data loss prevention (DLP), a perimeter-defence security approach that detects potential breaches and thefts and attempts to prevent them; it is not foolproof, so it should be combined with other techniques in case it fails. Data encryption is another approach, converting data into ciphertext that only authorised users holding the decryption key can read. However, once decrypted for use in a non-production environment, the data is again at risk of re-identification and exploitation by bad actors.
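As a simplified sketch of the detection side of DLP, the fragment below scans text for patterns that commonly indicate sensitive data. The patterns are illustrative only; real DLP tools use far more robust detection, for example Luhn checksum validation for card numbers and contextual analysis to reduce false positives.

```python
import re

# Illustrative patterns a DLP scan might flag (deliberately simplified).
PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(text: str) -> list[str]:
    """Return the names of the sensitive-data patterns found in `text`."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

print(scan("Contact a.smith@example.com, card 4111 1111 1111 1111"))
# → ['card_number', 'email']
```

A DLP system would typically run checks like this at egress points (email, file transfer, database exports) and block or quarantine anything that matches, which is why it complements rather than replaces masking.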

Strict access control categorises users by role and other attributes, and their access to data sets is configured accordingly. Access control is always a good idea, but the risk of internal misuse remains. Regular security and privacy audits are a complementary, preventive measure with an important role to play, but unless they happen frequently, vulnerabilities may not be found until after they have caused a problem.
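In code terms, role-based access control reduces to a mapping from roles to permitted data sets, checked before any query runs. The roles and data-set names in this sketch are purely illustrative assumptions, but the shape of the check is the same in most implementations.

```python
# Hypothetical role-to-dataset grants; raw production data is never listed,
# so no role in a non-production environment can reach it.
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "developer": {"masked_customers", "test_fixtures"},
    "analyst": {"masked_customers", "aggregated_payments"},
    "dba": {"masked_customers", "test_fixtures", "aggregated_payments"},
}

def can_access(role: str, dataset: str) -> bool:
    """Default-deny check: unknown roles and unlisted data sets get nothing."""
    return dataset in ROLE_PERMISSIONS.get(role, set())

assert can_access("analyst", "aggregated_payments")
assert not can_access("developer", "raw_customers")  # raw data never granted
assert not can_access("contractor", "masked_customers")  # unknown role denied
```

The default-deny design choice matters: anything not explicitly granted is refused, which limits the blast radius of the internal-misuse risk described above.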

A multi-faceted approach - with the right mindset

The reality is that financial services organisations probably need to adopt a combination of these processes, while embedding a security-first mindset and culture in the teams handling non-production data. Regular communication and training will help ensure everyone is aware of their role in protecting data.

Making consumer data available to development, testing, analytics, and AI teams is integral to how financial services organisations improve their products quickly and give customers what they need. While protecting that data is clearly a multi-faceted challenge, tools and techniques are available that mitigate the risks without increasing teams' daily workloads, safeguarding software quality and time to market and helping to keep projects on track.

External

This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.

