The statistics quoted in the Informatica report carried out by the Ponemon Institute present a picture that does not surprise me, and they rightly emphasise the risks around the use of ‘real’ data.
Over the last 15-20 years, the use of such data in the software testing of banking applications has become increasingly prevalent and, worryingly, is now regarded as the norm. There are several reasons for this, and it is worth understanding a few of the key factors that have led to this trend.
First, there remains a lack of understanding of the importance of software testing and, consequently, insufficient investment in it. This alone affects many aspects of testing.
Second, in relation to data, there is rarely enough time set aside to design and prepare effective tests, let alone the data needed to support those tests. The only option left to testers in this scenario is to take a copy of production data and use that.
Third, there is a degree of laziness among development and testing teams, who view taking and using a copy of production data as the ‘easy option’.
The optimum approach, and one that will provide maximum coverage of data combinations, is to run a series of controlled tests using specifically manufactured test data and then run a series of ‘exploratory’ tests using a set of desensitised production data.
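To make the second half of that approach concrete, the sketch below shows one simple way a record copied from production might be desensitised before exploratory testing. This is an illustration under assumed conditions, not a recommended production control: the field names and the salt are invented, and real desensitisation regimes typically involve format-preserving masking and governance far beyond a hash.

```python
import hashlib

def desensitise(record: dict, salt: str = "test-env-salt") -> dict:
    """Replace directly identifying values with irreversible surrogates,
    keeping the overall shape of the record intact for exploratory tests.

    The field list and salt are hypothetical examples, not a standard.
    """
    masked = dict(record)
    for field in ("name", "account_number", "email"):
        if field in masked:
            # A salted hash gives a stable, non-reversible surrogate value,
            # so related records still match up across test runs.
            digest = hashlib.sha256((salt + str(masked[field])).encode()).hexdigest()
            masked[field] = digest[:12]
    return masked

# Non-identifying fields such as balances pass through unchanged,
# so the data still exercises realistic combinations.
customer = {"name": "Jane Doe", "account_number": "12345678", "balance": 250.00}
safe = desensitise(customer)
```

Because the surrogate is derived deterministically from the original value, the same customer appearing in two extracted files still maps to the same masked identifier, which preserves referential integrity for testing.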
Fundamentally, the importance of software testing must be understood, and that includes having processes in place that maximise the efficiency of testing and controls over how all test assets, including test data, are created and managed.