Nobody should rejoice in the problems faced by TSB in its migration to the Sabadell system. TSB’s customer base has suffered, the bank’s reputation has been hammered and, inevitably, fraudsters have taken advantage of the situation. Alongside the problems
faced by TSB and its owners, the industry as a whole has suffered. Every time this sort of event occurs, the media not only delights in the affected institution’s difficulties, but also reminds consumers of other similar events that have afflicted the sector.
I was talking to our Chief Operating Officer about these issues and we agreed that things needed to change. The reality for our industry is that every organisation undergoing a major migration, or even a significant release of a new system, is exposed to the kind of risk that derailed TSB. One of the key reasons for this has been known for some time, yet remains unaddressed by the industry at large: the way testing is done.
I don’t know how the test programme was designed and executed at TSB. It’s too easy to say it simply didn’t work, because up to a point it clearly did: the bank would not have committed to the migration had the testing programme not indicated it was safe to do so. Yet, in the final analysis, the results were clearly flawed.
I imagine the bank invested millions in additional people to complete the test exercise. Cap Gemini’s finding that testing accounts for close to 50% of IT programme costs would seem to support this. Equally likely, though, is that the bank employed the standard approach of using a variety of tools, mocks, stubs and other elements that generated independent results, which, while proving that components worked in isolation, could not validate the end-to-end operation of the system.
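To make the isolation problem concrete, here is a minimal sketch of how a test against a stub can pass while the end-to-end flow would fail. Everything in it is invented for illustration – the stub encodes an assumption (amounts in pounds) that the real downstream system does not share (it expects integer pence) – and no claim is made about TSB’s actual systems.

```python
# Hypothetical illustration: a unit test against a stub gives false
# confidence, because the stub's behaviour diverges from the real
# downstream system's. All names and rules here are invented.

class LedgerStub:
    """Test double standing in for the real ledger service."""
    def post(self, account, amount):
        # The stub happily accepts a float amount in pounds.
        return {"status": "ok", "account": account, "amount": amount}

class RealLedger:
    """The actual downstream system: amounts must be integer pence."""
    def post(self, account, amount):
        if not isinstance(amount, int):
            raise TypeError(f"ledger expects integer pence, got {amount!r}")
        return {"status": "ok", "account": account, "amount": amount}

def transfer(ledger, account, pounds):
    # New system under test: passes the amount through unchanged.
    return ledger.post(account, pounds)

# Isolated test against the stub: passes.
assert transfer(LedgerStub(), "12345678", 9.99)["status"] == "ok"

# End-to-end against the real ledger: the very same call fails.
try:
    transfer(RealLedger(), "12345678", 9.99)
    print("end-to-end passed")
except TypeError as exc:
    print("end-to-end failed:", exc)
```

Both tests exercise identical code in the system under test; only the environment differs, which is exactly why a suite of green isolated tests says so little about go-live risk.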
Testing technology is an area of IT that is often ignored. Replacing the testing technology in a financial institution requires an infrastructure-scale investment, yet almost all spend has to be justified at a project level. Trying to fund a centre of testing excellence within that kind of budgetary approach proves impossible for almost any organisation. I’ve lost count of the times I’ve been told that technology teams know they need to improve their testing, and that their testing is by definition flawed, but that the ROI model precludes any investment beyond that required for the specific project in hand. The industry really has to start taking this issue seriously, or face further front-page headlines describing the impact of another banking system failure.
The fact is that the necessary technology does exist. Platforms can be deployed that assure the end-to-end operation of a banking system, saving time and money and increasing efficiency – a state often seen as some kind of unattainable Holy Grail. In addition, these platforms can model the behaviour of an existing system – its inputs and outputs – and then allow the new system to be tested against that model, enabling far more realistic ‘real world’ testing and massively reducing the risk at go-live.
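The modelling idea above can be sketched in a few lines: record the existing system’s observed input/output pairs, replay the same inputs against the replacement, and diff the outputs before go-live. This is a toy sketch of the general record-and-replay technique, not any specific vendor platform; both “systems” are simple functions invented for illustration.

```python
# Minimal sketch of record-and-replay testing: capture the legacy
# system's behaviour as (input, output) pairs, then check that the
# replacement reproduces it. All names here are illustrative.

def legacy_interest(balance_pence, days):
    # Existing system: simple interest at 1% p.a., truncated to pence.
    return balance_pence * days // 36500

def new_interest(balance_pence, days):
    # Replacement system, intended to be behaviour-identical.
    return (balance_pence * days) // 36500

def record(system, inputs):
    """Build a behavioural model: a list of (input, observed output)."""
    return [(args, system(*args)) for args in inputs]

def replay(model, candidate):
    """Replay recorded inputs against the candidate; list mismatches."""
    return [(args, expected, candidate(*args))
            for args, expected in model
            if candidate(*args) != expected]

# Captured production traffic drives the comparison.
production_traffic = [(100_000, 30), (250_050, 365), (1, 1)]
model = record(legacy_interest, production_traffic)
print("mismatches:", replay(model, new_interest))  # prints mismatches: []
```

An empty mismatch list means the replacement matched the legacy behaviour on every recorded case; the value of the approach in practice comes from replaying traffic at production scale and variety rather than three hand-picked cases.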
The banking industry must stop thinking that this issue can be managed ‘on the cheap’ and instead invest properly in technology that can cope with the scale of the task. If it continues to ignore the problem, it will forever be exposed to the kind of headlines endured by TSB in the last few weeks.