That the payments industry is being reshaped by unprecedented waves of change is not news. A constantly evolving regulatory landscape, fast-developing technology, an influx of innovative new entrants and an increasingly demanding customer base are all driving
payments market participants to transform their businesses – and their technology – to remain relevant in the digital age.
Less well understood is the drag on payments technology transformation that the use of outdated testing methods can represent.
Some numbers provide a useful illustration. According to Capgemini’s 2017 World Quality Report, quality assurance (QA) and testing budgets have grown steadily every year since 2012. This year’s research, which brings together the views of 1,600 IT executives
worldwide in industries including financial services, shows that 31% of IT budgets are spent on testing, and Capgemini predicts this will rise to 40% over the next two years. Further, the research finds that, though test automation is vital for testing efficiency,
just 29% of testing activities are automated today.
View these numbers in the context of another – that, according to analyst IDC, financial institutions will spend more than $12 billion by 2019 on the transformation of their payments systems – and it is clear that the sums of money spent on testing are very
high, and the potential savings to be made through automation significant.
It makes no sense to invest so heavily in new payment systems without also investing in systems able to test them effectively. Targeting spend at the right types of automation not only enables financial institutions to get the best out of the 40% of IT budgets that could go to testing; it also enables them to reduce that expenditure significantly. The imperative for financial institutions to realise these savings is strong. Payments are being commoditised, margins are under pressure, the introduction of new real-time rails requires additional investment (with a very unclear business case for banks in the short term), and regulation such as the revised Payment Services Directive (PSD2) is forcing financial institutions to open up access to their customers’ accounts, making it easier for nimbler, legacy-free third-party providers to compete with them.
Intensifying the pressure on banks, cost is not the only consideration. Speed is also vital – especially given the onslaught of lithe new entrants – as is robustness, in an environment in which IT failures damage customer relationships, reputations and bottom lines.
Payments change creates many opportunities for banks, but to reap those benefits, banks must ensure that their systems transformation projects are implemented as efficiently, safely and rapidly as possible. In turn, this means ensuring that every aspect of their technology delivery – including testing – is fit for the digital age, optimising every point on the triangle of cost, speed and risk, and securing the best bang for buck through the application of the most up-to-date testing capabilities.
Manual Testing: A source of cost and risk
According to the Capgemini 2017 World Quality Report, the top five factors contributing to increasing test budgets are more developments and releases (52%), a shift to Agile and DevOps causing more test iterations (41%), increased challenges with test environments
(36%), businesses demanding higher quality IT (33%) and detection of more defects which leads to more/longer test cycles (31%).
There is more testing to be done, but, as the Capgemini report shows, automation rates are running at less than 30%. This means there is far too much manual testing, often facilitated by a confusing maze of standalone simulators, in-house developed tools and code created on an ad hoc basis. In a manual testing environment, hours are wasted as testers wait for access to devices or schemes to complete their work – with the waste exacerbated by the time it takes to reset hardware and software environments when they do become available.
Automation eliminates this waste, enabling multiple teams around the globe to test concurrently. By contrast, manual approaches are clearly outmoded and out of step with the overarching drive to digitalise and, more importantly, they impede banks’ ability to keep pace with new competitors and the demands of increasingly exacting customers.
In addition, of course, manual processes are hard to scale, since throwing people at a problem typically adds confusion, cost, inefficiency and management overheads.
Manual approaches also make it more challenging for banks to leverage the latest testing techniques. Historically, testing has almost always been a necessity undertaken at the end of a project. Decision-makers and leaders want their test teams to give them
the ‘thumbs up’, and the reassurance that the new technology works and is ready to go. More modern thinking puts the focus not just on proving that software works, but on trying to break it, in order to identify, and then eliminate, weaknesses in the systems
being tested. In an automated environment, this is easier to do. Tests are consistent, can be run faster – and run over and over again with fewer overheads – and more tests can be run each time. The results are not subject to interpretation by manual testers, and no shortcuts can be taken.
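The contrast can be sketched in a few lines of code. The validator and checks below are entirely hypothetical – they are not drawn from the paper – but they illustrate the point about consistency: an automated suite runs the same deterministic assertions every time, leaving no room for interpretation or shortcuts.

```python
def validate_payment(amount_minor: int, currency: str) -> bool:
    """Hypothetical payment check: amount must be positive and the
    currency a three-letter alphabetic (ISO 4217-style) code."""
    return amount_minor > 0 and len(currency) == 3 and currency.isalpha()


def run_suite() -> dict:
    """Run the same fixed set of checks on every execution.

    Unlike a manual test pass, the expected outcomes are encoded in
    the suite itself, so results cannot drift between testers or runs.
    """
    cases = [
        ((1000, "GBP"), True),    # valid payment accepted
        ((0, "GBP"), False),      # zero amount rejected
        ((1000, "GB"), False),    # malformed currency rejected
        ((-50, "EUR"), False),    # negative amount rejected
    ]
    passed = sum(validate_payment(*args) == expected for args, expected in cases)
    return {"passed": passed, "total": len(cases)}


print(run_suite())  # → {'passed': 4, 'total': 4}
```

In practice such checks would live in a test framework and run automatically on every build, which is what allows them to be repeated "over and over again with fewer overheads".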
In other words, automated testing is more effective and manual testing less so, which means that where automation is present, firms not only get more bang for their buck from their IT investments, but testing is also safer. Where automation is absent, the danger of releasing an inadequately tested payments solution into the market is higher. The risk of reputational damage, rapidly and widely amplified by angry customers on social media, rises as a consequence.
This is an excerpt from a white paper produced by Finextra in association with Iliad Solutions. To learn more about testing in the digital age, you can view the full paper.