Julian Farley - OpenWay Europe

How many more software glitches in 2013?

08 February 2013

The BBC recently published an interesting article “Why banks are likely to face more software glitches in 2013”.  This is something that we should all be concerned about.  As payments become more of a commodity, we expect availability and demand convenience.

We all work hard to manage our finances and pay our bills on time, only to be let down by the bank and a software glitch. Remember the failure of the Faster Payments system at Lloyds Banking Group? Today payments are as important to us as our gas and electricity supplies. The major difference is that the testing requirements for gas and electricity appliances are regulated: you would not invite a gas fitter into your house who was not Corgi registered.

So why do banks have these issues? One answer is that the software used today is so complex. When new functions are coded, banks cannot guarantee that nothing will go wrong with the existing system. This was seen last year with the software upgrade that affected millions of customers at RBS, NatWest and Ulster Bank.

The problem of software glitches could also be related to the funding of these complex systems. Industry analysts at Ovum have predicted growth of 3.4% in the IT spend of retail banks around the world in 2013. This is in line with the prediction Celent released last month. However, read further and Celent predicted that 77.1% of IT investment in 2013 will go towards maintenance. In other words, banks are spending more on standing still than they are investing in new technology. The impact is that they have to manage change in legacy systems, where the risk of software glitches constantly increases.

Banks are unable to invest in new technology partly due to budget constraints, but also because no single person, or even group of people, can ever fully understand the end-to-end structure. It is time that banks regained control of their payment systems so that they can migrate to new systems which meet their requirements. This would minimize the need for unique customisations and in turn help to reduce the high maintenance costs.

The best way for banks to regain control is to simulate or model the existing system. They will then understand the functionality offered today, and therefore what is required in a new system. This simulation can then be used to assess new systems and to understand how new functionality can be offered without impacting current services.
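One lightweight way to build such a simulation is a "golden master" harness: capture real transactions, replay them through a behavioural model of the legacy system and through the candidate system, and flag every divergence for review. A minimal sketch in Python, where all names and fee rules are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Payment:
    amount_pence: int
    currency: str

def legacy_fee(p: Payment) -> int:
    # Behavioural model of the existing system, derived from observed outputs:
    # flat 35p fee on GBP payments, 50p on anything else (illustrative rules).
    return 35 if p.currency == "GBP" else 50

def candidate_fee(p: Payment) -> int:
    # The system under evaluation: same fees, but payments under £1 are free.
    if p.amount_pence < 100:
        return 0
    return 35 if p.currency == "GBP" else 50

def divergences(payments):
    """Replay recorded payments through both models and report any mismatch."""
    return [(p, legacy_fee(p), candidate_fee(p))
            for p in payments
            if legacy_fee(p) != candidate_fee(p)]

recorded = [Payment(50, "GBP"), Payment(2500, "GBP"), Payment(900, "EUR")]
diffs = divergences(recorded)
# Only the 50p payment diverges (fee 35 -> 0): an intended change, not a regression.
```

Each divergence is then classified as either an intended improvement or a regression, which is exactly the "understand before you migrate" exercise described above.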

Banks will then be able to select a new system with confidence, reducing maintenance fees and the risk of software glitches while improving customer satisfaction.

Tags: Payments, Retail banking

Comments: (7)

A Finextra member
A Finextra member | 08 February, 2013, 12:37

Completely agree on the complexity of the software! I'm often amazed how complex our systems are. However, the statement that "...banks lack the ability to ensure that nothing will go wrong with the existing system" is a bit of a simplification. When we update we test extensively for weeks on end, but often can't test for every scenario... and I'd like to see anyone, or any industry, who can 'ensure' nothing will go wrong, no matter what system we're talking about. I can't even do that with my home computer if I add something as simple as a new printer.

Your solution has merit; however, it comes under the heading of "Why don't they [insert solution]?" The answer is almost always money. What bank has the budget to do this?

Chris Dunne
Chris Dunne - VocaLink Limited - London | 08 February, 2013, 13:06

You make some excellent points, but I'm not convinced that modelling the BAU system is the complete answer. Core banking systems are rather like coral reefs - they grow organically over many years, and the stuff at the bottom is almost impossible to see or reach. However, they are delicate, and small changes can affect the whole ecosystem.

This is one of the biggest problems facing banks. They operate at scale, so any outage is highly visible, but keeping these systems going eats a great deal of their annual IT spend. New entrants don't have that problem, and it puts current players at a serious disadvantage.

The real opportunity to change comes at a big 'life event' for a bank - structural changes (such as a merger or divestment) or when the existing system cannot meet the demands of a new regulatory requirement.  At that point a bank can justify the expense; otherwise it is very hard to build a business case for massive re-engineering.

At that point modelling does become very useful - you need to describe what the old system did, precisely and unambiguously, when building the new one.  

A Finextra member
A Finextra member | 08 February, 2013, 15:36

Migration of a complex system is several years of hard work, which is why banks tend to keep them going well past their "sell by" date. Equally, any such migration takes valuable resource that is needed for new business functions and also adds significant risk, so may well result in outages rather than prevent them.

There is a growing trend to build new functions on "stand beside" (or in-front or behind) systems which add flexibility to the existing platform. A recent project of ours enabled a customer to reduce the cost and time to market for their new payment system requirement by 75%, without increasing risk or complexity. I think this has to be the way forward, allowing functions to be gradually migrated over time.

A Finextra member
A Finextra member | 11 February, 2013, 08:58

Will there be more IT glitches? Certainly, and software isn't the only area to blame. Often, such glitches have their root cause in the infrastructure underneath, which has become too complex as well.

Standard x86 servers come at pretty low prices (which is good), but running too many of them in highly complex server farms can still result in pretty high management costs and less-than-desirable service levels. Even worse than the high number of hardware components is the high number of software layers involved - and adding virtualization and cloud computing only increases complexity ...

Trying to emulate simplicity in an environment that is too complex isn't very likely to work well. Complexity kills reliability and security.

A payments server doesn't need an operating system with 50 million lines of code underneath. Going back to simpler and more robust systems will help get us back to the reliability and security we enjoyed twenty years ago. Those kinds of systems are still around, and they run on contemporary technology ...

Going back to simpler systems will also free up funding to invest in more modern applications. And don't worry, even when running on much simpler core infrastructure those applications can support all those fancy and colourful bells and whistles that today's customers do expect on their favourite end user devices ...

Ketharaman Swaminathan
Ketharaman Swaminathan - GTM360 Marketing Solutions - Pune | 12 February, 2013, 18:21

One thing I always hope to see in articles like these is what I call the "perfection requirement" that banks are uniquely subject to. Before shipping a car, the auto manufacturer can inspect all the units coming out of the shop floor, conduct QC, reject the subpar quality cars and prevent them from going out the door. This luxury is available to almost all industries except banking where high transaction speed precludes any inline QC. Despite that, when they work, most banking systems deliver 100% accurate results. This poses a very different pressure on maintenance of existing banking systems that few other industries face and perhaps makes it inevitable for banks to spend so much of their IT budgets on RTB activities.

Kyle Thom
Kyle Thom - Zafin - Vancouver | 14 February, 2013, 00:28

Instead of replacing legacy systems, which is akin to trying to replace an engine on a 787 Dreamliner in mid-flight, what if banks were able to wrap an innovation layer around them instead? Let's shift the focus of IT investment away from maintenance programs and back to innovation, empowering banks to deliver products and services that customers need.
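This "wrap, don't replace" idea is often implemented as a facade (or strangler-fig) layer: the wrapper intercepts the calls it wants to improve and delegates everything else to the legacy core untouched. A toy sketch in Python, where every class, method and rule is invented for illustration:

```python
class LegacyCore:
    """Stand-in for an untouched legacy core system (illustrative only)."""
    def balance(self, account: str) -> int:
        return 10_000
    def pay(self, account: str, amount: int) -> str:
        return f"LEGACY-PAY-{account}-{amount}"

class InnovationLayer:
    """Wrapper that adds new behaviour in front of the core and
    delegates everything it doesn't handle itself."""
    def __init__(self, core: LegacyCore):
        self._core = core
    def balance(self, account: str) -> int:
        # Pass-through: the legacy system keeps doing what it does today.
        return self._core.balance(account)
    def pay(self, account: str, amount: int) -> str:
        # New validation added in the wrapper, without modifying the core.
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self._core.pay(account, amount)

api = InnovationLayer(LegacyCore())
result = api.pay("A1", 500)   # validated by the wrapper, executed by the core
```

New features accumulate in the wrapper, and individual operations can later be migrated off the core one at a time, which keeps the engine running mid-flight.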
Ketharaman Swaminathan
Ketharaman Swaminathan - GTM360 Marketing Solutions - Pune | 14 February, 2013, 07:14

@KyleT: Internet Banking, Mobile Banking, eTrading - Applications like these, and many more, have been around for ages and involve wrappers around legacy systems. Keen to know if you've any other type of wrapper in mind.
