
2017: the year of software measurement and visibility?

When we managers review our teams’ performance, it is our responsibility to follow a fair process grounded in quantitative data, so that the outcome is objective. IT in the financial services industry works differently.

Ironically, given that most financial services firms these days are essentially software firms, the IT applications development and maintenance (ADM) teams integral to their success usually go unmeasured: there is no objective way to assess the performance of in-house ADM teams, let alone the external suppliers many financial services organisations use to supplement their in-house skills.

Rather, they rely on employee or management surveys, or on coarse metrics such as project delivery or system availability, to make decisions on career progression and remuneration. There are some indications that this is the year we make progress on this.

Wrong metric, wrong outcome

Financial services are largely data driven. The values of financial instruments such as bonds, shares and indices are measured not just daily but often down to the microsecond of each working day. However, when it comes to measuring IT software quality and risk, organisations tend to rely on subjective data and backward-looking indicators such as elapsed project time and budget spend.

At a meeting I had with the COO of a large financial services organisation, he boasted that his team has data on every envelope and stamp used to post statements to clients. By contrast, he has very little insight into the quality and quantity of the work delivered by his IT applications teams. This must change.

Without measurement, there is no real way for organisations to know whether the software their DevOps teams are building is fit for purpose, no means of calculating how much Technical Debt they are storing up for later, and no way of knowing when it needs to be addressed if high-profile outages are to be avoided. Greater visibility through in-depth software measurement reduces risk, cost and time, and it is the only reliable way to bring Technical Debt down.
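As a rough illustration of what calculating Technical Debt can look like in practice, here is a minimal Python sketch. It assumes a simple remediation-effort model, with hypothetical violation counts, fix times and an hourly rate; real measurement tools use far richer rule sets, but the principle of turning static-analysis findings into a cost figure and a debt ratio is the same.

    # Minimal sketch: turning static-analysis findings into a Technical Debt estimate.
    # The severities, fix times, hourly rate and counts below are all hypothetical.

    FIX_HOURS = {"critical": 8.0, "major": 3.0, "minor": 0.5}  # assumed effort per violation
    HOURLY_RATE = 75.0                                          # assumed blended cost per hour

    def technical_debt(violations):
        """Estimated cost to remediate all known violations."""
        return sum(count * FIX_HOURS[sev] * HOURLY_RATE
                   for sev, count in violations.items())

    def debt_ratio(debt, development_cost):
        """Technical Debt expressed as a share of what the application cost to build."""
        return debt / development_cost

    findings = {"critical": 12, "major": 140, "minor": 900}  # e.g. from a code-quality scan
    debt = technical_debt(findings)
    print(f"Estimated Technical Debt: {debt:,.0f}")
    print(f"Debt ratio: {debt_ratio(debt, development_cost=2_000_000):.1%}")

A figure like the debt ratio gives management a trend that can be tracked release by release, rather than a feeling.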

2016 was a year of spectacular data breaches and software outages, with Delta Airlines, Tesco Bank and Yahoo among the victims revealed. The message is simple: this is no time to rest on laurels; it is time to move towards better software quality. Organisations can do this by baselining what they have, identifying areas of inefficiency and ranking risks objectively against industry standards, so that they can be tackled in order of the risk they pose.

Measurement is strategic

New technology platforms such as Cloud, IoT, Big Data and Mobile have created a flurry of digital transformation initiatives. In many cases, the 'old guard' are using these platforms to try and compete with the more tech-savvy challengers. But how do organisations know if these programmes will actually deliver substantive sustainable value? It's time for a more strategic, measured approach.

Like all other application development projects, digital transformation includes planning, change management, communication plans, organisation empowerment and alignment. Measuring the progress and success of each of these elements means organisations are not ‘flying blind’. The risk with such initiatives is that the only metric that matters becomes ‘go live’. But the opportunity here is to put objective metrics around software quality in place.

Providing a greater level of transparency across the entire IT portfolio allows senior management to make informed decisions, based on objective metrics, about where IT resources should be spent. Prioritising the next development sprint could very well endanger parts of a critical IT infrastructure; unless organisations have a handle on it all, there is just no way of knowing.

Greater visibility, more quality, better business results

It’s not all bad news. In my experience over several decades in IT development, as soon as organisations start measuring IT quality, the performance and productivity of DevOps teams rise. Developers need feedback based on objective measurements, not just stand-up meetings, if they are to produce quality code rather than rush code out to meet weekly deadlines.

Common units of software measurement, such as function points and lines of code, can now be captured with automated tools. Manual function point counting is slow and leaves room for subjectivity; automated function point products remove that manual element, making the function point a practical standard unit of measure for software size and quality.
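To show how even a basic size metric becomes objective once it is automated, here is a minimal Python sketch that counts non-blank, non-comment lines per source file and produces a repeatable baseline of a codebase. It deliberately measures lines of code only; automated function point analysis is far more involved, identifying data and transaction functions rather than counting lines, but the benefit is the same: the measurement comes out identical no matter who runs it.

    # Minimal sketch: an automated, repeatable size baseline for a codebase.
    # Counts non-blank, non-comment lines per Python file under a directory.
    # This measures lines of code only; it is not function point analysis.

    from pathlib import Path

    def count_loc(path):
        """Lines that are neither blank nor pure '#' comments."""
        lines = path.read_text(encoding="utf-8", errors="ignore").splitlines()
        return sum(1 for line in lines
                   if line.strip() and not line.strip().startswith("#"))

    def baseline(root):
        """Size of every module, measured the same way every time."""
        return {str(p): count_loc(p) for p in Path(root).rglob("*.py")}

    # The ten largest modules in the current directory tree:
    for module, loc in sorted(baseline(".").items(), key=lambda kv: -kv[1])[:10]:
        print(f"{loc:6d}  {module}")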

Without a common unit of measurement for software quality, how do organisations compare their current and future IT applications and, moreover, how do they build quality into new software projects? The answer is that they do neither. As research we shall shortly publish shows, software developers’ outputs have until recently been largely unmeasured. The long-term results were sadly all too clear over the last 12 months.

There is some good news, though. For those worried about sticking to their New Year’s resolution to improve software quality, help is on hand. Bodies such as the Consortium for IT Software Quality (CISQ) have been helping organisations stay on the straight and narrow for years. Together with a renewed focus on software quality, they can provide the additional brainpower and a proven framework to make this the year when software quality finally improves in those organisations.
