In recent months we have been exploring ways of extending the working life of systems whilst requiring minimal outlay from our clients. Fortunately there appears to be a solution applicable to some types of legacy system: solid-state drives (SSDs).
Few business lines have the luxury of utilising the latest and greatest cache-based technology solutions, which leverage the availability of cheap memory and modern message-based architectures. Most, in the budget-constrained real world, rely on workhorse solutions whose architectures date back 10 years or more. Back then memory was expensive and disk relatively cheap. These solutions, be they bespoke or off the shelf, have designs that are heavily reliant on file processing, or have evolved from flat-file storage solutions.
Given current market conditions, these systems are frequently required to perform beyond their original operational parameters and to cope with the increasing volumes generated by ever-thinner margins. Without the budget for radical solution replacement, re-architecting or refactoring, CTOs are left with little option but to invest in "tweaking" in the hope of achieving meaningful improvement. This tweaking often increases the operational risk profile of the solution: the skills required to perform such operations are rare and seldom found on clients' permanent headcount; vendor support is strained as the gap between the standard vendor product and the custom implementation widens; and the potential for service interruption rises as the solution becomes more fragile.
As an alternative, we have seen some forward-thinking organisations investing in SSDs, whose prices are returning to their low point after flood-related increases and trending downward. SSDs give these ageing systems a shot in the arm. Unsexy though it may be, much processing is still batch-based and involves many read-intensive processes to ingest end-of-day positions, transactions and the like. SSDs are well suited to transitional storage of data in the ingest and output phases. Crucially, they sit at a tier of the stack that is isolated from the solution code, so adopting them requires zero code change. Zero-code-change improvements are obviously appealing: they are less risky, can be easily tested and benchmarked, and can be predictably estimated and budgeted for.
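To illustrate how little is involved, one common zero-code-change approach is to relocate the application's staging directory onto an SSD-backed filesystem and leave a symlink at the original path, so the application is none the wiser. A minimal sketch follows; the mount point and staging path are assumptions for illustration, not a specific product's layout (the sketch defaults to temporary directories so it can be run safely):

```shell
# Sketch only: in a real deployment SSD_MOUNT would be the SSD filesystem
# (e.g. /mnt/ssd) and APP_STAGING the application's existing staging directory.
SSD_MOUNT="${SSD_MOUNT:-$(mktemp -d)}"
APP_STAGING="${APP_STAGING:-$(mktemp -d)/staging}"
mkdir -p "$APP_STAGING"

# 1. Create a staging area on the SSD.
mkdir -p "$SSD_MOUNT/staging"

# 2. Preserve the existing directory, then swap in a symlink so the
#    application keeps reading and writing its original path unchanged.
mv "$APP_STAGING" "${APP_STAGING}.hdd.bak"
ln -s "$SSD_MOUNT/staging" "$APP_STAGING"
```

Because only the filesystem behind the path changes, the swap can be rehearsed in a test environment and rolled back by removing the symlink and restoring the backed-up directory.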
The benefits? There are currently too few consistent data points to make a scientific analysis, each organisation's existing implementation being custom, but we have seen order-of-magnitude improvements in un-tuned implementations.
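Given the scarcity of published numbers, the quickest way to gather data points for your own estate is a crude side-by-side read test of an HDD-backed and an SSD-backed directory. A minimal sketch in Python (the function name and the directory you point it at are illustrative assumptions; note that the OS page cache will flatter the result unless the file is larger than memory or caches are dropped):

```python
import os
import tempfile
import time


def sequential_read_mb_per_s(path, size_mb=64, block_size=1 << 20):
    """Write a scratch file on the target filesystem, then time a sequential
    read of it. Returns approximate throughput in MB/s. Crude, but enough to
    compare two storage tiers side by side. Caveat: the OS page cache will
    inflate the figure unless size_mb exceeds available RAM."""
    scratch = os.path.join(path, "ssd_bench.tmp")
    chunk = os.urandom(block_size)
    with open(scratch, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hit the device

    start = time.perf_counter()
    with open(scratch, "rb") as f:
        while f.read(block_size):
            pass
    elapsed = time.perf_counter() - start

    os.remove(scratch)
    return size_mb / elapsed


if __name__ == "__main__":
    # Substitute real HDD- and SSD-backed directories for a comparison.
    print(f"{sequential_read_mb_per_s(tempfile.gettempdir()):.0f} MB/s")
```

Running the same function against both tiers before and after a proof of concept gives a like-for-like figure that can go straight into the business case.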
So, CTOs, maybe there is space in your budget to try a proof of concept for your ageing systems? This could be your chance to give them a low-risk performance boost and demonstrate an ‘easy win’ to your increasingly demanding customers.