In the US there is a Big Data project to collect life-sign data from every patient. The idea is to monitor patients and their medical histories, then review whether any warning signs are consistent, say, before a heart attack. If those signs appear in future patients, they can indicate an imminent heart attack, so preventive measures can be taken and the attack averted.
In the IT world we have a wealth of data, but we typically throw it away or consider it too expensive to collect or keep.
System logs tell us many valuable things, but most are overwritten and never kept. Running diagnostics or traces is considered too much of an 'overhead' on live systems. That approach was understandable in days gone by, when disk and hardware were expensive.
But Moore's Law has held, and hardware prices continue to fall. So the question is: has the time come when low-level data is valuable and cost-effective to collect on a long-term basis? There are IT departments recovering from very public
outages today that would, I'm sure, consider information that could have prevented them very valuable indeed. The recent announcement by the FCA in the UK highlights that this is no longer just a customer service issue.
Collecting and keeping data is only part of the solution; it then needs to be analysed. But until it is collected as part of an all-data strategy, that process cannot even begin.
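As a minimal sketch of the "keep, don't overwrite" idea: instead of letting log rotation discard old files, each rotated file can be compressed into a date-stamped long-term archive, ready for later analysis. The function name, paths, and naming scheme below are illustrative assumptions, not a prescribed tool.

```python
import gzip
import shutil
from pathlib import Path
from datetime import datetime, timezone

def archive_log(log_path: str, archive_dir: str) -> Path:
    """Compress a rotated log file into a date-stamped archive
    rather than letting rotation overwrite it. (Illustrative sketch;
    names and layout are assumptions.)"""
    src = Path(log_path)
    dest_dir = Path(archive_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    # UTC timestamp in the archived file name keeps copies unique
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = dest_dir / f"{src.name}.{stamp}.gz"
    with src.open("rb") as f_in, gzip.open(dest, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)  # stream-compress, low memory use
    return dest
```

Compression matters here because it is what makes long-term retention cheap: text logs typically compress well, so the archive grows far more slowly than the raw log volume suggests.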
What's your log and diagnostic archive strategy? Is it part of your Big Data strategy? It could potentially be as valuable as the transaction data.
© Finextra Research 2016