What is Integrated Data Management?...
At a seminar a week or so back, I was asked the somewhat academic question "What is Integrated Data Management?". Certainly everyone seemed convinced that there will be fewer "Enterprise Data Management" (EDM) projects in future, given the expense, scope and scale of such projects. The consensus was that whilst the need for data management is now better understood across all financial institutions, data management projects will be bitten off in more manageable chunks by asset type, business function or division (so are silos back in fashion, I ask myself?!).
Coming back to the original question, I guess my slant on Integrated Data Management is that we are seeing more and more data management projects that have integrated reference data and market data elements to them, primarily driven by the need to sort out data quality, completeness and depth for use within risk management. So maybe some of the drivers behind data management projects are changing in light of the crisis to become more risk focussed.
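To make the integrated reference/market data point a little more concrete, below is a minimal sketch of the kind of completeness check such a project might run: joining a reference data set of instruments against the day's market data and flagging anything risk management would receive without a usable price. All the identifiers, fields and values here are hypothetical, purely for illustration.

```python
# Illustrative sketch only: join a (hypothetical) reference data set of
# instruments against a market data set of prices, flagging instruments
# that would reach risk management with no usable price.
reference_data = {
    "XS0001": {"asset_type": "bond", "maturity": "2015-06-01"},
    "XS0002": {"asset_type": "bond", "maturity": "2017-09-15"},
    "XS0003": {"asset_type": "bond", "maturity": "2020-01-30"},
}
market_data = {
    "XS0001": 101.25,
    "XS0003": None,  # instrument known, but no price contributed today
}

def completeness_report(reference, market):
    """Return the IDs of instruments with missing or unusable market data."""
    gaps = []
    for instrument_id in reference:
        if market.get(instrument_id) is None:  # absent or explicitly None
            gaps.append(instrument_id)
    return sorted(gaps)

print(completeness_report(reference_data, market_data))
# → ['XS0002', 'XS0003']
```

The interesting part is not the code but the join itself: neither the reference data nor the market data alone can tell you that XS0002 is a risk-relevant instrument with no price today.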
The Data Gap Between Front and Back Office...
Related to risk management, given the origins of data management in STP/back office, and given the interest in low latency tick data management/analysis in the front office, there seems to be a market gap (particularly in the US?) in how to manage data such as IR/credit curves, volatility surfaces and other derived data sets. These data sets seem to fall into the gap between what is thought of as market data (primarily just prices) and what is reference data (IDs and terms & conditions). This is another area where a more integrated approach to data management would be beneficial, particularly in making all these datasets available for risk management.
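One way to stop derived data falling into that gap is to treat curves and surfaces as first-class data sets with lineage back to the raw market and reference data they were built from. The sketch below shows the idea with a hypothetical `DataSet` record; the names and categories are my own illustration, not any particular vendor's model.

```python
from dataclasses import dataclass, field

# Illustrative sketch: derived data sets (curves, vol surfaces) recorded as
# first-class objects with lineage back to the upstream market and reference
# data they were built from, so risk can see what went into them.
@dataclass
class DataSet:
    name: str
    category: str                               # "market", "reference" or "derived"
    inputs: list = field(default_factory=list)  # names of upstream data sets

usd_libor_quotes = DataSet("USD deposit/swap quotes", "market")
usd_conventions = DataSet("USD curve conventions", "reference")
usd_ir_curve = DataSet(
    "USD IR curve",
    "derived",
    inputs=[usd_libor_quotes.name, usd_conventions.name],
)

print(usd_ir_curve.category, usd_ir_curve.inputs)
# → derived ['USD deposit/swap quotes', 'USD curve conventions']
```

With something like this in place, a curve is no longer "just a price" or "just terms & conditions" but an auditable product of both.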
Unstructured Data Management...
Hoping the above title is not a contradiction in terms, but I think it is worth raising the issue that whilst it is fine to be doing great data management (high quality, complete datasets etc.), what is the point if all of your data is ignored by the front office, and Excel is used instead to download whatever data traders and risk managers need from Open Bloomberg? I think the management of unstructured data (spreadsheets, Word docs etc.) needs to be elevated as an issue, since this (unfortunately?) is where most data currently resides, despite what we data management professionals like to think.
Centralising Other Things, Not Just Data?...
I also think that the principles of good data management (centralisation, quality and transparency) could apply to more than just raw "data": what about centralised pricing and valuation, centralised curves and centralised scenarios for risk? Again, what is the point of doing good data management if the ultimate "information" (e.g. a valuation) is produced using poor quality data, with a complete lack of transparency over the data and model used?
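As a small illustration of the transparency point, a centralised valuation service could return the number together with the metadata describing how it was produced. The record below is a hypothetical sketch (the snapshot label, model name and figures are invented), but it shows the shape of the idea.

```python
from dataclasses import dataclass

# Illustrative sketch: a centralised valuation result that carries its
# transparency metadata (which data snapshot, which model/version) alongside
# the value itself, so downstream users can see how the figure was produced.
@dataclass(frozen=True)
class Valuation:
    instrument_id: str
    value: float
    data_snapshot: str   # which centralised data set was used
    model: str           # which model/version produced the value

v = Valuation(
    instrument_id="XS0001",
    value=101.25,
    data_snapshot="EOD-2009-05-29",
    model="DiscountedCashflow v1.2",
)
print(v.value, v.data_snapshot, v.model)
```

The point is simply that a bare number with no snapshot or model attribution cannot be challenged or reproduced, whereas this one can.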
More Robust Pricing Models...
A good question at the seminar was asked about models: given that pricing models and their weaknesses have played some part in the recent crisis, do we need more complex models? Having had a few conversations about this and thought about it some more, my view is that we do not necessarily need more complex pricing models and valuation techniques, but we certainly need more robust ones, which does not necessarily imply more complexity. Coming back to a point raised by David Rowe previously, I think all quants and risk managers should think about a "second means of valuation" for all the theoretical models they use, and hedgeability (see recent post on pricing model validation) seems to be the common theme in producing more robust pricing models.
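A very simple example of a "second means of valuation" is checking a model price against a model-free, hedge-based relationship. The sketch below prices a European call directly with Black-Scholes, then recovers the same price via put-call parity (which rests only on a static hedge, not on the model's assumptions) and checks the two agree. The parameters are arbitrary illustrative numbers.

```python
import math

# Illustrative "second means of valuation": price a European call directly
# with Black-Scholes, then again via put-call parity (a model-free,
# hedge-based relationship), and check the two agree.

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes(spot, strike, rate, vol, expiry, kind):
    """Black-Scholes price of a European call or put."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * expiry) / (vol * math.sqrt(expiry))
    d2 = d1 - vol * math.sqrt(expiry)
    if kind == "call":
        return spot * norm_cdf(d1) - strike * math.exp(-rate * expiry) * norm_cdf(d2)
    return strike * math.exp(-rate * expiry) * norm_cdf(-d2) - spot * norm_cdf(-d1)

spot, strike, rate, vol, expiry = 100.0, 100.0, 0.05, 0.2, 1.0
call_direct = black_scholes(spot, strike, rate, vol, expiry, "call")
put = black_scholes(spot, strike, rate, vol, expiry, "put")

# Put-call parity: C = P + S - K * exp(-rT), independent of the pricing model.
call_via_parity = put + spot - strike * math.exp(-rate * expiry)

assert abs(call_direct - call_via_parity) < 1e-10
print(round(call_direct, 4))
# → 10.4506
```

Parity is of course the trivial case, but the discipline generalises: if a theoretical value cannot be reconciled with the value of some replicating or approximating hedge, that gap is exactly where model risk lives.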