Continued regulatory pressure and a new wave of technologies are shaping how financial institutions tackle Big Data. But with so many vendors offering a raft of data management systems, risk managers struggling to achieve a single aggregated view, and IT divisions wrestling with long implementation times on legacy systems, is it time for a new approach?
In the traditional data space there were distinct products on the market, ranging from data storage and analytics engines to caches and quality management platforms. With Big Data taking centre stage, new alternatives are emerging with impressive feature sets and licensing models. Financial institutions have begun to adopt the new wave of mainstream Big Data products such as Hadoop, Cassandra, Acunu, MarkLogic and MongoDB. These solutions are commonly applied to areas such as centralised risk and legal warehousing, single-dealer platforms and trading analytics.
A common trend is firms applying these new technologies on a like-for-like basis, reducing Big Data to a cost-cutting exercise built on commodity software and hardware. But this is not just a race to rip and replace technology. Many financial institutions are missing an opportunity to address the ongoing operational impact and achieve more sustainable benefits. With regulatory compliance taking the limelight, operational efficiency projects are all too often pushed to the back seat, driven in part by the high cost and slow return on investment of traditional approaches.
With Big Data projects, initial capital expenditure is one factor, but the real gains come from tackling ongoing operational expenditure. Traditional big bang approaches to these projects are no longer viable. Instead, we are seeing agile processes in which firms identify the business problems best suited to Big Data, run proofs of concept, and develop frameworks for iterative implementation.
Collateral optimisation is one area where we are seeing particular gains. Here, a clear, integrated view of collateral requirements and asset pools across the global enterprise is essential. Estimates of the global collateral shortfall range between USD 2 trillion and 11 trillion, so any financial institution not optimising its collateral is at a disadvantage in an increasingly competitive OTC derivatives trading market.
Avoiding this disadvantage requires efficiently combining data sets from multiple sources across the business into a single view. Collateral may be held somewhere within the organisation, raised from the market via a financing transaction, or simply sitting idle in a silo as a buffer. The ability to mobilise and post eligible collateral has a direct effect on a firm's trading capacity, which in turn affects available working capital, placing further pressure on meeting regulatory capital demands.
To make the best collateral choices, firms must have the broadest possible view of the assets they hold, regardless of which business line they fall under. As these data sets are commonly held in multiple systems, the technical challenge of aggregating that view is considerable. We are seeing cases where the right approach, even with first-generation optimisation tools, achieves measurable reductions in collateral costs across the firm, largely by uncovering large amounts of underutilised or ineffectively deployed collateral.
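To make the aggregation-and-selection idea concrete, the core of such a process can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions: the asset fields, silo names and the greedy cheapest-to-deliver rule below are hypothetical, not a description of any particular vendor's optimisation engine, and a real system would also handle haircuts, concentration limits and currency conversion.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """A hypothetical, simplified asset record drawn from one source system."""
    asset_id: str
    business_line: str       # the silo the record came from
    asset_type: str          # e.g. "govt_bond", "corp_bond", "equity"
    market_value: float      # in a common reporting currency
    funding_cost_bps: float  # annualised cost of posting this asset as collateral

def aggregate(*silos):
    """Merge per-silo asset lists into one firm-wide view, deduplicating
    records that appear in more than one system by asset_id."""
    seen = {}
    for silo in silos:
        for asset in silo:
            seen.setdefault(asset.asset_id, asset)
    return list(seen.values())

def cheapest_to_deliver(assets, eligible_types, required_value):
    """Greedily select eligible assets, cheapest funding cost first, until
    the margin requirement is covered. Returns (selection, value_covered)."""
    pool = sorted(
        (a for a in assets if a.asset_type in eligible_types),
        key=lambda a: a.funding_cost_bps,
    )
    selection, covered = [], 0.0
    for asset in pool:
        if covered >= required_value:
            break
        selection.append(asset)
        covered += asset.market_value
    return selection, covered

# Example: two silos holding overlapping records of the same bond.
repo_desk = [Asset("GB1", "repo", "govt_bond", 5_000_000, 10)]
deriv_desk = [
    Asset("CB1", "derivatives", "corp_bond", 3_000_000, 35),
    Asset("GB1", "derivatives", "govt_bond", 5_000_000, 10),  # duplicate of GB1
]
firmwide = aggregate(repo_desk, deriv_desk)
picked, covered = cheapest_to_deliver(
    firmwide, {"govt_bond", "corp_bond"}, required_value=6_000_000
)
```

Without the aggregation step, each desk would see only its own holdings; with it, the firm posts the cheapest eligible assets across silos rather than whatever happens to be visible locally.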
Banks have a long history of installing well-hyped technology at great cost and with little business benefit. By taking a longer-term view and adopting more agile approaches, firms can turn Big Data implementation from a risky big bang event into an ongoing, evolutionary process of optimisation, delivering not only a fast time to market but also a rapid return on investment.