Big Data Blog – Volcker “the Ultimate Big Data Challenge”
Forbes Magazine published an article in December with the ominous title “Volcker Compliance: The Ultimate Big Data Challenge”. The article refers to the Volcker Rule, a critical piece of the United States’ Dodd-Frank reform bill. The law was enacted in response to the 2008 credit crisis; the Volcker Rule specifically aims to stop commercial banks from proprietary trading, that is, speculating from their own accounts. By inhibiting this type of trading, the US government makes commercial banks more resistant to market fluctuations and protects their clients from bank default.
So why is this a challenge for the banks? Well, it is not easy to separate the trading a bank does for itself from the trading it does on behalf of its clients. To draw that line accurately, the bank must calculate metrics such as inventory age, a measure of how long the bank has held each of its trading positions, and inventory turnover. If a bank holds onto a position for a long period of time, it suggests the bank is invested in that position for its own account, essentially conducting proprietary trading.
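At its simplest, inventory age is just the time a position has been on the books. The sketch below illustrates the idea with a hypothetical helper function; it is a simplification for exposition, not the bank's actual methodology.

```python
from datetime import date

def inventory_age(first_seen: date, as_of: date) -> int:
    """Days the bank has held a position as of a given date.

    A simplified illustration: real calculations also weigh
    quantity changes, turnover, and other metrics.
    """
    return (as_of - first_seen).days

# A position opened on 1 Nov 2014, measured at the end of January:
print(inventory_age(date(2014, 11, 1), date(2015, 1, 30)))  # 90
```

A position with an age of 90 days would look far more like a proprietary bet than one turned over within a day or two.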
The big data part of this story comes from the mechanism for measuring these metrics: each day the bank needs to review all of its trading positions (tens of millions in a large investment bank) and compare them to the previous day’s positions, which were in turn derived from the day before, and so on. Today’s snapshot alone is not enough; each day the bank has to review hundreds of millions of data points to make the calculation, quite a challenge indeed.
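The day-over-day comparison described above can be sketched as rolling yesterday's ages forward using today's snapshot. The record shapes and position IDs here are hypothetical, chosen only to show the mechanic.

```python
def roll_ages(yesterday: dict, today_positions: dict) -> dict:
    """Carry inventory ages forward one day.

    yesterday: {position_id: (quantity, age_in_days)} from the prior snapshot.
    today_positions: {position_id: quantity} from today's snapshot.
    Positions seen yesterday age by one day; new positions start at age 0;
    positions absent today drop out (they were closed).
    """
    result = {}
    for pos_id, qty in today_positions.items():
        prev = yesterday.get(pos_id)
        age = prev[1] + 1 if prev is not None else 0
        result[pos_id] = (qty, age)
    return result

yesterday = {"POS1": (100, 4), "POS2": (50, 0)}
today = {"POS1": 100, "POS3": 200}  # POS2 was closed; POS3 newly opened
print(roll_ages(yesterday, today))
# {'POS1': (100, 5), 'POS3': (200, 0)}
```

At the scale of tens of millions of positions, this daily join between snapshots is exactly the workload that pushes the problem into big data territory.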
For one of our UK investment banking clients, we recently put into production a big data solution that does just that. Using Hadoop, we designed and built a MapReduce processing engine that imports and transforms data from a wide variety of upstream systems and then uses that data to calculate inventory age for all of our client’s positions globally. Because the system is built on distributed computing techniques, it scales easily as the bank inevitably increases the number of trades it makes. The system has been live since January and has clearly demonstrated the viability and power of big data in the bank, which is now looking to extend the functionality to additional analyses and metrics.
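To make the MapReduce pattern concrete, here is a minimal in-process sketch: a mapper emits a key-value pair per daily position record, and a reducer derives an age from all the days on which a position appeared. The record shape is a hypothetical stand-in; the real engine runs on a Hadoop cluster, not a single Python process.

```python
from collections import defaultdict

def mapper(record):
    """Emit (position_id, snapshot_day) for each daily record."""
    pos_id, snapshot_day = record  # hypothetical record shape
    yield pos_id, snapshot_day

def reducer(pos_id, days):
    """Age = span between a position's first and last snapshot day."""
    days = sorted(days)
    return pos_id, days[-1] - days[0]

# Toy input: POS1 appears on days 0-2, POS2 only on day 2.
records = [("POS1", 0), ("POS1", 1), ("POS1", 2), ("POS2", 2)]

# Shuffle phase: group mapper output by key (Hadoop does this for us).
groups = defaultdict(list)
for rec in records:
    for key, value in mapper(rec):
        groups[key].append(value)

ages = dict(reducer(k, v) for k, v in groups.items())
print(ages)  # {'POS1': 2, 'POS2': 0}
```

Because the mapper and reducer operate on independent keys, the framework can partition both phases across many machines, which is what lets the calculation scale with the bank's trade volume.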
It is not surprising that banks are very interested in big data technologies to improve their operational procedures and facilitate new analyses such as the inventory age calculation. A recent European IT Organization study showed that 90% of financial services
companies are increasing their investment in big data. What is perhaps surprising is that only 9% of those same companies have actually put a Hadoop system into production.
We are proud to be part of this elite group. We know, however, that it won’t be long until this last figure rises. Big data and financial services are an excellent match; the uses of these new technologies are limitless.
Blog updated: 30 May 2015 07:16:59