
Making sense of the Data Lake

Today, accurate data is king. But rather than searching for more data, firms are now looking at how best to store, access and manage the data they already hold. How can you tap into your data lake not only to meet regulatory requirements, but also to start innovating by applying that data in new and interesting ways? And how do you manage your data protection responsibilities in the process?

Wherever you sit, whether buy-side, sell-side or vendor, challenges must be overcome to meet existing and forthcoming reporting requirements. Firms must grapple with intraday and intra-minute data feeds arriving from various external sources in differing formats, which must then be streamlined into a format that flows easily into existing systems.
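To make that streamlining concrete, here is a minimal sketch of normalizing feeds that arrive in different formats into one internal record. The source names, field names and layouts are purely illustrative assumptions, not any particular vendor's feed:

```python
import csv
import io
import json

# Hypothetical unified internal record; field names are illustrative.
def normalize_csv(line):
    """Parse a CSV feed row of the form: instrument,price,quantity."""
    row = next(csv.reader(io.StringIO(line)))
    return {"instrument": row[0], "price": float(row[1]), "qty": int(row[2])}

def normalize_json(line):
    """Parse a JSON feed message with vendor-specific key names."""
    msg = json.loads(line)
    return {"instrument": msg["isin"], "price": float(msg["px"]), "qty": int(msg["size"])}

# Route each source to its own parser, so every feed lands in the same shape.
NORMALIZERS = {"vendor_csv": normalize_csv, "vendor_json": normalize_json}

def normalize(source, payload):
    return NORMALIZERS[source](payload)
```

Each new feed then only needs one additional parser registered; downstream systems see a single format regardless of origin.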

Once the data is captured, there is a further challenge in reporting it back to regulators in a timely and accurate fashion, with different regulations requiring different formats and levels of granularity. For example, under MiFID II a slew of data needs to be streamlined into one transaction hub in the bank, a huge challenge for firms that may be running legacy systems, or that may even still confirm trades manually for illiquid securities. But failure to comply is not an option: regulators have not hesitated to impose punitive public sanctions on firms that fail to get their data houses in order.

In the MiFID II era, firms must provide 65-field transaction reports, with fields ranging from asset type, issuer and maturity to seller, transaction creator and end recipient. The advent of SFTR is expected to drive additional demand for transforming data from existing internal messaging formats, such as FpML or FIX, into ISO 20022 to meet new reporting requirements.
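The shape of such a transformation can be sketched as a tag-level mapping. The example below maps a handful of FIX tags into a simplified ISO 20022-style XML fragment; the element names and the tiny mapping table are assumptions for illustration only, and a real report would need the full regulatory schema, not this subset:

```python
import xml.etree.ElementTree as ET

# Illustrative subset of a FIX tag -> ISO 20022 element mapping.
# Real reporting schemas define many more fields; element names here are simplified.
FIX_TO_ISO = {
    "55": "FinInstrmId",  # FIX Symbol
    "31": "Pric",         # FIX LastPx
    "32": "Qty",          # FIX LastQty
    "60": "TradDt",       # FIX TransactTime
}

def fix_to_dict(fix_msg, sep="|"):
    """Parse a delimiter-separated FIX message into a tag->value dict."""
    fields = (f.split("=", 1) for f in fix_msg.strip(sep).split(sep) if f)
    return dict(fields)

def to_iso20022(fix_msg):
    """Render the mapped FIX fields as a minimal ISO 20022-style XML fragment."""
    tags = fix_to_dict(fix_msg)
    root = ET.Element("Tx")
    for tag, elem_name in FIX_TO_ISO.items():
        if tag in tags:
            ET.SubElement(root, elem_name).text = tags[tag]
    return ET.tostring(root, encoding="unicode")
```

The point of the sketch is that the mapping lives in data, not code, so extending coverage to new fields means growing the table rather than rewriting the pipeline.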

Where does one even begin to unravel these data layers? One option, of course, is for firms to build large in-house teams to constantly monitor, manage and correct incoming and outgoing data, essentially "throwing bodies" at the problem. But as the problem grows relentlessly in complexity, with ever more moving parts, so does the risk of getting it wrong through human error. The alternative: specialist fintech vendors with a detailed understanding of the regulatory landscape can integrate seamlessly with existing systems and help streamline the entire data capture and reporting process, automating previously dense, complex and inefficient workflows.

If firms can ensure data is streamlined, verified, audit-ready, and conforms with regulatory reporting standards, the pain associated with this process is virtually eliminated. It is only once data processes have been improved that firms can start to innovate and offer truly exciting new products and services by applying technologies such as AI, robotics and machine learning. With all the data available today, this is a real possibility, but first you must unravel and streamline the data that exists. It is still the case that putting meaningless data "in" means receiving meaningless data "out". Find the solution that fits your business, and the conundrum is solved.
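The "verified, audit-ready" step above can be sketched as a pre-submission gate: check every mandatory field before a report leaves the firm. The field list here is a hypothetical subset invented for illustration, not the actual regulatory field set:

```python
# Hypothetical mandatory-field check run before a report is submitted.
# This field list is an illustrative subset, not any regulator's actual schema.
REQUIRED_FIELDS = ["trade_id", "isin", "price", "quantity", "buyer", "seller"]

def validate_report(report):
    """Return the mandatory fields that are missing or empty (empty list = pass)."""
    return [f for f in REQUIRED_FIELDS if not report.get(f)]
```

A gate like this turns "hope the regulator accepts it" into a deterministic pass/fail check that can be logged for the audit trail.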


