The vital importance of data is well understood in the financial services community. Banks everywhere have spent years investing heavily in a bid to extract value from the colossal and ever-expanding reserves of information at their disposal. Becoming a
‘data-driven’ organisation has been, and remains, a top strategic goal. The aims of this endeavour include the provision of a hyper-personalised customer experience, the reduction of operational costs through data-driven optimisation, and the ability for employees
to access trusted data through the best possible analytics and business intelligence tools.
In truth, the history of extracting value from data has been somewhat chequered. Despite the large sums that have been invested in building platforms to deliver automation and intelligence, the results have often been no better than middling.
Banks have been through a number of generations of technology in their quest. The first generation was made up of proprietary data warehouses and business intelligence platforms. Those solutions typically cost large sums to build, often generated reports
that required advanced qualifications in data science to interpret, and were primarily "rearview mirror" in nature. The second generation of platforms ushered in the era of big data and the data lake. The outcome was a fresh
wave of monolithic and complex ecosystems maintained by a central team of hyper-specialised data engineers. Like the data warehouses before them, data lakes at best left organisations with isolated pockets of usable analytics. Generation two had again overpromised
and underdelivered, in particular still lacking the ability to blend static and real-time data for genuine insight.
The most obvious drawbacks of these two approaches were their multifaceted complexity, their reliance on legacy systems and a legacy culture unsuited to handling data at volume, and the persistent and ultimately flawed idea that the best way to deal with
data is to centralise it in one huge repository.
The third generation of data platforms to emerge represented something of an improvement over the previous two, able to deliver, for example, streaming for real-time data availability. These platforms were also better geared for embracing cloud-based managed
services and machine learning. But they suffered from many of the underlying characteristics that led to the failures of previous generations, since they were still architected on centralised principles.
Banks are now waking up to a whole new way of dealing with data, one that is neither centralised nor monolithic, nor tied to inflexible and expensive legacy systems. It involves taking a much more distributed approach called data mesh. A data mesh (a concept
closely related to the data fabric) is a federated data architecture that treats data as a shared asset. This model lets organisations address today's data challenges in a unified and organic way, and it does away with the effort and expense of moving data to a centralised location.
The truth is that data is not naturally confined to a lake or a warehouse. Data, in its multitude of forms, is all around us, ubiquitous and spread around by its very nature. The idea of a data mesh is that rather than gathering data centrally, you create
an architecture that allows people to draw on it wherever it is found. It is also an essential foundation for organisational flexibility - the recent focus on credit operations and operational analytics prompted by the impact of COVID-19, for example, has shown
which organisations truly have the ability to bring enterprise-class analytics and reporting to bear.
Data mesh is a model that acknowledges the changing way that data is generated and used. The reality of the modern business landscape is a greater proliferation of data sources and a greater diversity of data use cases and users - coupled, in the majority
of cases, with a short lifetime of actionable value. It is also about a much faster speed of response to change. It’s about adaptability, flexibility and agility - with controls and governance. Distributed data means that the right people can access the data
they want when they want it, with no need to go through a filter of complex IT cycles. Data these days is a product, and its user a consumer. Mesh respects that.
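The "data as a product" idea above can be made concrete with a short sketch. This is a minimal illustration only, not a reference implementation: the names used here (DataProduct, Mesh, the payments example) are hypothetical. The point it shows is that each domain team owns and serves its own dataset, while a shared catalogue lets consumers discover and query products where they live, with no central copy of the data.

```python
# Minimal sketch of a data mesh catalogue (hypothetical names throughout).
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class DataProduct:
    """A domain-owned dataset exposed with an accountable owner and a query port."""
    name: str
    owner: str                            # the domain team accountable for it
    query: Callable[[dict], List[dict]]   # serves data from the domain's own store


class Mesh:
    """A federated catalogue: products stay in place, only metadata is shared."""
    def __init__(self) -> None:
        self._catalogue: Dict[str, DataProduct] = {}

    def register(self, product: DataProduct) -> None:
        self._catalogue[product.name] = product

    def discover(self) -> List[str]:
        return sorted(self._catalogue)

    def fetch(self, name: str, filters: dict) -> List[dict]:
        # The consumer queries the product in place, via its owner's port.
        return self._catalogue[name].query(filters)


# Example: a payments domain publishes a product served from its own store.
payments_store = [
    {"id": 1, "amount": 120, "channel": "card"},
    {"id": 2, "amount": 75, "channel": "transfer"},
]

mesh = Mesh()
mesh.register(DataProduct(
    name="payments.transactions",
    owner="payments-team",
    query=lambda f: [r for r in payments_store
                     if all(r.get(k) == v for k, v in f.items())],
))

card_payments = mesh.fetch("payments.transactions", {"channel": "card"})
```

The design choice to note is that the mesh holds only the catalogue; the data itself never leaves the payments domain, which is what removes the cost of centralising it.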
The benefits of a data mesh are numerous:
It can support multiple, diverse users and use cases with shared data assets, while offering optimised data management and integration.
You can massively speed up the time it takes to get value from data by unlocking its power wherever it lives, be that on-premises, in the cloud, or a hybrid of the two. Data is available at the pace of your business, not at the best pace the
technology allows - a point of particular relevance to mainframe data access.
Data mesh empowers employees by giving them timely, consistent and trusted information when they want it. This so-called democratisation of data is all about arming people at the coalface with the ability to make faster and more accurate business decisions.
With data mesh, your organisation can embrace new ideas faster. You can enjoy advances in analytics technology and data science ahead of the competition.
The streamlining of data management and integration processes will create efficiencies and save money, and will allow you to embed AI, ML and customer self-service into your processes. The result is happier, better-served customers and more efficient internal operations.
You can enjoy better data governance and control so you can deliver the right data at the right time, securely, and in compliance with an ever-changing regulatory landscape.
Data mesh works well with moves to transform your organisation digitally, is a great fit with multi-cloud migration strategies and helps you to get full benefits from other innovations like digital twinning and digital resilience.
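The governance benefit above rests on policies being applied consistently at the point of access, rather than by funnelling everything through a central team. The sketch below is a hypothetical illustration of that idea (the function name mask_pii and the field names are assumptions, not part of any real product): one shared rule governs which consumers see sensitive fields, and every domain serves its data under that same rule while the data stays where it is.

```python
# Hypothetical sketch of federated governance: one shared access policy,
# applied wherever the data is served.
SENSITIVE_FIELDS = {"customer_name", "iban"}


def mask_pii(record: dict, consumer_role: str) -> dict:
    """Apply the shared policy: only the 'compliance' role sees sensitive fields."""
    if consumer_role == "compliance":
        return dict(record)
    return {k: ("***" if k in SENSITIVE_FIELDS else v)
            for k, v in record.items()}


record = {"id": 9, "iban": "GB00XXXX", "amount": 50}
analyst_view = mask_pii(record, "analyst")        # sensitive fields masked
compliance_view = mask_pii(record, "compliance")  # full record visible
```

Because the policy lives in one place but executes in each domain, a regulatory change means updating the rule once, not re-engineering a central repository.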
Mesh points to a future where data is easy to make sense of and easy to consume. It’s about democratising data at scale to provide business insights, taking banks closer to the full automation of intelligent decision making.