While getting to grips with open banking regulation, skyrocketing transaction volumes and rising customer expectations, banks have been rolling out major transformations of their data infrastructure and partnering with Silicon Valley’s most innovative tech companies to rebuild the banking business around a central nervous system.
This approach is often labelled event stream processing (ESP): it connects everything happening within the business, including applications and data systems, in real time.
ESP allows banks to respond to a series of data points (events) derived from a system that continuously creates data (the stream), and to leverage that data through aggregation, analytics, transformation, enrichment and ingestion.
ESP is instrumental where batch processing falls short and action needs to be taken in real time, rather than on static data at rest. However, handling a flow of continuously created data requires a special set of technologies.
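To illustrate the kind of continuous processing ESP enables, here is a minimal Python sketch (not Nationwide's or Confluent's actual code; the event fields are hypothetical) that aggregates a stream of payment events into running per-account totals as each event arrives, rather than waiting for a batch job over data at rest:

```python
from collections import defaultdict

def process_stream(events):
    """Consume an unbounded stream of payment events one at a time,
    maintaining a running per-account total -- the kind of continuous
    aggregation a batch job over static data cannot provide."""
    totals = defaultdict(float)
    for event in events:  # each event is handled as it happens
        totals[event["account"]] += event["amount"]
        yield event["account"], totals[event["account"]]

# Simulated stream; in production this would be consumed from a Kafka topic.
stream = [
    {"account": "A", "amount": 120.0},
    {"account": "B", "amount": 45.0},
    {"account": "A", "amount": -30.0},
]
for account, running_total in process_stream(stream):
    print(account, running_total)
```

In a real deployment the same fold-over-events logic would run inside a stream processor consuming from Kafka; the sketch only shows the shape of the computation.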
An ESP environment hosts a system that stores events ordered by timestamp, typically an open-source distributed event streaming platform such as Apache Kafka. Stream processors are then needed to help developers write applications that act on the incoming data for use cases such as payment processing, fraud detection and IoT analytics.
All of these use cases deal with streams of data points tied to a specific point in time. This calls for considering data granularity and for tracking individual changes to the data, a technique referred to as change data capture (CDC).
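The CDC idea can be sketched in a few lines of Python. This is a toy, with hypothetical names: production CDC tools typically read the database's transaction log rather than diffing snapshots, but the output is the same in spirit, a timestamped stream of individual change events:

```python
def capture_changes(before, after, ts):
    """Compare two snapshots of a table (dicts keyed by primary key) and
    emit change events -- a simplified stand-in for log-based CDC."""
    events = []
    for key, row in after.items():
        if key not in before:
            events.append({"op": "insert", "key": key, "row": row, "ts": ts})
        elif before[key] != row:
            events.append({"op": "update", "key": key, "row": row, "ts": ts})
    for key in before:
        if key not in after:
            events.append({"op": "delete", "key": key, "ts": ts})
    return events

before = {1: {"balance": 100}}
after = {1: {"balance": 90}, 2: {"balance": 50}}
for event in capture_changes(before, after, "2024-01-01T00:00:00Z"):
    print(event)
```

Each emitted event carries the operation, the affected key and a timestamp, which is exactly the granularity a downstream stream processor needs.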
After initially testing a way of creating a real-time data cache with CDC, Apache Kafka and microservices, Nationwide Building Society has gone on to build a stream processing backbone in an attempt to re-engineer the entire banking experience, across payments,
online banking and mortgage applications.
Collaborating with Confluent, the $4.5 billion-valued company whose founders open-sourced Apache Kafka while at LinkedIn and which is rumoured to be on the road to an IPO, Nationwide believes that to remain relevant and compete with the likes of Monzo, Facebook’s financial unit Libra, and Amazon Lending, legacy back-end systems must be relieved by streaming data in real time into Apache Kafka.
Rob Jackson, head of application architecture at Nationwide, highlights that security, scalability and resilience were prioritised when adopting Apache Kafka: with the emergence of 24/7/365 banking, there is no room for planned downtime to perform upgrades, given the impact this has on customer experience and the business as a whole.
“Beyond this, Kafka introduces a log of data changes, allowing us to (stream) process that data in real-time to derive new insights about our customers and how they’re interacting with our apps and software. It also opens up our existing data to new uses.”
Jackson adds that today that could mean materialised views in MongoDB, a cross-platform, document-oriented NoSQL database that supports APIs for open banking; tomorrow it could mean materialising that same data into Hadoop, a collection of open-source software utilities for distributed storage and processing, or into a graph database such as Neo4j, which uses graph structures of nodes, edges and properties to represent, store and semantically query data.
One example is the Nationwide Speed Layer, a source of events from multiple systems that allows the bank to merge, enrich and push interesting information to customers while maintaining service availability despite unprecedented demand.
The Speed Layer has given digital development teams agility and autonomy, so that event streaming and Apache Kafka can together mitigate the threat from agile challenger banks and Big Tech financial units.
Cache for cash
Traditional relational (SQL) databases and NoSQL databases present obstacles to the real-time data flows needed in financial services, but they still remain useful to banks. Jackson says that databases are good at recording the current state of data and allow banks to join and query it.
“However, they’re not really designed for storing the events that got you there. This is where Kafka comes in. If you want to move, create, join, process and reprocess events, you really need event streaming technology. This is becoming critical in the financial services sector, where context is everything – to customers, this can be anything from sharing alerts to let you know you’ve been paid to instantly sorting transactions into categories.”
He continues by saying that Nationwide is starting to build applications around events; in the meantime, technologies such as CDC and Kafka Connect, a tool that reliably streams data between Apache Kafka and other data systems, are helping to bridge older database technologies into the realm of events.
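Jackson's distinction between a database that records current state and a log that records the events which got you there can be sketched in Python. This is an illustrative toy, not Nationwide's design; the event types are hypothetical:

```python
# An append-only event log records every change; current state can be
# derived -- and re-derived for new use cases -- by replaying it.
events = [
    {"type": "account_opened", "account": "A", "balance": 0},
    {"type": "deposit", "account": "A", "amount": 500},
    {"type": "withdrawal", "account": "A", "amount": 120},
]

def replay(log):
    """Fold the full event history into current state, much as a stream
    processor materialises a view from a Kafka topic."""
    state = {}
    for e in log:
        if e["type"] == "account_opened":
            state[e["account"]] = e["balance"]
        elif e["type"] == "deposit":
            state[e["account"]] += e["amount"]
        elif e["type"] == "withdrawal":
            state[e["account"]] -= e["amount"]
    return state

print(replay(events))  # → {'A': 380}
```

A conventional database row would hold only the final 380; the log keeps the deposit and withdrawal that produced it, which is what makes reprocessing for new insights possible.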
Data caching technology can also play an important role in providing real-time data access for performance-critical, distributed applications in financial services, as it is a well-known, well-tested approach to dealing with spiky, unpredictable loads in a cost-effective and resilient way.
Jackson states that “it takes the data close to the applications that need it and can be structured based on how it’s consumed for that use case. For example, getting all the data you need for a screen in a single API request instead of orchestrating calls across multiple back-end systems and APIs.”
However, as Nationwide found, a standard read-through cache, in which responses from requests to back-end systems are cached, may not meet requirements for many use cases. Instead, the best solution may be a design based on a constantly up-to-date, pre-populated cache, with data from multiple sources routed and processed through Kafka and loaded into caches as needed. It is this design, Jackson explains, that provides customers with real-time transaction lists in their app.
“Although Speed Layer started out as a cache, it’s only one of the things we can do now, and this will continue to grow as we move more events through Kafka.”
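The difference between a read-through cache and the pre-populated design described above can be sketched as follows. This is a hypothetical illustration, not the Speed Layer's implementation: the cache is kept current by consuming change events as they arrive, so reads never have to fall back to a slower back-end call:

```python
class PrePopulatedCache:
    """A cache kept continuously up to date by consuming change events
    (e.g. from a Kafka topic), rather than filled lazily on read misses
    as a read-through cache would be."""

    def __init__(self):
        self.store = {}

    def on_event(self, event):
        # Each change event updates the cache the moment it occurs,
        # so every read is served from pre-populated, current data.
        self.store.setdefault(event["customer"], []).append(event["txn"])

    def transactions(self, customer):
        return self.store.get(customer, [])

cache = PrePopulatedCache()
# In production these events would be delivered by a Kafka consumer.
cache.on_event({"customer": "alice", "txn": "coffee -2.80"})
cache.on_event({"customer": "alice", "txn": "salary +2000"})
print(cache.transactions("alice"))
```

Because the cache is written on every event rather than on cache misses, the customer's transaction list is already current when the app requests it.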
Open banking and beyond
Rolling out major transformations of data infrastructure does not mean ripping out and replacing existing systems with technologies such as Apache Kafka or Nationwide’s Speed Layer; Jackson says that this would simply not work for a large financial institution.
“What we’re doing is adding to our data infrastructure so that we can better represent the existence of significant events with the plan that eventually Kafka will become the central nervous system that connects most of the different technology systems within
the bank.” While these transformations stop short of a wholesale overhaul, real-time event stream processing can benefit use cases such as open banking.
The need to optimise systems to handle the influx of data that open banking brings was one of the drivers for the Speed Layer, alongside cost-effective scaling, resilience and enabling cloud adoption. “The considerations here were the same as when architecting any enterprise system: resilience, scaling, cost, using the right tech for the right job, the longevity of the solution and so on.”
With customers generating large volumes of events, and with technology able to analyse and glean insights from those events in real time, banks can create timely experiences for their customers. By responding to situations promptly, banks can also establish new ways of engaging with customers and nurturing the relationship.
More importantly, a bank’s success ultimately depends on how satisfied customers are. By adopting core banking solutions with an event-based architecture, banks can dramatically improve customer interactions and respond to their customers in meaningful,
personal, and opportune ways.
For Nationwide, the Speed Layer will exploit more and more data for open banking, real-time analytics, machine learning and data aggregation from a variety of sources, deriving interesting events to push to customers. But how long will this take, and how far removed is this re-architecture of processes and infrastructure around customer needs from extract, transform and load (ETL)?
While utilising Apache Kafka is an evolution and, as Jackson says, “there are always more jobs it can do and use cases it can evolve to,” a persistent challenge is using the technology to move and process data, because it is so different from ETL and requires different thinking, tools and skills.
“ETL tools are mature, with well-worn approaches to data issues, so using Kafka meant having to work out some of these problems for ourselves. However, ETL would not have given us the source of events we were after, and the timeliness of data would have been lost.”
Not a replacement
What is evident is that working, efficient IT models are not being replaced. Alongside ETL, the original online transaction processing (OLTP) systems are “still very much alive and well in banking”, as Jackson puts it. “Instead, Kafka adds to our OLTP systems by understanding the events that occur in those systems and then storing and reacting to them all in real-time.”
Moving data from transactional databases into platforms dedicated to analytics is beneficial for workload isolation, so that queries cannot impact the source transactional application. It also supports workload optimisation, since analytics platforms are designed and configured for high-volume, ad hoc querying of data.
Further, siloed data can be gathered in one place, and storing large volumes of historical data in such an environment can be cost-efficient. Traditionally, getting data from a source system to a target was a batch-based task, but its limitations are clear, and event stream processing is the future.
“This is a big shift, and creating a central nervous system of IT doesn’t happen overnight. Being able to bring together data from systems that have typically operated in silos, and acting on data changes in real-time, brings opportunities we’ve only just started exploring. We are continually adding to our Kafka capabilities and running more applications through it, but we’re only really at the start of our Kafka journey. For us, it’s an ongoing initiative that doesn’t have an end date.
“However, the beauty of working on Kafka with Confluent is that we’re getting value right at the start. We don’t have to wait for a 'ta-da' moment, it’s provided value from the get-go,” Jackson concludes.