
How PSD2 will revolutionise Government Stats

The PSD2 countdown clock is well and truly ticking: the deadline is now only a couple of months away. It’s not exactly Y2K levels of excitement, but at least no one expects planes to fall from the sky.

Naturally, most of the discussion in the run-up has focused on what PSD2 will mean for banking. Whitepapers from the consultancy firms have gone for the FUD jugular on this one, presenting a simple strategic choice: “Hey Mr. Banker, in a post-PSD2 world, do you want to be stuck in the commodity ‘dumb-pipe’ business? Or would you prefer a ‘land of milk and honey’ where you can re-imagine your business model and re-define customer value through the monetisation of data?” I’ve often thought such papers should end with the Arnie line, “Come with me if you want to live.” 1

There is another angle though – and one which to my knowledge no one has written about.  I’m talking about stats.  Big stats.  Big Government stats. The ones that drive policy decisions that affect you and me, such as interest rate rises.  Here’s how.

You see, in a post-PSD2 world, we will be awash with (really, really) big data. The banks have never truly leveraged this data in all its glory, but third-party providers (aka Account Information Service Providers - AISPs, aka Account Aggregators) most definitely will. Now, at the customer level, this has some interesting possibilities. What about the idea of never having to shop for a banking product ever again? The banks will need to come to you with their best offer, rather than you having to go and find it. Bye-bye, price comparison site business model. Then there are the rates and terms – potentially no longer pre-defined by the banks, but tailored to you based on risk and loyalty. You and I, my friend, the customer – we really are going to be king!

But this is data distilled to the individual (micro) level. What about when we aggregate that data up to the macro level? What insights could the data from 40-million-plus UK bank accounts, for example, yield?

The Office for National Statistics (ONS) calculates price inflation based on a ‘basket of items’. Sure, it uses IT systems – but the process still requires a great deal of manual effort. It is slow, error-prone, costly and inherently imprecise, since the output is based on sample data. Now, what if I – and, let’s say, a small proportion of the 40-million-plus UK current account holders (a few million of us, say) – are happy for the ONS to see our spending data? Our wage data? All the services we consume? This gives a much clearer view of price inflation. And in real time. And without human error.
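
To make the idea concrete, here is a minimal sketch of how pooled, consented transaction data could feed a basic price index. It is illustrative only: the record format, field names and category scheme are my own assumptions, not any real AISP or ONS schema, and the index shown is a simple Laspeyres-style calculation rather than the ONS methodology.

```python
# Illustrative sketch only: a basic Laspeyres-style price index computed
# from pooled, consented transaction records. The input format is a
# hypothetical flat record, not a real AISP or ONS schema.
from collections import defaultdict

def price_index(transactions, base_period, current_period):
    """Compare average unit prices per category across two periods,
    weighted by base-period quantities (a crude Laspeyres index)."""
    spend = defaultdict(float)  # (period, category) -> total spend
    qty = defaultdict(float)    # (period, category) -> total quantity
    for t in transactions:      # t has keys: period, category, amount, quantity
        key = (t["period"], t["category"])
        spend[key] += t["amount"]
        qty[key] += t["quantity"]

    numerator = denominator = 0.0
    for (period, cat), base_q in list(qty.items()):
        if period != base_period or base_q == 0:
            continue
        base_price = spend[(base_period, cat)] / base_q
        cur_q = qty.get((current_period, cat), 0)
        if cur_q == 0:
            continue  # category absent in the current period; skip it
        cur_price = spend[(current_period, cat)] / cur_q
        numerator += cur_price * base_q    # current prices, base quantities
        denominator += base_price * base_q # base prices, base quantities
    return numerator / denominator if denominator else None

sample = [
    {"period": "2017-09", "category": "groceries", "amount": 200.0, "quantity": 100},
    {"period": "2017-10", "category": "groceries", "amount": 210.0, "quantity": 100},
]
print(price_index(sample, "2017-09", "2017-10"))  # -> 1.05, i.e. 5% inflation
```

The point is not the formula but the pipeline: with continuous, population-scale transaction data, a calculation like this could run every day rather than once a month from a hand-collected sample.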

It does not stop there, though. What if my spending habits (and those of my few million data cousins in this experiment) change just ever so slightly for the worse? We – as a group – generally start cutting back on ‘luxuries’: maybe a few fewer cars are bought in a given month compared to an expected norm, or perhaps a few fewer curries! Today, the Bank of England (BoE) is reliant on what are known as ‘confidence leading indicators’, the bellwether of which are the purchasing managers’ indices (PMIs). Yet they are anything but ‘leading’ indicators, lagged as they are by several weeks from data collection to data output. In a post-PSD2 world, we move to ‘always-on’ continuous data that is not prone to the vagaries of sample bias or human error and judgement. The BoE would have materially better data against which to make base rate decisions. Potentially, from the data, we could see a recession coming and amend policy accordingly to avert or minimise the impact.
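
In the same illustrative spirit, such an ‘always-on’ confidence signal could be as simple as watching aggregate discretionary spend against its trailing norm. The 12-week window and 5% threshold below are arbitrary choices for the sketch, not anything the BoE actually uses.

```python
# Illustrative sketch only: flag when aggregate discretionary spend drops
# materially below its trailing norm. Window and threshold are arbitrary.
from statistics import mean

def spending_signal(weekly_totals, window=12, threshold=0.05):
    """Return 'softening' if the latest week's aggregate spend is more
    than `threshold` below the average of the prior `window` weeks."""
    if len(weekly_totals) <= window:
        return "insufficient data"
    baseline = mean(weekly_totals[-window - 1:-1])  # the prior `window` weeks
    latest = weekly_totals[-1]
    return "softening" if latest < baseline * (1 - threshold) else "stable"

weeks = [100.0] * 12 + [93.0]  # a 7% dip in the latest week
print(spending_signal(weeks))  # -> 'softening'
```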

Whilst PSD2 and big data are necessary conditions for better statistical insight, they are not sufficient on their own. Ultimately, a new breed of data capture and analysis can only happen when cognitive computing (aka augmented or artificial intelligence) joins the party. For Government statisticians, the next decade could be a golden era, characterised by a switch from decisions based on rear-view, small-sample data to very large, open data sets that are real-time, continuous and very close to true population data. All of which will lead to better economic forecasting and, ultimately, more effective policy decisions.

Mike Foden is the Market Insights Lead in Banking and Financial Markets for IBM Europe.  He can be contacted at http://www.linkedin.com/in/mike-p-foden

The views expressed herein are those of the author and do not necessarily reflect those of the author’s employer.

 

1. https://www.youtube.com/watch?v=T-2FkSlShqo


Comments: (1)

Simon Wilson - valanticFSA - London, 08 November 2017, 10:36

Mike. I like the angle... but I have to question your final conclusion... "All of which will lead to better economic forecasting and ultimately more effective policy decisions." Assuming our politicians make decisions based on data (and therefore logic) is a big leap... whilst there is opportunity, the actual impact, I think, will be much less!
