The Big Data dilemma

How can we ensure that Big Data does not make us the prisoners of technology? That's the question posed by Charles Randell, chair of the Financial Conduct Authority and the Payment Systems Regulator, at a Reuters Newsmaker event in London.

Most of you will have interacted with several algorithms already today. In some cases, more algorithms than people. Algorithms are of course simply sets of rules for solving problems, and existed long before computers. But algorithms are now everywhere in digital services. An algorithm decided the results of your internet searches today. If you used Google Maps to get here, an algorithm proposed your route. Algorithms decided the news you read on your news feed and the ads you saw.

Algorithms decide which insurance and savings products you are offered on price comparison websites. Whether your job qualifies you for a mortgage. Perhaps, whether you are interviewed for your job in the first place. Algorithms may have helped you find your partner. Right now, your partner may be using an algorithm to find someone better.

This is why some have said that in the future we will live not in a democracy, where the citizens decide how we are governed, nor in a bureaucracy, where officials like me decide, but in an algocracy, where algorithms decide.

In the 1960s TV series The Prisoner, Patrick McGoohan plays a character who is abducted and held captive in an oppressive and surreal community called the Village, where the people have no names, just numbers. He is Number Six. It’s impossible to know whom he can trust and who or what the mysterious Number One is that sets the rules of the Village. He is subject to constant surveillance and manipulation.

How do we make sure that our financial future is not like the Village, governed by rules set by unaccountable data scientists, and implemented by firms who may not even understand how these algorithms reach their decisions?

Now, I want to argue today that we should be optimistic: advances in data science have already brought huge benefits to society, such as smarter ways of detecting financial crime and market abuse, cheaper and faster transactions and greater access to affordable financial advice and guidance. The UK FinTech industry is world leading and bursting with new ideas.

But there is no room for complacency. Three factors could come together to make an algocracy more than just science fiction:
1. Big Data: rapid advances in the ability to store data cheaply have created enormous and detailed datasets about many different aspects of our lives. The largest of these datasets are held and controlled by a small number of big corporations. As we move towards the internet of things, where our cars, our homes, our electrical appliances, our watches, our fitbits and our phones produce more and more data, these datasets could explode further.
2. Artificial intelligence and machine learning: vast improvements in processing power mean that corporations can mine these Big Data sets for patterns more effectively than ever before. Whereas in the past firms could only target broad groups of consumers, these patterns can now be turned into conclusions about us as individuals. They can make predictions about our future behaviour, and then decide which products and services we should be offered and on which terms (a minimal illustrative sketch follows this list).
3. Behavioural science: as firms understand more about human behaviour, they are able to target their sales efforts using ‘nudges’ which exploit our decision-making biases, informed by the Big Data about us that they hold. Some nudges may be in consumers’ interests, as with auto-enrolment for pensions, but there is the potential for them to be used against our interests too.
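
To make the second factor concrete, here is a minimal sketch, in Python, of the kind of individual-level propensity model such pattern-mining produces. Everything in it is an assumption for illustration: the feature names, the data and the weights are all invented, and real systems are far more elaborate.

```python
# Hypothetical sketch: scoring an individual's propensity to respond to an offer.
# All feature names, data and weights are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy dataset, one row per customer:
# columns are [visits_per_week, avg_session_minutes, past_purchases]
X = rng.normal(loc=[3.0, 10.0, 2.0], scale=[1.0, 4.0, 1.0], size=(1000, 3))

# Invented ground truth: heavier users are more likely to respond.
y = (X @ np.array([0.5, 0.1, 0.8]) + rng.normal(size=1000) > 4.1).astype(int)

model = LogisticRegression().fit(X, y)

# Scoring one individual, not a broad segment:
new_customer = np.array([[5.0, 25.0, 4.0]])
print("Offer propensity:", model.predict_proba(new_customer)[0, 1])
```

The point is not the specific model but the shift it represents: from marketing aimed at broad segments to decisions scored one individual at a time.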

Number Six protests: ‘I am not a number – I am a free man!’ Today I want to set out some of the questions we need to debate and resolve if we are going to make sure that technological innovation remains a force for good in financial services and that it protects and enhances our freedom to make good decisions as people, not just numbers.

It's an important topic, given the UK’s global leadership in both technological innovation and financial services. If we can combine these skills with fair standards and with public trust, we can maximise the opportunities for the UK finance industry to succeed in the global market. And we can revolutionise the quality, price and accessibility of financial services for consumers.

Are we free?

The use of Big Data, automated decision-making and behavioural science is already raising some fundamental questions. To quote from Nicholas Carr’s book The Glass Cage: ‘Automation confronts us with the most important question of all: What does human being mean?’

The liberal approach to markets rests on the assumption that a consumer can and will make good choices if she is given fair disclosure; and that it is therefore fair to hold the consumer responsible for those choices. Indeed, the legislation under which the FCA and the Payment Systems Regulator operate requires us to have regard to the general principle that consumers should take responsibility for their decisions.

Increasingly this assumption is being called into question. It is certainly questionable where vulnerable consumers are involved. In their excellent book Scarcity, Sendhil Mullainathan and Eldar Shafir document how poverty and other disadvantages can impose a ‘bandwidth tax’, reducing the ability of individuals to take responsibility for their decisions. But biases affect all of us and are prevalent in our decisions about most if not all financial products. That’s why the FCA has for some years led the way among global conduct regulators in incorporating behavioural science into its regulatory toolkit.

Financial education is also important but it will never be a sufficient protection against firms selling bad financial products, particularly for those coping with a range of stresses in their lives.

And many, including the Financial Services Consumer Panel, have questioned the very foundation of informed consent on which online contracts, including contracts for the use of personal data, are built.

Research conducted in the US some 10 years ago estimated that if the typical individual using online services actually read the privacy policies before clicking ‘Agree’, she would spend around 250 hours per year doing so. At 40 hours a week, that is about six working weeks every year. Imagine how many weeks you would have to spend to read all the terms and conditions you click through today, given the growth of online services in the last 10 years.

The power of Big Data corporations and their central place in providing services that are now essential in our everyday lives raise significant questions about the adequacy of global frameworks for competition and regulation. The ordinary consumer may in practice have no choice in whether to deal with these corporations on terms which are non-negotiable and are often too general to be well understood. And without access to the data which consumers have signed – or clicked – away, new businesses may find it very difficult to compete.

If you add all these factors together, they call into question the adequacy of the traditional liberal approach to the relationship between financial services firms and their customers. And regulation is central because it will help define whether AI and Big Data liberate customers, or disenfranchise them.

Some real examples

If this all sounds a bit philosophical or theoretical, it’s worth pausing to think about some real-world examples.

Some are good. Think of services like microinsurance, where AI promises to increase coverage for people on low incomes by improving risk modelling.

But some are potentially problematic.

The New York Times reported that some credit card companies in the US started cutting cardholders’ credit limits when charges appeared for marriage guidance counselling, since marriage breakdown is highly correlated with debt default.

There were a number of media reports earlier this year claiming that price comparison websites quoted significantly higher car insurance premiums for people with names suggesting they are members of ethnic minorities. If that’s true, are they being treated as people, or as numbers – as mere data points?

And it’s well known that the FCA is concerned about firms using their predictions of customers’ propensity to shop around to impose significant price rises on customers who do not, not least because these customers may be vulnerable and such price rises may unfairly exploit that vulnerability.
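
In code, the practice at issue can be reduced to something as simple as the sketch below. It is purely illustrative, not any real firm's pricing logic: the function, the 15% maximum markup and the inputs are all assumptions, and the switching propensity would come from a model like the earlier sketch.

```python
# Purely illustrative: a renewal-pricing rule of the kind described above.
# The 15% maximum markup is an invented number; 'propensity_to_switch'
# would come from a propensity model like the earlier sketch.

def renewal_premium(base_premium: float, propensity_to_switch: float) -> float:
    """Raise renewal prices most for customers predicted least likely to shop around."""
    loyalty_markup = 0.15 * (1.0 - propensity_to_switch)
    return base_premium * (1.0 + loyalty_markup)

print(renewal_premium(500.0, 0.9))  # likely switcher: 507.50
print(renewal_premium(500.0, 0.1))  # inert customer: 567.50
```

A few lines of arithmetic are enough to target the customers least inclined to defend themselves, which is why the concern is about purpose and outcomes rather than technical sophistication.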

Some may say ‘caveat emptor’, or ask why firms should not maximise their profits using all means at their disposal. But I don’t think that would be the prevailing view in our society.

We need to anticipate the fundamental questions which Big Data, artificial intelligence and behavioural science present, and make sure that we innovate ethically to shape the answers. Society in general and policy makers in particular need to think about how to mitigate the risk that an algocracy exacerbates social exclusion and worsens access to financial services in the way that it identifies the most profitable or the most risky customers.

The foundations of good innovation: purpose, people and trust

I would like to suggest that the foundations of good innovation – innovation which benefits society and which will be sustainable – must have three elements: purpose, people and trust.

Purpose

The FCA has identified a firm’s purpose as perhaps the most important driver of the firm’s culture. Leaders of firms need to be clear about what their firm’s purpose is and to communicate it clearly throughout the organisation.

Purpose is the most important lens through which to examine technological innovation. Is the purpose of the firm, is the purpose of its innovations, to create long-term value by helping consumers, or is it simply to maximise revenue by exploiting consumer bias? Value creation which helps consumers will be part of a sustainable business strategy. Maximising revenue by exploiting consumer bias is unlikely to last long; and it’s unlikely to end well.

The availability and use of very detailed personal data about consumers may even call the purpose of some existing business models into question. Insurance, for example, is about pooling risk. The social benefits of pooling may be eroded if insurance is ‘over-personalised’ and becomes unaffordable for people whose genetic inheritance identifies them as high risk. Equally, there is a danger, in time, of excluding those who have an impoverished data history and cannot provide the number of inputs the algorithm requires.

On the other hand, new ways to explore existing datasets could open up financial services to those who have been excluded from them by traditional ways of making decisions. UK FinTech firms are already using transactional data to identify creditworthy loan and mortgage customers who would be excluded from these products by traditional credit scoring risk assessments because of irregular earnings histories or long periods of renting. The Treasury’s Rent Recognition Challenge produced some great ideas for harnessing FinTech to address financial exclusion, highlighting that Big Data can be profitable at the same time as it expands markets to serve a social purpose.
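
As a small, hedged illustration of the idea, the sketch below treats a history of on-time rent payments as a creditworthiness signal. The function and data are hypothetical; a real affordability model would combine many such inputs with proper verification.

```python
# Hypothetical sketch: on-time rent payments as a creditworthiness signal,
# in the spirit of the Rent Recognition Challenge. The data is invented.
from datetime import date

def on_time_rate(rent_payments: list[tuple[date, bool]]) -> float:
    """Return the fraction of rent payments made on time."""
    if not rent_payments:
        return 0.0
    return sum(on_time for _, on_time in rent_payments) / len(rent_payments)

# A renter with no mortgage history but a consistent payment record:
history = [(date(2018, month, 1), True) for month in range(1, 13)]
print(f"On-time rent rate: {on_time_rate(history):.0%}")  # 100%
```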

People

The second element of the foundations for good technological innovation is people.

We must remember that however sophisticated automated processes may become, people must remain involved at all stages. We may one day face a world where machines can design and build more machines, but people will start that process and people must remain in charge and accountable.

People must be at the centre of the firm’s purpose. People must be accountable for innovation through sound systems of risk management, governance and control.

But putting people at the centre of things is not just a question of purpose and governance; it’s also a question of design. Technology needs to be implemented with human judgment in all aspects of its use. People, not machines, need to understand and control the outcomes that the technology they are designing is producing; people, not machines, have to make the judgment as to whether these outcomes are ethically acceptable – and ensure that they don’t just automate and intensify unacceptable human biases that created the data of the past. A strong focus on checking outcomes will be essential as some forms of machine learning, such as neural networks, may produce results through processes which cannot be fully replicated and explained.
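
One concrete form that checking outcomes can take is a routine comparison of decisions across groups, sketched below. The data is invented and the 80% threshold is an assumption, borrowed from the ‘four-fifths’ screening rule used in US employment-discrimination analysis, offered here only as one possible yardstick.

```python
# Minimal sketch of an outcome check on an opaque model's decisions.
# All data is invented; 'group' stands for any protected characteristic.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   1,   1,   0,   0],   # the model's yes/no outcomes
})

rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Assumed heuristic: flag if any group's approval rate falls below
# 80% of the highest group's rate.
if rates.min() < 0.8 * rates.max():
    print("Flag: outcomes differ materially across groups; human review needed.")
```

The model itself may be unexplainable; its outcomes need not be.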

There’s also a danger that the use of technology will degrade people’s willingness to judge and intervene, because they feel that they are less personally connected to consumers and consumer outcomes – the logic of the machine has taken over from individual responsibility. George Dyson asks the question: ‘What if the cost of machines that think is people that don’t?’

It is rather like the famous Milgram experiment, in which subjects who hesitated to inflict what they thought were electric shocks were told that ‘the experiment requires that you continue’. So firms need to anticipate the effect that more technology will have on their culture, and design systems to maintain good judgment. This includes promoting a culture in which individuals are willing to question the results of automation. As Donella Meadows has written, ‘Living successfully in a world of systems requires more of us than our ability to calculate. It requires our full humanity – our rationality, our ability to sort out truth from falsehood, our intuition, our compassion, our vision, and our morality.’

After all, key assets of the UK financial services industry are its accumulated human experience and the quality of its human judgments. We must not lose these.

Trust

The final element of the foundations we need to lead the world in financial innovation is trust.

Trust is the most important asset of any financial services business. The City of London Corporation launched its ‘Business of Trust’ campaign last year, highlighting the key elements of creating and maintaining trust. Purpose and people are two of these, but creating and maintaining trust also requires that firms are part of the communities they serve, share society’s values and communicate well.

Firms need to be part of the communities they serve so that they can understand society’s views of the fair use of personal data. These views are complex and changing. You can’t design an algorithm which will tell you what is fair. It may be acceptable to use detailed data about people’s eating and drinking habits to price their health insurance, but is it fair to use that data to price their mortgage? These are incredibly difficult judgments which depend on context, and which require firms to be part of society so they know what society’s values are.

Firms that are well connected to the values of society find it easier to articulate their own values, which in turn engender trust.

And finally, trust requires good communication so that consumers understand and accept a firm’s approach to using their data. By good communication, I don’t mean pages and pages of obscure disclosures, disclaimers and consents. I mean short and readable statements which make it clear what firms will and won’t do with their customers’ data. These need to be developed with consumers, not imposed on them. A number of firms do this already but many do not. Should all businesses have a data charter? Should these be developed through voluntary codes of practice? Will the industry take the lead or should they be a regulatory requirement?

Conclusion

It’s a great time for people to join this important debate because there is so much more thinking to be done. Technological innovation in financial services brings together two of the UK’s greatest assets and gives us the opportunity to lead the world in FinTech. I would like the UK regulatory community to add another asset – the leadership in regulation that it has already shown through programmes like the FCA’s Project Innovate and the Sandbox, while safeguarding high standards of consumer protection. The UK already has a trusted legal and regulatory system and contributes to setting global standards of corporate governance and business ethics. We need to contribute to new standards for data ethics too. The FCA and the PSR are fully supportive of the government’s proposals to establish a Centre for Data Ethics and Innovation to ensure that the UK sets the highest standards for ethical conduct in harnessing the power of Big Data.

The financial world is currently full of exciting new ideas. But sustainable financial innovation will be built on purpose, people and trust, which have been the key ingredients of the best financial businesses for centuries. And in that sense – if in no other – Mark Twain was right to say: ‘There is no such thing as a new idea’.

I am very grateful to Andrew Bailey, Chris Woolard, Tom Matthews and Ian Runacres for their comments on this speech. Nevertheless, views expressed in this speech are personal and do not necessarily reflect the views of any of the above people or of the FCA or PSR.

This is the speech as drafted and may differ from the delivered version.
