A US financial regulator opened an investigation last month into claims that Apple's credit card offered different credit limits to men and women. Steve Wozniak, the co-founder of Apple, tweeted that he had received a credit limit 10x higher than his wife's, despite the couple sharing all their accounts. There is no doubt that the card was tested thoroughly before it went to market, but did Goldman Sachs have the resources to comprehensively test its algorithms for unethical biases across a statistically relevant volume of real-world data? This example shows the importance of being able to test technology at scale with high-quality data sets; companies that fail to do so run the risk of alienating their customers and drawing undesirable regulatory scrutiny. The challenge is that access to sufficient data, compute and expertise is not readily available, and the resources that do exist are fragmented. As the industry tries to build an entirely new financial infrastructure, one founded on the goal of making financial wellbeing more accessible to all, we need to change our approach.
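To illustrate what such a bias test could look like in practice, here is a minimal sketch; the dataset, field names and tolerance threshold are all hypothetical assumptions for illustration, not a description of Goldman Sachs' actual process:

```python
from statistics import mean

def limit_disparity(decisions, group_field="gender", value_field="credit_limit"):
    """Compare mean credit limits across demographic groups.

    `decisions` is a list of dicts, e.g. the output of a decisioning
    model run over a statistically relevant test population.
    Returns the ratio of the highest group mean to the lowest,
    plus the per-group means.
    """
    groups = {}
    for row in decisions:
        groups.setdefault(row[group_field], []).append(row[value_field])
    means = {g: mean(v) for g, v in groups.items()}
    return max(means.values()) / min(means.values()), means

# Hypothetical sandbox output: similar profiles, very different limits
sample = [
    {"gender": "male", "credit_limit": 20000},
    {"gender": "female", "credit_limit": 2000},
]
ratio, by_group = limit_disparity(sample)
if ratio > 1.5:  # illustrative tolerance, not a regulatory figure
    print(f"Possible bias: {ratio:.1f}x gap between groups {by_group}")
```

Even a simple check like this only becomes meaningful when run over large, representative volumes of real-world data, which is precisely the resource in short supply.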
A safe place
Currently, developers, financial institutions and regulators use a technique called sandboxing, with differing incentives and desired outcomes. Developers, for example, are looking to test ideas, while the likes of banks are looking to understand how a new process or programme will impact their current operations. Regulators, meanwhile, are seeking to understand how existing rules and guidelines may be applied to new or changing business models.
A sandbox allows these organisations to run programmes in a contained, experimental environment before they go live. To run a successful sandbox, however, they must have access to statistically relevant customer data, large volumes and varieties of financial data such as transactions and loan applications, the compute power to run many combinations and permutations, and the availability of subject matter experts with broad and deep domain knowledge. This approach allows them to see how the product responds to a customer's data, remedy any problems that arise and test for any anomalies in the product's performance.
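As a toy sketch of that combinations-and-permutations step, a sandbox harness might sweep a product's decision function across every permutation of customer attributes and flag anomalous outputs. The decision function, attribute space and anomaly bounds below are all invented for illustration, not a reference to any real platform:

```python
from itertools import product

def score(profile):
    """Stand-in for the decision function of the product under test."""
    base = profile["income"] * 0.4
    return base * (1.2 if profile["has_mortgage"] else 1.0)

def sandbox_sweep(decision_fn, attribute_space):
    """Run the decision function over every combination of customer
    attributes and collect results outside an expected range."""
    keys = list(attribute_space)
    anomalies = []
    for values in product(*(attribute_space[k] for k in keys)):
        profile = dict(zip(keys, values))
        result = decision_fn(profile)
        # Illustrative sanity bounds: no zero/negative limits,
        # nothing wildly out of proportion to income.
        if result <= 0 or result > 10 * profile["income"]:
            anomalies.append((profile, result))
    return anomalies

space = {
    "income": [15000, 40000, 90000],
    "has_mortgage": [True, False],
}
print(f"Anomalous decisions: {len(sandbox_sweep(score, space))}")
```

The value of such a sweep rises with the richness of the attribute space, which is why the data, compute and expertise listed above matter so much.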
Getting reliable results across sandboxes can be difficult, however, and can lead to missed opportunities as well as negative outcomes, as we saw with the Apple Card's apparent gender bias. There is also limited opportunity for collaboration among different companies because of intellectual property concerns, which contributes to the risk of a homogenisation of ideas, especially if the sandbox is a precursor to conformance testing.
A broader data-sharing initiative is required
Going forward, to ensure the industry is providing customers with a service that is ethical, responsive and easy to understand, the sector must take a broader approach to the development of its products.
The creation of organisations like The Global Open Finance Centre of Excellence (GOFCOE) is a huge step in the right direction. The organisation will act as a global economic observatory, using super-computers to collate and manipulate data sets to help regulators and decision-makers in their policymaking, and to provide companies with data they can use to test their software and programmes. This approach means financial institutions and regulators alike will be able to test for all eventualities using a much larger set of data.
We cannot rely on GOFCOE alone, however; we must use the results that come from this centre as an example of the benefits of using large data sets. Only when we are able to develop sandbox environments that test systems against rich data sets in large quantities can we rest easy knowing the solution is going to improve financial wellbeing for all and make Open Finance a reality.