The Bank of England is using its new fintech accelerator to work with startups on developing proofs of concept (POCs) in data analytics, information security and distributed ledgers.
Launched in June, the accelerator is designed to boost the BofE's practical experience with fintech, with firms invited to apply to work with it on POCs that address challenges unique to the central bank.
Speaking at Web Summit in Lisbon this week, BofE COO Charlotte Hogg invited new applications and gave an update on the project, revealing that the bank is working with BMLL Technologies on a POC that applies a machine learning platform to historic limit order book data to spot anomalies and open up new analytical capabilities.
A second POC, with Enforcd, uses an analytics platform designed specifically to share public information on regulatory enforcement action. Meanwhile, two firms - Anomali and ThreatConnect - are working on technologies to collect, correlate, categorise and integrate cyber security intelligence data.
"Data is a key thread that runs through nearly all our work, and so technologies relating to data analytics and visualisation have the potential to develop our capabilities significantly. The evolution has already been remarkable - from efforts to begin collecting macroeconomic data consistently from around the 1920s to big data in the 1990s, with machine learning and application of artificial intelligence (AI) now moving into mainstream economic and financial analysis," says Hogg.
The banker stressed that the accelerator is part of a two-pronged approach to the fintech question: experience and research. On the research front, the bank has also been busy over the last six months meeting or researching more than 130 startups and taking part in around 25 conferences and several roundtables as it bids to understand how it can use new technologies.
Hogg says that while the bank is looking at how it can use fintech, it is also using the accelerator and the practical experience it provides to investigate the potential risks and unknown ramifications of using new technologies.
"As we have seen before, not least during the financial crisis, even the tools considered the most analytically advanced at the time may be flawed - and how would we know, if the AI techniques make the approach all but impossible to unpick? How should one govern or regulate financial services provisioned using AI? In cyber, for example, how would we judge the balance between protecting core infrastructure from insider risk whilst ensuring acceptable levels of privacy?"