In the past few years, we have seen artificial intelligence (AI) plugged into everything humanly (or robotically) possible, and the banking industry is no exception. As AI seeps into every aspect of financial services, data security is becoming all the more important. Without data, there is no AI, but without cybersecurity, everyone’s private information risks being thrown to the wolves, with bad actors running wild, Purge-style, through our bank information, personal data, and more.
So, is the industry prepared to handle the risks and vulnerabilities that AI opens up? At Sibos 2025 in Frankfurt, Germany, Finextra spoke to Dr Ruth Wandhöfer, professor at Bayes Business School and head of European markets at cybersecurity firm Blackwired,
on the industry’s approach to AI.
Dr Wandhöfer – who has extensive experience serving on boards of companies such as the London Stock Exchange Group, Aquis Exchange Group, Permanent TSB Bank and Digital Identity Net, as well as advising the European Commission, the European Central Bank,
and the Monetary Authority of Singapore – began by outlining the dangers facing the industry.
“We need a paradigm shift to predict, prevent, and defeat,” she said.
According to Dr Wandhöfer, while the industry has taken AI “in its stride”, with a burst of new solutions, initiatives, and strategies for AI-powered growth, the effectiveness of many of these solutions is “doubtable”.
So, is AI worth it? Dr Wandhöfer cited an MIT study on enterprise AI solutions, which found that 95% of organisations investing in generative AI are seeing no return, despite $30-40 billion in investment.
Evidently, AI is not a silver bullet. Teams must be hired for compliance and programming; the technology needs to be tested and scaled; and the human element must be retained. In short, AI cannot be slapped onto a problem at random – it has to be designed with a specific objective in mind, and it must be fed accurate, unbiased data to work effectively.
Dr Wandhöfer emphasised that the industry may be jumping the gun with AI, as the technology is not mature enough for how extensively it is currently being used: “AI is a fourth grader,” she explained. “It cannot be trusted to be an agent.”
“I think a lot of people were thinking, ‘Oh, this is great. We’ll plug in AI,’” she continued. “The way they do it, they don’t understand that there’s a huge cyber vulnerability. There are open-source elements, and they can all potentially be poisoned from the top, and people who implemented AI overnight realise that they have become a target for cyber-attacks. Now the industry is moving towards large language models. Either you buy them from the different providers, often Big Techs out there, or you design your own. Either way, you need to make sure these LLMs are not poisoned.”
Now that so many companies integrate third parties into their platforms, Dr Wandhöfer added, everything is becoming increasingly interconnected and a vast amount of data is being shared. This opens vulnerabilities: if one layer is hacked, the entire system is compromised. The CrowdStrike catastrophe, while not a hacking incident, is a clear example of the risks posed by this increasing third-party interconnectivity.
Dr Wandhöfer pointed to a societal shift whereby people are less cautious about their data than they were 20-30 years ago: “After the dot-com bubble, people have moved from an information society to a disinformation planet,” she said.
People are buying into the Faustian exchange of giving their data away for ease and convenience, though tech mega-corporations do not give the average consumer much of a choice. It takes effort to avoid data sharing in today’s online world, and isolating oneself from the digital space we all partake in is a difficult sacrifice to make.
“What’s happening right now,” Dr Wandhöfer continued, “is not thoughtful. AI needs to be approached very differently because it’s not a shiny new toy. You can’t just plug it in like an API. It took ages for APIs to develop to what they are today, and they’re still vulnerable. We need to monitor all these flows all the time and have a deep understanding of the Open Systems Interconnection (OSI) model – how data moves.”
Dr Wandhöfer highlighted the need for greater vigilance against cyber-attacks, as cyber-criminals are becoming increasingly sophisticated and dangerous. She explained that cyber-attackers take time to build phishing and ransomware campaigns and develop infrastructure, after which they lurk for as long as it takes to find a chink in the armour before striking.
To beat these cyber-fiends, Blackwired has launched a 3D visualisation platform through which clients can see where they are connecting to third parties and when an attack is threatening their organisation. Dr Wandhöfer explained that the company has a data lake holding 11 years’ worth of cyber artifacts and specialises in demystifying the “underbelly” of cyber-hacking operations:
“We can now visualise the threat actors and what they’re doing. We can visualise when they are sending out probes to check the vulnerabilities of entities – which tends to be one of the first steps of reconnaissance,” she explained. “We have lots of sensors out in the market. We monitor the data, and with all this together with the massive data lake and our sophisticated AI model, you can actually predict events and stop them before they even happen.”
Through this platform, Blackwired can predict the day on which ransomware attacks will occur, within a deviation window of six days, Dr Wandhöfer stated.
Dr Wandhöfer concluded that, with the support of governments, regulators, and guardrails, AI supervision and enforcement need to be stricter: “You can have wonderful regulation, but if you don’t supervise, or you don’t have enough moral hazard to push people to comply, you’re just not going to get anywhere.”
One of the key tenets of banking is trust, and as AI becomes more pervasive in banking services, there will undoubtedly be further questions from consumers and experts like Dr Wandhöfer on how it should be implemented. Whether the world is ready to be totally
AI-forward remains to be seen.