
Banking security under AI control

When thinking of AI in the context of banking, many people immediately consider its uses to be either customer related or tied to process optimization. Even when considering the security applications of AI, the primary use banks tend to think of is fraud prevention in lending operations. There is certainly space for that application: fraud prevention can, after all, be an extremely complex subject involving hundreds of documents and hours upon hours spent by professionals on verifying whether records and documents display the correct information, and even the most highly skilled professionals are prone to human error that can cause issues for lenders. With the inclusion of AI in this process, the probability of human mistakes drops to near zero, and with deep learning algorithms applied to the task of evaluating loan security and borrower reliability, the time spent on identifying threats is reduced.

All of this is generally known and accepted as a picture of what banking might look like with AI applied to it. However, when thinking of security, the applications of AI technologies go much further than simply evaluating the loan eligibility of certain clients. Security is, after all, a broad term that covers not only loan security but also the security of funds within a bank.
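To make the screening idea concrete, the sketch below shows roughly what such a model could look like. It is a minimal illustration only: the feature names and the randomly generated data are invented placeholders, not a real lending dataset or any bank's actual method.

```python
# A minimal sketch of the kind of model a lender might train to flag
# suspicious loan applications. The features and random data below are
# illustrative placeholders, not a real lending dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(seed=42)

# Hypothetical features: income-to-loan ratio, document-age mismatch score,
# count of inconsistencies across submitted records, credit history length.
X = rng.normal(size=(1_000, 4))
# Hypothetical labels: 1 = fraudulent application, 0 = legitimate.
y = (X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=1_000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Applications the model flags would go to a human reviewer, so the model
# reduces analyst workload rather than replacing the analyst outright.
print(classification_report(y_test, model.predict(X_test)))
```

In practice a model like this would sit in front of the human review queue, routing only high-risk applications to specialists, which is where the time savings the paragraph above describes would come from.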

Securing client and bank funds through AI

The application of AI to securing user accounts, funds, and bank operations is not a concept that was raised recently, but it has been attracting more interest lately, as more cybersecurity threats directed at banks appear all over the world. These cyber threats are not aimed only at retail accounts or the bank's own funds, but also at the funds banks manage and the funds companies keep in their corporate accounts. This makes the threats much more dangerous to the operations of financial institutions and to their reliability. Still, most of the time banks are able to identify the criminals and have them arrested. Once the perpetrators are in custody, banks use their knowledge to build safer systems for their users, as in the case of the JP Morgan Chase hack, which resulted in the extradition of, and subsequent cooperation by, the hacker.

While the identification, capture, and extradition of criminals who damage bank operations are beneficial for increasing bank safety, the process happens post factum. This is a problem, as hundreds of millions of dollars might be lost while the bank works to solve the case. Such post factum learning prevents similar hacks in the future, but by then the vulnerability has already been exploited, money has been lost, and reputation has been damaged. That is why it is also important to learn how to prevent such attacks before they ever happen, rather than merely stopping them from happening again. The process of questioning and learning from perpetrators is drawn out and not always useful. There are other ways to identify vulnerabilities and close them before they are noticed by hackers who would use them to make money for themselves.

The solution, devised by cutting-edge developers and considered both an asset and a threat by the US Department of Homeland Security, is to apply AI technology to identify exploitable vulnerabilities and close the loopholes before they can be exploited. A similar application to bank security systems might prevent cybersecurity issues altogether. The approach involves using deep learning algorithms and highly capable AI to learn how the system functions. While the AI might take a while to identify issues, its nature allows for near constant exploration of the security system. This means that once a single threat has been identified, the AI does not stop working but keeps looking for more, while a fix for the vulnerability is designed and applied by the bank's security system developers. Even after the fix is applied, the AI will keep going back to check it, to see whether the update itself has introduced new vulnerabilities.
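The loop below is a minimal sketch of that always-on behaviour, assuming a much simpler setup than the article envisions: the component names and the `scan_component` stub are hypothetical stand-ins for whatever probing technique (fuzzing, static analysis, a learned anomaly detector) would actually be deployed.

```python
# A minimal sketch of the "never stop scanning" loop described above.
# COMPONENTS, scan_component, and report_to_security_team are hypothetical
# names for illustration; a real system would plug in actual probes here.
import itertools
import random
import time

COMPONENTS = ["login-service", "payments-api", "statement-export"]

def scan_component(name: str) -> list[str]:
    """Stand-in for a learned vulnerability probe; returns finding IDs."""
    return [f"{name}-finding-{random.randint(1, 999)}"] if random.random() < 0.1 else []

def report_to_security_team(finding: str) -> None:
    print(f"flagged: {finding}")  # in practice: open a ticket for developers

known_findings: set[str] = set()

# The scanner cycles over every component. Flagging a finding does not pause
# the loop: patched components come around again on the next pass and are
# re-checked for regressions the fix itself may have introduced.
# (Bounded to 30 passes here so the sketch terminates; a real deployment
# would loop indefinitely.)
for name in itertools.islice(itertools.cycle(COMPONENTS), 30):
    for finding in scan_component(name):
        if finding not in known_findings:
            known_findings.add(finding)
            report_to_security_team(finding)
    time.sleep(1)  # throttle; a real scanner would be event- or budget-driven
```

The key design point the sketch captures is that discovery and remediation run concurrently: flagging one vulnerability never halts the search for the next, and the cycle naturally re-tests every component after a patch lands.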

Tests like these would allow the system to be upgraded in real time, over periods of years, to figure out what tactics, strategies, and tools cyber threats might use to compromise it. This would prepare the system to resist such threats in the future, enabling it to evolve alongside both hacking technologies and the AI built to protect against them. An ever-evolving security system is bound to be useful in preserving the security of banking systems.

A threat as much as a boon

The problem with such AI systems is that they could themselves become a threat. It all depends on how the system is applied: if the AI is used to seek out vulnerabilities that can then be exploited, it becomes a dangerous tool in the hands of cybercriminals. The learning algorithms, while slow at first, would speed up in their search for vulnerabilities, eventually leading to a situation where an AI can identify certain kinds of system issues within seconds of being enabled. Such a tool would allow several vulnerabilities to be exploited at the same time, a sort of multi-pronged attack, which would be much harder to resist than a traditional single-vulnerability attack.

The complexity of the technology

While there are definite dangers and advantages associated with the use of this technology in banking cybersecurity, one thing remains true: all of this is still theoretical. AI sophisticated enough for applications like these is expensive. While banks might have the financial resources to fund the development of such tools, there is little incentive so far. The security systems banks use are hard enough to compromise as they are, and the cybersecurity arms race is still being won by the banks. While some banks are looking to hire AI specialists to boost their investment performance, there are no dedicated AI security projects under development. All it would take is a single serious security breach for the arms race to be kickstarted in full force.

For now, AI security development remains a largely theoretical subject, even if brokers, investors, and banks are all broadly interested in the topic. This type of technology is far from being developed, as the level of sophistication and machine learning capability required would be far more complex than what is currently feasible. That is why there is no real danger of cyber threats using this kind of technology to produce AI-based security-breaching “viruses” capable of sustaining attacks over a long time, independently of the actions of the hacker behind the project. At least that will remain true in the near term, and banks know it, which is why they are not investing in AI security projects. No threat, no need to defend against it.

Konstantin Rabin, Head of Marketing, Kontomatik
