
The AI-Powered Cybersecurity Arms Race and its Perils

Eyal Balicer

Senior Vice President, Global Cyber Partnerships, Citi

The advancement of artificial intelligence (AI) remains one of the most important technological achievements in recent history. The prominence and prevalence of machine learning and deep learning algorithms of all types, capable of unearthing and inferring valuable conclusions about the world around us without being explicitly programmed to do so, have sparked both the imagination and primordial fears of the general public.

The cybersecurity industry is no exception. It seems that wherever you go, you can’t find a cybersecurity vendor that doesn’t rely, to some extent, on Natural Language Processing (NLP), computer vision, neural networks, or other technology strains of what could be broadly categorised or branded as ‘AI’.

The benefits that come with AI-powered technologies, especially in the cybersecurity realm, are clearly visible and undoubtedly meaningful: from automating manual tasks, to differentiating between benign and malicious communication streams, to discovering and correlating highly elusive patterns and anomalies that power a plethora of cybersecurity detection and prevention mechanisms.

The AI-powered cybersecurity arms race

These AI-powered technologies are able to automate, streamline, enhance, and augment security operations driven by human beings, and at times even replace them altogether. However, unlike human beings, technology doesn’t possess an inherent disposition and is neutral in essence. As such, AI algorithms could be exploited or weaponised by malicious actors to pursue their own objectives, while offsetting the defenders’ edge. Some pundits even claim that the world is already in the midst of a full-blown AI-powered cybersecurity arms race.

As one ZDNet reporter envisions: “it's possible that by using machine learning, cyber criminals could develop self-learning automated malware, ransomware, social engineering or phishing attacks. For example, machine learning could be employed to send out phishing emails automatically and learn what sort of language works in the campaigns, what generates clicks and how attacks against different targets should be crafted”. With leapfrog advancements in language models such as the introduction of OpenAI’s GPT-3 NLP neural network, these ominous predictions seem more plausible than ever.

One could imagine that, on the heels of the progress made through the deployment of Generative Adversarial Networks (GANs), the ability to create synthetic data that reliably mimics human-generated content could usher in a new era of deepfake-powered spear-phishing, Business Email Compromise (BEC), and fraud campaigns.

The challenges on the way to AI-powered cybersecurity

As organisations strive to leverage AI models to cope with these challenges and safeguard their systems and business operations, there are certain obstacles and pitfalls lying ahead that might hinder the progress of these AI-powered cybersecurity efforts. The following challenges should be taken into account, regardless of whether AI-powered threat actors are already posing an imminent threat to enterprises around the world:

  • AI Bias – A known adage in the data science community is that your AI model is only as good as the data it is fed. “If relevant datasets are not accumulated, prepared, and sampled in a calculated fashion, the subsequent artificially-generated AI models will be inherently biased and generate prejudiced results. Bias-in, bias-out,” I mentioned in my Forbes article. Thus, in many cases, algorithms are actually echoing and amplifying existing misrepresentations that are embedded in the training datasets, instead of eliminating them.

Arguably, this type of systemic bias might be less relevant in the cybersecurity realm, where decision-making processes are based predominantly on analysis of machine-to-machine communications and code rather than traditional human language. Notwithstanding, as social engineering becomes more prevalent and more effective, AI-powered cybersecurity models are no longer immune to the inherent biases instilled in models, datasets, and data scientists alike. Therefore, biases that are reflected in sentiment and linguistic characteristics could disrupt and derail AI models and lead to poor generalisation and sub-par results, as the toy sketch below illustrates.
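As a purely illustrative sketch of ‘bias-in, bias-out’, consider a toy phishing classifier trained on a hypothetical, deliberately skewed email sample (the data, labels, and wording below are invented for illustration only):

```python
# Toy sketch of "bias-in, bias-out": a phishing classifier trained on a
# deliberately skewed (hypothetical) sample inherits the skew of its data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical, skewed training set: nearly every email mentioning an
# "invoice" happens to be labelled as phishing (1); benign emails (0) do not.
train_emails = [
    ("please find the attached invoice", 1),
    ("invoice overdue, click here now", 1),
    ("your invoice is ready for review", 1),
    ("team lunch moved to friday", 0),
    ("quarterly report attached for review", 0),
]
texts, labels = zip(*train_emails)

vectoriser = CountVectorizer()
X_train = vectoriser.fit_transform(texts)
model = LogisticRegression().fit(X_train, labels)

# A perfectly benign message is likely to be flagged simply because it echoes
# the correlation embedded in the skewed training data.
benign = ["the invoice from our supplier is attached for review"]
print(model.predict(vectoriser.transform(benign)))  # likely [1]
```

The benign message is flagged not because it is malicious, but because it echoes a spurious correlation that the skewed training sample baked into the model.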

  • Adversarial AI – Adversarial inputs are maliciously tailored to manipulate an AI model’s output. These altered data points, which may seem indistinguishable from standard input, could be ingested by an AI model in two distinct scenarios: ‘in the wild’ during the inference stage, or early on during the preliminary training phase.

The first scenario illustrates that AI models don’t operate in a vacuum and usually interact, continuously, with their environment (and even with external, legitimate users, such as customers and contractors), exposing the models to a wide range of possible inputs. In this scenario, the implications of a potential adversarial activity might endure, especially when considering the nature of reinforcement learning algorithms, which are based on feedback loops that could be manipulated. Moreover, some adversarial attacks might take advantage of the model’s reliance on external interactions, aiming to expose, rather than alter, the model’s unique architecture or proprietary training dataset (and the ‘ground truth’ the latter represents).
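To make this first scenario concrete, here is a minimal, purely illustrative sketch of an FGSM-style (Fast Gradient Sign Method) evasion, a well-known technique from the adversarial machine learning literature, applied to a hypothetical linear detector; the weights, bias, and sample values are invented:

```python
# Minimal sketch of inference-time evasion: an FGSM-style perturbation nudges
# an input just enough to flip a simple linear detector's verdict.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained detector: a score above 0.5 means "malicious".
w = np.array([2.0, -1.0, 0.5])
b = -0.2

x = np.array([1.2, 0.3, 0.8])      # a sample the detector correctly flags
print(sigmoid(w @ x + b))          # ~0.91 -> flagged as malicious

# FGSM: step each feature against the sign of the score's gradient w.r.t.
# the input (which, for a linear model, is simply w), bounded by epsilon.
epsilon = 0.7
x_adv = x - epsilon * np.sign(w)
print(sigmoid(w @ x_adv + b))      # ~0.46 -> slips under the threshold
```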

The second scenario revolves around possible malicious interference with the datasets that are fed into machine learning and deep learning models. In this case, malicious actors aim to ‘taint’ or ‘poison’ the baseline that AI-powered controls rely on to execute anomaly detection and other security operations, creating an ‘artificial bias’ that could degrade the model’s accuracy and reliability.
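A deliberately simplified sketch of this second scenario, using entirely hypothetical values: a toy anomaly detector learns its ‘normal’ baseline from observed request sizes, and attacker-injected samples shift that baseline until genuinely malicious traffic no longer looks anomalous.

```python
# Toy sketch of training-set poisoning against a simple baseline detector.
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(loc=500, scale=50, size=1000)    # normal request sizes
poison = rng.normal(loc=1500, scale=50, size=300)   # attacker-injected samples

def fit_baseline(samples):
    return samples.mean(), samples.std()

def is_anomalous(value, mean, std, k=3.0):
    return abs(value - mean) > k * std               # simple 3-sigma rule

malicious_request = 1400.0

mean, std = fit_baseline(clean)
print(is_anomalous(malicious_request, mean, std))    # True: correctly flagged

mean, std = fit_baseline(np.concatenate([clean, poison]))
print(is_anomalous(malicious_request, mean, std))    # False: baseline poisoned
```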

These two scenarios demonstrate that while a specific feature within a model might be the most suitable variable for the task at hand, it won’t necessarily constitute the most resilient and robust feature when faced with potential manipulation or adversarial attack. Whether the malicious actors are aware of the model’s architecture, hyperparameters, or the training set’s characteristics, or completely unaware of the model’s inner workings, these types of attacks could potentially pose a threat to important AI-based security controls that hinge on the model’s accuracy.

The potential threats posed by both AI bias and Adversarial AI attacks are aggravated by the ‘Butterfly Effect’, as small changes in one stage of the process could, intentionally or inadvertently, result in alarming implications further along the way. The inherent complexity, obscure nature, and automation of some AI models could contribute to the persistence and resilience of these negative effects.

The industry’s response

The AI community is not stagnant and is actively striving to address the aforementioned challenges. AI bias is a trending topic, with tech giants actively looking to introduce ‘algorithmic accountability’ by, inter alia, carefully curating or enriching biased datasets, using embedded bias filters, or developing dedicated tools to provide AI ‘explainability’ and detect potential biases early on in the AI pipeline.
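One simple check of the kind such tooling can automate is comparing a model’s error rates across slices of the evaluation data; the sketch below uses hypothetical labels, predictions, and a hypothetical slice attribute purely for illustration:

```python
# Minimal sketch of a slice-based bias check: compare the false-positive rate
# of a detector across two groups of evaluation samples.
import numpy as np

def false_positive_rate(y_true, y_pred):
    negatives = (y_true == 0)
    return (y_pred[negatives] == 1).mean()

y_true  = np.array([0, 0, 0, 0, 1, 0, 0, 0, 0, 1])   # ground truth (1 = malicious)
y_pred  = np.array([0, 1, 0, 0, 1, 1, 1, 1, 0, 1])   # model verdicts
slice_a = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)  # e.g. emails in one language

fpr_a = false_positive_rate(y_true[slice_a], y_pred[slice_a])
fpr_b = false_positive_rate(y_true[~slice_a], y_pred[~slice_a])
print(fpr_a, fpr_b)   # 0.25 vs 0.75 -- a gap this large is a red flag worth investigating
```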

The Adversarial AI debate, on the other hand, is still nascent and predominantly academic. It revolves around methodologies and metrics for measuring AI robustness, and concepts such as defensive distillation, activation clustering, or techniques that expose models to ‘adversarial’ neural networks in order to make them more resistant to adversarial attacks.
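As a rough illustration of what such a robustness metric can look like, the sketch below reuses the hypothetical linear detector from the earlier evasion example and computes the smallest per-feature perturbation an FGSM-style attacker would need in order to flip each test sample (a closed form that holds only for linear models):

```python
# Illustrative robustness metric: for a linear detector, the smallest
# L-infinity perturbation that flips a sample is |w.x + b| / ||w||_1;
# averaging it over a test set indicates how easily the model can be
# nudged across its decision boundary. Weights and samples are hypothetical.
import numpy as np

w = np.array([2.0, -1.0, 0.5])    # hypothetical trained linear detector
b = -0.2

def min_flip_epsilon(x):
    # Smallest per-feature budget needed to cross the boundary w.x + b = 0.
    return abs(w @ x + b) / np.abs(w).sum()

test_samples = np.array([[1.2, 0.3, 0.8],
                         [0.4, 1.0, 0.2],
                         [2.0, 0.1, 1.5]])
print(np.mean([min_flip_epsilon(x) for x in test_samples]))  # ~0.67
```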

However, early signs of industry galvanisation are already apparent. For example, some companies have been assembling ‘AI Red Teams’ to “better understand the vulnerabilities and blind spots” of their AI systems, as reported in WIRED, while others have released open-source software libraries to support developers and researchers in protecting neural networks from adversarial attacks, as reported in TheRepublic.

While the industry seems to be moving in the right direction and compensating security controls are abundant, cybersecurity vendors that heavily rely on machine learning and deep learning-powered technologies should take note, as the demand to address these challenges might gradually become an industry-wide prerequisite.
