AI Innovation vs. Security: Are We Moving Too Fast to Stay Safe?

Artificial intelligence today is no longer simply an arms race. It increasingly resembles a chaotic street race in which companies accelerate without ever checking the rear-view mirror.

Microsoft is pouring $100 billion into OpenAI, Google is building up Gemini's capacity, and Amazon is betting on Anthropic. The AI race is not just a contest of innovation; to many, it looks like a capitalist battle in which the player with the most money and the fewest brakes wins.

In such a race for leadership, companies should not forget one important thing: cybersecurity. The drive to be first and best can distort priorities and lead to dangerous lapses in data protection.

Why do companies sometimes neglect security, and what can regulators and AI developers do about it? 

Trust issues

The recent appearance of DeepSeek has shaken the market. This is not just another tech startup from China; it is a bold rival that caught Silicon Valley completely off guard. Its model's capabilities are comparable to the latest OpenAI releases, yet its developers claim to have spent far less money than their competitors.

However, fast development comes at a cost. According to Robust Intelligence, the University of Pennsylvania, and Cisco, researchers bypassed the DeepSeek R1 model's safeguards with a 100% success rate in lab tests. Vulnerable to jailbreak attacks and easily manipulated, the model opens dangerous doors to potential cyber threats.

DeepSeek is just one case, and there are still few companies and AI developers whose models can genuinely be called trustworthy. Take, for example, Meta's Llama 2, which has been exploited to generate malicious code, or OpenAI's 2023 scandal, when the company was accused of leaking users' data.

When AI is embedded in critical internal systems or even governmental organizations, a single flaw or attack can become a serious problem. Imagine a deepfake AI bot posing as a senior executive and convincing a company's accountant to transfer $25 million, a scenario that has already happened in Hong Kong.

The actions taken

Seeing the potential harm that AI can bring, governments are taking measures to mitigate it as far as possible. The UK, for instance, has introduced a voluntary Code of Practice for AI cybersecurity, designed to set minimum standards of protection for developers.

At the same time, the EU has launched its own AI development project, OpenEuroLLM, investing more than 37.4 million euros in an open-source model that reflects European values of transparency, security and democratic oversight. And that is not the end: the EU plans to mobilize a total of 200 billion euros for AI investment in the region, with European Commission President Ursula von der Leyen stressing that AI leadership should not be ceded to the USA or China.

Nevertheless, these approaches share a common problem: they do not yet seem able to effectively protect the global market from potentially unsafe AI models. Voluntary mechanisms such as the British Code lack stringent enforcement tools, and while OpenEuroLLM helps strengthen Europe's "digital sovereignty", its limited scope does not oblige developers around the world to adhere to uniform security rules.

Who is responsible?

For better and safer AI use, a balanced approach is essential. Overly strict regulations may stifle initiative and hinder development, while absent or lax rules can lead to negative consequences for end users. Regulators therefore need careful judgment to strike the right balance.

The issue becomes even more acute when we look at real-life examples. Although OpenAI's latest models are more "sophisticated," the company is still often criticized for insufficient security measures. Cases of its AI being used for cybercrime, generating deepfakes or spreading disinformation have shown once again that no developer can give a one-hundred-per-cent guarantee. AI is like a hammer: just a tool that can be used to build or to destroy, and therefore both helpful and harmful. But when AI is misused, who should take the blame: the creator, or the person wielding it?

The root of the problem is that AI's capabilities outpace existing regulations. Technology companies strive for leadership and immediate commercial returns, sometimes without paying sufficient attention to long-term risks. At the same time, users are rarely aware of what data the models process, or to what extent. As a result, AI development proceeds at maximum speed, while a responsible approach to cybersecurity and to the damage it may cause remains "behind the scenes."

Final remarks

What can fix the situation? Cybersecurity experts increasingly point to the need for stricter accountability frameworks. In my opinion, developers should take responsibility for the possible vulnerabilities of their models, and before going into mass use, AI systems should be independently audited and tested against cyber attacks so that weaknesses are identified in advance.

At the same time, the lack of harmonized regulation is a hurdle to safe AI adoption. Without uniform standards, the risks may only increase.

However, introducing aligned rules on AI at a global scale still seems out of reach. In February 2025, at the AI Action Summit in Paris, the United States and Britain refused to sign an international declaration aimed at promoting inclusive and sustainable AI development.

And here lies an unsettling truth: while countries compete for power and companies chase profits, AI keeps growing without limits. The real question now is not whether we can make the strongest AI ever, but how to guide it responsibly toward a future that benefits us all.

External

This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.
