
The EU's Artificial Intelligence Act Could Become A Brake On Innovation

Europe is lagging behind not only the US and Japan but also China in technological innovation. According to a 2019 article from the World Economic Forum (WEF), China overtook the EU with R&D expenditure equivalent to 2.1% of GDP. None of the world's 15 largest digital firms is European.

It is beyond question that Europe produces bright minds with amazing ideas and an entrepreneurial mindset. The problem is simple: European companies do not make it beyond the start-up phase, and if they do, their business is widely believed to be better off outside Europe. Skype, famously acquired by Microsoft, is one example. As a result, Europe's share of the market capitalisation of the world's Top 100 companies is shrinking year after year.

Once a corporate heavyweight, Europe is now an also-ran. Can it recover its footing? (The Economist)

Source: https://www.economist.com/briefing/2021/06/05/once-a-corporate-heavyweight-europe-is-now-an-also-ran-can-it-recover-its-footing


The EU proposal to regulate AI will be a brake on innovation and a challenge not to be underestimated for promising start-ups that use artificial intelligence. According to a report by the Washington-based think tank Center for Data Innovation, a new law regulating artificial intelligence in Europe could cost the EU economy €3.1 billion over the next five years. This week, the European Commission published its proposal for an EU Regulation on Artificial Intelligence, putting forward new rules on the use of AI in the Union. Realising AI projects will become significantly more difficult under the new law, and ambitious entrepreneurs will almost certainly consider developing their businesses outside the EU. The US, China and Japan would welcome them with open arms.

The regulatory framework proposed in the White Paper is based on the idea that the development and use of artificial intelligence entail high risks for fundamental rights, consumer rights and safety. The proposal aims to ban AI systems that harm people, manipulate their behaviour, opinions and decisions, deliberately exploit their vulnerabilities, or enable mass surveillance. Distributors, importers, users and other third parties would be subject to the same obligations as providers if they substantially modify an AI system, market it under their own name, or change its intended purpose.

Regulatory framework on AI (Shaping Europe's digital future, European Commission)

Source: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai


Key requirements include training data quality, record keeping, the provision of information to users, accuracy and robustness, and human oversight, along with specific requirements for certain AI applications such as remote biometric identification. In addition, the European Commission proposes a voluntary labelling scheme for low-risk AI applications not subject to the mandatory requirements.

European officials also want to restrict police use of facial recognition and to ban certain types of AI systems outright, part of a broader effort to regulate high-risk applications of artificial intelligence. The EU is pushing forward with first-of-their-kind rules on artificial intelligence (AI) amid fears that the technology is outpacing regulators. Proponents of the rules say adequate human oversight of artificial intelligence is needed; others warn that the world's first rules on how companies use AI could hinder innovation, with lasting economic consequences. The regulatory and policy developments of the first quarter of 2021 mark a global turning point for serious AI regulation in the US and Europe, with massive implications for technology companies and government agencies. Efforts to monitor the use of artificial intelligence come as no surprise to anyone who has followed policy developments in recent years, but the EU is undoubtedly pushing for stricter oversight at this time.

To meet its global AI ambitions, the EU has joined forces with like-minded states to consolidate its global vision of how AI should be used. This is the geopolitical dimension of the European Commission's new legislative proposal on artificial intelligence. Meanwhile, domestic AI policy continues to take shape in the United States, where it is largely focused on ensuring international competitiveness and strengthening national security capabilities.

On 11 February 2021, the European Union Agency for Cybersecurity (ENISA) and the European Commission's Joint Research Centre (JRC) released a joint report on the cybersecurity risks associated with the use of artificial intelligence in autonomous vehicles, along with recommendations on how to mitigate those risks. In June 2019, China's National New Generation Artificial Intelligence Governance Committee put forward harmony, fairness, justice, respect for privacy, security, transparency, accountability and cooperation as ethical principles for governing AI development.

Europe is discovering AI, and the European Commission has recognised the need to act on the technological changes brought about by AI. The European Union wants to avoid the worst of artificial intelligence while trying to increase its potential for the economy as a whole. According to a draft of the future EU rules obtained by Politico, the EU will ban certain applications of high-risk artificial intelligence systems and bar others from the bloc if they do not meet EU standards. Companies that fail to comply could be fined up to 20 million euros or 4 percent of their turnover. Proposals to require pre-market studies for non-medical algorithms could also harm the development of artificial intelligence, as such studies are time-consuming and expensive. For example, US states such as New York require autonomous vehicle manufacturers to conduct road tests under paid police supervision, which makes testing such vehicles costly.
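The draft's penalty clause pairs a flat cap with a turnover percentage. As a rough illustration only (not legal advice), and assuming the "whichever is higher" logic familiar from the GDPR's penalty structure, the upper bound of a fine could be sketched like this:

```python
def max_potential_fine(annual_turnover_eur: float) -> float:
    """Upper bound of a fine under the leaked draft: the greater of a flat
    EUR 20 million and 4% of annual turnover. The 'whichever is higher'
    reading is an assumption borrowed from the GDPR's penalty structure."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# For a firm with EUR 100 million turnover, the flat cap dominates:
print(max_potential_fine(100_000_000))    # 20000000.0
# For a firm with EUR 2 billion turnover, the turnover share dominates:
print(max_potential_fine(2_000_000_000))  # 80000000.0
```

For any firm with turnover above €500 million, the 4% component exceeds the flat cap, which is why the rule bites hardest for large players.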

Respondents attach great importance to the EU's role in shaping a coherent strategic vision for technology policy, with 70% describing it as "very important" or "somewhat important". This is not surprising given the EU's prominent role in digital regulation and its ambitious regulatory agenda: the Digital Services Act, the Digital Markets Act, data governance, cloud and cybersecurity rules, and the GDPR, to name just a few examples. In all these areas the role of member states has been rated lower than that of the EU, reflecting both the desire and the need for multi-level coordination between the EU and individual member states, and the role each of them plays.

The EU's artificial intelligence act made waves within hours of becoming known. Its advantages, however, should not be neglected. Algorithmic accountability, for example, requires operators of algorithms that make decisions about people to comply with the laws governing those decisions, such as anti-discrimination law. In addition, the EU Commission is considering a temporary ban on the use of facial recognition technology in public spaces for the next 3-5 years. In contrast, more than 600 law enforcement agencies in the US have started using the Clearview app. In the US, states such as New York and Oregon, as well as a number of cities, have responded to these developments by banning facial recognition technologies for police and government use.

The idea of regulating AI is not a bad one. If technology organisations are not held responsible for the way they use personal data, we create a predatory world. We tend to assume that the real world has one set of rules and the digital world another; the truth is that there is only one world. One criticism of the EU's aspirations must nevertheless be voiced: many companies are still trying to adjust to the EU's General Data Protection Regulation (GDPR). That highly anticipated, comprehensive privacy regulation was supposed to change the Internet for the better, but so far it has mostly frustrated users, businesses and regulators. It therefore stands to reason that we are well advised to prepare for an AI act full of challenges. At the same time, it is to be hoped that important lessons have been learned from the GDPR.
