UK proposes new AI rulebook

The UK government has published its plans to regulate the use of artificial intelligence (AI), departing from the EU's strategy of establishing a centralised body to oversee the use of the technology.

The UK government has regularly stated its ambition to turn the UK into a hub for AI products and services, as have other jurisdictions.

Regulation has emerged as a crucial element in ensuring a thriving AI industry, given that the technology's advancement has raised growing concern about how it is used.

Indeed, the ethical debate over AI use, and over ensuring that algorithms are explainable in some way, has even drawn in the Pope.

The UK's AI rulebook takes a principles-based approach that will enable regulators in different industries to apply the rules as they see fit. According to digital minister Damian Collins, this will enable a "flexible approach [that will] help us shape the future of AI".

"We want to make sure the UK has the right rules to empower businesses and protect people as AI and the use of data keeps changing the ways we live and work," said Collins. 

This is a clear departure from the EU's approach, outlined in its AI Act, under which the bloc is looking to harmonise AI regulation across both borders and sectors by establishing a single centralised body to police the use of AI technology.

The UK's rulebook characterises the EU's approach as relying on a "relatively fixed definition in its legislative proposals". In contrast, the UK, which describes AI as a "general purpose technology" akin to electricity or the internet, is looking to give individual regulators as much flexibility as possible when it comes to setting rules for their own sectors.

The six principles outlined in the rulebook are:

  • Ensure that AI is used safely
  • Ensure that AI is technically secure and functions as designed
  • Make sure that AI is appropriately transparent and explainable
  • Consider fairness
  • Identify a legal person to be responsible for AI
  • Clarify routes to redress or contestability

According to Jeff Watkins, chief product and technology officer at digital consultancy xDesign, the opposing stances boil down to the perennial question in regulation: whether to centralise or decentralise.

“On the one hand, centralised regulation is intuitively safer and should enforce fairness and explainability. But it risks stifling innovation," says Watkins. 

“The six principles outlined in the paper are certainly built on solid ground from an ethical point of view, but there are still big questions left unanswered, such as how many people subscribe to them, how companies will show their workings or mark their own homework, and how the governing bodies will interpret and implement the principles," he asks.

“One possible outcome is that other legislation is introduced to guard against wild abandon (such as the General Data Protection Regulation’s mandate of a ‘right to explanation’), but that's only invoked once a problem has already occurred. The AI Act itself is known to have limitations and loopholes, but it does attempt to shift direction on the legal and ethical concerns around the applications of AI," said Watkins.

“Doubtless the benefits of both approaches will continue to be carefully assessed as the world of technology and AI continues to rapidly evolve. But there’s no getting around the fact that this emerging technology will need proper governance in place to foster complete trust from the public,” said Watkins. 
