Lord Christopher Holmes delivered the following speech in the House of Lords on March 21, 2024.
I believe that if we are to secure the opportunities and control the challenges of artificial intelligence, it's time to legislate for AI systems that are principles-based, outcomes-focused, input-transparent and permissioned, that are paid for and understood. I believe there are at least three reasons why we should: social, democratic, and economic.
Reason one is social. Some of the greatest benefits we could secure from AI come in this space: education, truly personalised education for all; and healthcare, as we saw yesterday with the exciting early results from the NHS Grampian breast screening AI programme.
Reason two is democracy and jurisdiction. With 40% of the world's democracies going to the polls this year, deepfakes, cheap fakes, misinformation and disinformation mean we are in a high-threat environment for our democracy.
As our 2020 Democracy and Digital Technologies select committee report put it: “With a proliferation of misinformation and disinformation, trust will evaporate and without trust democracy as we know it will simply disappear.” Then there is our jurisdiction and system of law.
I believe the UK has a unique opportunity; we don't have to fear the first move. The EU has taken that with the EU AI Act, in all its 892 pages. The US has made an executive order but has yet to commit fully to this space. The UK, with our common law tradition respected around the world, has such an opportunity: to legislate in a way that is adaptive and versatile, able to develop through precedent and case law.
Reason three is our economy. The PwC AI tracker says that by 2030 AI will drive a 14% increase in global GDP, worth some $15.7 trillion. The UK must act to ensure our share of that AI boon. To take just one technology, chatbots: the global chatbot market has grown tenfold in just four years.
The Alan Turing Institute published a report on AI in the public sector this week, which states that 84% of government services, across more than 200 different services, could benefit from AI automation. Regulated markets perform better: right-size regulation is good for innovation and good for inward investment.
Those are the three reasons, but what about three individual impacts of AI right now? What if we find ourselves on the wrong end of an AI decision in a recruitment shortlisting, when being turned down for a loan or, even worse, when awaiting a liver transplant?
These are all illustrations of AI impacting individuals. Often those individuals wouldn't even know, or be able to find out, that AI was involved.
We also need to put an end to the myth, the false dichotomy, that you must either have heavy rules-based regulation or a free hand. The cry of the frontier exists in every epoch: “Don't fence me in.”
Right-size regulation is good for mitigating risk and ensuring public trust. It is positive socially, democratically and economically, because if AI is to human intellect what steam was to human strength, you get a sense of where we are. It is our time to act, and that is why I bring this bill to Your Lordships' House today.
In constructing the bill, I've sought to consult widely and to be cognisant of the government's pro-innovation white paper. I am grateful for all the great work from organisations, companies and individuals from the worlds of technology, industry, civil society and more.
I wanted the bill to be threaded through with the principles of transparency and trustworthiness, inclusion and innovation, interoperability and international focus, and accountability and assurance.
Clause 1 sets up an AI Authority. Lest anyone fear I'm proposing a huge, cumbersome, do-it-all regulator, I'm most certainly not. In many ways, this wouldn't be much bigger in scope than the unit currently proposed within the government's Department for Science, Innovation and Technology: an agile, right-size regulator, horizontally focused to look across all existing regulators, not least the economic regulators, to assess their competency to address the opportunities and challenges presented by AI, and to highlight the current regulatory gaps, such as those pointed out by the Ada Lovelace Institute.
For example, where do you go if you are on the wrong end of that AI recruitment shortlisting decision? The AI Authority would have to look across all relevant legislation, such as consumer protection and product safety, in order to assess its competency to address the challenges and opportunities presented by AI.
Clause 2 sets out those principles. Many of them will be recognisable, as they are taken from the government's white paper, but the bill puts them on a statutory footing. If they're good enough to be in the white paper, then we should commit to them, believe in them, and know that they will be our greatest guide to a positive path forward. Putting them in a statutory framework makes everything inclusive by design, with a proportionality thread running through all of the principles so that none of them can be deployed in a burdensome way.
Clause 3 deals with sandboxes, a practice so brilliantly developed in the UK in 2016 with the fintech regulatory sandbox. If you want a measure of its success, the approach has been replicated in well over 50 jurisdictions around the world. It enables innovation in a safe, regulated and supported environment, with real customers, a real market and real innovations.
Clause 4 introduces an AI responsible officer, conceived not necessarily as a person but as a role, to ensure the safe, ethical and unbiased deployment of AI in every organisation. This is not intended to be burdensome: in a start-up, for example, it doesn't have to be a whole person, but that function needs to be performed, with reporting requirements under the Companies Act that are understood by all businesses. Again, crucially, this requirement would be subject to a proportionality principle.
Clause 5 deals with labelling and IP. This is a critical part of how we will get this right: if anybody is supplied with goods or services where AI is in the mix, that will be clearly labelled, and AI can itself be part of the solution for providing this labelling. Where IP or third-party data is used, that use has to be reported to the AI Authority. Again, this can be done efficiently and effectively using the very technology itself.
On the question of IP, I met 25 organisations representing tens of thousands of our great creatives: the people who make us laugh, make us smile, challenge us and push us to places we never even knew existed; those who make music, such sweet music, where otherwise there may be silence. It is critical to understand that they want to be part of this AI transformation, but they want to be part of it in a consented, negotiated, paid-for manner. As Dan Guthrie, Director General of the Alliance for Intellectual Property, puts it: “It's extraordinary that businesses together worth trillions take creative IP without consent, without payment, whilst fiercely defending their own intellectual property.” This bill will change that.
Clause 6 deals with public engagement, which is, for me, the most important clause in the bill. Without public engagement, how can we have trustworthiness? People need to be able to ask: what's in this for me? Why should I care? How is this impacting my life?
How can I get involved?
We need to look at innovative ways to consult and engage. A good example from Taiwan is the alignment assembly, but there are hundreds of novel approaches. We should want government consultations to have millions of responses, which is both desirable and, thanks to technology, analysable.
That's the bill. We know how to do this: just last year, with the Electronic Trade Documents Act, we showed that we know how to legislate for the possibilities of these new technologies. We know how to innovate in the UK, with Turing, Lovelace, Berners-Lee, Demis Hassabis at DeepMind, and so many more.
If we know how to do this, why aren't we legislating? What will we know in 12 months' time that we don't know now about citizens' rights, consumer protection, IP rights, pro-innovation regulation, labelling and the opportunity to transform public engagement?
We need to act now; we know what we need to know. If not now, then when? The Bletchley summit last year was a success, understandably focused on safety. But having done that, it's imperative that we stand up all of the other elements of AI already impacting people's lives in so many ways, often, as already stated, without their knowledge. Perhaps the greatest learning from Bletchley was not so much the summit itself but what happened there two generations before, when a diverse team of talent gathered and deployed the technology of their day to defeat the greatest threat to our civilisation: talent and technology bringing forth light at one of the darkest hours of our human history.
From 1940s Bletchley to the 2020s United Kingdom, it's time. Time for us; time for human-led, principles-based artificial intelligence. It's time to legislate and it's time to lead: for transparency and trustworthiness, inclusion and innovation, interoperability and international focus, and accountability and assurance; for AI developers and deployers; for democracy itself; for citizens, for creatives, for our very country. It's time to legislate. It's time to lead. Our data. Our decisions. Our AI futures. That's what this bill is all about. I beg to move.