Six years in the making, the Data (Use and Access) Act is finally here. The Act gained Royal Assent on 19 June 2025 and is set to have a significant impact, despite some of the more controversial parts of the Bill being dropped before the law was passed.
The Data Act amends the UK GDPR, the Privacy and Electronic Communications Regulations and the Data Protection Act 2018, introducing a modernised data framework, extending "smart data" schemes, and laying the groundwork for a legally backed digital identity infrastructure.
Here is what this means for both regulated and unregulated firms:
Key highlights and impacts
- Smart data and open finance:
The Act expands "smart data" access beyond open banking to other sectors, including payments. The creation of a legal framework for smart data will enable real-time, automated data sharing based on "recognised legitimate interest" (a new lawful basis under article 6 of the UK GDPR) - a crucial element for detecting fraud in fast-moving payment systems. I expect to see innovative new tools for risk-scoring, fraud detection, and onboarding, with the potential to benefit financial institutions and consumers.
The Act also sets out consistent information standards for health and adult social care IT systems in England, enabling the creation of unified medical records accessible across all related services.
- Stronger regulatory backbone:
The Act introduces a significant increase in fines for breaches of the Privacy and Electronic Communications Regulations, from £500,000 to UK GDPR levels. Organisations could face fines of up to £17.5m or 4% of global annual turnover (whichever is higher) for the most serious infringements.
Other changes include allowing cookies to be set without consent for purposes such as web analytics and installing automatic software updates, and extending the 'soft opt-in' for electronic marketing to charities. I am also expecting standardised APIs and governance through Smart Data schemes, which should streamline cross-industry integration and accelerate fraud controls and compliance (a purely illustrative sketch of such a call follows this list).
- Automated decision-making:
The Act limits the right under article 22 of the UK GDPR for a data subject not to be subject to solely automated decisions, including profiling, that have a legal or similarly significant effect on them.
Under the new article 22A, a decision qualifies as 'based solely on automated processing' if there is 'no meaningful human involvement in the taking of the decision'. This could give companies the green light to use AI techniques on personal data scraped from the internet - for pre-employment background checks, for example - a possibility that faced some pushback during our debates in the Lords.
- Digital identity (a potential game-changer):
One of the most transformative aspects of the new Data Act is the introduction of a statutory foundation for a trusted digital identity ecosystem. This is a huge opportunity for improved onboarding experiences, reduced fraud, and lower AML compliance costs. It is also likely to foster new middleware services, such as identity orchestration, secure data exchange, and advanced analytics platforms, transforming the financial ecosystem.
While I have been pushing for this for many years because of all its potential benefits, and am delighted with this progress, I am also aware that there are serious questions about implementation and the Government's role in digital ID.
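To return to the smart data point flagged above: the Act does not prescribe any concrete API, so the following TypeScript sketch is purely illustrative of what a standardised, legitimate-interest-based fraud risk-scoring call under a Smart Data scheme might look like. Every name, endpoint and field in it is invented for illustration.

```typescript
// Hypothetical sketch only: the Act does not define concrete APIs, and no
// name, endpoint or field below comes from the legislation or any scheme.

interface SmartDataRequest {
  dataHolder: string;        // e.g. the account-holding institution
  subjectId: string;         // pseudonymous identifier for the data subject
  lawfulBasis: "recognised_legitimate_interest"; // article 6 UK GDPR basis
  purpose: "fraud_risk_scoring";
  fields: string[];          // the specific data points requested
}

interface SmartDataResponse {
  riskScore: number;         // 0 (low risk) to 1 (high risk)
  evaluatedAt: string;       // ISO 8601 timestamp
}

// Illustrative call against an imagined scheme endpoint.
async function scorePayment(req: SmartDataRequest): Promise<SmartDataResponse> {
  const res = await fetch("https://api.example-scheme.uk/v1/risk-score", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`Scheme request failed: ${res.status}`);
  return (await res.json()) as SmartDataResponse;
}
```

The value of a standardised shape like this is that every scheme participant makes and interprets the same call, which is what would let fraud signals move between institutions in real time.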
Key questions remain
- Government intervention in digital ID:
On this last point, the Government still has work to do on the details. There is a massive gap in understanding of how the Government's role will interact with the private sector in delivering digital ID, including where liability for security and privacy will sit.
There are currently over fifty digital ID providers certified against the rules and standards laid out in the UK digital identity trust framework - these include providers of digital identity wallets as well as orchestration service providers. This digital identity and attributes trust framework covers areas such as data privacy, cybersecurity, and fraud management.
Effective functioning is vital to protect all our security and privacy and engender trust in the framework. Later this year the Department for Science, Innovation and Technology will launch a government wallet containing government-issued "verifiable credentials", such as a mobile driving licence.
This raises questions over how the government wallet and other providers will interact, including who will be liable for ensuring protections when providers (whether wallet providers or orchestration service providers) share government-issued verifiable credentials from the government wallet.
Richard Oliphant, a legal consultant, has painted a picture of the problems caused by the current lack of clarity using the example of a mobile driving licence being used as proof of age when buying alcohol. He invites us to consider two scenarios. In the first, the holder of the government wallet uses a certified orchestration service provider to connect to a relying party and share their mobile driving licence to prove they are over 18 when buying a bottle of vodka.
In the second scenario, the holder of the government wallet can request that the certified digital ID wallet provider take their mobile driving licence and create a "derived credential". This derived credential is no longer a mobile driving licence, but it can be stored in a digital wallet and presented as proof of age to the relying party for buying the bottle of vodka.
There are now two new digital methods for age verification (both flows are sketched in code at the end of this section). But who is liable for the accuracy and integrity of the mobile driving licence and the derived credential?
In the case of a mobile driving licence which is shared with the relying party via an orchestration service provider, the Department for Science, Innovation and Technology has said publicly that the government is liable for the mobile driving licence and any government-issued verifiable credentials shared with the relying party (although there are questions about what precisely is meant by this).
But if a digital ID wallet provider creates a derived credential from the mobile driving licence which is then shared with the relying party, how is liability apportioned between the UK government and the provider? The devil, as always, is in the details - and when it is a question of privacy, security, and identity, the details really matter.
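Here is that sketch: a toy TypeScript model of the two flows, written under my own assumptions - none of these types or functions comes from the trust framework or any real wallet implementation.

```typescript
// Purely illustrative model of the two scenarios above. All names and
// structures are invented; they are not drawn from the UK digital identity
// trust framework or any real wallet API.

interface MobileDrivingLicence {
  type: "mDL";
  issuer: "UK_Government";       // the government remains the issuer
  dateOfBirth: string;           // ISO 8601 date, e.g. "2001-03-15"
}

interface DerivedAgeCredential {
  type: "derived_over_18";
  issuer: string;                // the wallet provider, not the government
  over18: boolean;
  derivedFrom: "mDL";
}

// Scenario 1: an orchestration service provider passes the government-issued
// mDL through unchanged; the relying party sees a government credential.
function presentViaOrchestrator(mdl: MobileDrivingLicence): MobileDrivingLicence {
  return mdl;
}

// Scenario 2: the wallet provider mints a new credential derived from the
// mDL; the relying party now relies on the provider's derivation instead.
function deriveAgeCredential(
  mdl: MobileDrivingLicence,
  providerName: string,
): DerivedAgeCredential {
  const dob = new Date(mdl.dateOfBirth);
  // A person is over 18 once the date 18 years after their birth has passed.
  const eighteenth = new Date(dob.getFullYear() + 18, dob.getMonth(), dob.getDate());
  return {
    type: "derived_over_18",
    issuer: providerName,        // the liability question: provider or government?
    over18: new Date() >= eighteenth,
    derivedFrom: "mDL",
  };
}
```

The structural difference shows up in the issuer field: in the first flow the relying party is still trusting a government-issued credential, while in the second it is trusting the provider's derivation - which is exactly where the liability question bites.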
- AI and copyright:
Thanks to Elton John and Sir Paul McCartney, among others, the most column inches generated by this legislation came from the Lords v Commons battle over AI and copyright. Baroness Kidron tabled various amendments designed to force transparency over training data so that copyright legislation could be enforced.
AI has an insatiable appetite for data. LLMs and GenAI need a constant supply of it for training and improvement, and currently this is happening without recognition or compensation for copyright holders - the musicians and writers whose work may be used to train AI models. The government stood firm against the changes despite being defeated five times in votes in the Lords, and the Act has passed without Baroness Kidron's proposed amendments.
This is another issue, though, that is not going away. The government's consultation on AI and copyright ended in February. Among other options, it proposed giving copyright holders the right to opt out of their work being used to train AI. Once the government publishes its response to the consultation, it will have to consider how to proceed - whether proposals come in the form of a new copyright bill, an AI regulation bill or even, as rumours have it, a joint Bill from DSIT. Watch this space.
The journey towards a fully digitised, data-driven economy is accelerating. While the benefits are clear, addressing the intricate questions of trust, security, and functionality in delivering digital identity, and resolving the legal and moral rights around training AI models, will be paramount to ensuring trusted and widespread success.