Data protection update

This update covers key developments including the ICO-HMG memorandum of understanding on data protection, new provisions under the Data (Use and Access) Act, guidance on international data transfers and age assurance, and significant enforcement action, such as fines for unsolicited marketing, misuse of biometric data and breaches involving children’s data, alongside global concerns over AI and high-profile investigations.

General updates

  • On 8 January, the Information Commissioner’s Office and His Majesty’s Government (HMG) signed a Memorandum of Understanding (MOU) to formalise their shared commitment to improving data protection standards. The MOU includes the appointment of a Government Chief Data Officer to oversee data protection risks and compliance across HMG departments, while key governance boards, such as the Transformation Board and the Government Security Board, will monitor data protection risks and progress.
  • On 3 February, the ICO opened formal investigations into X Internet Unlimited Company (XIUC) and X.AI LLC (X.AI) covering their processing of personal data in relation to the Grok artificial intelligence system and its potential to produce harmful sexualised image and video content.
  • On 5 February, most of the remaining data protection provisions of the Data (Use and Access) Act came into force, except for the requirement for organisations to have a complaints procedure, which is due to commence on 19 June 2026, and some ICO governance provisions, which will follow at a later date. The provisions now in force include the requirement to carry out only a “reasonable and proportionate” search in response to data subject access requests, and an increase in the maximum fine under the Privacy and Electronic Communications Regulations from £500,000 to match the UK GDPR maximum of £17.5 million or 4% of global annual turnover, whichever is greater (so a company with, for example, £1 billion in global turnover could face a fine of up to £40 million).
  • On 23 February, privacy regulators from around the world issued a joint statement addressing mounting concerns over artificial intelligence (AI) systems that create realistic images and videos of identifiable individuals without their consent.
  • On 11 February 2026, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) issued a joint opinion on the European Commission’s Digital Omnibus Regulation proposal, which seeks to streamline digital regulations, reduce administrative burdens, and enhance competitiveness across the EU. The EDPB and EDPS strongly oppose proposed changes to the definition of personal data, warning that they could narrow its scope, weaken privacy protections, and create legal uncertainty. At the same time, the regulators back several provisions aimed at reducing administrative burdens, including raising thresholds for mandatory data breach notifications and extending reporting deadlines.
  • On 25 April, John Edwards, the UK’s Information Commissioner, announced that he has temporarily stepped back from his role as the ICO conducts an independent investigation into unspecified “HR matters.” Edwards, who has held the position since January 2022, announced his cooperation with the inquiry in a LinkedIn post.

Latest guidance

  • On 15 January 2026, the Information Commissioner’s Office released updated guidance on international transfers of personal data under the UK GDPR. Key updates include a three-step test for restricted transfers and explanations of roles and responsibilities, particularly for complex, multi-layered transfer scenarios.
  • On 12 March 2026, the UK’s data protection regulator, the Information Commissioner’s Office, published an open letter to social media and video-sharing platforms operating in the UK, calling on them urgently to strengthen their age assurance measures.
  • On 25 March 2026, Ofcom and the Information Commissioner’s Office released a joint statement outlining regulatory expectations for age assurance measures under the Online Safety Act and UK data protection laws. The statement aims to help online services protect children from harmful content and data risks while ensuring compliance with both legal frameworks.
  • On 31 March, the ICO called on businesses to review their use of automated decision-making in recruitment to ensure compliance with data protection laws and to protect jobseekers from unfair or biased outcomes.
  • On 29 April 2026, the Information Commissioner’s Office (ICO) released its finalised guidance on Storage and Access Technologies alongside an update on its online tracking strategy. This guidance addresses the application of the Privacy and Electronic Communications Regulations and, where relevant, the UK GDPR to technologies such as cookies, tracking pixels, device fingerprinting, and similar tools. It incorporates updates following two consultations and amendments introduced by the Data (Use and Access) Act 2025.
  • On 14 April 2026, the European Data Protection Board announced a new Data Protection Impact Assessment template to simplify compliance with the General Data Protection Regulation and promote consistency across Europe.

Latest enforcement action

  • On 15 January, the Information Commissioner’s Office fined Allay Claims Ltd £120,000 for sending over 4 million unsolicited marketing SMS messages between February 2023 and February 2024. These messages promoted PPI tax refund services and were sent without valid consent or compliance with the ‘soft opt-in’ exemption. Allay argued that recipients were existing customers who had engaged with the company in 2019 and signed terms of engagement, which it believed satisfied the ‘soft opt-in’ exemption. However, aggravating factors included a previous ICO investigation into Allay in 2020 for PECR breaches; despite that investigation and the complaints received, Allay failed to suspend its marketing activities, resulting in further complaints. The ICO also took into account the distress caused to recipients, as unsolicited marketing is intrusive and can lead to financial harm, particularly in the context of PPI tax refund services, which often involve high fees and hidden charges.
  • On 2 January, the President of the Personal Data Protection Office (Poland’s data protection authority) imposed a fine of PLN 978,128 (approximately €232,379) on T. S.A. for failing to ensure the independence of its Data Protection Officer (DPO) and for the absence of measures to prevent conflicts of interest in the DPO’s role. The DPO of T. S.A. simultaneously held a managerial role (Director V.) and other positions within the company. The company’s history of GDPR violations was considered an aggravating factor, as it demonstrated ongoing compliance challenges. The company resolved the identified issues by restructuring the DPO’s role before the administrative proceedings concluded, which led to a 40% reduction in the fine.
  • On 29 January, the Italian Data Protection Authority (GPDP) fined e-Campus Online University €50,000 for unlawfully using facial recognition technology to verify student attendance during a teacher qualification course. The university processed biometric data without a valid legal basis, relying on invalid consent while failing to conduct a proper Data Protection Impact Assessment (DPIA) before implementation. The GPDP highlighted several violations of GDPR, including unnecessary data retention, lack of alternatives for students, and the power imbalance inherent in requiring biometric data for course participation. While the university cooperated with the investigation and ceased using the system, the fine reflected the serious nature of processing sensitive biometric data and the large number of students affected.
  • On 13 February, the ICO and Ofcom responded to an open letter from approximately 20 MPs urging the ICO to investigate Tattle Life for potential breaches of data protection laws following the death of a social media influencer’s 16-year-old daughter.
  • The ICO confirmed it has an ongoing investigation into Tattle Life, examining its compliance with data protection laws. These include obligations to process personal data lawfully, transparently, and fairly, and to address user requests for data rectification or erasure. While the ICO does not have the authority to shut down websites, it can issue enforcement notices to ensure compliance if data protection violations are identified.
  • On 19 February, the ICO won its appeal in a landmark case against DSG Retail Limited. The dispute originated from a 2020 ICO fine of £500,000 imposed on DSG after a cyber-attack compromised the personal data of at least 14 million individuals. Following appeals by DSG to the First-tier Tribunal and Upper Tribunal, the ICO sought further clarification on a critical point of data protection law by appealing to the Court of Appeal in 2024. The court clarified that the duty to protect personal data applies even if the stolen data cannot directly identify individuals, recognising the broader harm caused by cyber-attacks.
  • On 3 February, the ICO reprimanded Staines Health Group for sending excessive medical details about a terminally ill patient to their insurance company, Vitality. A patient at the NHS GP surgery was diagnosed with a terminal illness and made a claim to their insurer. The insurer, on behalf of the patient, requested that five years of medical history be sent to the patient for review before being forwarded to the insurer to progress the claim. Instead, Staines Health Group sent 23 years of medical records directly to the insurer. The patient believed the excessive disclosure of unnecessary medical records led to a reduction in the payout of their claim.
  • On 3 February, the ICO issued a monetary penalty of £100,000 to TMAC Ltd for making calls promoting alarm systems and monitoring services to individuals registered with the Telephone Preference Service.
  • On 4 February, the ICO issued a Penalty Notice to MediaLab.AI, Inc. fining it £247,590 for UK GDPR breaches relating to children’s data and the absence of a DPIA. The ICO found unlawful processing of under-13s’ data without valid parental consent and a failure to complete a DPIA for high-risk processing affecting under-18s between 27 September 2021 and 30 September 2025.
  • On 23 February 2026, the ICO issued a Penalty Notice to Reddit, Inc. of £14,472,500 for UK GDPR breaches involving children’s personal data and a failure to complete a DPIA.

The AI-enabled threat landscape: real world lessons from lawyers, PR and cybersecurity experts

In collaboration with Sodali & Co and LevelBlue, we have produced a new report offering vital insights into AI-driven cybercrime. Designed for non-technical executives and board members, it highlights key threats, practical talking points, and actionable steps to support discussions with risk, legal, and cyber security teams.

AI is transforming the cyber threat landscape, enabling faster, cheaper and more personalised attacks while lowering the entry barrier for malicious actors. These risks pose significant financial, operational and reputational challenges for businesses.

EU AI Act Transparency Obligations: latest developments and key obligations

A core requirement of the EU AI Act (the Act) is the set of transparency obligations imposed in respect of the AI systems used.

The majority of the Act is expected to come into force on 2 August 2026. The European Parliament, however, has agreed a proposal that would delay the obligations imposed in respect of high-risk AI systems. The remaining provisions of the Act remain largely unaffected, and businesses should operate on that basis, noting that breaching these obligations can result in a fine of up to EUR 15 million or 3% of total worldwide annual turnover for the preceding financial year (whichever is higher).

The Act raised a number of questions about how companies would comply with their transparency obligations. This led to the creation of a draft code of practice, the “Code of Practice on Marking and Labelling of AI-generated content” (the Code), which integrates feedback from hundreds of participants and observers, including industry, academia and other stakeholders.

The Code of Practice on marking and labelling of AI-generated content

The second draft of the Code was published on 3 March 2026 and a final version is expected by June 2026. The Code is subject to further amendments, but sets out four key requirements to demonstrate compliance (a brief illustrative sketch follows the list):

  1. multi-layered marking through metadata embedding, imperceptible watermarking, or fingerprinting/logging;
  2. a free interface or publicly available tool, offered by providers, enabling users and third parties to verify whether content is AI-generated;
  3. effective and reliable technical solutions for marking and detection; and
  4. continuous testing and improvement to keep pace with real-world developments.
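
By way of illustration only, the sketch below shows one very simple way a provider might embed a machine-readable marker in a generated image and later verify it, using Python and the Pillow imaging library. The field names (ai_generated, generator) and file names are hypothetical choices of ours, not terms prescribed by the Code, and production systems would more likely rely on emerging provenance standards such as C2PA content credentials.

  # Illustrative sketch only (not the Code's prescribed mechanism): embedding
  # and checking a machine-readable "AI-generated" marker in a PNG's text
  # metadata using the Pillow library. Field names are hypothetical; this
  # addresses only the simplest layer of requirement 1 and the verification
  # idea behind requirement 2.
  from PIL import Image
  from PIL.PngImagePlugin import PngInfo

  def mark_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
      """Save a copy of an image with provenance metadata embedded."""
      image = Image.open(src_path)
      metadata = PngInfo()
      metadata.add_text("ai_generated", "true")  # machine-readable flag
      metadata.add_text("generator", generator)  # originating model or tool
      image.save(dst_path, pnginfo=metadata)

  def is_marked_ai_generated(path: str) -> bool:
      """Return True if the image carries the illustrative AI-generated flag."""
      return Image.open(path).text.get("ai_generated") == "true"

  if __name__ == "__main__":
      mark_as_ai_generated("output.png", "output_marked.png", "example-model-v1")
      print(is_marked_ai_generated("output_marked.png"))  # True

Because plain metadata of this kind can be stripped trivially, the Code pairs it with imperceptible watermarking and fingerprinting (the “multi-layered” element of requirement 1) and requires continuous testing of whichever techniques are used.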

The transparency obligations

The Code is underpinned by the underlying transparency obligations in the Act.

The extent of these obligations depends on factors such as whether the AI system is classified as limited risk or high risk, and whether you are a provider or a deployer.

For limited risk AI systems:

If you are a provider

A ‘provider’ is a company, individual, public authority, agency or body that: (a) develops, or procures the development of, an AI system or general-purpose AI model; and (b) places it on the market or puts it into service under its own name or trademark. In other words, this applies to those who set out to create, or procure the creation of, an AI system.

Providers of limited risk AI systems must comply with three core transparency requirements:

  1. AI systems must be designed to inform individuals that they are engaging with an AI system;
  2. Providers must ensure that outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated; and
  3. Technical solutions employed must be effective, interoperable, robust and reliable.

The question of how providers can satisfy these requirements has been a recurring area of discussion, such that the European Commission has stepped in to provide guidance via the voluntary code of practice on the transparency of AI-generated content, discussed in further detail above.

If you are a deployer

In contrast, a ‘deployer’ is a company, individual, public authority, agency or body using an AI system under its authority, except where the AI system is used in a personal non-professional activity.

Given that deployers are effectively users with little to no control over the AI system, they are subject to far fewer disclosure requirements. The Act imposes obligations only on deployers of three specific types of AI systems:

  1. emotion recognition or biometric categorisation systems;
  2. deepfakes, where the system generates or manipulates image, audio or video content; or
  3. systems generating or manipulating text published to inform the public on matters of public interest.

For high-risk AI systems:

If you are a provider

Unsurprisingly, the Act imposes the most obligations on this category. In general, providers must supply instructions for safe use and information about accuracy, robustness, and cybersecurity, and must maintain various recordkeeping and risk management protocols. Individuals overseeing such systems must be suitably qualified to understand the system’s capacities and limitations.

If you are a deployer

As with limited risk systems, deployers face fewer obligations than providers, but a broader set than deployers of limited risk systems, reflecting the higher-risk nature of these AI systems. These include specific governance, monitoring, transparency and impact assessment requirements. The key obligations can be grouped under two headings:

Operational obligations

The deployer must implement appropriate measures to ensure the high-risk AI system is used in accordance with its instructions for use, ensure that input data is relevant and sufficiently representative for the system’s intended purpose, and monitor the system’s operation so that it can inform the provider if it identifies any risks or serious incidents.

Control and risk management obligations

A deployer must conduct a fundamental rights impact assessment (FRIA) before deploying the system, assign human oversight to individuals with the necessary competence and training, regularly monitor the AI system for risks, and keep the logs automatically generated by the AI system for at least six months.

Future outlook

The trajectory is unmistakable: the Act positions transparency as a core principle, which will shape design choices, user interfaces and governance processes. Organisations will be expected to comply with the Code and the transparency obligations that underpin it.

Companies leveraging AI along their supply chain should therefore prioritise embedding and documenting transparency measures that can withstand both regulatory and legal scrutiny, while ensuring alignment with wider IP governance and strategic commercial decisions.

For more information on the EU AI Act and the Code, and how they might impact your business, contact Sacha Wilson and Jacky Lai.

Managing risks and opportunities with AI

In a GC100 poll of 106 companies in September 2024, 8% of respondents reported they already regularly used Co-Pilot and Teams Premium for transcription of initial draft minutes.

Since then, there has been an influx of providers in the market that can prepare agendas, summarise discussions, and draft lists of action points. Before employing such AI tools in your company, it is essential to consider whether the use of AI is appropriate, and, if so, whether all the necessary risk-mitigation steps have been taken.

What are the key risks?

  • AI tools cannot reliably differentiate between contexts and tones, cannot exercise discretion, and may fail to identify commercial nuances; their use may also stifle candid discussion.
  • Confidential and sensitive discussions may become disclosable in future litigation or regulatory contexts, as part of subject access requests, or during due diligence processes. AI tools may not be able to identify legally privileged information, which could lead to an inadvertent loss of privilege.
  • Without careful review, there is a risk that AI will not deliver an accurate and impartial formal record of the decisions made.
  • When using AI tools that rely on third-party providers, organisations face the risk of data breaches and confidential information leaks.

How can you mitigate these risks?

  • The most effective risk-mitigation measure is to ensure human review by an employee of an appropriate level. This helps ensure a clear, accurate, and concise record, with careful consideration given to the inclusion of any commercially sensitive or legally privileged information.
  • Companies should conduct thorough due diligence on any AI providers, including understanding the extent to which the provider uses data inputted by users for AI development or training activities and any processes that can lead to a leak of confidential company information.
  • Companies should also ensure suppliers comply with all relevant data protection regulations and have adequate security systems in place, so that their AI tools do not increase the company’s exposure to cyber-attacks, data breaches, or leaks.
  • Organisations should establish robust consent and communication processes regarding the use of AI tools internally and externally. Companies should think carefully before enabling AI tools by default, and should ensure users are aware of, and able to exercise, the option to opt out of such use.

Key takeaway

Consider carefully whether the benefits of using AI tools outweigh the significant risks. If such tools are used, human oversight, comprehensive due diligence on AI providers, and clear guidance are essential to protect organisations.

Model Behaviour: Stability AI’s model is not an “infringing copy”, but legality of AI training remains unresolved

In the recent judgment in Getty Images v Stability AI [2025] EWHC 2863 (Ch), the High Court considered whether the generative AI model Stable Diffusion infringed copyright in works owned by/licensed to Getty Images, and further whether the model outputs infringed Getty Images’ trade marks. Getty argued that millions of its images had been used without permission to train the Stable Diffusion model, and that the model itself was therefore an infringing copy of the works.

Crucially, the court was not considering whether copyright was infringed during the training process of Stable Diffusion, as those claims were not pursued to trial by Getty due to a lack of evidence of training having taken place in the UK. Instead, the High Court decided the much narrower issue of whether the trained Stable Diffusion model is itself an “infringing copy” of the copyright works it was trained on. If the model were an infringing copy, its import into the UK would, under the law of secondary copyright infringement, have infringed Getty’s copyright, even though the model had not been trained in the UK.

The High Court’s decision came down to the way in which Stable Diffusion was trained, and the relationship between the model and its training data. Stable Diffusion is a diffusion model, meaning its model weights are numerical parameters learned from training, not stored or compressed copies of its training data. The model does not contain any of Getty’s copyright images in any form whatsoever – and never has done – even though it may have been exposed to them during training. Getty’s secondary copyright claim failed as a result.

Although Getty lost its secondary copyright infringement claim, this was a highly fact-specific decision relating only to this version of Stable Diffusion, as the High Court stressed. While an AI model which does not store or reproduce any copyright works (and has never done so) is not an “infringing copy”, this leaves the door open for an AI model that does store or reproduce copyright works (or has done so at some point) to be found an infringing copy of its training data. Other model architectures that retain or reproduce their training data verbatim – more common for text models than for image models like Stable Diffusion – may still be deemed infringing copies. In addition, there is scope for argument over whether a more liberal interpretation of “infringing copy” should be adopted: where a model has extracted the value and intellectual creation of copyright works, in a manner that was not envisaged when the legislation was passed, why is this not reproduction of the underlying intellectual creation?

Further, as Getty dropped its training claims at trial, the UK courts are yet to decide on whether the training of AI models using copyright works in the UK infringes copyright. That question will need to be decided in a future claim involving an AI model that was trained (or at least partially trained) in the UK. 

On the trade mark infringement claim, the court made a limited finding of trade mark infringement where early model versions of Stable Diffusion produced outputs with Getty-style watermarks.

If you’d like to speak to a member of the team about any of the issues raised by the judgment, please reach out to one of our AI experts.

The UK’s Data (Use and Access) Bill passes as the Lords concede on a push for AI transparency to protect creative industries

On 11 June, the House of Lords debated amendments to the Data (Use and Access) Bill (the Bill), marking the culmination of an extensive “ping-pong” process between the House of Lords and the House of Commons over protections for copyright holders in the context of artificial intelligence (AI).

What was the debate about?

  • The Government’s commitment to protecting copyright holders remains, but it argues it cannot act prematurely without completing consultations on the issue. Emphasising the importance of transparency, enforcement and remuneration, it insisted on following due process, including analysing over 11,500 consultation responses and establishing technical and parliamentary working groups.
  • Several Lords, including Baroness Kidron and Lord Berkeley of Knighton, expressed frustration at the Government’s inaction. They argued that immediate transparency measures are needed to protect copyright holders from exploitation by AI companies. The creative sector fears that AI systems are using copyrighted works without consent or compensation, which could undermine the livelihoods of artists, writers, musicians and others.

What happened?

In an effort to ensure transparency and incentivise AI developers to comply with copyright law, Lord Berkeley of Knighton introduced a new amendment to the Bill requiring AI developers to disclose which copyrighted works they use for training and how they access them, unless a licence has been agreed with rights holders.

Lord Berkeley ultimately withdrew his amendment, citing a desire to maintain the dignity of the House and avoid further unnecessary divisions. However, he and others urged the Government to take the concerns of the creative industries seriously and act swiftly to address them.

What will happen next?

The Bill now awaits Royal Assent. Once in force, it will reform elements of the UK GDPR and the Privacy and Electronic Communications Regulations – from introducing a list of recognised legitimate interests to adding new exceptions to the consent requirements for cookies and similar technologies.

It should be noted that, while the UK’s adequacy decision from the EU allowing the free flow of personal data transfers has been extended to 27 December 2025, the Bill does introduce changes to the UK GDPR that ultimately mark a departure from the EU GDPR. As such, we wait eagerly to see whether the EU will decide that the UK’s data protection regime continues to offer materially equivalent protections, in order to maintain the free flow of transfers between the UK and the EU.

If you would like more information, please feel free to reach out to one of our dedicated data protection lawyers, or if you would like to keep up to date on the latest in data protection, please subscribe to our quarterly newsletter, The Data Download.