EU AI Act transparency obligations: latest developments and key requirements

A core set of requirements imposed by the EU AI Act (the Act) concerns transparency obligations in respect of the AI systems that businesses provide or deploy.

The majority of the Act’s obligations are expected to apply from 2 August 2026. The European Parliament has, however, agreed a proposal that would delay the obligations imposed in respect of high risk AI systems. The remaining provisions of the Act are largely unaffected, and businesses should operate on that basis, noting that breaching the transparency obligations can result in a fine of up to EUR 15 million or 3% of total worldwide annual turnover for the preceding financial year (whichever is higher).
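By way of illustration, a business with total worldwide annual turnover of EUR 1 billion could face a fine of up to EUR 30 million, since 3% of that turnover exceeds the fixed EUR 15 million figure; for smaller businesses, the EUR 15 million cap will usually be the higher, and therefore applicable, amount.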

The Act raised a number of questions around how companies would comply with their transparency obligations. This led to the creation of a draft code of practice, the “Code of Practice on Marking and Labelling of AI-generated content” (the Code), which integrates feedback from hundreds of participants and observers, including industry, academia and other stakeholders.

The Code of Practice on marking and labelling of AI-generated content

The second draft of the Code was published on 3 March 2026 and a final version is expected by June 2026. The draft remains subject to further amendment, but it sets out four key requirements for demonstrating compliance:

  1. multi-layered marking of content through metadata embedding, imperceptible watermarking, or fingerprinting/logging (a simple illustration follows this list);
  2. a free interface or publicly available tool, offered by providers, enabling users and third parties to verify whether content is AI-generated;
  3. technical solutions for marking and detection that are effective and reliable; and
  4. continuous testing and improvement to keep pace with real-world developments.
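By way of illustration only, the short Python sketch below shows one very simple way a provider might embed a machine-readable marker in an image file’s metadata. The field names and values used are hypothetical and are not prescribed by the Act or the draft Code; in practice, metadata embedding would sit alongside watermarking and fingerprinting as part of a multi-layered approach.

  from PIL import Image
  from PIL.PngImagePlugin import PngInfo

  # Hypothetical provenance marker written into a PNG's metadata.
  # The keys "ai_generated" and "generator" are illustrative only.
  image = Image.new("RGB", (512, 512))        # stand-in for an AI-generated image

  metadata = PngInfo()
  metadata.add_text("ai_generated", "true")
  metadata.add_text("generator", "example-model-v1")

  image.save("output.png", pnginfo=metadata)  # the marker travels with the file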

The transparency obligations

The Code is underpinned by the transparency obligations set out in the Act itself.

The extent of these obligations depends on factors such as whether the AI system is classified as limited or high risk, and whether you are a provider or a deployer.

For limited risk AI systems:

If you are a provider

A ‘provider’ is a company, individual, public authority, agency or body that: (a) develops, or procures the development of, an AI system or general-purpose AI model; and (b) places it on the market or puts it into service under its own name or trademark. In other words, this applies to those who set out to create, or procure the creation of, an AI system.

Providers of limited risk AI systems must comply with three core transparency requirements:

  1. AI systems must be designed to inform individuals that they are engaging with an AI system;
  2. providers must ensure that outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated; and
  3. the technical solutions employed must be effective, interoperable, robust and reliable.

The question of how providers can satisfy these requirements has been a recurring area of discussion, prompting the European Commission to step in and provide guidance through the voluntary Code on the transparency of AI-generated content discussed above.
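Continuing the hypothetical example above, the sketch below shows a minimal check for the embedded marker. A genuine verification tool of the kind contemplated by the Code would need to handle watermarks and fingerprints as well as metadata, and to cope with files whose metadata has been stripped or altered.

  from PIL import Image

  # Hypothetical check: read back the provenance marker embedded earlier.
  with Image.open("output.png") as image:
      is_marked = image.text.get("ai_generated") == "true"

  print("AI-generated marking present:", is_marked)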

If you are a deployer

In contrast, a ‘deployer’ is a company, individual, public authority, agency or body using an AI system under its authority, except where the AI system is used in a personal non-professional activity.

Given that deployers are effectively users with little to no control over the AI system, they are subject to far fewer disclosure requirements. The Act imposes transparency obligations on deployers of only three specific types of AI system:

  1. emotion recognition or biometric categorisation systems;
  2. deepfakes, where the system generates or manipulates image, audio or video content; or
  3. systems generating or manipulating text published to inform the public on matters of public interest.

For high risk AI systems:

If you are a provider

Unsurprisingly, the Act imposes the most extensive obligations on this category. In general, providers must supply instructions for safe use and information about the system’s accuracy, robustness and cybersecurity. Individuals overseeing such systems must be suitably qualified to understand the system’s capabilities and limitations, and providers must maintain various record-keeping and risk management protocols.

If you are a deployer

As with limited risk systems, deployers face fewer obligations than providers, but the obligations they do face are broader, reflecting the higher risk posed by these systems. They include specific governance, monitoring, transparency and impact assessment requirements. The key obligations can be grouped under two headings:

Operational obligations

The deployer must take appropriate measures to ensure the high-risk AI system is used in accordance with its instructions for use, ensure that input data is relevant and sufficiently representative for the system’s intended purpose, and monitor the system’s operation so that it can inform the provider if it identifies any risks or serious incidents.

Control and risk management obligations

A deployer must conduct a fundamental rights impact assessment (FRIA) before deploying the system, assign human oversight to individuals with the necessary competence and training, regularly monitor the AI system for risks, and retain the logs automatically generated by the system for at least six months.

Future outlook

The trajectory is unmistakable: the Act positions transparency as a core principle, and this will shape design choices, user interfaces and governance processes. Organisations will be expected to comply with both the Code and the transparency obligations that underpin it.

Companies leveraging AI along their supply chain should therefore prioritise embedding and documenting transparency measures that can withstand both regulatory and legal scrutiny, while ensuring alignment with wider IP governance and strategic commercial decisions.

For more information on the EU AI Act and the Code, and how they might impact your business, contact Sacha Wilson and Jacky Lai.

Unfair contract terms in consumer contracts: new draft guidance from the CMA

If you deal with consumers, then you need to know how consumer law applies to your contract terms and notices.

Ten years on from the introduction of the Consumer Rights Act 2015 (the CRA), the Competition and Markets Authority (the CMA) is revising its current guidance on unfair contract terms.

The draft is intended to make the guidance more accessible and to help businesses better understand and comply with the CRA. The consultation closed on 19 March 2026. Once finalised, the new guidance will replace the existing guidance on unfair contract terms.

Which terms are unfair?

Contract terms are unfair if they tilt rights and responsibilities excessively in favour of the supplier. The law applies a ‘fairness test’, looking at the words of the contract and taking into consideration what is being sold, how a term relates to the other terms of the contract, and all the circumstances at the time the term was agreed.

Certain terms and notices that give rise to particular concerns are ‘blacklisted’ and deemed unsuitable for use with consumers. These include terms that exclude or restrict liability for death or personal injury resulting from negligence, or that exclude or restrict a consumer’s statutory rights and any associated remedies. Blacklisted terms are never enforceable against a consumer.

What are the key changes in the draft guidance?

Enhanced CMA enforcement powers under the DMCC:

The updated guidance integrates the Digital Markets, Competition and Consumers Act 2024 (the DMCC), which enables the CMA to impose penalties directly, without going to court, on businesses that use prohibited, non-transparent or unfair terms or notices. Fines may be up to 10% of a company’s global turnover or £300,000 (whichever is higher).
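By way of illustration, a company with global turnover of £10 million could face a penalty of up to £1 million (10% of turnover), whereas for a company with global turnover of £2 million the fixed £300,000 figure would be the higher, and therefore applicable, cap.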

Transparency – more than words:

Transparency now covers not just the content of terms but also their presentation: clear fonts, headings that follow a logical structure, and explanations of any terms which may be complex or difficult to understand.

Fairness and consumer behaviour:

Under the draft guidance, the requirement of ‘good faith’ includes a behavioural dimension. Suppliers must consider consumer psychology and avoid exploiting consumer biases, for instance consumers’ tendency not to read standard terms thoroughly or to underestimate future costs such as renewal or termination fees. Campaigns emphasising quick benefits, such as a free trial, while using tactics that draw attention away from future costs will face greater scrutiny. Automatic renewal of subscriptions is also specifically noted as an area of concern, with the DMCC’s new subscription provisions (due to enter into force no later than August 2026) adding further obligations.

The role of advertising:

Advertising is explicitly incorporated into the fairness assessment, requiring consistency between terms and marketing claims. Small print which removes or curtails more prominent claims, a failure to highlight key terms during the marketing process, or inconsistency between marketing claims and the contract terms could each give rise to an unfair commercial practice. Statements made by a supplier that a consumer is likely to see may also be treated as terms of the contract.

Exclusions and variations to the contract:

Vague language such as “liability is excluded so far as the law permits” will not remedy an unfair clause. Terms allowing a supplier to vary the contract, for example by changing the description or price of the goods or services, may now be deemed unfair if they are overly wide in scope or could result in changes the consumer would not expect.

What are the key takeaways for consumer businesses?

The draft guidance makes clear that unfair, onerous or significantly unbalanced terms will be closely scrutinised. Suppliers should ensure that their communications with customers are clear, transparent and easy to understand.

Contract terms should similarly be reviewed to make sure that they strike a reasonable balance and do not prejudice consumers, for example by including reasonable protections around cancellation and refund rights.

For more information on how the new guidance will impact your consumer contracts, contact Sacha Wilson and Jacky Lai.

Government IT contracts: how to challenge the procurement process

If your business enters into contracts with public sector entities for the provision of IT or related services, you will be familiar with the public sector tender and procurement processes. But are you familiar with what can be done to challenge the outcome of those processes?

Whether it is an issue with the application of the scoring criteria, or how the process has been conducted, your business may have the ability to challenge contract awards.

However, in order to do so effectively, your business will need to move quickly and ensure that it deploys the various legal tools available to it strategically.

What is the relevant legislation?

In 2025, the Procurement Act 2023 (the Act) came into force. This represented the most significant development in UK public procurement law in over 30 years, replacing the well-established, EU-derived regime under the Public Contracts Regulations 2015 (the PCR).

How long do you have to bring a claim?

The period during which a legal claim can be brought under the Act is very short and remains largely unchanged from the PCR. In summary:

  • If you are a supplier seeking to challenge an award, the period to bring a claim is just 30 days from when you knew, or ought reasonably to have known, of the circumstances giving rise to the claim. However, this may be extended by up to three months where the court considers there is a good reason to do so.
  • If you are a supplier seeking to set aside a contract that has already been entered into, the period to bring a claim is 30 days from the date you knew, or ought to have known, of the circumstances giving rise to the claim, subject to a long stop of six months from the date the contract was entered into.

However, the parties can enter into a standstill agreement which, in effect, extends the limitation period, allowing the parties an opportunity to resolve the dispute.

Can you prevent the authority from entering into a contract with another supplier whilst you challenge the decision?

Under the previous regime, contracting authorities were required to observe a 10-day waiting period following the issue of a ‘standstill letter’ to all tendering suppliers before entering into a contract with the preferred supplier. Claims issued prior to contract execution would trigger an automatic suspension of the procurement process.

The Act reduces the standstill period from 10 to eight working days, with the period now triggered by the contract award notice instead of the issue of a standstill letter. Claimants are no longer entitled to the benefit of automatic suspension up until the date of contract execution. This is a significant shift from the previous position and impacts upon strategic considerations.

What information do you have about the decision-making process?

There are various ways to find out more about the decision-making process. In particular, contracting authorities must publish a contract award notice on a central digital platform and provide an assessment summary to each supplier that submitted an assessed tender.

The assessment summary must include: (a) the scores awarded for each criterion; (b) an explanation of those scores; and (c) in respect of unsuccessful suppliers, the reasons why the contract was not awarded to them, together with the corresponding information at (a) and (b) for the successful tender.

The enhanced disclosure requirements are a positive development for suppliers looking for substantive grounds on which to base a potential challenge.

What remedies can you obtain when challenging an award?

In many cases, compromise solutions are reached with the relevant authority without a claim needing to be issued. However, if you do pursue a claim, the remedies available remain mostly unchanged from the previous regime. There are two main categories:

Pre-contractual remedies:

Where a contract has been awarded but not yet executed, a successful challenge may result in the court granting one of the following orders:

  • an order setting aside the relevant decision or action (including the decision to award the contract);
  • an order requiring the contracting authority to take specified action (such as reconsidering a decision previously made);
  • an order for damages (which may be granted in addition to any other order, and has historically encompassed lost profits arising from the breach and/or wasted bid costs); or
  • such other order as the court considers appropriate.

Post-contractual remedies:

Where the awarded contract has been executed, the available remedies are limited to damages and/or an order setting aside the contract (subject to certain conditions in the Act).

What does this mean for suppliers?

If you are concerned about a procurement decision, then given the short timeframes for challenge, it is critical to seek legal advice at the earliest possible opportunity to allow your advisors time to evaluate the claim and devise and deploy the optimum strategy.

The Act’s emphasis on transparency and a level playing field, together with the new obligations it introduces for contracting authorities, expands the scope for potential challenges.

You will, however, need to navigate the reduced standstill period, which now runs for eight working days from the contract award notice, and the fact that automatic suspension is no longer available up to the date of contract execution.

If you would like to find out more about how to make procurement challenges, contact Lizzie Williams and Jacky Lai.

Model Behaviour: Stability AI’s model is not an “infringing copy”, but legality of AI training remains unresolved

In the recent judgment in Getty Images v Stability AI [2025] EWHC 2863 (Ch), the High Court considered whether the generative AI model Stable Diffusion infringed copyright in works owned by/licensed to Getty Images, and further whether the model outputs infringed Getty Images’ trade marks. Getty argued that millions of its images had been used without permission to train the Stable Diffusion model, and that the model itself was therefore an infringing copy of the works.

Crucially, the court was not considering whether copyright was infringed during the training process of Stable Diffusion, as those claims were not pursued to trial by Getty due to a lack of evidence of training having taken place in the UK. Instead, the High Court decided the much narrower issue of whether the trained Stable Diffusion model is itself an “infringing copy” of the copyright works it was trained on. If the model were an infringing copy, its import into the UK would have infringed Getty’s copyright under the secondary infringement provisions, even though the model had not been trained in the UK.

The High Court’s decision came down to the way in which Stable Diffusion was trained, and the relationship between the model and its training data. Stable Diffusion is a diffusion model, meaning its model weights are numerical parameters learned from training, not stored or compressed copies of its training data. The model does not contain any of Getty’s copyright images in any form whatsoever – and never has done – even though it may have been exposed to them during training. Getty’s secondary copyright claim failed as a result.
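As a purely illustrative toy sketch (which is not Stable Diffusion and makes no claim about its architecture), the following Python fragment captures the distinction the court drew: training adjusts a fixed-size set of numerical parameters, and the artefact that is ultimately distributed contains only those parameters, not the images the model was exposed to.

  import numpy as np

  # Toy illustration only: the stored model is a fixed-size array of numerical
  # parameters, however many training images it is exposed to. The images
  # themselves are discarded after each update and are not retained.
  rng = np.random.default_rng(0)
  weights = rng.normal(size=(64, 64))            # the model's only stored state

  def training_step(weights, image, lr=1e-4):
      # One hypothetical gradient-style update: the image nudges the
      # parameters and is then thrown away.
      residual = weights @ image - image         # illustrative objective only
      return weights - lr * residual @ image.T

  for _ in range(1_000):                         # stand-ins for training images
      weights = training_step(weights, rng.normal(size=(64, 64)))

  print(weights.shape)                           # (64, 64), regardless of dataset size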

Although Getty lost its secondary copyright infringement claim, this was a highly fact-specific decision relating to this model of Stable Diffusion only, a point the High Court stressed in its judgment. While it may be true that an “AI model which does not store or reproduce any copyright works (and has never done so) is not an “infringing copy””, this leaves the door open for an AI model that does store or reproduce copyright works (or has done so at some point) to be found to be an infringing copy of its training data. Other model architectures that retain or reproduce their training data verbatim – which is more common for text models than for image models like Stable Diffusion – may still be deemed infringing copies. In addition, there is scope for argument over whether a more liberal interpretation of what constitutes an infringing copy should be adopted: where a model has extracted the value and intellectual creation of copyright works, and in a manner not envisaged when the legislation was passed, why is this not reproduction of the underlying intellectual creation?

Further, as Getty dropped its training claims at trial, the UK courts have yet to decide whether the training of AI models on copyright works in the UK infringes copyright. That question will need to be resolved in a future claim involving an AI model that was trained (or at least partially trained) in the UK.

On the trade mark infringement claim, the court made a limited finding of infringement where early versions of the Stable Diffusion model produced outputs bearing Getty-style watermarks.

If you’d like to speak to a member of the team about any of the issues raised by the judgment, please reach out to one of our AI experts.