EU AI Act Transparency Obligations: latest developments and key requirements

A core set of requirements imposed by the EU AI Act (the Act) concerns transparency obligations for the AI systems that businesses develop and use.

Most of the Act's provisions are expected to apply from 2 August 2026. The European Parliament, however, has agreed a proposal that would delay the obligations imposed in respect of high-risk AI systems. The remaining provisions of the Act are largely unaffected, and businesses should operate on that basis, noting that breaching the transparency obligations can result in a fine of up to EUR 15 million or, if higher, 3% of total worldwide annual turnover for the preceding financial year.

The Act raised a number of questions about how companies would comply with their transparency obligations. This led to the creation of a draft code of practice, the “Code of Practice on Marking and Labelling of AI-generated content” (the Code), which integrates feedback from hundreds of participants and observers across industry, academia and other stakeholder groups.

The Code of Practice on marking and labelling of AI-generated content

The second draft of the Code was published on 3 March 2026, and a final version is expected by June 2026. The Code remains subject to further amendment, but sets out four key requirements for demonstrating compliance:

  1. multi-layered marking of AI-generated content through metadata embedding, imperceptible watermarking, or fingerprinting/logging (a simple illustration follows this list);
  2. a free interface or publicly available tool, offered by the provider, that enables users and third parties to verify whether content is AI-generated;
  3. technical solutions for marking and detection that are effective and reliable; and
  4. continuous testing and improvement to keep pace with real-world developments.
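
To make the first two requirements concrete, the sketch below shows a deliberately simplified form of machine-readable marking: embedding a provenance flag in an image's metadata, together with the kind of check a public verification tool might expose. It is a minimal sketch, assuming Python with the Pillow library; the metadata key and values are our own illustrative choices, not terms mandated by the Code.

```python
# Illustrative only: a toy form of machine-readable marking and verification
# for AI-generated images. Assumes the Pillow library (pip install Pillow).
# The metadata key below is a hypothetical choice, not mandated by the Code.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

MARKER_KEY = "ai_generated"  # hypothetical provenance key

def mark_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Embed a machine-readable 'AI-generated' flag in a PNG's metadata."""
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text(MARKER_KEY, "true")
    meta.add_text("generator", generator)  # record which system produced the output
    image.save(dst_path, pnginfo=meta)

def is_marked_ai_generated(path: str) -> bool:
    """The kind of check a free, publicly available verification tool might offer."""
    image = Image.open(path)
    return image.info.get(MARKER_KEY) == "true"
```

Plain metadata of this kind is stripped by many re-encoding pipelines, which is precisely why the Code pairs it with imperceptible watermarking and fingerprinting/logging, and why the third requirement insists that the techniques be effective and reliable.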

The transparency obligations

The Code is underpinned by the transparency obligations set out in the Act itself.

The extent of these obligations depends on factors such as whether the AI system is classified as limited risk or high risk, and whether you are a provider or a deployer.

For limited risk AI systems:

If you are a provider

A ‘provider’ is a company, individual, public authority, agency or body that: (a) develops, or procures the development of, an AI system or general-purpose AI model; and (b) places it on the market or puts it into service under its own name or trademark. In other words, this applies to those who set out to create, or procure the creation of, an AI system.

Providers of limited risk AI systems must comply with three core transparency requirements:

  1. AI systems must be designed to inform individuals that they are interacting with an AI system (see the sketch after this list);
  2. providers must ensure that outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated; and
  3. the technical solutions employed must be effective, interoperable, robust and reliable.
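
As a purely illustrative example of the first requirement, the snippet below shows one way a conversational system might surface an AI-interaction disclosure before any generated reply is delivered. The disclosure wording and function names are our own assumptions; the Act requires that individuals be informed, not any particular mechanism or wording.

```python
# Illustrative only: surfacing an AI-interaction disclosure in a chat flow.
# The wording and structure are hypothetical; the Act requires that people
# are informed they are interacting with AI, not this specific mechanism.

AI_DISCLOSURE = (
    "You are interacting with an AI system. "
    "Responses are generated automatically and may contain errors."
)

def start_session(send_message) -> None:
    """Deliver the disclosure before the first AI-generated reply."""
    send_message(AI_DISCLOSURE)

def reply(user_input: str, generate, send_message) -> None:
    """Generate and send a reply within an already-disclosed session."""
    send_message(generate(user_input))

if __name__ == "__main__":
    transcript: list[str] = []
    start_session(transcript.append)
    reply("Hello", lambda text: f"Echo: {text}", transcript.append)
    print("\n".join(transcript))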

The question of how providers can satisfy these requirements has been a recurring area of discussion, prompting the European Commission to step in with guidance through the voluntary Code on the transparency of AI-generated content discussed above.

If you are a deployer

In contrast, a ‘deployer’ is a company, individual, public authority, agency or body that uses an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.

Given that deployers are effectively users with little or no control over the AI system, they are subject to far fewer disclosure requirements. The Act imposes obligations only on deployers of three specific types of AI system:

  1. emotion recognition or biometric categorisation systems;
  2. deepfakes, where the system generates or manipulates image, audio or video content; or
  3. systems generating or manipulating text published to inform the public on matters of public interest.

For high-risk AI systems:

If you are a provider

Unsurprisingly, the Act imposes the most extensive obligations on this category. In general, providers must supply instructions for safe use and information about the system’s accuracy, robustness and cybersecurity. Individuals overseeing such systems must be suitably qualified to understand the system’s capabilities and limitations, and providers must maintain various record-keeping and risk management protocols.

If you are a deployer

As with limited risk systems, deployers face fewer obligations than providers, but the set is broader here, reflecting the higher risk posed by the AI system. It includes specific governance, monitoring, transparency and impact assessment requirements. The key obligations can be grouped under two headings:

Operational obligations

The deployer must take appropriate measures to ensure the high-risk AI system is used in accordance with its instructions for use, ensure that input data is relevant and sufficiently representative for the system’s intended purpose, and monitor the system’s operation so that it can inform the provider if it identifies any risks or serious incidents.

Control and risk management obligations

A deployer must conduct a fundamental rights impact assessment (FRIA) before deploying the system, assign human oversight to individuals with the necessary competence and training, regularly monitor the AI system for risks, and keep the logs automatically generated by the AI system, in a documented manner, for at least six months (a simple sketch of such a retention policy follows).
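
To make the log-retention obligation concrete, the sketch below shows one simple way logs might be written automatically in a documented, machine-readable form with a minimum six-month retention floor. The file layout, field names and 183-day figure are our own illustrative assumptions, not prescriptions of the Act.

```python
# Illustrative only: automatic, documented logging of AI system events with
# a minimum six-month retention window. Paths, field names and the use of
# JSON Lines are hypothetical choices, not requirements of the Act.
import json
import time
from pathlib import Path

LOG_FILE = Path("ai_system_logs.jsonl")        # hypothetical log location
MIN_RETENTION_SECONDS = 183 * 24 * 60 * 60     # roughly six months, the floor

def record_event(event_type: str, detail: str) -> None:
    """Append a timestamped, machine-readable log entry."""
    entry = {"ts": time.time(), "event": event_type, "detail": detail}
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def prune_expired(now: float | None = None) -> None:
    """Remove only entries older than the minimum retention period."""
    now = time.time() if now is None else now
    if not LOG_FILE.exists():
        return
    kept = [
        line for line in LOG_FILE.read_text(encoding="utf-8").splitlines()
        if now - json.loads(line)["ts"] < MIN_RETENTION_SECONDS
    ]
    LOG_FILE.write_text("\n".join(kept) + ("\n" if kept else ""), encoding="utf-8")

if __name__ == "__main__":
    record_event("inference", "high-risk system produced an output")
    prune_expired()
```

The six-month period is a floor rather than a ceiling, which is why the pruning function above removes only entries older than the minimum period; other applicable law may require longer retention.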

Future outlook

The trajectory is unmistakable: the Act positions transparency as a core principle, one that will shape design choices, user interfaces and governance processes. Organisations will be expected to comply both with the Code and with the transparency obligations in the Act that underpin it.

Companies leveraging AI across their supply chains should therefore prioritise embedding and documenting transparency measures that can withstand both regulatory and legal scrutiny, while ensuring alignment with wider IP governance and strategic commercial decisions.

For more information on the EU AI Act and the Code, and how they might impact your business, contact Sacha Wilson and Jacky Lai.