The European Commission (EC) proposes ‘the first-ever legal framework on AI’, governing the placing of AI systems on the market, their putting into service, and general post-market monitoring obligations – the AI Regulation.

The AI Regulation prohibits certain AI practices and sets out a framework for ‘high-risk AI systems’, together with the requirements and obligations that attach to them.

Generally, the AI Regulation places obligations on various players in the AI landscape and lays out a structure for governance and enforcement, in a format largely similar to that imposed by the EU GDPR (GDPR) in respect of personal data.

What is an ‘AI system’?

The notion of an AI system is broadly defined, capturing software that is developed with one or more of the techniques and approaches listed within Annex I of the AI Regulation, including machine learning approaches, logic and knowledge-based approaches and statistical approaches that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

Who does the AI Regulation apply to?

The AI Regulation is proposed to have extra-territorial scope and to apply to a number of players involved in the AI supply chain, including:

  • providers placing on the market or putting into service AI systems in the EU, irrespective of where such a provider is established;
  • users of AI systems located within the EU; and
  • providers and users of AI systems that are located outside the EU, where the output produced by the system is used in the EU.

Manufacturers, importers and distributors of AI systems are also caught by the AI Regulation, and each has its own specific obligations to comply with.

This is largely similar to the scope of the GDPR in respect of the processing of personal data: extra-territorial in nature and applying to various entities involved at different stages of processing.

Prohibited practices

The EC proposes prohibiting a number of practices including placing on the market, putting into service or using an AI system that does one of the following:

  • deploys subliminal techniques beyond a person’s consciousness to materially distort a person’s behaviour in a manner that causes that person or another person physical or psychological harm;
  • exploits vulnerabilities of a specific group of persons due to their age, physical or mental disability to materially distort the behaviour of a person pertaining to the group in a manner that causes that person or another person physical or psychological harm;
  • is used by public authorities, or on their behalf, for the evaluation or classification of the trustworthiness of natural persons, with the resulting social score leading to detrimental or unfavourable treatment that is either unrelated to the contexts in which the data was originally generated, or unjustified or disproportionate; or
  • uses ‘real-time’ remote biometric identification systems in public spaces for the purpose of law enforcement, unless the use falls within one of the exceptions and is subject to additional requirements laid out in the AI Regulation.

What might be considered a high-risk AI system?

The AI Regulation sets out criteria for determining whether an AI system could be classed as high risk and lists areas in which the use of AI systems would be classified as such. For example, AI systems used for the management and operation of critical infrastructure, or in the areas of education, vocational training, employment, law enforcement, migration, border control and the administration of justice or democratic processes, could all be considered high risk.

What are the implications for these high-risk AI systems?

The AI Regulation proposes various requirements that must be met before high-risk AI systems can proceed to market, including:

(i) the implementation of a risk management system;

(ii) data and data governance requirements relating to training, validation and testing data;

(iii) strict technical documentation and record keeping requirements;

(iv) requirements relating to transparency and information to users; and

(v) the requirement for a high level of accuracy, robustness and cybersecurity.

The AI Regulation also outlines the role of human oversight.

The AI Regulation further proposes to impose post-market monitoring and information sharing obligations.

Penalties

The AI Regulation proposes various penalties for different breaches of the AI Regulation, in a similar manner to the GDPR:

  • up to €30m or 6% of total worldwide annual turnover (whichever is higher) for infringements of the prohibited practices or non-compliance with the requirements on data governance;
  • up to €20m or 4% of total worldwide annual turnover for non-compliance with any of the other requirements or obligations of the AI Regulation; and
  • up to €10m or 2% of total worldwide annual turnover for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.
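Each tier caps the fine at the higher of a fixed amount and a percentage of worldwide annual turnover. As a minimal illustrative sketch (not legal advice), the ‘whichever is higher’ rule can be expressed as a simple calculation; the tier labels and the turnover figure below are hypothetical:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Return the maximum possible fine for a tier: the higher of the
    fixed cap and the given percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Hypothetical undertaking with EUR 1bn worldwide annual turnover.
turnover = 1_000_000_000

# (fixed cap, percentage) for each penalty tier described above.
tiers = {
    "prohibited practices / data governance": (30_000_000, 0.06),
    "other requirements or obligations": (20_000_000, 0.04),
    "incorrect or misleading information": (10_000_000, 0.02),
}

for name, (cap, pct) in tiers.items():
    print(f"{name}: up to EUR {max_fine(turnover, cap, pct):,.0f}")
```

For a smaller undertaking whose percentage-based figure falls below the fixed cap, the fixed cap applies instead, since the rule takes whichever is higher.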

Governance and enforcement

The AI Regulation proposes that Member States be required to appoint national supervisory authorities to perform supervisory and enforcement roles, similar to the way data protection authorities work under the GDPR. The EC also proposes the creation of the European AI Board, similar to the GDPR’s European Data Protection Board.

What does the AI Regulation mean for those working in the AI space and when can we expect the AI Regulation to come into effect?

Providers, manufacturers, importers, distributors and users of AI systems may soon find themselves heavily regulated in a space that previously had little to no regulation. The design, development and procurement of AI will be significantly affected, not least because the AI Regulation proposes to ban certain AI practices, but also because it imposes strict and extensive obligations on those involved in high-risk AI systems, creating various regulatory hurdles both pre- and post-market placement.

There is likely still a long way to go before the AI Regulation comes into full force and effect. The AI Regulation will continue through the EU legislative process, passing through the European Parliament and the Member States for further consideration.