EU Artificial Intelligence Act

read time: 7 mins
18.12.23

1. Scope and current stage

The EU Artificial Intelligence Act (the “Act”) is the first comprehensive legislative initiative seeking to regulate AI, and has moved another step closer to entering into force.

On 8 December 2023, representatives from the European Parliament, the Council of the EU and the European Commission reached a provisional agreement on the Act. This marks a significant milestone in a long journey since the proposed text was first introduced by the European Commission in April 2021.

The Act is designed as horizontal EU legislation that applies to all AI systems placed on the market or used in the EU. It will work in conjunction with the other pillars of EU digital legislation, such as the Data Governance Act, Data Act, Digital Markets Act and Digital Services Act.

2. Key proposals

The final text of the Act will be published at a later date. In the meantime, we have summarised some key provisions based on the most recently published text (as at June 2023).

a.    Legal definition of AI: 

The Act proposes a technology-neutral definition of AI systems, largely based on a definition already used by the OECD: 

“Artificial intelligence system means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as content, predictions, recommendations, or decisions that influence physical or virtual environments.” (Article 3(1) – as per the compromise amendments dated 9 May 2023). 

The definition was one of the more contested issues in the negotiations: it needed to be comprehensive enough to encompass new developments, while not being so wide as to inadvertently capture software that would not generally be classified as ‘AI’.

b.    Applicability: 

The Act will apply to AI systems that are placed on the market, put into service or used in the EU, irrespective of whether the provider of the system is based within or outside the EU.

Exclusions apply to AI systems developed or used exclusively for military purposes, and to public authorities in third countries and international organisations using AI systems in the framework of international agreements for law enforcement and judicial cooperation. The Act is expected to provide certain guardrails for law enforcement authorities’ use of AI, in order to ensure the protection of individuals’ fundamental rights.

c.    Classification of AI systems based on the levels of risk: 

The Act sets out a risk-based classification approach, with varying obligations for providers based on the risks associated with each AI system. 

i.    Limited and minimal risk: AI systems at this level of risk (such as spam filters or video games) are subject only to light transparency obligations;

ii.    High risk (e.g. autonomous vehicles, medical devices and critical infrastructure machinery): AI systems at this risk level are permitted, but in order to gain access to the EU market, developers and users are subject to rigorous testing, proper documentation of data quality and an accountability framework that provides for human oversight;

iii.    Unacceptable risk (e.g. government social scoring, real-time biometric identification systems in public spaces, predictive policing and cognitive behavioural manipulation): these AI systems are banned from the EU, with limited exceptions.

Further information regarding the regulation of AI systems of varying risk levels is set out below:

Unacceptable risk

Description: AI systems:
  • that deploy harmful, manipulative “subliminal techniques”;
  • that exploit specific vulnerable groups (e.g. persons with a physical or mental disability);
  • used by public authorities, or on their behalf, for social scoring purposes; or
  • that use “real-time” remote biometric identification systems in publicly accessible spaces for law enforcement purposes (except in a limited number of cases).

Regulation: Prohibited. These AI systems may not be placed on the market, put into service or used in the EU.

Exception: Facial recognition technology would be allowed (i) for targeted searches for potential victims of crime, including missing children; (ii) to prevent a specific, substantial and imminent threat to the life or physical safety of persons, or a terrorist attack; and (iii) for the detection, localisation, identification or prosecution of a perpetrator of, or individual suspected of, a criminal offence referred to in the European Arrest Warrant Framework Decision.

High risk

Description: Two categories of high-risk AI systems:
  • systems used as a safety component of a product, or falling under EU health and safety harmonisation legislation (e.g. toys, aviation, cars, medical devices, lifts);
  • systems deployed in eight specific areas (which could be updated through delegated acts):
      • biometric identification and categorisation of natural persons;
      • management and operation of critical infrastructure;
      • education and vocational training;
      • employment, worker management and access to self-employment;
      • access to and enjoyment of essential private services and public services and benefits;
      • law enforcement;
      • migration, asylum and border control management;
      • administration of justice and democratic processes.

Regulation: Ex-ante conformity assessment. Providers are required to register their systems in an EU-wide database managed by the Commission before placing them on the market or putting them into service.
  • AI products/services governed by existing product safety legislation (e.g. medical devices): the existing third-party conformity frameworks continue to apply;
  • AI systems not governed by existing EU legislation: providers must conduct a self-assessment showing that they comply with the new requirements, and may then use the CE marking;
  • AI systems used for biometric identification: a conformity assessment by a “notified body” is required.

Other requirements:
  • requirements in respect of risk management, testing, technical robustness, data training and data governance, transparency, human oversight and cybersecurity;
  • providers from outside the EU will require an authorised representative in the EU to, among other things, ensure the conformity assessment has been carried out, establish a post-market monitoring system and take corrective action as needed;
  • AI systems that conform to the new harmonised EU standards will be presumed to be in conformity with the Act’s requirements.

Limited risk

Description: Systems that interact with humans (e.g. chatbots), carry out emotion recognition or biometric categorisation, or generate or manipulate image, audio or video content.

Regulation: Transparency obligations.

Low or minimal risk

Description: Systems presenting only low or minimal risk (e.g. spam filters, phishing filters).

Regulation:
  • may be developed and used in the EU without conforming to any additional legal obligations;
  • the Act envisages the creation of codes of conduct to encourage providers of non-high-risk AI systems to voluntarily apply the mandatory requirements for high-risk AI systems.

The Act also contains provisions covering general-purpose AI: AI systems that can be used for different purposes, with varying degrees of risk (e.g. large language models (LLMs) and generative AI).

It is also reported that specific rules have been agreed for foundation models: large systems capable of performing a wide range of distinct tasks, such as generating video, text and images, computing, or generating computer code. These will be subject to transparency obligations before being placed on the EU market, and “high impact” foundation models will be subject to a stricter regime.

d.    Non-compliance: 

Non-compliance with the Act and its obligations may attract significant penalties:
 
i.    Violations of the banned AI applications will attract administrative fines of up to €35 million or 7% of global annual turnover (whichever is higher);

ii.    Violations of the Act’s other obligations will attract fines of up to €15 million or 3% of global annual turnover (whichever is higher); and

iii.    The supply of incorrect information will attract fines of up to €7.5 million or 1.5% of global annual turnover (whichever is higher).

It is anticipated that there will be more proportionate limits on administrative fines for SMEs and start-ups.          

3.    Key takeaways and next steps

The Act embodies the EU’s ambition of establishing a precedent for the world’s first comprehensive set of regulations to govern AI.

The EU’s approach differs from the UK’s current principles-based approach to regulating AI (which utilises existing legal frameworks rather than creating a comprehensive set of AI regulations), China’s interim rules and the U.S.’s light-touch approach.

Looking forward, EU legislators will finalise the text of the Act, focusing on its scope, principles and key issues around copyright and how to regulate foundation models. The finalised version will then need to be endorsed by the Committee of Permanent Representatives (Coreper), which is expected to take place early next year. The Act will then be published in the Official Journal of the EU in order to become law.

The Act is expected to become fully applicable in 2026, following a transition period after its entry into force. Some obligations will become binding earlier than others; for example, the ban on prohibited AI systems is expected to apply within six months of the Act entering into force.

In the meantime, providers and users of AI systems should prepare for the Act to enter into force and ensure that they can comply with its principles and obligations going forward.

For more information, visit our Technology & Digital Transformation page.
