The EU Artificial Intelligence Act (the “Act”) is the first comprehensive legislative initiative seeking to regulate AI, and it has moved another step closer to entering into force.
On 8 December 2023, representatives from the European Parliament, the Council of the EU and the European Commission reached a provisional agreement on the Act. This marks a significant milestone in the long journey since the European Commission first introduced the proposed text in April 2021.
The Act is designed as horizontal EU legislation that applies to all AI systems placed on the EU market or used in the EU. It will work in conjunction with the other pillars of EU digital legislation, such as the Data Governance Act, the Data Act, the Digital Markets Act and the Digital Services Act.
The final text of the Act will be published at a later date. In the meantime, we have summarised some key provisions based on the currently published text as of June 2023.
a. Legal definition of AI:
The Act proposes a technology-neutral definition of AI systems, largely based on a definition already used by the OECD:
“Artificial intelligence system means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as content, predictions, recommendations, or decisions that influence physical or virtual environments.” (Article 3(1) – as per the compromise amendments dated 9 May 2023).
The definition was one of the more contested issues in the negotiations: it needed to be comprehensive enough to encompass new developments, whilst not so wide as to inadvertently capture software that would not generally be classified as ‘AI’.
b. Applicability:
The Act will apply to AI systems that are placed on the market, put into service or used in the EU, irrespective of whether the provider of the system is based within or outside the EU.
Exclusions apply to AI systems developed or used exclusively for military purposes, and to public authorities and international organisations in third countries using AI systems in the framework of international agreements for law enforcement and judicial cooperation. The Act is expected to provide certain guardrails for law enforcement authorities’ use of AI, in order to ensure protection of individuals’ fundamental rights.
c. Classification of AI systems based on the levels of risk:
The Act sets out a risk-based classification approach, with varying obligations for providers based on the risks associated with each AI system.
i. Limited and minimal risk: AI systems at this level of risk (such as spam filters or video games) are subject to light transparency obligations;
ii. High risk (e.g. autonomous vehicles, medical devices and critical infrastructure machinery): AI systems at this risk level are permitted, but in order to gain access to the EU market, developers and users are subject to rigorous testing, proper documentation of data quality and an accountability framework that details human oversight;
iii. Unacceptable risk (e.g. government social scoring, real-time biometric identification systems in public spaces, predictive policing and cognitive behavioural manipulation): these AI systems are banned in the EU, with limited exceptions.
Further information regarding the regulation of AI systems of varying risk levels is set out below:
Level of risk | Description | Regulation |
Unacceptable risk | AI systems considered a clear threat to safety or fundamental rights, e.g. government social scoring, real-time biometric identification in publicly accessible spaces, predictive policing and cognitive behavioural manipulation. | Prohibited to place on the market, put into service or use in the EU. Exception: real-time facial recognition would be allowed (i) for targeted searches for potential victims of crime, including missing children; (ii) to prevent a specific, substantial and imminent threat to the life or physical safety of persons, or a terrorist attack; and (iii) for the detection, localisation, identification or prosecution of a perpetrator of, or individual suspected of, a criminal offence referred to in the European Arrest Warrant Framework Decision. |
High risk | Two categories of high-risk AI systems: (i) AI systems used as safety components of products already subject to third-party conformity assessment under EU product safety legislation; and (ii) stand-alone AI systems used in specified high-risk areas, such as critical infrastructure, education, employment, access to essential services, law enforcement and the administration of justice. | Ex-ante conformity assessment: before being placed on the EU market, these systems must undergo rigorous testing and demonstrate, among other things, data quality, proper documentation and an accountability framework providing for human oversight. |
Limited risk | Systems that interact with humans (e.g. chatbots), or that carry out emotion recognition or biometric categorisation, or that generate or manipulate image, audio or video content. | Transparency obligations. |
Low or minimal risk | Systems presenting only low or minimal risk (e.g. spam filters, phishing filters). | No mandatory obligations; voluntary codes of conduct are encouraged. |
The Act also contains provisions covering general purpose AI, i.e. AI systems that can be used for different purposes with varying degrees of risk (e.g. large language models (LLMs) and generative AI).
It is also reported that specific rules have been agreed for foundation models: large systems capable of performing a wide range of distinct tasks, such as generating video, text, images or computer code, or performing computations. These will be subject to transparency obligations before being placed on the EU market, and “high impact” foundation models will be subject to a stricter regime.
d. Non-compliance:
Non-compliance with the Act and its obligations may attract significant penalties:
i. Violations of banned AI applications will attract administrative fines of up to €35 million or 7% of global annual turnover, whichever is higher;
ii. Violations of the Act’s other obligations will attract fines of up to €15 million or 3% of global annual turnover; and
iii. The supply of incorrect information will attract fines of up to €7.5 million or 1.5% of global annual turnover.
It is anticipated that there will be more proportionate limits on administrative fines for SMEs and start-ups.
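By way of illustration only (the turnover figures below are hypothetical, and the calculation assumes the “whichever is higher” basis described above), the cap on a fine for a violation of the banned AI applications would be determined as follows:

maximum fine = max(€35,000,000, 7% × global annual turnover)

Example: global annual turnover of €2 billion → max(€35m, €140m) = €140 million
Example: global annual turnover of €200 million → max(€35m, €14m) = €35 million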
The Act embodies the EU’s ambition of establishing a precedent for the world’s first comprehensive set of regulations to govern AI.
The EU’s approach differs from the UK’s current principles-based approach to regulating AI (which relies on existing legal frameworks rather than a comprehensive set of AI regulations), China’s interim rules and the U.S.’s light-touch approach.
Looking forward, EU legislators will finalise the text of the Act, focusing on its scope, its principles and key issues around copyright and the regulation of foundation models. The finalised version will then be ratified by the Committee of Permanent Representatives (Coreper), which is expected to take place early next year. The Act will then be published in the Official Journal of the European Union in order to become law.
The Act is expected to become fully applicable in 2026, following a transition period after it enters into force. Some obligations will become binding earlier than others; for example, the ban on prohibited AI systems is expected to apply within six months of the Act entering into force.
In the meantime, providers and users of AI systems should prepare for the Act to enter into force and ensure that they can comply with its principles and obligations.
For more information visit our Technology & Digital Transformation page.