This article sets out how to meet the transparency requirements under the EU AI Act, which must be in place by August 2026. Businesses will need to comply if they use AI systems that interact directly with people, for example chatbots and digital assistants, or that generate digital text or other content which may be viewed by people in the EU.
In this article we provide detail on the EU AI Act and who the transparency requirements will apply to. We also advise how businesses can ensure compliance, and what the potential consequences are for not providing the required transparency information.
The EU AI Act came into force in August 2024 and governs the development and use of AI in the EU. The Act takes a risk-based approach to regulation: different rules apply to different AI systems depending on the level of risk that the EU deems an AI system to pose.
Despite Brexit, UK businesses will require a detailed understanding of the EU AI Act because, just like EU GDPR, it has extra-territorial scope and applicability. This means that it also regulates organisations based outside the EU where they place AI products on the market in the EU, and/or where outputs and services produced by AI applications are used within the EU.
Given the severe penalties that can be imposed for breach of the EU AI Act, with the maximum fine being the higher of 35 million euros or 7% of worldwide annual turnover, businesses that use or deploy AI in the EU will wish to ensure compliance.
The transparency requirements apply to 'providers' and 'deployers' of 'AI systems' that are intended to interact directly with people in the EU, or that create content viewed by people in the EU. With respect to the meaning of these key terms:
Recital 27 states that 'transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights.'
The EU AI Act adopts a risk-based approach and places AI systems into four categories with differing levels of risk:
The transparency requirements apply to both 2) high-risk and 3) limited-risk AI systems, which are defined as follows:
Providers and deployers of high-risk AI systems are also subject to more onerous obligations, such as record keeping requirements and system monitoring, which are outside the scope of this article.
Different transparency rules apply depending on whether a business is a provider or a deployer, and also depending on how the AI system it develops or deploys interacts with people.
However, where the transparency rules apply, transparency information must be provided in a clear and distinguishable manner, and no later than a person's first interaction with, or exposure to, the AI system. This is similar to the approach taken in the GDPR, which requires controllers to provide a privacy notice when a user's personal details are first processed.
Providers
To ensure compliance with the transparency requirements under the EU AI Act, providers need to:
Deployers
For deployers to remain compliant, where the AI system is:
Deployers must inform individuals of this and process the data in accordance with data protection law.
In addition to this, where the AI system generates or manipulates images, audio or video content constituting a deep fake, deployers will need to disclose the fact that the content has been AI generated or manipulated.
Take AI chatbots, which are increasingly used online (for customer service tasks, for example): the obligation to inform users that they are interacting with a chatbot rests with the chatbot's provider and developer, rather than with the deployer or licensee.
The AI Office, which supports the development and use of trustworthy AI in the EU, has been tasked with producing codes of practice to clarify the transparency information that needs to be included when using AI systems. We expect that this guidance will build on existing transparency guidance in the GDPR context. For example, the UK ICO has detailed guidance on 'explaining decisions made with AI', and some companies have already developed AI explainability statements to inform users of how AI is used in their products and services.
The transparency information must be provided no later than someone’s first use of, or exposure to, the content, and the transparency information itself must be clear and easily identifiable.
The deadline for putting the transparency measures in place is 2 August 2026, which is the date on which the majority of the provisions of the Act come into force.
Whilst the highest fines, the higher of 35 million euros or 7% of global annual turnover, are reserved for serious breaches of the EU AI Act, significant fines could still apply for failure to provide transparency information. Specifically, fines of the higher of 7.5 million euros or 1% of global annual turnover apply for supplying incorrect, incomplete or misleading information required to be provided under the Act. This would include the transparency information required to be provided for high-risk and limited-risk AI systems.
For further information on how to remain compliant with the upcoming EU AI Act, please contact our commercial team.