Ensuring transparency in the age of AI: how can businesses ensure compliance with the EU AI Act?

read time: 8 mins
24.02.25

This article sets out how to meet the transparency requirements under the EU AI Act, which must be in place by August 2026. Businesses will need to comply if they use AI systems which interact directly with people, for example chatbots and digital assistants, or which generate digital text or content that may be viewed by people in the EU.

In this article we provide detail on the EU AI Act and who the transparency requirements will apply to. We also explain how businesses can ensure compliance, and what the potential consequences are of failing to provide the required transparency information.

What is the EU AI Act and why do UK businesses need to know about it? 

The EU AI Act came into force in August 2024 and governs the development and use of AI in the EU. It takes a risk-based approach to regulation: different rules apply to different AI systems depending on the level of risk that the EU deems an AI system to pose.

Despite Brexit, UK businesses will require a detailed understanding of the EU AI Act because, just like EU GDPR, it has extra-territorial scope and applicability. This means that it also regulates organisations based outside the EU where they place AI products on the market in the EU, and/or where outputs and services produced by AI applications are used within the EU. 

Given the severe penalties that can be imposed for breach of the EU AI Act, the maximum fine being the higher of €35 million or 7% of worldwide annual turnover, businesses that use or deploy AI in the EU will wish to ensure compliance.

Who do the transparency requirements apply to?

The transparency requirements apply to 'providers' and 'deployers' of 'AI systems' that are intended to interact directly with people, or that create content viewed by people, in the EU. With respect to the meaning of these key terms:

  • An AI system is a system that operates with varying levels of autonomy, that may adapt after deployment, and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
  • A provider is a person that develops an AI system or AI model or that has an AI system developed and placed on the market or into service under its own name or trademark. For example, a company that develops an AI chatbot would likely be considered a provider. 
  • A deployer is a person using an AI system for professional activities. For example, a company that uses an AI system to deliver customer service support would likely be considered a deployer.

What does 'transparency' mean? 

Recital 27 states that 'transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights.' 

Which AI systems do the transparency requirements apply to?

The EU AI Act adopts a risk-based approach and puts AI systems into four categories with differing levels of risk:

  1. Prohibited risk - AI practices which are, as the name suggests, banned.
  2. High risk - such AI systems are subject to broad requirements, in addition to the transparency requirements set out below.
  3. Limited risk - subject to the transparency requirements. 
  4. Minimal risk - not subject to specific obligations under the EU AI Act.

The EU AI Act defines 'risk' as 'the combination of the probability of an occurrence of harm and the severity of that harm.'

The transparency requirements apply to both 2) high risk and 3) limited risk AI systems, which are defined as follows:

  • High risk AI systems pose a high risk of harm to the health and safety or the fundamental rights of individuals, taking into account the severity of harm and the probability of harm occurring. For example, the EU AI Act lists AI systems used for remote biometric identification, as safety components in critical infrastructure, and in certain employment use cases.
  • Limited risk AI systems perform autonomous tasks which involve direct interaction with people, or create content which can be viewed by people, but do not qualify as high risk AI systems. This category would therefore include chatbots and AI assistants for customer service and productivity functions.

Providers and deployers of high-risk AI systems are also subject to more onerous obligations, such as record keeping requirements and system monitoring, which are outside the scope of this article.

How do we comply with the transparency requirements for limited risk AI systems?

Different transparency rules apply depending on whether a business is a provider or a deployer, and also depending on how the AI system it develops or deploys interacts with people. 

However, where the transparency rules apply, transparency information must be provided in a clear and distinguishable manner and no later than someone's first interaction with, or exposure to, the AI system. This is a similar approach to that taken in the GDPR, which requires controllers to provide a privacy notice when a user's personal details are first processed.

Providers

To ensure compliance with the transparency requirements under the EU AI Act, providers need to:

  • Where the AI system is intended to interact directly with humans, for example a chatbot or digital assistant, design and develop the AI system so that the people interacting with it are informed that they are interacting with AI, unless this is obvious from the point of view of someone who is 'reasonably well-informed, observant and circumspect taking into account the circumstances and the context of use'. The information must be provided in a clear and easily identifiable way, consideration must be given to the characteristics of individuals belonging to vulnerable groups, and more accessible information must be made available if required.
  • Where the AI system generates synthetic audio, image, video or text content, ensure that the outputs are marked in a machine-readable format and detectable as AI generated or manipulated (see the sketch after this list for one possible marking approach).
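By way of illustration only, the snippet below shows one way a provider might attach a machine-readable 'AI generated' marker to a PNG image, using the Pillow library. The EU AI Act does not prescribe this mechanism, and the field names used here are invented for the example; in practice, providers are likely to rely on emerging standards such as C2PA content credentials or robust watermarking.

```python
# Illustrative sketch only: embed a machine-readable 'AI generated' marker
# in PNG metadata using Pillow. The key names below are invented for this
# example; the EU AI Act does not mandate any particular marking format.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marking(image: Image.Image, path: str) -> None:
    """Save an image with a marker flagging it as AI generated."""
    info = PngInfo()
    info.add_text("ai_generated", "true")           # hypothetical field name
    info.add_text("generator", "example-model-v1")  # placeholder identifier
    image.save(path, pnginfo=info)

# Stand-in for a model-generated image.
output = Image.new("RGB", (256, 256))
save_with_ai_marking(output, "output.png")
```

The Act expects such marking to be effective, robust and reliable as far as technically feasible, which simple file metadata alone may not achieve; whichever mechanism is chosen will need to be detectable by third parties.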

Deployers 

For deployers to remain compliant, where the AI system they use is:

  1. An emotion recognition system - an AI system for the purposes of identifying or inferring people's emotions or intentions on the basis of biometric data, meaning personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a person, such as facial images or data from a wearable smart watch, or
  2. A biometric categorisation system - an AI system for the purposes of assigning people to specific categories on the basis of their biometric data (unless the categorisation is ancillary to another commercial service and strictly necessary for objective technical reasons), for example automated facial recognition,

deployers must inform the individuals exposed to the system of its operation and process the data in accordance with data protection law.

In addition to this, where the AI system generates or manipulates images, audio or video content constituting a deep fake, deployers will need to disclose the fact that the content has been AI generated or manipulated. 

Take AI chatbots as an example: these are increasingly used online, for instance for customer service tasks, and the obligation to inform users that they are interacting with a chatbot rests with the chatbot provider and developer, rather than with the deployer or licensee. A hypothetical sketch of how that disclosure might be surfaced appears below.
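As a purely hypothetical sketch, a provider might build the disclosure into the first reply of each chat session, as below. The Act prescribes the outcome (users must be informed no later than their first interaction), not any particular implementation; the generate_reply helper is a stand-in for the underlying model, not a real API.

```python
# Hypothetical sketch: disclose at the first interaction of a chat session
# that the user is talking to an AI system. All names here are illustrative.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def generate_reply(message: str) -> str:
    # Placeholder for a call to the underlying AI model.
    return f"Echo: {message}"

def handle_message(session: dict, message: str) -> str:
    """Return a reply, prepending the AI disclosure on first contact."""
    reply = generate_reply(message)
    if not session.get("disclosed", False):
        # Disclose no later than the user's first interaction.
        session["disclosed"] = True
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

session: dict = {}
print(handle_message(session, "Hello"))   # includes the disclosure
print(handle_message(session, "Thanks"))  # disclosure not repeated
```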

Upcoming codes of practice - clarifying transparency information when using AI

The AI Office, which supports the development and use of trustworthy AI in the EU, has been tasked with producing codes of practice to clarify the transparency information that needs to be included when using AI systems. We expect that this guidance will be based on the guidance around transparency in the GDPR context. For example, the UK ICO has detailed guidance on 'explaining decisions made with AI', and some companies have already been developing AI explainability statements to inform users of how AI is used in their products and services. 

When do we need to provide the transparency information? 

The transparency information must be provided no later than someone's first use of, or exposure to, the content, and the transparency information itself must be clear and easily identifiable.

The deadline for putting the transparency measures in place is 2 August 2026, which is the date from which the majority of the provisions of the Act apply.

What are the potential consequences for not providing transparency information?

Whilst the highest fines, the higher of €35 million or 7% of global annual turnover, are reserved for serious breaches of the EU AI Act, significant fines could still apply for failure to provide transparency information. Specifically, fines of the higher of €7.5 million or 1% of global annual turnover apply for supplying incorrect, incomplete or misleading information required to be provided under the Act. This would include the transparency information required for high-risk and limited-risk AI systems.
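As a purely illustrative calculation, using an invented worldwide annual turnover of €2 billion, the 'higher of' caps work out as follows:

```python
# Illustrative only: how the 'higher of' fine caps scale with turnover.
# The €2bn turnover figure is an invented example, not taken from the Act.
def fine_cap(fixed_eur: float, pct: float, turnover_eur: float) -> float:
    """Return the maximum fine: the higher of a fixed sum or a % of turnover."""
    return max(fixed_eur, pct * turnover_eur)

turnover = 2_000_000_000  # hypothetical €2bn worldwide annual turnover
print(fine_cap(35_000_000, 0.07, turnover))  # serious breaches: €140m
print(fine_cap(7_500_000, 0.01, turnover))   # incorrect information: €20m
```

For smaller businesses with lower turnover, the fixed sums would set the cap instead.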

For further information on how to remain compliant with the upcoming EU AI Act, please contact our commercial team.
