What effect will the EU Artificial Intelligence Act have on the healthcare, digital health and life sciences sector?

read time: 9 mins
20.02.24

The European Parliament voted to pass the EU Artificial Intelligence Act on 13 March 2024. Together with the other pillars of the EU's digital legislation (the Data Governance Act, Data Act, Digital Markets Act and Digital Services Act), the act introduces a comprehensive set of rules to regulate AI in the EU and will therefore have a significant effect on the healthcare, digital health and life sciences sector.

What are the key issues for the sector? 

Once the final formality to adopt the act is completed, the act is expected to enter into force later this year, and most of its rules will apply 24 months afterwards. However, some obligations will become binding earlier than others. For example, the ban on prohibited AI practices, e.g. using biometric data to categorise people or using harmful ‘subliminal techniques’, is expected to apply within 6 months of the effective date, and the provisions on general purpose AI and penalties will apply within 12 months. The obligations on certain high-risk AI systems will have a longer compliance deadline of 36 months from the effective date.
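For planning purposes, these staggered deadlines can be mapped onto concrete calendar dates once the entry-into-force date is known. Below is a minimal sketch in Python, assuming a purely hypothetical entry-into-force date (the real date will depend on publication in the Official Journal):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Shift a date forward by whole months (day clamped to 28 to stay valid).
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

# Hypothetical entry-into-force date, for illustration only.
entry_into_force = date(2024, 8, 1)

deadlines = {
    "Ban on prohibited AI practices": 6,
    "General purpose AI provisions and penalties": 12,
    "Most remaining obligations": 24,
    "Certain high-risk AI obligations": 36,
}

for obligation, months in deadlines.items():
    print(f"{obligation}: applies from {add_months(entry_into_force, months)}")
```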

As highlighted in more detail below, AI systems deployed in areas such as medical aids, healthcare services, emergency healthcare patient triage systems or emotion recognition are likely to be classified, at least, as high-risk systems. Organisations in the healthcare, digital health and life sciences sector should therefore be mindful that these systems are subject to stricter regulations and requirements under the act. That said, a regulatory sandbox will be established to encourage innovation by allowing the development, training and testing of AI systems (including the processing of personal data for such purposes) where the systems safeguard substantial public interest in permitted areas (e.g. public safety and public health, disease detection, diagnosis and prevention, improvement of healthcare systems, or protection of biodiversity).

Even though the act will not apply directly to the UK, due to the global nature of AI technology it is anticipated that AI system providers will follow the more prescriptive requirements under the act, which will have a domino effect on the AI supply chain. This is a matter of economic reality: creating separate AI products for EU and non-EU markets is impractical.

Currently, the UK has adopted a principles-based approach to regulating AI. However, regulators are expected to issue rules for their specific industries, including the healthcare, digital health and life sciences sector. AI providers and adopters should prepare for the act entering into force, and for any UK-specific regulations, to ensure that they comply with the regulators’ principles and requirements going forward.

What are the key proposals of the EU Artificial Intelligence Act?

Below is a summary of some key proposals in the act, based on the current version.

Legal definition of AI

The act has updated the definition of AI systems, largely to align with the definition used by the OECD:

An ‘AI system’ is ‘a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’ (Article 3(1)).

This definition has gone through various rounds of negotiation and compromise. It needs to strike a balance between being comprehensive enough to encompass new developments and not being so wide as to inadvertently capture software that would not generally be classified as ‘AI’.

Applicability

The act will apply to AI systems that are placed on the market, put into service or used in the EU, irrespective of whether the provider of the system is based within or outside the EU.

Exclusions apply to AI systems developed or used exclusively for military purposes, and to public authorities in third countries and international organisations using AI systems within the framework of international agreements for law enforcement and judicial cooperation. The act will provide certain guardrails for law enforcement authorities’ use of AI, in order to ensure the protection of individuals’ fundamental rights.

Classification of AI systems based on level of risk

The act sets out a risk-based classification approach, with obligations that scale with the risks associated with each AI system. The tiers are summarised below.

Prohibited AI practices (Article 5)

AI systems that:

  • deploy harmful ‘subliminal techniques’ or purposely manipulative or deceptive techniques.
  • exploit the vulnerabilities of a person or a specific group, such as vulnerabilities arising from age, disability or social or economic situation.
  • categorise individuals based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
  • evaluate or classify individuals for social scoring purposes where this may lead to unjustified or disproportionate detrimental or unfavourable treatment of an individual or a specific group.
  • use ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in a limited number of cases.
  • assess or predict the risk of an individual committing a criminal offence based solely on profiling or on assessing their personality traits and characteristics.
  • create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
  • infer the emotions of an individual in the workplace or in education institutions.

It is prohibited to place on the market, put into service or use such AI systems in the EU, with limited exceptions.

Examples of exceptions:

Real-time remote biometric identification systems would be allowed, subject to certain guardrails:

  • for targeted searches for specific victims of abduction, human trafficking or sexual exploitation, as well as searches for missing persons.
  • to prevent a specific, substantial and imminent threat to the life or physical safety of persons, or a genuine and present or genuine and foreseeable threat of a terrorist attack.
  • for the identification or localisation of a person suspected of having committed a criminal offence, for the purposes of conducting a criminal investigation or prosecution or executing a criminal penalty for certain offences.

AI systems that infer emotions may be allowed as an exception for medical or safety reasons.

High risk (Article 6)

There are two categories of high-risk AI systems:

  • Systems that satisfy both of these conditions:
    • the AI system is used as a safety component of a product, or is itself a product, covered by the EU harmonisation legislation listed at Annex II of the act (e.g. personal protective equipment, radio equipment, medical devices, in vitro diagnostic medical devices); and
    • the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment before being placed on the market or put into service, pursuant to that Annex II legislation.
  • Systems deployed in 8 specific areas (a list which may be updated through delegated acts):
    • Biometric identification (except for those used for biometric verification whose sole purpose is to confirm that a specific natural person is the person they claim to be), categorisation of natural persons or emotion recognition.
    • Management and operation of critical infrastructure.
    • Education and vocational training.
    • Employment, worker management and access to self-employment.
    • Access to and enjoyment of essential private services and public services and benefits. This includes healthcare services, and systems that evaluate and classify emergency calls for emergency response, including emergency healthcare patient triage systems.
    • Law enforcement.
    • Migration, asylum and border control management.
    • Administration of justice and democratic processes.

AI systems at this risk level are permitted, but in order to gain access to the EU market, they are subject to rigorous testing, proper documentation of data quality and an accountability framework that details human oversight. For example:

  • Providers are required to register their systems in an EU-wide database managed by the Commission before placing them on the market or putting them into service.
  • Other requirements include risk management, testing, technical robustness, data training and data governance, transparency, human oversight and cybersecurity.

Others

Systems which are neither prohibited nor high risk, for example spam filters and phishing filters.

AI systems at this level of risk are permitted but are subject to transparency obligations.
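
Reading the three tiers together, the act effectively describes a triage procedure: check first for a prohibited practice, then for either of the two high-risk routes, and fall through to the lightest tier. The sketch below is purely illustrative, assuming simplified yes/no flags that stand in for what is, in reality, a nuanced legal analysis; none of the field names are terms from the act itself.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    # Illustrative flags only; the act's actual tests are far more nuanced.
    uses_prohibited_practice: bool               # e.g. social scoring, untargeted face scraping
    safety_component_of_annex_ii_product: bool   # e.g. a medical device
    needs_third_party_conformity_assessment: bool
    deployed_in_listed_area: bool                # one of the 8 areas, e.g. healthcare triage

def classify(system: AISystem) -> str:
    # Rough triage mirroring the act's ordering: prohibited > high risk > others.
    if system.uses_prohibited_practice:
        return "prohibited (Article 5)"
    if (system.safety_component_of_annex_ii_product
            and system.needs_third_party_conformity_assessment):
        return "high risk (Article 6, product route)"
    if system.deployed_in_listed_area:
        return "high risk (Article 6, listed-area route)"
    return "other (transparency obligations)"

# Example: an emergency healthcare patient triage system.
print(classify(AISystem(False, False, False, True)))
# -> high risk (Article 6, listed-area route)
```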

General Purpose AI (GPAI)

The EU Artificial Intelligence Act also contains provisions to cover general purpose artificial intelligence (GPAI) models, which are defined as an:

‘AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications’ (Articles 52a – 52e). Such models can be used for many different purposes, with varying degrees of risk (e.g. large language models and generative AI).

All GPAI models will be subject to horizontal obligations, such as providing technical documentation to the authorities and downstream providers for the purpose of compliance with the act. Models with systemic risks are subject to additional obligations, including performing model evaluation, making risk assessments, taking risk mitigation measures, ensuring an adequate level of cybersecurity protection and reporting serious incidents to the authorities.    

What are the consequences of failing to comply?

Non-compliance with the act and its obligations may result in significant penalties:

  • Violations of prohibited AI practices: fines of up to €35 million or, where the offender is a company, up to 7% of global annual turnover, whichever is higher.
  • Violations of obligations relating to operators or notified bodies (other than prohibited AI practices): fines of up to €15 million or, where the offender is a company, up to 3% of global annual turnover, whichever is higher.
  • The supply of incorrect, incomplete or misleading information: fines of up to €7.5 million or, where the offender is a company, up to 1% of global annual turnover, whichever is higher.

For small and medium-sized enterprises and start-ups, the administrative fines will be capped at the lower of the above amounts and percentages.

Violations by GPAI providers may attract fines of up to €15 million or 3% of total global turnover in the preceding financial year, whichever is higher.
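
The caps above follow a simple pattern: the applicable maximum is the higher of a fixed amount and a percentage of global annual turnover, except for SMEs and start-ups, where the lower of the two applies. A minimal sketch of that arithmetic, using hypothetical turnover figures:

```python
# Fine caps per tier: (fixed amount in EUR, share of global annual turnover).
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, turnover_eur: float, is_sme: bool = False) -> float:
    # Higher of the two caps in general; lower of the two for SMEs and start-ups.
    fixed, share = TIERS[tier]
    caps = (fixed, share * turnover_eur)
    return min(caps) if is_sme else max(caps)

# Hypothetical company with EUR 2 billion global annual turnover.
print(max_fine("prohibited_practices", 2_000_000_000))        # 140,000,000 (7% exceeds EUR 35m)
print(max_fine("prohibited_practices", 2_000_000_000, True))  # 35,000,000 (lower cap for SMEs)
```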

What happens now that the European Parliament has approved the act?

The act is expected to be formally approved by the Council of the European Union in April 2024. It will then enter into force 20 days after its publication in the Official Journal later this year.

For more information, please contact our technology team.
