An update on AI and medical devices from the Medicines and Healthcare products Regulatory Agency

read time: 7 mins

The use of artificial intelligence (AI) in the medical arena is by no means new. AI has been used for years in the analysis and interpretation of medical data, to perform a diagnostic function and to monitor patient health. However, the growth of AI capabilities, a corresponding increase in reliance on it, and the sensitive and important role it plays mean that regulation of AI as a medical device (AIaMD) is more important than ever. 

In the words of Dr Tedros Adhanom Ghebreyesus (WHO Director General): 

“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but it can also be misused and cause harm.”

This article highlights the key takeaways from the Medicines and Healthcare products Regulatory Agency’s (MHRA) response to the government’s white paper on AI.

What does the MHRA’s response to the white paper cover?

In the UK, the MHRA is the independent regulator of medical devices. In April 2024 it published a report on the ‘Impact of AI on the regulation of medical products’ (MHRA Report), in response to the Secretary of State’s letter of 1 February 2024 and preceding white paper on delivering a pro-innovation approach to AI regulation. 

The white paper proposed that, instead of implementing a stand-alone piece of AI legislation that would apply across all sectors, individual regulators would be tasked with regulating the use of AI in their specific fields, ensuring that in doing so they are implementing five key principles. 

These five principles are as follows:


Principle 1 (safety, security and robustness): AI systems should function in a robust, secure and safe way throughout the AI life cycle, and risks should be continually identified, assessed and managed.

Principle 2 (appropriate transparency and explainability): AI systems should be appropriately transparent and explainable.

Principle 3 (fairness): AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes.

Principle 4 (accountability and governance): Governance measures should be in place to ensure effective oversight of the supply and use of AI systems, with clear lines of accountability established across the AI life cycle.

Principle 5 (contestability and redress): Where appropriate, users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates a material risk of harm. 

The MHRA implementation of the five principles 

The MHRA approaches AI regulation from three different perspectives:

  • As a regulator
  • As a public service organisation delivering time-critical decisions
  • As an organisation that makes evidence-based decisions that impact on public and patient safety, where that evidence is often supplied by third parties.

As a regulator of AI products 

The Medical Devices Regulations 2002 (MDR) govern medical devices in Great Britain. In the MHRA Report, the MHRA noted that where AI is used for medical purposes, it is very likely to come within the definition of a medical device and therefore must meet the requirements of the MDR to be placed on the UK market. 

These regulations cover the full lifecycle of a product, from the initial clinical investigation pre-market, through to post-market surveillance. The regulations include clear responsibilities and accountabilities. These existing legal frameworks will continue to apply to AIaMD products.

However, the report notes that the use of AI has developed considerably since the 2002 regulations, which have consequently been supplemented by guidance including the Medical Devices: software applications (apps) Guidance. Thus, the MHRA is looking to update its guidance, policies and regulations to reflect those changes.

Principle 1

The paper acknowledges the importance of international alignment for businesses operating in a global setting. Although the UK has left the EU, the current MDR risk-based classification system will continue to apply under the updated regulations, with a view to up-classifying certain AI products so that they undergo more stringent scrutiny, providing better protection to users and patients. 

The MHRA is currently undertaking a programme of regulatory reform for medical devices, including the Software and AI as a Medical Device Change Programme, keeping in mind the need for a proportionate approach that uses the five principles, supplemented by guidance, to “avoid constraining innovation, where this can be done safely”. This will include clear guidance on cyber security, due for publication in spring 2025. 

Principle 2

The MHRA will add a requirement for manufacturers to provide users with information on how the device works and a statement of its purpose. The MHRA has provided further guidance to support manufacturers on this, and specific guidance on human factors for AIaMD products is expected to be released in spring 2025.

Principle 3

The MHRA is committed to ensuring equitable access to safe, effective and high-quality medical devices for all individuals. In implementing this principle, it aims to take an internationally aligned position, encouraging manufacturers to refer to relevant international standards such as ISO/IEC TR 24027:2021 or the IMDRF guidance document N65. 

The MHRA is also collaborating internationally with other stakeholders in the industry to provide recommendations for diversity, inclusivity and generalisability in AI. 

Principle 4

The current regulatory obligations of manufacturers, conformity assessment bodies and the MHRA will be improved by new regulations which are expected to cover, among other things:

  • Obligations of other economic operators in the supply chain.
  • Accountability and governance of datasets used in the creation of AI models and post-market changes.

Principle 5

The implementation of this principle will be supported by new measures under the forthcoming regulations to provide full lifecycle management of AIaMD products, including monitoring of product changes. The current reporting scheme remains available for anyone to report concerns about a medicine or device, including those incorporating AI elements. 

As a public service organisation delivering time-critical decisions

The MHRA is in the early stages of implementing AI in their regulatory services, aligning with key government strategies including the National AI Strategy and the NHS National Strategy for AI in Health and Social Care. For example, they are exploring the use of supervised machine learning to perform an initial assessment, providing a score or recommendation on applications for medicines licences. 

The report has made it clear that security, including cybersecurity, is high on the MHRA’s agenda when implementing principle 1 for AI tools and the relevant data. This includes a focus on well-characterised training and validation datasets across all use cases. They are also developing an MHRA data strategy, which will include a theme on safely and responsibly applying advanced analytics and AI within the business, including on the role of large language models and generative AI across the business.

The MHRA has updated its 12-month technology roadmap, with emphasis on three strategic themes: innovation, eradication of legacy systems and cybersecurity. They will also publish a refreshed plan for information governance and data protection policies, as well as for new ways of working. 

As an organisation that makes evidence-based decisions that impact on public and patient safety, where that evidence is often supplied by third parties

The MHRA acknowledges that AI will increasingly feature in how the sector operates and provides regulatory evidence to the MHRA. Since 2020 they have engaged with the pharmaceutical industry on its use of AI for vigilance purposes and on optimising their own systems using AI. Whilst the MHRA continues to rely on the existing legal framework for pharmacovigilance activities and quality management systems, they are also collaborating with international regulatory bodies and the pharmaceutical industry to develop best practice in the use of AI. 

While the fundamental purpose of regulation, ensuring that products are safe, and the way the MHRA regulates remain unchanged, the MHRA acknowledges that the pace of product development may change. It will therefore ensure that its regulatory pathways are sufficiently agile and robust to respond to those changes.


On 9 May 2024 the MHRA launched a pilot regulatory sandbox for software and AI medical devices called the ‘AI-Airlock’, as part of the resources allocated to the MHRA’s regulatory reform programme. The sandbox will seek to identify and address the unique challenges of AI as a medical device, working with key partners including the UK approved bodies, the NHS and other regulators. 

Initially the project will seek out and support four to six virtual or real-world AIaMD projects through simulation to test a range of regulatory issues. This will provide a safe environment for manufacturers to test their product viability before going to market. 

If you would like any advice on the regulation of medical devices please contact Ian Manners. For advice on technology including AI please contact Brett Lambe.

Sign up for legal insights

We produce a range of insights and publications to help keep our clients up-to-date with legal and sector developments.  

Sign up