This article was originally published on Tech South West’s website, as part of the Growth Forge campaign 2025.
Ashfords is a partner for Growth Forge 2025, a business acceleration programme for ambitious tech companies. Learn more here.
Artificial Intelligence (AI) has moved beyond hype into widespread adoption across numerous sectors. Whether operating in the health, financial services, retail, manufacturing or technology sectors, AI is transforming how organisations operate. But as businesses embrace AI, legal and regulatory issues have become increasingly urgent, especially for UK companies that develop or deploy AI systems and engage with overseas markets.
This article explores the UK’s emerging approach to AI regulation, and highlights key cross-border compliance considerations for companies active in the EU and US. It provides a practical overview for in-house legal teams, compliance professionals, and business leaders seeking to manage AI-related risks and anticipate future obligations.
The UK government has taken a distinct approach to AI regulation, diverging from the more centralised and prescriptive model proposed in the EU. Rather than introducing a standalone AI Act, the UK government is currently opting for a contextual and decentralised framework, leveraging existing regulators and tailoring oversight to different sectors.
In March 2023, the UK Department for Science, Innovation and Technology (DSIT) published its AI Regulation White Paper. This paper outlines five core principles that regulators should apply in their oversight of AI systems:

- safety, security and robustness;
- appropriate transparency and explainability;
- fairness;
- accountability and governance; and
- contestability and redress.
Importantly, these principles are not legally binding—yet. Instead, they serve as a foundation for guidance and any future regulatory development. The government’s stated aim is to balance innovation and investment with public trust and responsible development.
Rather than creating a new AI regulator, the UK is empowering existing regulatory bodies, such as the Information Commissioner’s Office (ICO), Financial Conduct Authority (FCA), Competition and Markets Authority (CMA), and Medicines and Healthcare products Regulatory Agency (MHRA), to develop sector-specific approaches.
For example:

- The ICO has published guidance on AI and data protection, addressing issues such as fairness, transparency and data minimisation in AI systems.
- The FCA is examining the use of AI in financial services, including model risk and consumer outcomes.
- The CMA has reviewed the development of AI foundation models and their implications for competition and consumers.
- The MHRA is developing its approach to AI used in medical devices through its Software and AI as a Medical Device Change Programme.
This decentralised model is intended to provide regulatory flexibility, but it may also lead to uncertainty for businesses operating across, or selling into, multiple sectors.
In early 2024, DSIT signalled plans to introduce a statutory duty requiring regulators to have regard to the five principles. This may be included in future legislation, although no firm timeline has been announced. Additionally, a new “AI Risk Register” and central AI regulatory function are under development to improve coordination across sectors. The King’s Speech at the opening of Parliament last summer referenced the possibility of a new AI bill, but this has not yet been introduced by the government. The state of current AI regulation in the UK therefore remains one of “wait and see.”
UK companies that market AI products or services in the EU must be aware of the EU AI Act, which will become fully applicable in 2026, following formal adoption in 2024. The AI Act introduces a horizontal legal framework for AI development, deployment and use based on a risk-tiered classification.
The AI Act categorises AI systems according to the level of risk they pose:

- unacceptable risk: practices that are prohibited outright, such as social scoring by public authorities;
- high risk: systems subject to strict obligations before they can be placed on the market, for example AI used in recruitment, credit scoring or medical devices;
- limited risk: systems subject to transparency obligations, such as chatbots that must disclose they are AI; and
- minimal risk: systems that face no additional obligations under the Act.
High-risk AI systems must meet stringent requirements related to data governance, human oversight, cybersecurity, accuracy, and documentation.
The AI Act has extraterritorial effect, meaning that UK companies are subject to the regime set out under the AI Act if:

- they place AI systems on the market, or put them into service, in the EU, regardless of where the company is established; or
- the output produced by their AI systems is used in the EU.
This creates significant compliance obligations for UK businesses, even those without a physical presence in the EU.
Fines under the AI Act can reach €35 million or 7% of global annual turnover, depending on the infringement. This is consistent with the EU’s broader trend of aggressive enforcement in digital regulation.
Unlike the EU, the US lacks a comprehensive AI law. Instead, AI is regulated through a patchwork of sector-specific rules and agency guidance.
Enforcement risk in the US is growing, particularly around false claims, algorithmic bias, and consumer deception. The FTC has already taken action against companies using "unsubstantiated" AI marketing claims, such as assertions that bias is eliminated from automated decision-making. For UK companies selling into the US, it will be essential to track sector-specific developments (especially in health, finance, and employment) and ensure marketing materials don't overstate capabilities.
With no global harmonisation on AI regulation, UK companies developing or deploying AI systems in the UK and beyond must take steps to prepare for a fragmented legal environment. Here are some key points to consider:
Misleading claims about AI capabilities can trigger regulatory scrutiny. Ensure product documentation, training data descriptions, and marketing materials are accurate, current and verifiable.
AI regulation is not static. In the UK, sector regulators will continue refining their expectations, and statutory duties may still be introduced. In the EU, the AI Act will require detailed implementing measures and sector-specific guidance. In the US, legislative activity at both state and federal level is accelerating.
UK firms operating internationally should treat AI compliance as a dynamic, ongoing and strategic business issue, not merely a legal one. Businesses that embed ethical design, transparency, and accountability into their AI lifecycle and deployment will be best positioned to navigate emerging rules and build sustainable trust.
The UK’s AI regulatory approach aims to be light-touch and innovation-friendly, but compliance obligations will intensify, especially for UK-based companies engaging with the EU and US. Now is the time for UK businesses to audit their use of AI systems, map potential risk areas, and engage with emerging standards. Legal and compliance teams should play a proactive role in shaping responsible AI governance from the outset.
If your organisation needs help assessing its AI regulatory exposure or building compliance strategies, Ashfords’ team can help. We work with businesses across a number of sectors to align AI development, deployment and use with legal risk management, innovation, and commercial goals.