Navigating AI regulation in the UK and beyond

read time: 7 mins
24.10.25
This article was originally published on Tech South West’s website, as part of the Growth Forge campaign 2025.

Ashfords is a partner for Growth Forge 2025, a business acceleration programme for ambitious tech companies. Learn more here.

Artificial Intelligence (AI) has moved beyond hype into widespread adoption across numerous sectors. Whether in health, financial services, retail, manufacturing or technology, AI is transforming how organisations operate. But as businesses embrace AI, legal and regulatory issues have become increasingly urgent, especially for UK companies that develop or deploy AI systems and engage with overseas markets.

This article explores the UK’s emerging approach to AI regulation, and highlights key cross-border compliance considerations for companies active in the EU and US. It provides a practical overview for in-house legal teams, compliance professionals, and business leaders seeking to manage AI-related risks and anticipate future obligations.

1. The UK Approach: Pro-Innovation, Sector-Led, and Risk-Based

The UK government has taken a distinct approach to AI regulation, diverging from the more centralised and prescriptive model adopted in the EU. Rather than introducing a standalone AI Act, it is currently opting for a contextual, decentralised framework that leverages existing regulators and tailors oversight to different sectors.

The 2023 White Paper: Principles and Priorities

In March 2023, the UK Department for Science, Innovation and Technology (DSIT) published its AI Regulation White Paper. This paper outlines five core principles that regulators should apply in their oversight of AI systems:

  1. Safety, security, and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

Importantly, these principles are not yet legally binding. Instead, they serve as a foundation for guidance and future regulatory development. The government’s stated aim is to balance innovation and investment with public trust and responsible development.

Decentralised Oversight: Role of Existing Regulators

Rather than creating a new AI regulator, the UK is empowering existing regulatory bodies, such as the Information Commissioner’s Office (ICO), Financial Conduct Authority (FCA), Competition and Markets Authority (CMA), and Medicines and Healthcare products Regulatory Agency (MHRA), to develop sector-specific approaches.

For example:

  • The ICO has issued guidance on AI and data protection, including expectations around fairness, purpose limitation, and data minimisation.
  • The CMA has launched an AI Foundation Models Review, focusing on potential anti-competitive outcomes.
  • The FCA is examining AI in financial services as part of its broader innovation agenda.

This decentralised model is intended to provide regulatory flexibility, but it may also lead to uncertainty for businesses operating across, or selling into, multiple sectors.

Next steps: a statutory duty on regulators?

In early 2024, DSIT signalled plans to introduce a statutory duty requiring regulators to have regard to the five principles. This may be included in future legislation, although no firm timeline has been announced. Additionally, a new “AI Risk Register” and a central AI regulatory function are under development to improve coordination across sectors. The King’s Speech at the State Opening of Parliament in July 2024 referenced the possibility of a new AI bill, but no such bill has yet been introduced by the government. The current state of AI regulation in the UK therefore remains one of “wait and see.”

2. AI in the EU: The AI Act and its extraterritorial reach

UK companies that market AI products or services in the EU must be aware of the EU AI Act, which was formally adopted in 2024 and is being phased in, with most provisions applying from August 2026. The AI Act introduces a horizontal legal framework for AI development, deployment and use, based on a risk-tiered classification.

Risk-based categories

The AI Act categorises AI systems as:

  • Unacceptable risk - banned (e.g., social scoring systems or use of biometrics in public spaces for law enforcement)
  • High-risk - heavily regulated (e.g., AI in recruitment, credit scoring, biometric ID)
  • Limited risk - subject to transparency obligations (e.g., chatbots)
  • Minimal risk - largely unregulated

High-risk AI systems must meet stringent requirements related to data governance, human oversight, cybersecurity, accuracy, and documentation.

Who is Caught?

The AI Act has extraterritorial effect, meaning that UK companies fall within its scope if:

  • They place an AI system on the EU market;
  • Their AI system is used within the EU;
  • They produce output used in the EU, even if the system itself operates elsewhere.

This creates significant compliance obligations for UK businesses, even those without a physical presence in the EU.

Penalties

Fines under the AI Act can reach €35 million or 7% of global annual turnover, depending on the infringement. This is consistent with the EU's broader trend of aggressive enforcement in digital regulation.

3. Deploying AI in the US: Emerging risks and regulation

Unlike the EU, the US lacks a comprehensive AI law. Instead, AI is regulated through a patchwork of sector-specific rules and agency guidance.

Key Trends and Initiatives

  • The Federal Trade Commission (FTC) has taken the lead in regulating AI where it intersects with consumer protection, advertising, and data privacy.
  • In October 2023, President Biden issued an Executive Order on Safe, Secure, and Trustworthy AI directing agencies to develop safeguards, promote standards, and prevent discrimination, although this order was rescinded by the incoming administration in January 2025.
  • The NIST AI Risk Management Framework (AI RMF) provides voluntary guidance for trustworthy AI development and deployment.

Enforcement risk in the US is growing, particularly around false claims, algorithmic bias, and consumer deception. The FTC has already taken action against companies using "unsubstantiated" AI marketing claims, such as assertions that bias is eliminated from automated decision-making. For UK companies selling into the US, it will be essential to track sector-specific developments (especially in health, finance, and employment) and ensure marketing materials don't overstate capabilities.

4. Practical Considerations for UK Businesses

With no global harmonisation on AI regulation, UK companies developing or deploying AI systems in the UK and beyond must take steps to prepare for a fragmented legal environment. Here are some key points to consider:

(i) Know Your Markets

  • If you’re selling AI products or services in the EU or US, understand how your system is categorised (e.g., high-risk under the AI Act, or subject to FTC scrutiny).
  • Establish where decisions are made, where data is processed, and where outputs are used, as these factors influence jurisdictional reach.

(ii) Build Governance Early

  • Even where AI regulation is still evolving, regulators already expect best practice around transparency, fairness, human oversight, and accountability, given the risks inherent in AI.
  • Develop an internal AI governance framework now – this will reduce future compliance burdens and build trust with customers and regulators (and do let Ashfords know if we can assist with this).

(iii) Align with Voluntary Standards

  • Aligning with frameworks like ISO/IEC 42001, NIST AI RMF, and the UK’s five principles can help demonstrate responsible AI innovation.
  • This is especially useful when dealing with procurement requirements, investor due diligence, or partnerships with regulated entities.

(iv) Transparency in Marketing and Use

Misleading claims about AI capabilities can trigger regulatory scrutiny. Ensure product documentation, training data descriptions, and marketing materials are accurate, current and verifiable.

5. Looking Ahead: Regulation Will Continue to Evolve

AI regulation is not static. In the UK, sector regulators will continue refining their expectations, and statutory duties may still be introduced. In the EU, the AI Act will require detailed implementing measures and sector-specific guidance. In the US, legislative activity at both state and federal level is accelerating.

UK firms operating internationally should treat AI compliance as a dynamic, ongoing and strategic business issue, not merely a legal one. Businesses that embed ethical design, transparency, and accountability into their AI lifecycle and deployment will be best positioned to navigate emerging rules and build sustainable trust.

Conclusion

The UK’s AI regulatory approach aims to be light-touch and innovation-friendly, but compliance obligations will intensify, especially for UK-based companies engaging with the EU and US. Now is the time for UK businesses to audit their use of AI systems, map potential risk areas, and engage with emerging standards. Legal and compliance teams should play a proactive role in shaping responsible AI governance from the outset.

If your organisation needs help assessing its AI regulatory exposure or building compliance strategies, Ashfords’ team can help. We work with businesses across a number of sectors to align AI development, deployment and use with legal risk management, innovation, and commercial goals.
