The EU AI Act entered into force on 1 August 2024, with obligations rolling out in phases through 2027. For UK-based businesses, the instinctive reaction is often: “It’s an EU regulation — doesn’t apply to us.” That assumption is wrong. And it could be expensive.
The EU AI Act has significant extraterritorial reach. Much like GDPR — which the UK initially dismissed, then adopted almost verbatim — the AI Act will shape how UK companies build, deploy, and sell AI systems. Understanding exactly when and how it applies is not optional. It is a competitive necessity.
Key takeaways
- The EU AI Act applies to UK businesses that place AI systems on the EU market or whose AI outputs are used within the EU
- Article 2 establishes clear extraterritorial scope — location of the provider is irrelevant
- The UK has no equivalent comprehensive AI law; its sector-led approach leaves gaps
- UK businesses serving EU clients need to comply now: the first obligations took effect in February 2025
When the EU AI Act applies to UK businesses
Article 2 of the AI Act makes the jurisdictional rules explicit. The regulation applies to:
- Providers (developers) of AI systems that are placed on the market or put into service in the EU — regardless of where the provider is established
- Deployers (users) of AI systems who are established or located within the EU
- Providers and deployers located outside the EU, where the output produced by the AI system is used within the EU
That third category is the one that catches most UK businesses off guard. If your London-based fintech uses an AI credit scoring model and the output informs a lending decision for an EU customer, you are within scope. If your Edinburgh software company sells an AI recruitment tool to a German employer, you are a provider placing a system on the EU market.
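The three Article 2 triggers can be expressed as a simple screening check. This is an illustrative sketch for an internal audit exercise, not legal logic — the class and function names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Hypothetical profile of one AI system for a scope-screening exercise."""
    placed_on_eu_market: bool  # sold or made available in the EU
    deployer_in_eu: bool       # deployer (user) located in the EU
    output_used_in_eu: bool    # output affects people or decisions in the EU

def in_ai_act_scope(p: AISystemProfile) -> bool:
    """Rough screening of the Article 2 jurisdictional triggers.

    Note that the provider's own location never appears here:
    placing a system on the EU market is enough on its own.
    """
    return (
        p.placed_on_eu_market  # trigger 1: placed on the EU market
        or p.deployer_in_eu    # trigger 2: deployer located in the EU
        or p.output_used_in_eu # trigger 3: output used within the EU
    )

# London fintech: model output informs a lending decision for an EU customer
fintech = AISystemProfile(False, False, True)
print(in_ai_act_scope(fintech))  # True, despite no EU establishment
```

A "yes" from a screen like this is a prompt for proper legal analysis, not a conclusion.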
£21.4 bn
value of UK digital services exports to the EU in 2024 — a significant share involves AI-enabled products
Source: ONS UK Trade Statistics 2024
The extraterritorial principle mirrors GDPR’s approach, and UK businesses are already familiar with that model. The key difference: the AI Act introduces risk-based obligations that go beyond data protection, covering system accuracy, transparency, human oversight, and competency requirements under Article 4.
The EU AI Act timeline that matters for UK firms
The regulation’s phased rollout means different obligations kick in at different times:
- February 2025: Prohibitions on unacceptable-risk AI practices (e.g., social scoring, emotion recognition in workplaces) and the Article 4 AI literacy obligations
- August 2025: Governance rules and obligations for general-purpose AI (GPAI) models
- August 2026: Requirements for high-risk AI systems (Annex III) and the Article 50 transparency obligations (e.g., disclosing chatbots and AI-generated content)
- August 2027: Full requirements for high-risk AI systems embedded in EU-regulated products (Annex I)
For UK businesses already operating in the EU market, the Article 4 competency obligations have been in force since February 2025. If your employees interact with AI systems that affect EU stakeholders, they need documented AI training.
“Wait and see” is not a viable compliance strategy. The ICO has confirmed it will cooperate with EU authorities on AI enforcement. UK businesses found non-compliant may face market access restrictions in addition to fines of up to €35 million or 7% of global turnover.
The UK’s own AI regulation approach
The UK government has deliberately chosen not to replicate the EU AI Act. Instead, the Department for Science, Innovation and Technology (DSIT) published a white paper in March 2023 establishing a “pro-innovation” framework built on five principles:
- Safety, security, and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
These principles are not legally binding. They are enforced — or not — by existing sector regulators: the FCA for financial services, the ICO for data protection, the CMA for competition, Ofcom for communications.
0
binding AI-specific laws passed by the UK Parliament as of March 2026 — the sector-led approach remains voluntary
Source: DSIT AI Regulation Policy Paper, updated 2026
The practical consequence: UK businesses operating only domestically face lighter AI regulation today. But this creates a dual problem. First, the absence of clear rules does not mean the absence of liability: existing laws (the Equality Act, UK GDPR, consumer protection) still apply to AI harms. Second, UK firms selling into the EU must comply with the AI Act regardless.
The FCA has been the most active UK regulator, issuing guidance on AI use in financial services including requirements around model governance, explainability, and consumer outcomes. The ICO has published detailed guidance on AI and data protection, particularly around automated decision-making under Article 22 of UK GDPR.
Practical steps for UK businesses
1. Map your EU exposure
Audit every AI system your organisation uses or provides. For each one, answer: does this system’s output reach an EU citizen, an EU-based client, or an EU market? If yes, the AI Act likely applies. This includes AI tools used by employees who serve EU clients — such as AI assistants that may operate as shadow AI without IT oversight.
2. Classify your AI systems by risk
The AI Act uses a four-tier risk classification: unacceptable, high, limited, and minimal risk. Most enterprise AI falls into the limited or high-risk categories. High-risk systems include those used in:
- Employment and worker management (recruitment, performance evaluation)
- Access to essential services (credit scoring, insurance)
- Law enforcement and border control
- Education and vocational training
If your AI system falls into a high-risk category and touches the EU market, you face the most demanding obligations: conformity assessments, risk management systems, data governance, technical documentation, human oversight, and post-market monitoring.
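As a first-pass triage, the tiering above can be sketched as a lookup. The use-case labels are simplified assumptions, not Annex III wording, and a real classification requires case-by-case legal analysis:

```python
# Illustrative mapping of use-case labels to AI Act risk tiers.
# Labels are simplified from Annex III and Article 5; a real
# classification needs legal review of the actual system.
HIGH_RISK_AREAS = {
    "recruitment", "performance_evaluation",  # employment and worker management
    "credit_scoring", "insurance_pricing",    # access to essential services
    "law_enforcement", "border_control",
    "education", "vocational_training",
}
PROHIBITED_PRACTICES = {
    "social_scoring", "workplace_emotion_recognition",
}

def risk_tier(use_case: str) -> str:
    """Return an indicative AI Act risk tier for a use-case label."""
    if use_case in PROHIBITED_PRACTICES:
        return "unacceptable"
    if use_case in HIGH_RISK_AREAS:
        return "high"
    # Distinguishing limited from minimal risk depends on transparency
    # triggers (e.g., chatbots), which this simple lookup cannot capture.
    return "limited_or_minimal"

print(risk_tier("recruitment"))     # high
print(risk_tier("social_scoring"))  # unacceptable
```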
3. Invest in AI competency now
Article 4’s competency obligation has applied since February 2025. It requires organisations to ensure staff have “a sufficient level of AI literacy” proportionate to their role and the context in which AI is used. For UK businesses in scope, this means structured training programmes — not a one-off webinar.
An ISO 42001-certified management system can provide a structured framework for AI governance that satisfies both EU and UK regulatory expectations.
Start with your highest-risk teams: those making decisions that affect EU individuals using AI tools. Document everything — training records, competency assessments, and policy acknowledgments. This documentation is your first line of defence in any regulatory inquiry.
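A minimal sketch of the kind of record-keeping that supports an Article 4 audit trail. The fields are illustrative assumptions, not a format prescribed by the AI Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    """Illustrative AI literacy training record; the fields are
    assumptions, not a structure mandated by the AI Act."""
    employee: str
    role: str
    module: str
    completed_on: date
    assessment_passed: bool

records = [
    TrainingRecord("A. Smith", "credit analyst",
                   "AI literacy: lending models", date(2025, 3, 1), True),
    TrainingRecord("B. Jones", "recruiter",
                   "AI literacy: hiring tools", date(2025, 3, 8), False),
]

# Surface anyone whose competency assessment is still outstanding.
pending = [r.employee for r in records if not r.assessment_passed]
print(f"{len(records)} records, {len(pending)} pending reassessment")
```

Whatever system holds these records, the point is the same: completion dates and assessment outcomes per person, per role, retrievable on request.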
4. Appoint a compliance owner
The AI Act requires providers of high-risk AI systems established outside the EU to appoint an EU authorised representative (Article 22). This person or entity acts as the regulatory point of contact. Even if your primary exposure is through outputs used in the EU rather than direct market placement, having a clear internal owner for AI compliance is essential.
5. Build for dual compliance
The most pragmatic approach for UK businesses is to build governance structures that satisfy both the AI Act and UK sector-specific requirements. This means:
- A risk-based AI policy aligned with both frameworks
- Training programmes that cover EU AI Act obligations and UK regulator guidance
- Documentation practices that work for EU conformity assessments and UK regulatory inquiries
- Incident reporting processes that meet EU timelines (no later than 15 days for serious incidents under Article 73, with shorter deadlines for the most severe)
What happens if you ignore the EU AI Act
The penalties are substantial. For violations of the prohibited AI practices, fines can reach €35 million or 7% of global annual turnover, whichever is higher. For most other breaches of the Act's obligations, fines can reach €15 million or 3% of turnover.
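The "whichever is higher" rule means the effective cap scales with revenue. A quick illustration with hypothetical turnover figures:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """AI Act fine ceilings are the higher of a fixed amount
    or a percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Prohibited-practice tier: EUR 35m or 7% of turnover, whichever is higher.
# For a firm with EUR 200m turnover, the fixed cap binds (7% is only 14m):
print(max_fine(200_000_000, 35_000_000, 0.07))    # 35,000,000
# For a firm with EUR 1bn turnover, the percentage binds:
print(max_fine(1_000_000_000, 35_000_000, 0.07))  # 70,000,000
```

For large groups, turnover — not the headline euro figure — determines the real exposure.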
But fines are not the only risk. Non-compliant AI systems can be removed from the EU market entirely. For UK businesses that depend on EU revenue, losing market access is potentially more damaging than any fine.
The European AI Office, established in early 2024, is the central enforcement body. It has already begun coordinating with national authorities across all 27 member states — and it has signalled that extraterritorial enforcement will be a priority.
How Brain helps UK businesses prepare
Brain is a platform built to help organisations meet AI competency obligations — whether those stem from the EU AI Act, UK sector regulators, or internal governance standards. The platform provides role-specific AI training modules, competency tracking that serves as compliance documentation, and practical exercises that prepare teams for real-world AI use.
For UK businesses navigating dual regulatory environments, Brain provides a single solution that covers both EU Article 4 requirements and UK sector regulator expectations — with training available in multiple languages for international teams.
Explore Brain’s plans to find the right approach for your organisation, or start with a free assessment of your current AI competency gaps.