The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive legal framework for artificial intelligence. It entered into force on 1 August 2024, and its obligations are rolling out in phases through 2027. If you operate in the EU, sell to EU customers, or deploy AI systems whose outputs affect people in the EU — this law applies to your organisation.
But the EU AI Act is not just a European concern. Its extraterritorial reach, combined with the UK’s own evolving AI regulatory framework, means that virtually every international business needs to understand what it requires and when.
Key takeaways
- The EU AI Act classifies AI systems into four risk categories, each with different obligations
- Article 4 (AI literacy) has been in force since 2 February 2025 and requires all staff using AI to be trained
- The Act has extraterritorial scope: it applies to non-EU companies whose AI systems affect people in the EU
- Penalties reach up to €35 million or 7% of global annual turnover, whichever is higher
Why the EU AI Act matters for your organisation
The EU AI Act follows the regulatory pattern established by GDPR: European legislation with global impact. Just as GDPR became the de facto global standard for data protection, the AI Act is setting the baseline for AI governance worldwide.
If you’re in the EU, compliance is mandatory. The obligations are legally binding and enforced by national authorities with the power to issue significant fines.
If you’re in the UK, the AI Act still matters. The UK’s own AI regulatory approach — currently principles-based, coordinated across existing regulators like the ICO, FCA, and CMA — is lighter-touch than the EU’s. But any UK business that serves EU customers, deploys AI systems in the EU, or whose AI outputs affect EU residents falls within the Act’s scope. The ICO has explicitly acknowledged that UK organisations should monitor the EU AI Act as part of their compliance planning.
If you’re global, the AI Act creates a compliance floor. Meeting its requirements positions you well for regulations emerging in other jurisdictions — Canada’s AIDA, Brazil’s AI framework, and the evolving US state-level AI laws.
85%
of senior executives say AI regulation will significantly impact their business strategy within 2 years
Source: PwC Global AI Survey 2025
The four risk categories explained
The EU AI Act takes a risk-based approach. The higher the risk an AI system poses to health, safety, or fundamental rights, the stricter the obligations.
Unacceptable risk (banned)
Certain AI practices are prohibited outright. Since 2 February 2025, the following are illegal in the EU:
- Social scoring by public authorities — rating citizens based on behaviour or personal characteristics
- Subliminal manipulation — AI systems designed to distort behaviour in ways that cause harm, without a person’s awareness
- Real-time biometric identification in public spaces by law enforcement (with narrow exceptions)
- Emotion recognition in workplaces and educational institutions
- Biometric categorisation based on sensitive attributes (race, sexual orientation, political opinions)
- Predictive policing based solely on profiling
High risk
This is where most of the regulatory burden falls. AI systems are classified as high-risk when they are used in:
- Employment and recruitment — CV screening, candidate ranking, interview analysis, performance evaluation
- Education — student assessment, admissions decisions, exam proctoring
- Credit and insurance — creditworthiness assessment, risk scoring, pricing
- Essential services — access to healthcare, social benefits, emergency services
- Law enforcement — evidence evaluation, crime prediction, risk assessment
- Migration — visa processing, asylum claim assessment, border control
- Critical infrastructure — energy, water, transport, digital networks
High-risk systems must comply with extensive requirements: risk management systems, data governance, technical documentation, human oversight, accuracy and robustness standards, and conformity assessments. These obligations apply from 2 August 2026.
Many common business AI applications fall into the high-risk category without organisations realising it. If your AI system influences hiring decisions, evaluates employee performance, or assesses customer creditworthiness — it is almost certainly high-risk under the Act.
Limited risk
AI systems that interact with people or generate content carry transparency obligations:
- Chatbots must disclose that the user is interacting with an AI
- AI-generated content (text, images, audio, video) must be labelled as artificially generated
- Deepfakes must be clearly identified as such
- Emotion recognition systems must inform the person being analysed
Minimal risk
Most AI applications — spam filters, AI-assisted writing tools, recommendation engines, search algorithms — fall here. No specific compliance obligations apply, though the general AI literacy requirement (Article 4) still covers them.
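To make the taxonomy concrete, here is a minimal Python sketch of how an organisation might encode these categories in an internal tool. The use-case names and the default-to-high rule are our illustrative assumptions, not part of the Act; real classification requires case-by-case legal review against Article 5 and Annex III.

```python
from enum import Enum

class RiskLevel(Enum):
    """The EU AI Act's four risk categories."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (Article 5)
    HIGH = "high"                  # strict obligations from 2 August 2026
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # nothing specific beyond AI literacy

# Illustrative mapping of common business use cases to risk categories,
# based on the examples in this article. Not a substitute for legal review.
USE_CASE_RISK = {
    "cv_screening": RiskLevel.HIGH,               # employment and recruitment
    "creditworthiness_scoring": RiskLevel.HIGH,   # credit and insurance
    "exam_proctoring": RiskLevel.HIGH,            # education
    "customer_chatbot": RiskLevel.LIMITED,        # must disclose it is an AI
    "marketing_deepfake": RiskLevel.LIMITED,      # must be labelled as synthetic
    "spam_filter": RiskLevel.MINIMAL,
    "workplace_emotion_recognition": RiskLevel.UNACCEPTABLE,  # banned since 2 Feb 2025
}

def classify(use_case: str) -> RiskLevel:
    """Look up a use case; unknown systems default to HIGH pending review."""
    return USE_CASE_RISK.get(use_case, RiskLevel.HIGH)
```

Defaulting unknown systems to high risk is a deliberately conservative choice: it forces a human review before any new tool is treated as low-obligation.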
Article 4: the obligation that’s already in force
While most of the EU AI Act’s obligations phase in over time, Article 4 has been in force since 2 February 2025. It requires all providers and deployers of AI systems to ensure that their staff have “a sufficient level of AI literacy.”
This means every organisation that uses AI tools — from ChatGPT to Copilot to industry-specific AI platforms — must ensure its employees understand:
- How the AI systems they use work (at an appropriate level)
- What the limitations are (hallucinations, bias, accuracy)
- How to interpret AI outputs correctly
- How to exercise appropriate human oversight
The level of literacy required must be proportionate to the person’s role, technical background, and the context in which they use AI. A software engineer fine-tuning a model needs different training than a sales representative using an AI email assistant — but both need training.
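One practical way to operationalise that proportionality is a role-to-module training matrix. The sketch below is a hypothetical illustration in Python; the role and module names are our assumptions, since the Act prescribes no specific curriculum, only literacy proportionate to role and context.

```python
# Hypothetical role-proportionate AI literacy plan. Role and module
# names are illustrative assumptions, not requirements from the Act.
TRAINING_MATRIX: dict[str, list[str]] = {
    "software_engineer": [
        "model_fundamentals", "bias_and_evaluation",
        "human_oversight_design", "incident_reporting",
    ],
    "sales_representative": [
        "ai_basics", "hallucination_awareness", "output_review",
    ],
    "hr_manager": [
        "ai_basics", "high_risk_system_rules", "human_oversight_in_hiring",
    ],
}

def required_modules(role: str) -> list[str]:
    # Roles not yet mapped fall back to a baseline set: everyone gets trained.
    return TRAINING_MATRIX.get(role, ["ai_basics", "hallucination_awareness"])

print(required_modules("sales_representative"))
# ['ai_basics', 'hallucination_awareness', 'output_review']
```

The fallback branch reflects the point above: the depth of training varies by role, but no role is exempt.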
For a detailed analysis of Article 4, see our guide to AI Act Article 4 obligations. For organisations looking to understand how shadow AI complicates compliance, our guide to shadow AI covers the risks of unmanaged AI usage.
€15M
maximum fine for failing to comply with Article 4 — or 3% of global annual turnover, whichever is higher
Source: EU AI Act, Article 99
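The “whichever is higher” mechanics are easy to underestimate when budgeting for risk, so a worked example helps. This is a minimal sketch using the two tiers cited in this article; the turnover figure is invented for illustration.

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """The Act's 'whichever is higher' rule: fixed cap vs share of turnover."""
    return max(fixed_cap_eur, pct * turnover_eur)

# The two penalty tiers cited in this article (amounts in EUR):
ARTICLE_4_TIER = (15_000_000, 0.03)   # €15M or 3% of global annual turnover
PROHIBITED_TIER = (35_000_000, 0.07)  # €35M or 7% for banned practices

# Worked example for a company with €2bn global annual turnover:
turnover = 2_000_000_000
print(max_fine(turnover, *ARTICLE_4_TIER))   # 60000000.0  (3% dominates)
print(max_fine(turnover, *PROHIBITED_TIER))  # 140000000.0 (7% dominates)
```

For any sizeable business, the percentage branch dominates the fixed cap, which is why exposure scales with turnover rather than stopping at the headline figures.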
The compliance timeline
Understanding when each obligation kicks in is critical for planning:
| Date | What happens |
|---|---|
| 1 August 2024 | AI Act enters into force |
| 2 February 2025 | Banned AI practices become illegal; AI literacy obligation (Article 4) applies |
| 2 August 2025 | Rules for general-purpose AI (GPAI) models apply, along with governance and penalty provisions |
| 2 August 2026 | High-risk AI system obligations apply |
| 2 August 2027 | Full application, including AI in regulated products |
The phased approach is deliberate — but it does not mean you can wait. Article 4 is already enforceable. Organisations that have not begun training their staff are already non-compliant.
How the EU AI Act interacts with other regulations
The AI Act does not exist in isolation. It operates alongside:
GDPR — Any AI system processing personal data must comply with both the AI Act and GDPR. The obligations are cumulative, not alternative. Fines can be imposed under both regulations for a single infringement.
The AI Liability Directive — A proposal intended to make it easier for individuals to claim damages for harm caused by AI systems, with a rebuttable presumption of causality and disclosure obligations. Its future is uncertain: the European Commission signalled in 2025 that it intends to withdraw the proposal.
Sector-specific regulations — Financial services (MiFID II, Solvency II, DORA), medical devices (MDR), and product safety (General Product Safety Regulation) each add layers of AI-relevant requirements.
UK framework — The UK is pursuing a principles-based, sector-led approach rather than a single horizontal AI law. The ICO regulates AI under data protection law, the FCA under financial services rules, and the CMA under competition law. However, the UK government has signalled a move towards binding obligations, particularly for frontier AI models.
The ISO 42001 standard for AI management systems provides a structured framework that maps well to EU AI Act requirements. Organisations pursuing ISO 42001 certification will find that much of the compliance groundwork is already covered. See our guide to ISO 42001 for details.
What to do now: a five-step action plan
1. Audit your AI usage. Identify every AI system in use across your organisation — including shadow AI. You cannot assess risk for systems you do not know about.
2. Classify by risk level. Map each AI system to the EU AI Act’s risk categories. Prioritise high-risk systems for immediate compliance work.
3. Train your people. Article 4 is in force. Every employee who uses AI needs appropriate training. Brain delivers AI literacy training designed specifically for EU AI Act compliance — practical modules tailored to each role, with documented completion records that serve as compliance evidence.
4. Build governance structures. Assign AI governance responsibilities, create an AI register (a minimal sketch follows this list), establish approval processes for new AI tools, and implement incident reporting procedures.
5. Document everything. In any enforcement scenario, the quality of your documentation determines the outcome. Maintain records of risk assessments, training completion, governance decisions, and incident responses.
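Steps 1, 2, and 4 converge on a single artefact: the AI register. Below is a minimal Python sketch of what a register entry might look like. The schema is entirely an assumption on our part; the Act requires documentation and oversight but does not mandate any particular structure.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIRegisterEntry:
    """One row in an internal AI register (illustrative schema only).

    Field names are our assumptions; the Act does not prescribe
    a register format, only that usage is documented and governed.
    """
    system_name: str
    vendor: str
    use_case: str
    risk_level: str            # e.g. "high", "limited", "minimal"
    business_owner: str
    staff_trained: bool = False
    last_risk_review: date | None = None
    notes: str = ""

register: list[AIRegisterEntry] = [
    AIRegisterEntry("CV screener", "Acme HR", "recruitment", "high",
                    "Head of HR", staff_trained=True,
                    last_risk_review=date(2025, 6, 1)),
    AIRegisterEntry("Support chatbot", "Acme AI", "customer service",
                    "limited", "Head of Support"),
]

# Surface the urgent gaps: high-risk systems whose users are untrained.
gaps = [e for e in register
        if e.risk_level == "high" and not e.staff_trained]
```

Even a spreadsheet with these columns beats no register at all; the point is a single, current inventory that risk reviews and training records can hang off.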
Start with Article 4 compliance. It is already enforceable, it applies to every organisation regardless of risk level, and it builds the foundational awareness your teams need to handle the more complex obligations coming in 2026 and 2027.
Get your organisation AI Act-ready with Brain
Brain is the AI training platform built for regulatory compliance. Practical, role-specific modules that cover AI literacy, data protection, hallucination detection, and responsible AI use — with a compliance dashboard that tracks completion and generates audit-ready reports.
Whether you need to meet Article 4 obligations or prepare for the high-risk system requirements coming in August 2026, Brain gets your teams ready. Check our plans to get started.