The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive AI regulation. It became law on 1 August 2024, and its obligations are phasing in through 2027. Whether you are headquartered in Berlin, London, or New York, if your AI systems affect people in the EU, this regulation applies to you.
This EU AI Act summary covers everything a business leader needs: the four risk tiers, what each tier demands, the compliance calendar, the penalties for non-compliance, and the one obligation — Article 4 AI literacy — that is already enforceable today.
Key takeaways
- The EU AI Act uses a four-tier, risk-based framework — from banned practices to minimal-risk systems with no specific obligations
- Article 4 (AI literacy) has been in force since August 2025 and applies to every organisation using AI, regardless of risk tier
- Penalties range from €7.5 million to €35 million, or 1% to 7% of global annual turnover, whichever is higher
- The Act has extraterritorial reach — non-EU companies whose AI outputs affect EU residents must comply
Why the EU AI Act matters beyond Europe
The EU AI Act follows the same pattern as GDPR: a European regulation that becomes the global benchmark. Organisations that comply with the AI Act are well-positioned for emerging frameworks in other jurisdictions — the UK’s evolving AI regulatory approach, Canada’s AIDA, and the growing patchwork of US state-level AI laws.
For UK businesses specifically, the Act’s extraterritorial scope means compliance is not optional if you serve EU customers or if your AI systems produce outputs that affect EU residents. Our detailed guide on whether the EU AI Act applies to UK businesses covers the specifics.
For a broader look at what the regulation actually is, see our explainer on what the EU AI Act is.
The four risk categories: a tier-by-tier breakdown
The EU AI Act’s central mechanism is risk-based classification. Every AI system falls into one of four tiers, and each tier carries different obligations.
Tier 1: Unacceptable risk — outright banned
Since 2 February 2025, the following AI practices are illegal in the EU:
- Social scoring — rating citizens based on behaviour or personal characteristics
- Subliminal manipulation — AI designed to distort behaviour without a person’s awareness
- Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions for serious crime)
- Emotion recognition in workplaces and educational settings
- Biometric categorisation using sensitive attributes such as race, political opinion, or sexual orientation
- Predictive policing based solely on profiling or personality traits
There is no compliance pathway for these systems. They are simply prohibited.
Tier 2: High risk — the heaviest obligations
High-risk classification applies to AI systems used in areas where errors or bias could cause significant harm. This is where the majority of the regulatory burden falls.
Sectors and use cases classified as high-risk include:
- Employment — CV screening, candidate ranking, automated interview analysis, performance evaluation
- Education — admissions decisions, student assessment, exam proctoring
- Financial services — credit scoring, insurance risk assessment, fraud detection
- Essential public services — healthcare access, social benefits allocation, emergency response
- Law enforcement and migration — evidence evaluation, border control, asylum processing
- Critical infrastructure — energy, transport, water supply, digital networks
What high-risk systems must do (from 2 August 2026):
- Maintain a comprehensive risk management system
- Meet data governance and quality standards
- Produce detailed technical documentation
- Enable human oversight at all times
- Demonstrate accuracy, robustness, and cybersecurity
- Complete conformity assessments before deployment
- Register in the EU public database
Many organisations do not realise their AI systems qualify as high-risk. If you use AI for anything that influences hiring decisions, assesses creditworthiness, or evaluates employee performance, it almost certainly falls into this tier. An AI risk assessment is the essential first step.
Tier 3: Limited risk — transparency obligations
AI systems that interact directly with people or generate synthetic content must meet transparency requirements:
- Chatbots and virtual assistants must disclose that the user is communicating with an AI
- AI-generated content (text, images, audio, video) must be labelled as artificially generated or manipulated
- Deepfakes must carry clear identification
- Emotion recognition systems must inform the person being analysed
These transparency obligations apply in addition to whatever obligations the system's risk classification already imposes.
Tier 4: Minimal risk — no specific obligations
The vast majority of AI applications — spam filters, recommendation engines, AI-assisted writing tools, search algorithms — fall here. No specific compliance obligations apply beyond the general Article 4 AI literacy requirement.
85%
of AI systems in business use are estimated to fall into the minimal or limited risk categories — but Article 4 AI literacy still applies to all of them
Source: European Commission Impact Assessment
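As a rough sketch, the four-tier triage described above can be modelled as a simple lookup. This is purely illustrative (the use-case labels are hypothetical, and it is not a legal classification tool; real classification requires assessment against the Act's definitions and Annex III):

```python
# Illustrative sketch of the four-tier triage described above.
# Use-case labels are hypothetical; this is NOT a legal classification tool.

BANNED = {"social_scoring", "subliminal_manipulation", "predictive_policing_profiling"}
HIGH_RISK = {"cv_screening", "credit_scoring", "exam_proctoring", "border_control"}
LIMITED_RISK = {"chatbot", "ai_generated_content", "deepfake"}

def risk_tier(use_case: str) -> str:
    if use_case in BANNED:
        return "unacceptable"  # prohibited since 2 February 2025
    if use_case in HIGH_RISK:
        return "high"          # full obligations from 2 August 2026
    if use_case in LIMITED_RISK:
        return "limited"       # transparency obligations
    return "minimal"           # nothing specific beyond Article 4 AI literacy

print(risk_tier("cv_screening"))   # high
print(risk_tier("spam_filter"))    # minimal
```

Note that the tiers are not mutually exclusive in practice: a high-risk system that also interacts with users carries the transparency obligations too.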
Article 4: the AI literacy obligation already in force
Article 4 is the single most important provision for most organisations right now. It has been enforceable since 2 August 2025, and it applies universally — to every provider and deployer of AI systems, regardless of risk tier.
The requirement is straightforward: ensure that staff, and anyone else operating or using AI systems on your behalf, possess “a sufficient level of AI literacy,” taking into account their technical knowledge, experience, education, and the context in which the AI is used.
In practice, this means:
- Every employee who uses AI tools — from ChatGPT to Copilot to industry-specific platforms — needs documented training
- Training must be proportionate to role and responsibility (a data scientist needs different training from a sales manager)
- Organisations must be able to demonstrate compliance if audited
- The obligation covers both internal staff and contractors
For a deep dive into what Article 4 requires, see our guide to AI Act Article 4 obligations. Understanding AI literacy in practice is essential to building a compliant training programme.
€15M
maximum fine for non-compliance with Article 4 — or 3% of global annual turnover, whichever is higher
Source: EU AI Act, Article 99
The compliance timeline
The EU AI Act uses a phased rollout. Each date introduces new obligations:
| Date | Milestone |
|---|---|
| 1 August 2024 | AI Act enters into force |
| 2 February 2025 | Banned AI practices become illegal |
| 2 August 2025 | Article 4 (AI literacy) applies; GPAI model rules begin |
| 2 August 2026 | High-risk AI system obligations apply |
| 2 August 2027 | Full application, including AI embedded in regulated products |
The phased approach does not mean you can wait. Article 4 is already enforceable. Organisations that have not trained their staff are already non-compliant and exposed to penalties.
Penalties: what non-compliance costs
The EU AI Act establishes a three-tier penalty structure:
- Banned practices — up to €35 million or 7% of global annual turnover
- High-risk obligations — up to €15 million or 3% of global annual turnover
- Incorrect information to authorities — up to €7.5 million or 1% of global annual turnover
For SMEs and start-ups, penalties are capped at the lower of the two thresholds. But even reduced fines represent existential risk for smaller organisations.
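The “whichever is higher” rule, and the SME “whichever is lower” cap, reduce to simple arithmetic. A minimal sketch; `max_fine` is a hypothetical helper for illustration, not part of any official tooling:

```python
def max_fine(fixed_cap_eur: int, turnover_pct: int,
             global_turnover_eur: int, is_sme: bool = False) -> float:
    """Maximum applicable fine for one penalty tier.

    Large organisations face whichever threshold is HIGHER; for SMEs
    and start-ups the Act caps fines at whichever is LOWER.
    """
    pct_amount = global_turnover_eur * turnover_pct / 100
    if is_sme:
        return min(fixed_cap_eur, pct_amount)
    return max(fixed_cap_eur, pct_amount)

# Banned-practice tier (€35M / 7%) for a firm with €1bn global turnover:
print(max_fine(35_000_000, 7, 1_000_000_000))               # 70000000.0
# Same tier for an SME with €20M global turnover:
print(max_fine(35_000_000, 7, 20_000_000, is_sme=True))     # 1400000.0
```

The example shows why the percentage prong matters: for large firms, 7% of turnover quickly exceeds the fixed €35 million cap.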
National authorities in each EU member state will be responsible for enforcement, and the EU AI Office coordinates cross-border cases. The enforcement model mirrors GDPR — meaning organisations should expect both complaints-driven investigations and proactive audits.
The EU AI Act’s penalty structure is designed to be proportionate but dissuasive. Organisations with a documented AI governance framework and evidence of proactive compliance efforts are far better positioned in any enforcement action. Documentation is your best defence.
How the EU AI Act interacts with existing regulations
The AI Act does not replace existing law — it adds to it. Key intersections include:
- GDPR — AI systems processing personal data must comply with both frameworks. Fines can be imposed under both for a single infringement. See our guide on AI and GDPR compliance.
- ISO 42001 — The international standard for AI management systems maps closely to the EU AI Act’s requirements. Pursuing ISO 42001 certification builds a strong compliance foundation.
- NIST AI RMF — Organisations already following the NIST AI framework will find significant overlap with the Act’s risk management requirements.
- Sector-specific rules — Financial services (MiFID II, DORA), healthcare (MDR), and product safety regulations each add additional AI-relevant obligations.
What to do now: five priority actions
1. Map your AI landscape. Identify every AI system in use, including shadow AI — tools adopted by employees without IT approval. You cannot classify what you cannot see.
2. Classify each system by risk tier. Use the four categories above to determine which obligations apply. Conduct a formal AI risk assessment for anything that might be high-risk.
3. Address Article 4 immediately. This is enforceable now. Train every employee who uses AI, document completion, and maintain records. Brain delivers role-specific AI literacy training designed for EU AI Act compliance, with audit-ready completion tracking.
4. Build your governance framework. Establish an AI governance structure, create an AI register, define approval processes for new AI tools, and set up an AI policy that covers acceptable use.
5. Prepare for August 2026. If you have high-risk systems, begin conformity assessment preparation now. Technical documentation, risk management systems, and human oversight mechanisms take time to build properly.
Start with Article 4. It is the most immediate obligation, the lowest barrier to action, and it builds the organisational awareness your teams need to handle the more complex high-risk requirements arriving in 2026.
Get your teams AI Act-ready with Brain
Brain is the AI training platform built for EU AI Act compliance. Role-specific modules covering AI literacy, hallucination detection, data privacy, and responsible AI use — with a compliance dashboard that tracks completion and generates audit-ready reports.
Whether you need to meet Article 4 obligations today or prepare for the high-risk system requirements arriving in August 2026, Brain gets your organisation ready.
Related articles
EU AI Act News: April 2026 Updates + Enforcement Timeline
Latest EU AI Act updates — enforcement dates, GPAI Code of Practice, fines and what your business must do before August 2026.
EU AI Act Explained: A Business Leader's Guide (2026)
Understand the EU AI Act in plain English. Risk categories, timeline, obligations, and what it means for your organisation.
AI Governance Framework: Checklist + Template (ISO 42001)
Build an AI governance framework step by step. Includes checklist, template, EU AI Act alignment and ISO 42001 integration guide.