When the EU AI Act entered into force on 1 August 2024, most organisations treated it as a distant concern — a regulation to worry about later. That was a mistake. As of March 2026, two major compliance deadlines have already passed, a third is five months away, and enforcement machinery is spinning up across all 27 member states. If you have not started preparing, you are already behind.
This article tracks the EU AI Act’s latest news, explains what each milestone means in practice, and sets out what your organisation should be doing right now.
Key takeaways
- Prohibited AI practices have been illegal since 2 February 2025 — organisations using social scoring, subliminal manipulation, or workplace emotion recognition must have stopped
- Article 4 (AI literacy) has been enforceable since 2 August 2025 — every organisation using AI must train its staff
- The GPAI Code of Practice was published in July 2025, setting rules for foundation model providers
- High-risk AI system obligations take effect on 2 August 2026 — five months away
- Full enforcement, including AI embedded in regulated products, begins 2 August 2027
Timeline: every major EU AI Act milestone so far
The EU AI Act uses a phased rollout. Some deadlines have passed; others are approaching fast. Here is the complete picture as of March 2026:
| Date | Milestone | Status |
|---|---|---|
| 1 August 2024 | AI Act enters into force | Done |
| 2 February 2025 | Prohibited AI practices become illegal | In force |
| 2 February 2025 | AI Office established, national authorities designated | In force |
| 10 July 2025 | GPAI Code of Practice published | Done |
| 2 August 2025 | Article 4 (AI literacy) applies to all organisations | In force |
| 2 August 2025 | General-purpose AI (GPAI) model obligations apply | In force |
| 2 August 2026 | High-risk AI system obligations apply | 5 months away |
| 2 August 2027 | Full application, including AI in regulated products | 17 months away |
The critical takeaway: three of the five major deadlines have already passed. If your organisation uses AI in any form, you already have compliance obligations.
3 of 5
major EU AI Act deadlines have already passed — including mandatory AI literacy training for all staff
Source: EU AI Act enforcement timeline
What has already changed: the rules now in force
Prohibited AI practices (since 2 February 2025)
The EU AI Act’s outright bans were the first provisions to take effect. Since February 2025, the following AI uses are illegal across the EU:
- Social scoring by public authorities
- Subliminal manipulation designed to distort behaviour and cause harm
- Real-time biometric identification in public spaces (with narrow law enforcement exceptions)
- Emotion recognition in workplaces and educational institutions
- Biometric categorisation based on sensitive attributes such as race, political opinions, or sexual orientation
- Predictive policing based solely on profiling
For most businesses, the highest-impact ban is workplace emotion recognition. If your organisation deployed AI tools that analyse employee facial expressions, voice tone, or physiological signals — for productivity monitoring, interview assessment, or wellbeing tracking — those systems became illegal on 2 February 2025. For a full breakdown of what the EU AI Act requires, see our complete guide.
Article 4: AI literacy (since 2 August 2025)
Article 4 is the obligation with the broadest reach. It requires every organisation that provides or deploys AI systems to ensure staff have “a sufficient level of AI literacy.” This is not optional and it is not limited to high-risk systems. If your employees use ChatGPT, Copilot, Gemini, or any AI tool, Article 4 applies.
What “sufficient AI literacy” means in practice:
- Staff understand how the AI systems they use work
- They recognise limitations — hallucinations, bias, accuracy gaps
- They can interpret AI outputs critically
- They exercise appropriate human oversight
The required level depends on the person’s role and context. A data scientist needs deeper technical knowledge than a marketing coordinator — but both need documented training. For a detailed analysis, see our guide to Article 4 obligations.
Article 4 carries fines of up to €15 million or 3% of global annual turnover. Critically, there is no grace period — the obligation has been enforceable since August 2025. Organisations without documented AI training programmes are already exposed.
GPAI Code of Practice (July 2025)
The General-Purpose AI Code of Practice, published a few weeks before the GPAI model obligations took effect on 2 August 2025, sets out rules for providers of foundation models — the companies building large language models, image generators, and multimodal systems. Key provisions include:
- Transparency obligations: providers must publish detailed summaries of training data, including copyrighted material
- Systemic risk assessment: models with significant capabilities must undergo systematic evaluation for risks including cybersecurity, bias, and misuse
- Technical documentation: comprehensive documentation of model architecture, training methodology, and known limitations
- Copyright compliance: clear policies on how copyrighted training data was used, with opt-out mechanisms for rights holders
If your organisation develops or fine-tunes AI models, these rules apply directly. If you deploy third-party models, you benefit from the upstream transparency but must still meet your own deployer obligations.
What is coming next: August 2026 and beyond
High-risk AI system obligations (2 August 2026)
The most substantial compliance burden arrives in five months. AI systems classified as high-risk — those used in employment, education, credit scoring, essential services, law enforcement, and critical infrastructure — must meet extensive requirements:
- Risk management systems — documented, ongoing risk identification and mitigation
- Data governance — training data quality, relevance, and bias assessment
- Technical documentation — detailed records of system design, purpose, and performance
- Human oversight — mechanisms allowing humans to monitor and override AI decisions
- Accuracy and robustness — validated performance under expected and stress conditions
- Conformity assessment — formal evaluation before placing the system on the market
Many organisations underestimate how many of their AI tools qualify as high-risk. If your organisation uses AI to screen CVs, rank job candidates, evaluate employee performance, assess loan applications, or triage customer service requests by severity — those are likely high-risk systems.
€35M
maximum fine under the EU AI Act — €35 million or 7% of global annual turnover, whichever is higher. This top tier applies to prohibited practices; breaches of high-risk system obligations carry up to €15 million or 3% of turnover
Source: EU AI Act, Article 99
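The "whichever is higher" rule means the effective cap scales with company size rather than stopping at the fixed amount. A minimal sketch of the calculation (illustrative only; the thresholds are the Article 99 figures, the function name is ours):

```python
def max_fine(global_annual_turnover_eur: float,
             cap_eur: float = 35_000_000,
             turnover_pct: float = 0.07) -> float:
    """Return the maximum applicable fine: the fixed cap or the
    turnover-based amount, whichever is higher (EU AI Act, Article 99)."""
    return max(cap_eur, turnover_pct * global_annual_turnover_eur)

# A firm with €2bn global turnover: 7% = €140m, which exceeds the €35m cap
print(max_fine(2_000_000_000))  # 140000000.0

# A firm with €100m turnover: 7% = €7m, so the €35m cap applies instead
print(max_fine(100_000_000))  # 35000000.0
```

For large providers, the percentage-based figure will almost always be the binding one.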
Full enforcement (2 August 2027)
The final phase extends the AI Act’s requirements to AI systems embedded in regulated products — medical devices, vehicles, machinery, toys, and other products already covered by EU safety legislation. Manufacturers will need to integrate AI Act compliance into existing product conformity processes.
What businesses need to do now
If you have not started your EU AI Act compliance journey, here is a priority-ordered action plan:
1. Train your people immediately. Article 4 is enforceable today. Every employee who interacts with AI needs documented training — proportionate to their role, covering how AI works, its limitations, and responsible use. This is the fastest compliance win and the area where enforcement will come first. Brain delivers AI literacy training designed specifically for Article 4 compliance, with role-specific modules and audit-ready completion records.
2. Audit your AI landscape. You cannot comply with rules if you do not know what AI you are using. Conduct a thorough inventory of every AI system — including shadow AI that employees adopted without IT approval. Map each system to the Act’s risk categories.
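An inventory only becomes actionable once each system is tagged with one of the Act's risk tiers. A minimal sketch of such a register (the four tiers come from the Act; the example systems, field names, and classifications are illustrative, not prescribed by the regulation):

```python
from dataclasses import dataclass

# The EU AI Act's four risk tiers (Annex III lists the high-risk use cases)
RISK_TIERS = ("prohibited", "high", "limited", "minimal")

@dataclass
class AISystem:
    name: str
    use_case: str
    risk_tier: str

    def __post_init__(self):
        # Reject typos early so the register stays queryable
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"Unknown risk tier: {self.risk_tier}")

# Illustrative inventory entries, including a shadow-AI find
inventory = [
    AISystem("CV screener", "ranks job applicants", "high"),
    AISystem("Support chatbot", "answers customer queries", "limited"),
    AISystem("Spam filter", "filters inbound email", "minimal"),
]

# Systems needing full compliance work before the August 2026 deadline
high_risk = [s.name for s in inventory if s.risk_tier == "high"]
print(high_risk)  # ['CV screener']
```

Even a spreadsheet with these three columns is enough to start; the point is that every system has an owner-assigned tier on record.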
3. Prepare for high-risk obligations. If your audit reveals high-risk AI systems, you have until August 2026. Start risk management documentation, data governance processes, and human oversight mechanisms now. Five months is not generous for complex compliance work.
4. Establish AI governance. Create clear accountability — who approves new AI tools, who monitors compliance, who handles incidents. An AI governance framework is essential for sustainable compliance. Consider pursuing ISO 42001 certification, which maps closely to EU AI Act requirements.
5. Build your compliance documentation. In any enforcement scenario, regulators will ask for evidence. Maintain records of training completion, risk assessments, governance decisions, and incident responses. If it is not documented, it did not happen.
Do not wait for enforcement actions to start. Regulators across the EU are building their AI oversight teams now, and Article 4’s broad scope makes it the easiest provision to enforce. The organisations that invested in AI literacy training in 2025 are already in a stronger position.
A note for UK organisations
The EU AI Act applies extraterritorially. If your UK-based organisation deploys AI systems in the EU, serves EU customers, or produces AI outputs that affect EU residents, you are within scope. The UK’s own AI regulatory approach is evolving in parallel, with increasing signals towards binding obligations for frontier models. Preparing for the EU AI Act now positions you well for whatever the UK framework becomes. For detail on how the Act applies to UK businesses, see our guide on whether the EU AI Act applies in the UK.
Stay compliant with Brain
Brain is the AI training platform built for EU AI Act compliance. Role-specific, practical modules covering AI literacy, data privacy, hallucination detection, and responsible AI use — with a compliance dashboard that tracks completion and generates audit-ready reports.
Article 4 is already in force. High-risk obligations land in August. The time to act is now.
Related articles
EU AI Act Explained: A Business Leader's Guide (2026)
Understand the EU AI Act in plain English. Risk categories, timeline, obligations, and what it means for your organisation.
EU AI Act Summary: Risk Tiers, Deadlines + Penalties (2026)
EU AI Act in plain English — 4 risk categories, obligations by tier, compliance deadlines, penalties up to €35M, and Article 4 literacy rules.
AI Governance Framework: Checklist + Template (ISO 42001)
Build an AI governance framework step by step. Includes checklist, template, EU AI Act alignment and ISO 42001 integration guide.