AI Act timeline: every deadline from 2024 to 2027

Key dates of the AI Act: prohibited practices and the AI literacy obligation (February 2025), governance, general-purpose AI rules and penalties (August 2025), high-risk systems (August 2026), full application (August 2027).

Progressive compliance, not a big bang

The AI Act does not apply all at once. The European legislator deliberately staggered obligations over three years to give organisations time to adapt. But this progressiveness cuts both ways: each deadline that passes becomes an immediate non-compliance risk.

Understanding this timeline is essential to plan actions, allocate budgets and mobilise the right teams at the right time.

Each phase in detail

Phase 1 — 1 August 2024: entry into force

Published in the Official Journal on 12 July 2024, Regulation (EU) 2024/1689 enters into force on 1 August 2024. At this stage, no concrete obligation applies to businesses yet. This is the grace period: the window in which well-prepared organisations get ahead.

What you should do: begin the inventory of your AI systems and appoint an AI Act compliance officer.

Phase 2 — 2 February 2025: prohibited practices

This is the first binding deadline. Four categories of AI systems are now prohibited:

  • Social scoring: any system that ranks people on the basis of their social behaviour
  • Subliminal manipulation: techniques that influence decisions without the person being aware
  • Exploitation of vulnerabilities: systems targeting people in a position of weakness (age, disability, economic situation)
  • Emotion recognition in the workplace and in schools: except for strictly framed medical or safety cases

Article 5 — Regulation (EU) 2024/1689

The following artificial intelligence practices shall be prohibited: […] the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a person due to their age, disability or social or economic situation.

What you should do: audit all your existing AI tools to verify that none falls into these categories. Pay particular attention to recruitment tools, customer scoring and surveillance tools.
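The audit step above can be sketched as a simple inventory scan. Everything here is illustrative: the tool names and purpose tags are hypothetical, and the category labels are shorthand for the Article 5 practices, not legal definitions.

```python
from dataclasses import dataclass

# Shorthand labels for the Article 5 prohibited practices (illustrative,
# not the legal wording of the regulation).
PROHIBITED_PURPOSES = {
    "social_scoring",
    "subliminal_manipulation",
    "vulnerability_exploitation",
    "workplace_emotion_recognition",
}

@dataclass
class AITool:
    name: str
    purpose: str  # purpose tag assigned to each tool during the audit

def flag_prohibited(inventory: list[AITool]) -> list[str]:
    """Return the names of tools whose declared purpose falls under Article 5."""
    return [t.name for t in inventory if t.purpose in PROHIBITED_PURPOSES]

inventory = [
    AITool("cv-screener", "candidate_ranking"),
    AITool("call-centre-mood", "workplace_emotion_recognition"),
]
print(flag_prohibited(inventory))  # ['call-centre-mood']
```

A tag-based lookup like this only surfaces candidates for review; the actual legal assessment of each flagged tool still has to be done case by case.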

Phase 3 — 2 August 2025: governance, general-purpose AI and penalties

This is the most underestimated phase, and the most urgent for the majority of businesses.

Critical deadline — Article 4 already applies

Article 4 requires every provider and deployer of AI systems to ensure a sufficient level of AI literacy among all relevant staff. This obligation has applied since 2 February 2025, alongside the prohibited practices; 2 August 2025 adds the national authorities and the penalty regime that make it enforceable. No size exemption is provided: SMEs are covered in the same way as large corporations.

The training must be proportionate to the role and exposure of each individual. This is not about a generic e-learning course: organisations must be able to demonstrate that the competencies acquired are suited to the context of use.

In parallel, this phase includes:

  • The establishment of national supervisory authorities in each Member State, coordinated at EU level by the European AI Office
  • The publication of codes of practice for general-purpose AI (foundation models such as GPT, Gemini, Claude)
📄AI Act Article 4: the AI training obligation explained

What you should do:

  • Deploy an AI upskilling programme covering all exposed staff
  • Document the training completed and the levels achieved (auditable evidence)
  • Establish AI governance with clear roles and responsibilities
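The documentation bullet above needs auditable evidence behind it. A minimal sketch of what a training record could look like, assuming a JSON export; the field names and format are my own choices, not a schema prescribed by the AI Act:

```python
from dataclasses import dataclass, asdict
import json

# Illustrative evidence record; field names are an assumption,
# not a schema mandated by the regulation.
@dataclass
class TrainingRecord:
    employee: str
    role: str
    module: str
    level: str          # e.g. "awareness" or "practitioner"
    completed_on: str   # ISO date
    assessor: str

def export_evidence(records: list[TrainingRecord]) -> str:
    """Serialise training records to JSON to build an auditable trail."""
    return json.dumps([asdict(r) for r in records], indent=2)

records = [
    TrainingRecord("A. Smith", "recruiter", "AI Act essentials",
                   "awareness", "2025-06-01", "L&D team"),
]
print(export_evidence(records))
```

The point of the structure is traceability: each record ties a person and role to a specific module, level and date, which is exactly what an auditor will ask for.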

Phase 4 — 2 August 2026: high-risk systems

The regulation’s heaviest obligations take effect. Any AI system classified as “high-risk” (Annex III) must comply with a strict conformity framework:

  • Documented conformity assessment
  • Risk management system
  • Training data governance
  • Complete technical documentation
  • Human oversight built into the system’s operation
  • Transparency obligations towards users

The most affected sectors: HR and recruitment, credit and insurance, healthcare, education, justice, immigration, critical infrastructure. The regulation also reaches providers and deployers established outside the EU whenever their systems are placed on the EU market or their outputs are used within the EU.

📄AI governance in business: 5 steps to get structured

What you should do:

  • Finalise the classification of all your AI systems by risk level
  • For each high-risk system, compile the required technical dossier
  • Integrate human-oversight mechanisms into your processes
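A first-pass triage of the classification step can be sketched as follows. The area labels are shorthand for Annex III categories and the function is a keyword lookup; a real classification must follow Article 6 and Annex III, assessed system by system.

```python
# Shorthand labels for Annex III high-risk areas (illustrative subset).
ANNEX_III_AREAS = {
    "recruitment", "credit_scoring", "insurance_pricing", "education",
    "critical_infrastructure", "law_enforcement", "migration", "justice",
}

def classify(context: str, prohibited: bool = False) -> str:
    """First-pass triage of an AI system's risk tier."""
    if prohibited:
        return "prohibited"
    if context in ANNEX_III_AREAS:
        return "high-risk"
    return "minimal-or-limited"

print(classify("recruitment"))   # high-risk
print(classify("faq_chatbot"))   # minimal-or-limited
```

Triage like this is only useful for ordering the work: every system landing in the high-risk bucket then goes through the full conformity checklist listed above.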

Phase 5 — 2 August 2027: full application

The final provisions become applicable: Article 6(1) extends the high-risk regime to AI embedded in products covered by EU harmonisation legislation (Annex I), and general-purpose AI models placed on the market before August 2025 must now comply. The penalty regime, in force since August 2025, provides for:

  • Up to EUR 35 million or 7% of worldwide turnover for prohibited practices
  • Up to EUR 15 million or 3% of turnover for other infringements
  • Up to EUR 7.5 million or 1% of turnover for supplying incorrect, incomplete or misleading information
📄AI regulation in Europe: the complete guide 2026
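The exposure behind those caps is simple arithmetic: under Article 99, for an undertaking the maximum fine is the higher of the fixed amount and the turnover percentage (for SMEs it is the lower of the two). A minimal sketch, with tier labels of my own choosing:

```python
def max_fine(turnover_eur: float, tier: str, sme: bool = False) -> float:
    """Upper bound of the fine under Article 99: the higher of the fixed
    cap and the turnover percentage, or the lower of the two for SMEs."""
    tiers = {
        "prohibited": (35_000_000, 0.07),
        "other": (15_000_000, 0.03),
        "misleading_info": (7_500_000, 0.01),
    }
    cap, pct = tiers[tier]
    pick = min if sme else max
    return pick(cap, pct * turnover_eur)

print(max_fine(2_000_000_000, "prohibited"))  # 140000000.0
```

For a EUR 2 billion group, 7% of turnover (EUR 140 million) dwarfs the EUR 35 million fixed cap, which is why the percentage, not the headline figure, is the number large organisations should budget against.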

Where to start now

If you are reading this in 2026, the Article 4 deadline has already passed. The absolute priority is being able to demonstrate that your organisation has taken concrete measures to train its teams.

Here is the recommended order of priority:

  1. Immediate — Verify the absence of prohibited practices (already in force)
  2. Immediate — Put in place a documented AI training programme (Article 4, in force since February 2025)
  3. Within 6 months — Classify your AI systems and prepare for high-risk compliance (August 2026)
  4. Within 18 months — Finalise full compliance (August 2027)

The timeline is tight, but the legislator’s progressive approach is an asset for organisations that start now. Those that wait for the final deadline risk hitting a wall of compliance that is impossible to scale in a few months.

📄AI training for business: the 2026 strategic guide