Key dates of the AI Act: prohibited practices and the AI literacy (training) obligation (February 2025), obligations for general-purpose AI models (August 2025), high-risk systems (August 2026), full application (August 2027).
The AI Act does not apply all at once. The European legislator deliberately staggered obligations over three years to give organisations time to adapt. But this staggering cuts both ways: each deadline that passes turns into an immediate non-compliance risk.
Understanding this timeline is essential to plan actions, allocate budgets and mobilise the right teams at the right time.
Regulation (EU) 2024/1689 is published in the Official Journal and enters into force on 1 August 2024. At this stage, no concrete obligation yet applies to businesses. This is the grace period: the window in which well-prepared organisations get ahead.
What you should do: begin the inventory of your AI systems and appoint an AI Act compliance officer.
This is the first binding deadline. Four categories of AI systems are now prohibited:
Article 5 — Regulation (EU) 2024/1689
The following artificial intelligence practices shall be prohibited: […] the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation […]
What you should do: audit all your existing AI tools to verify that none falls into these categories. Pay particular attention to recruitment tools, customer scoring and surveillance tools.
This is the most underestimated deadline — and the most urgent for the majority of businesses.
Critical deadline — February 2025

Article 4 requires every organisation that provides or deploys an AI system to ensure a sufficient level of AI literacy for all relevant staff. This obligation takes effect on 2 February 2025, at the same time as the prohibitions. No size exemption is provided: SMEs are covered in the same way as large corporations.

The training must be proportionate to the role and exposure of each individual. This is not about a generic e-learning course: organisations must be able to demonstrate that the competencies acquired are suited to the context of use.
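One way to make "proportionate to the role" demonstrable is a simple training log keyed by role. A minimal sketch, in which the role names and module names are invented for illustration:

```python
# Hypothetical internal training matrix: which modules each role needs.
# Role and module names are examples, not prescribed by the AI Act.
TRAINING_PLAN = {
    "recruiter": ["ai-basics", "bias-in-screening-tools"],
    "developer": ["ai-basics", "model-risk", "high-risk-overview"],
    "executive": ["ai-basics", "governance-duties"],
}

# What each person has actually completed.
completed = {"alice": {"role": "recruiter", "done": ["ai-basics"]}}

def missing_modules(person: str) -> list[str]:
    """Modules this person still needs for their role."""
    entry = completed[person]
    required = TRAINING_PLAN[entry["role"]]
    return [m for m in required if m not in entry["done"]]

print(missing_modules("alice"))  # → ['bias-in-screening-tools']
```

Keeping the plan role-specific, and logging completions against it, is precisely the kind of evidence that shows training was matched to each person's context of use rather than delivered as a generic course.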
In parallel, this phase includes:
What you should do:
The regulation’s heaviest obligations take effect. Any AI system classified as “high-risk” (Annex III) must comply with a strict conformity framework:
The most affected sectors: HR and recruitment, credit and insurance, healthcare, education, justice, immigration, critical infrastructure. UK-based organisations are only caught where they place AI systems on the EU market or where the output of their systems is used in the EU; purely domestic tools fall outside the regulation's territorial scope.
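As a first-pass triage, the inventory can be screened against the sensitive sectors listed above. A minimal sketch, assuming a deliberately simplified domain list; the real high-risk qualification under Annex III requires legal analysis, not a keyword match.

```python
# Hypothetical first-pass filter: flag systems operating in domains the
# article lists as most affected. This is a rough screen, not a legal
# classification; the domain labels are simplified examples.
SENSITIVE_DOMAINS = {
    "recruitment", "credit", "insurance", "healthcare",
    "education", "justice", "immigration", "critical-infrastructure",
}

def needs_review(system: dict) -> bool:
    """Return True if the system touches a potentially high-risk domain."""
    return system["domain"] in SENSITIVE_DOMAINS

systems = [
    {"name": "CVScreener", "domain": "recruitment"},
    {"name": "HelpdeskBot", "domain": "customer-support"},
]
flagged = [s["name"] for s in systems if needs_review(s)]
print(flagged)  # → ['CVScreener']
```

Anything flagged this way should go to legal review first, since those are the systems most likely to face the August 2026 conformity framework.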
📄AI governance in business: 5 steps to get structured→

What you should do:
All provisions of the regulation become applicable, including for AI systems already on the market before entry into force. Penalties are fully enforceable:
If you are reading this in 2026, the Article 4 deadline has already passed. The absolute priority is being able to demonstrate that your organisation has taken concrete measures to train its teams.
Here is the recommended order of priority:
The timeline is tight, but the legislator’s progressive approach is an asset for organisations that start now. Those that wait for the final deadline risk hitting a wall of compliance that is impossible to scale in a few months.
📄AI training for business: the 2026 strategic guide→