The AI Act requires businesses to ensure an adequate level of AI literacy for all staff. Who is affected, what evidence to provide, what penalties apply.
Article 4 of the European Regulation on Artificial Intelligence (Regulation (EU) 2024/1689, known as the “AI Act”) is often wrongly summarised as a simple recommendation. In reality it is a binding legal obligation, applicable since 2 February 2025.
Article 4 — Regulation (EU) 2024/1689
Providers and deployers of AI systems shall take measures to ensure, to the best extent possible, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training, as well as the context in which the AI systems are intended to be used, and taking into account the persons or the groups of persons on whom the AI systems are intended to be used.
This text is dense. Let us break it down.
Article 4 targets two categories of actors:
**Providers.** Any entity that develops an AI system or a general-purpose AI model and places it on the market or puts it into service under its own name or trademark. This includes software publishers, AI start-ups and businesses that develop internal AI-based tools.
**Deployers.** Any natural or legal person that uses an AI system under its own authority. In practice, this includes any business that uses AI tools in its day-to-day operations — from ChatGPT to an automated scoring system.
This is the crucial point: you do not need to develop AI to be affected. If your teams use AI tools — even third-party tools such as conversational assistants, content-generation tools or automated analysis systems — you are a deployer within the meaning of the regulation.
In practice, this means that virtually every European business is affected. Organisations established outside the EU, including in the UK, should also assess their exposure: under Article 2, the AI Act applies extraterritorially to providers and deployers whose AI systems are placed on the EU market or whose output is used within the EU.
The regulation does not prescribe a standardised training programme. It requires a sufficient level of AI literacy, defined in Article 3(56) as:
Article 3, paragraph 56 — Regulation (EU) 2024/1689
The skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness of the opportunities and risks of AI and possible harm it can cause.
In concrete terms, assessing the competency level of your staff means verifying that they understand, in line with the Article 3(56) definition: how the AI tools they use work and where those tools fail, the opportunities and risks of AI, the possible harm it can cause, and their own rights and obligations under the regulation.
The expected level is not uniform. Article 4 specifies that training must take account of each person’s “technical knowledge, experience, education and training”, as well as the “context in which the AI systems are intended to be used”. An AI developer and a customer service representative will not have the same upskilling needs.
The regulation requires that “measures” be taken to ensure this level of literacy. In the event of an inspection or dispute, the business will need to demonstrate that it has actually acted. This is where the concept of documented evidence becomes essential.
Although the precise audit procedures have not yet been fully defined by the competent national authorities — the market surveillance authorities each EU Member State designates under the AI Act — several items of evidence are emerging as indispensable:
Recital 20 of the regulation provides further guidance:
Recital 20 — Regulation (EU) 2024/1689
AI literacy measures […] should be designed and, where appropriate, tailored to the context in which the AI systems are used. […] Providers and deployers may also ensure that technical staff have the necessary training and skills.
In other words, a generic training course delivered once will not suffice. The regulation expects a contextualised, measurable and continuous approach.
Unlike other provisions of the AI Act that apply progressively, Article 4 has been applicable since 2 February 2025. The full timeline:
| Date | Provision |
|---|---|
| 2 February 2025 | Prohibition of banned AI practices (Article 5) |
| 2 February 2025 | Article 4 — AI literacy obligation |
| 2 August 2025 | Obligations for general-purpose AI models |
| 2 August 2026 | Obligations for high-risk AI systems |
Businesses that have not yet launched an initiative to train all their staff are therefore technically already behind schedule.
Before training anyone, you need to know what is being used. Carry out an exhaustive inventory of all AI systems deployed in your organisation. Include official tools, but also informal usage (the notorious “shadow IT” of AI).
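Such an inventory can be kept as simple structured records. As a minimal sketch — the field names and example tools below are illustrative, not prescribed by the regulation:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the AI-system inventory (illustrative fields)."""
    name: str
    vendor: str
    purpose: str
    users: list[str]           # teams or roles using the tool
    officially_approved: bool  # False flags "shadow IT" usage

inventory = [
    AISystemRecord("ChatGPT", "OpenAI", "drafting and research",
                   ["marketing", "legal"], officially_approved=True),
    AISystemRecord("Lead-scoring macro", "internal", "sales scoring",
                   ["sales"], officially_approved=False),
]

# Shadow usage surfaces immediately once the inventory exists.
shadow = [r.name for r in inventory if not r.officially_approved]
print(shadow)  # → ['Lead-scoring macro']
```

Even a spreadsheet with these five columns is enough; the point is that every tool, official or not, appears exactly once.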
Not all staff are exposed in the same way. Segment by role, by level of technical knowledge, experience, education and training, and by the context in which the AI systems are used — the very criteria Article 4 lists.
Clearly establish what each profile needs to know. This framework will serve as the basis for assessing the sufficient level of AI literacy required by the regulation.
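One way to make this framework concrete is a simple mapping from staff profile to required competencies. The profile names and topic labels below are assumptions for illustration; only the baseline topic echoes the Article 3(56) definition:

```python
# Illustrative competency framework: each profile maps to the
# minimum topics it must master (labels are not from the Act).
framework = {
    "developer":        {"model limitations", "risk classification", "data governance"},
    "customer_service": {"tool usage", "risks and possible harm"},
    "management":       {"accountability", "risks and possible harm"},
}

def required_topics(profile: str) -> set[str]:
    """Baseline every profile needs, plus profile-specific topics."""
    baseline = {"opportunities and risks"}  # echoes Article 3(56)
    return baseline | framework.get(profile, set())

print(sorted(required_topics("customer_service")))
# → ['opportunities and risks', 'risks and possible harm', 'tool usage']
```

The differentiation by profile directly implements Article 4's requirement to account for each person's technical knowledge and context of use.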
Deploy tailored training pathways and measure the results. An individual or team literacy score allows you to track progress and demonstrate the compliance effort.
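The regulation does not mandate any particular metric, so the scoring formula is up to you. A deliberately simple, auditable sketch — a team score as the mean of individual assessment results:

```python
def team_literacy_score(assessments: dict[str, float]) -> float:
    """Mean of individual assessment scores (0-100).
    A simple, auditable metric; the formula is a choice, not a requirement."""
    return round(sum(assessments.values()) / len(assessments), 1)

scores = {"alice": 82.0, "bob": 64.0, "carol": 90.0}
print(team_literacy_score(scores))  # → 78.7
```

Recomputing and archiving this score after each training cycle gives you the measurable, continuous follow-up the regulation expects.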
Build a compliance file including: training programmes, participation rates, assessment results and regular updates. This documented evidence will be your strongest ally in the event of an inspection.
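The compliance file itself can be as simple as a timestamped, serialisable record covering the four items above. The field names here are assumptions, chosen only to mirror that list:

```python
import json
from datetime import date

# Illustrative structure for the documented-evidence file.
compliance_file = {
    "training_programmes": ["AI fundamentals", "Role-specific risks"],
    "participation_rate": 0.93,
    "assessment_results": {"avg_score": 78.7, "pass_rate": 0.88},
    "last_updated": date.today().isoformat(),
}

# Serialise for archiving; keeping dated snapshots over time is what
# demonstrates the "regular updates" an inspector would look for.
print(json.dumps(compliance_file, indent=2))
```

Whatever the format — JSON export, HR-system report, signed PDF — what matters is that each snapshot is dated and retained.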
Article 4 is not itself listed among the provisions to which Article 99 of the AI Act attaches a dedicated fine. But inadequate AI literacy will weigh against you when other obligations are assessed: non-compliance with the deployer and provider obligations referenced in Article 99(4) can result in fines of up to EUR 15 million or 3% of annual worldwide turnover (whichever is higher).
But beyond fines, the reputational risk is considerable. Failing to have trained your teams in AI when the regulation requires it is a signal of organisational immaturity that neither clients, nor regulators, nor partners will tolerate for long.
Article 4 is unique in the European regulatory landscape for several reasons:
It is also the most accessible provision for starting your AI Act compliance journey. Before you even classify your systems or document your models, you can — and must — begin with upskilling your teams.
What this means for you
Article 4 makes training all staff a legal obligation, not an option. Businesses must be able to demonstrate, with supporting evidence, that their personnel have achieved a sufficient level of AI literacy — through measurable assessments, rigorous documentation and continuous follow-up. The obligation has been in force since 2 February 2025. Every day without action is a day of non-compliance.