Article 4: the obligation to train all your staff in AI

The AI Act requires businesses to ensure an adequate level of AI literacy for all staff. Who is affected, what evidence to provide, what penalties apply.

What Article 4 actually says

Article 4 of the European Regulation on Artificial Intelligence (Regulation (EU) 2024/1689, known as the “AI Act”) is often wrongly summarised as a simple recommendation. In reality it is a binding legal obligation, applicable since 2 February 2025.

Article 4, Regulation (EU) 2024/1689

Providers and deployers of AI systems shall take measures to ensure, to the best extent possible, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training, as well as the context in which the AI systems are intended to be used, and taking into account the persons or the groups of persons on whom the AI systems are intended to be used.

This text is dense. Let us break it down.

Who is affected?

Article 4 targets two categories of actors:

Providers

Any entity that develops an AI system or a general-purpose AI model and places it on the market or puts it into service under its own name or trademark. This includes software publishers, AI start-ups and businesses that develop internal AI-based tools.

Deployers

Any natural or legal person that uses an AI system under its own authority. In practice, this includes any business that uses AI tools in its day-to-day operations — from ChatGPT to an automated scoring system.

This is the crucial point: you do not need to develop AI to be affected. If your teams use AI tools — even third-party tools such as conversational assistants, content-generation tools or automated analysis systems — you are a deployer within the meaning of the regulation.

In practice, this means that virtually every European business is affected. UK organisations such as Barclays, HSBC and NHS trusts that deploy AI tools in customer-facing or clinical settings should also assess their exposure: the AI Act reaches organisations established outside the EU when their systems are placed on the EU market or their outputs are used in the EU.

What does “sufficient level of AI literacy” mean?

The regulation does not prescribe a standardised training programme. It requires a sufficient level of AI literacy, defined in Article 3(56) as:

Article 3(56), Regulation (EU) 2024/1689

The skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness of the opportunities and risks of AI and possible harm it can cause.

In concrete terms, assessing the competency level of your staff means verifying that they understand:

  • What an AI system is and how it works, at a level suited to their role
  • The risks associated with using AI in their professional context
  • Good practices: verifying outputs, protecting data, reporting anomalies
  • The limitations of the systems they use on a daily basis

The expected level is not uniform. Article 4 specifies that training must take account of each person’s “technical knowledge, experience, education and training”, as well as the “context in which the AI systems are intended to be used”. An AI developer and a customer service representative will not have the same upskilling needs.

The central question: how to prove compliance?

The regulation requires that “measures” be taken to ensure this level of literacy. In the event of an inspection or dispute, the business will need to demonstrate that it has actually acted. This is where the concept of documented evidence becomes essential.

What authorities will expect

Although the precise audit procedures have not yet been fully defined by the competent national authorities of the EU Member States (UK bodies such as the ICO and the UK AI Safety Institute operate under a separate domestic framework), several items of evidence are emerging as indispensable:

  1. An inventory of AI systems in use within the organisation, classified by risk level
  2. A training plan tailored to the different staff profiles
  3. Evidence of participation in training (registers, certificates, logs)
  4. An assessment of competency level before and after training
  5. Regular follow-up showing that training is not a one-off event but an ongoing process
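
As an illustration only, the five items above can be captured as structured records from day one. The sketch below is a minimal Python example; the record types, field names and risk labels are hypothetical assumptions, not categories defined by the regulation.

```python
# Minimal sketch of a compliance evidence register.
# All names (AISystemRecord, TrainingRecord, the risk labels) are illustrative,
# not categories defined by the AI Act.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    risk_level: str                      # your own classification, e.g. "minimal", "limited", "high"
    user_groups: list[str] = field(default_factory=list)

@dataclass
class TrainingRecord:
    employee: str
    module: str
    completed_on: date
    score_before: float                  # assessment before training (0-100)
    score_after: float                   # assessment after training (0-100)

# Item 1: inventory of AI systems, classified by risk level
inventory = [
    AISystemRecord("ChatGPT (web)", "OpenAI", "limited", ["marketing", "support"]),
    AISystemRecord("CV screening tool", "internal", "high", ["HR"]),
]

# Items 3 and 4: evidence of participation and before/after assessment
training_log = [
    TrainingRecord("a.martin", "AI fundamentals", date(2025, 3, 10), 42.0, 81.0),
]
```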

Recital 20 of the regulation provides further guidance:

Recital 20, Regulation (EU) 2024/1689

AI literacy measures […] should be designed and, where appropriate, tailored to the context in which the AI systems are used. […] Providers and deployers may also ensure that technical staff have the necessary training and skills.

In other words, a generic training course delivered once will not suffice. The regulation expects a contextualised, measurable and continuous approach.

The timeline: it is already in force

Unlike other provisions of the AI Act that apply progressively, Article 4 has been applicable since 2 February 2025. The full timeline:

  • 2 February 2025: prohibition of banned AI practices (Article 5)
  • 2 February 2025: Article 4, the AI literacy obligation
  • 2 August 2025: obligations for general-purpose AI models
  • 2 August 2026: obligations for high-risk AI systems

Businesses that have not yet launched an initiative to train all their staff are therefore technically already behind schedule.

How to structure your compliance programme

Step 1: Map AI usage

Before training anyone, you need to know what is being used. Carry out an exhaustive inventory of all AI systems deployed in your organisation. Include official tools, but also informal usage (the notorious “shadow IT” of AI).
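
One practical way to surface informal usage is to compare the tools staff declare with the tools actually observed, for instance in SSO or expense logs. The snippet below is a hypothetical illustration; the tool names and data sources are invented.

```python
# Hypothetical sketch: flag undeclared ("shadow") AI usage by comparing the
# declared inventory with tools actually observed in SSO or expense logs.
declared = {"ChatGPT (web)", "GitHub Copilot"}
observed = {"ChatGPT (web)", "GitHub Copilot", "Midjourney", "DeepL Write"}

shadow_usage = sorted(observed - declared)
print(shadow_usage)  # ['DeepL Write', 'Midjourney'] -- to be reviewed, classified and added to the inventory
```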

Step 2: Identify the affected populations

Not all staff are exposed in the same way. Segment by:

  • Level of interaction with AI systems (daily, occasional, indirect user)
  • Risk level of the systems used (scoring, recruitment, customer service — for example, FCA-regulated credit scoring or HMRC automated assessment systems)
  • Prior technical knowledge

Step 3: Define a competency framework

Clearly establish what each profile needs to know. This framework will serve as the basis for assessing the sufficient level of AI literacy required by the regulation.
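
A framework of this kind can be as simple as a mapping from profile to required competencies and a target assessment score. The profiles, competency labels and thresholds below are purely illustrative assumptions, not regulatory categories.

```python
# Illustrative competency framework: what each profile needs to master and the
# target assessment score used as evidence of a "sufficient level".
# Profiles, competencies and thresholds are examples to adapt, not prescriptions.
competency_framework = {
    "ai_developer": {
        "required": ["model limitations", "bias and robustness testing", "data protection"],
        "target_score": 85,
    },
    "customer_service": {
        "required": ["verifying AI outputs", "reporting anomalies", "data protection basics"],
        "target_score": 70,
    },
    "occasional_user": {
        "required": ["what an AI system is", "limitations of the approved tools"],
        "target_score": 60,
    },
}
```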

Step 4: Train and assess

Deploy tailored training pathways and measure the results. An individual or team literacy score allows you to track progress and demonstrate the compliance effort.
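
As a sketch of what such a score could look like, the function below averages post-training assessment results and reports the share of staff at or above a target threshold. The metric is an assumption made for illustration; the regulation does not prescribe any particular formula.

```python
# Minimal, illustrative team literacy score -- not a metric prescribed by the AI Act.
def team_literacy_score(post_training_scores: list[float], target: float) -> dict:
    """Average post-training assessment score and share of staff at or above the target."""
    n = len(post_training_scores)
    return {
        "average_score": sum(post_training_scores) / n,
        "share_at_target": sum(s >= target for s in post_training_scores) / n,
    }

print(team_literacy_score([81.0, 64.0, 92.0, 70.0], target=70))
# {'average_score': 76.75, 'share_at_target': 0.75}
```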

Step 5: Document continuously

Build a compliance file including: training programmes, participation rates, assessment results and regular updates. This documented evidence will be your strongest ally in the event of an inspection.
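
One way to make the follow-up tangible is to generate and archive a dated snapshot of these registers after every training cycle or inventory update. The sketch below assumes the records are available as plain dictionaries; the field names are illustrative.

```python
# Sketch of a dated compliance snapshot assembled from your registers.
# Field names are illustrative; use whatever structure your auditors can read.
import json
from datetime import date

def compliance_snapshot(ai_systems: list[dict], training_records: list[dict]) -> str:
    return json.dumps({
        "generated_on": date.today().isoformat(),
        "ai_systems": ai_systems,
        "training_records": training_records,
    }, indent=2)

# Archive a fresh snapshot regularly so the file documents an ongoing process,
# not a one-off effort.
```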

Penalties for non-compliance

Article 4 falls under the intermediate tier of penalties provided for by the AI Act. Non-compliance can result in fines of up to EUR 15 million or 3% of annual worldwide turnover (whichever is higher).

But beyond fines, the reputational risk is considerable. Failing to have trained your teams in AI when the regulation requires it is a signal of organisational immaturity that neither clients, nor regulators, nor partners will tolerate for long.

What sets Article 4 apart from other obligations

Article 4 is unique in the European regulatory landscape for several reasons:

  • It concerns people, not systems. Most AI Act obligations relate to the technical characteristics of AI systems. Article 4, by contrast, concerns the competencies of the human beings who use them.
  • It is cross-cutting. Whatever the risk level of your AI systems, the literacy obligation applies.
  • It is already in force. No additional transition period.

It is also the most accessible provision for starting your AI Act compliance journey. Before you even classify your systems or document your models, you can — and must — begin with upskilling your teams.

What this means for you

Article 4 makes training all staff a legal obligation, not an option. Businesses must be able to demonstrate, with supporting evidence, that their personnel have achieved a sufficient level of AI literacy — through measurable assessments, rigorous documentation and continuous follow-up. The obligation has been in force since 2 February 2025. Every day without action is a day of non-compliance.