AI Act penalties: fines of up to EUR 35 million

AI Act penalties can reach EUR 35 million or 7% of worldwide turnover. Breakdown of fines by type of infringement and comparison with the GDPR.

An unprecedented penalty regime

The European Regulation on Artificial Intelligence (Regulation 2024/1689) introduces one of the most stringent penalty systems in European digital law. With fines of up to EUR 35 million or 7% of worldwide turnover, the AI Act sends a clear signal: non-compliance on AI is not an abstract risk — it is a major financial risk.

Article 99 — Regulation (EU) 2024/1689

Member States shall lay down the rules on penalties and other enforcement measures, which may also include warnings and non-monetary measures, applicable to infringements of this Regulation by operators, and shall take all measures necessary to ensure that they are implemented properly and effectively […].

The three tiers of fines

Article 99 of the regulation defines three tiers of penalties, depending on the severity of the infringement.

Tier 1 — Prohibited practices: EUR 35 million or 7% of worldwide turnover

The highest level of penalty applies to violations of Article 5 (prohibited practices):

  • Social scoring
  • Subliminal manipulation or deceptive techniques
  • Exploitation of people’s vulnerabilities
  • Real-time biometric identification in public spaces (except for permitted exceptions)
  • Emotion recognition in the workplace and in education
  • Building facial databases through scraping

The fine can reach EUR 35 million or 7% of annual worldwide turnover, whichever is higher.

For a business with EUR 500 million in turnover, this could mean a fine of EUR 35 million. For a global group with EUR 10 billion in turnover, the theoretical fine rises to EUR 700 million.
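The "whichever is higher" rule behind these worked examples can be sketched in a few lines (a minimal illustration; the function and variable names are ours, not the regulation's — integer euros avoid floating-point rounding):

```python
def max_fine(turnover_eur: int, fixed_cap_eur: int, pct: int) -> int:
    """Theoretical maximum fine: the higher of the fixed amount and
    the given percentage of annual worldwide turnover (integer euros)."""
    return max(fixed_cap_eur, turnover_eur * pct // 100)

# Tier 1 (prohibited practices): EUR 35 million or 7%, whichever is higher
print(max_fine(500_000_000, 35_000_000, 7))     # 35000000 (7% exactly equals the cap here)
print(max_fine(10_000_000_000, 35_000_000, 7))  # 700000000
```

The same function covers tiers 2 and 3 by swapping in EUR 15 million / 3% or EUR 7.5 million / 1.5%.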

Tier 2 — Non-compliance with high-risk systems: EUR 15 million or 3% of worldwide turnover

The second tier covers violations of obligations relating to high-risk AI systems and general-purpose AI models:

  • Failure to comply with risk management requirements (Article 9)
  • Deficient data governance (Article 10)
  • Absence of technical documentation (Article 11)
  • Insufficient transparency towards users (Article 13)
  • Inadequate human oversight (Article 14)
  • Non-compliance of general-purpose AI models (Articles 51 to 56)
  • Failure to comply with the AI literacy obligation (Article 4)

This last point is crucial: failure to train staff in AI falls under this tier. A business that has taken no measures to ensure a sufficient level of AI literacy for its personnel faces fines of up to EUR 15 million or 3% of worldwide turnover.

Tier 3 — Incorrect information: EUR 7.5 million or 1.5% of worldwide turnover

The third tier applies when a business provides inaccurate, incomplete or misleading information to competent authorities or notified bodies:

  • False declarations of conformity
  • Falsified or incomplete technical documents
  • Inaccurate responses to requests for information from authorities
  • Concealment of incidents or malfunctions

The fine can reach EUR 7.5 million or 1.5% of annual worldwide turnover.

Comparison with the GDPR: heavier fines

To appreciate the severity of the AI Act regime, the comparison with the GDPR is instructive:

| Criterion | GDPR | AI Act |
| --- | --- | --- |
| Maximum fine (fixed amount) | EUR 20 million | EUR 35 million |
| Maximum fine (% of turnover) | 4% of worldwide turnover | 7% of worldwide turnover |
| Number of tiers | 2 | 3 |
| In force since | May 2018 | Progressive (2025-2027) |
| Supervisory authority | Data protection authorities | National authorities + European AI Office |

The European legislator deliberately set AI Act penalties above those of the GDPR. The message is clear: risks associated with poorly managed AI are considered at least as serious as those related to data protection — and businesses will be penalised accordingly.

As a reminder, GDPR penalties have not remained theoretical. In 2023, Meta received a EUR 1.2 billion fine from the Irish authority. Amazon was fined EUR 746 million in Luxembourg in 2021. The ICO in the United Kingdom has also levied significant fines under the UK GDPR. European authorities have shown that they do not hesitate to apply maximum penalties to major corporations.

Special regimes for SMEs

The regulation provides for adapted treatment for small businesses. Fines must be "effective, proportionate and dissuasive" (Article 99(1)), and for SMEs and start-ups Article 99(6) caps each fine at the lower of the fixed amount and the turnover percentage, with authorities also taking account of the business's economic viability.

In practice:

  • For SMEs, the lower of the fixed amount (EUR 35 million, EUR 15 million or EUR 7.5 million) and the turnover percentage applies (Article 99(6)) — in practice, the turnover percentage usually sets the ceiling for a small business
  • Authorities must consider the size of the business, the gravity of the infringement, whether it was intentional, and the measures taken to mitigate damage
  • Regulatory sandboxes (Article 57) offer a framework in which SMEs can test their AI systems under real conditions with support from the authorities

Who inspects and who sanctions?

National competent authorities

Each Member State must designate one or more national competent authorities responsible for market surveillance and enforcement of the regulation.

Across the EU:

  • Data protection authorities (such as the CNIL in France) — already competent on personal data, with a natural expertise in AI systems processing personal data
  • Sector-specific regulators (financial regulators for banking, health authorities for healthcare) — for high-risk AI systems in their respective domains
  • Dedicated AI authorities or new bodies — for general AI model oversight

In the United Kingdom, the AI Act does not apply domestically, but UK organisations serving the EU market must still comply with it. Domestic AI oversight is shared between the ICO (Information Commissioner's Office) for data-related AI matters, the FCA (Financial Conduct Authority) for AI in financial services, the MHRA for AI medical devices, and the UK AI Safety Institute for frontier-model oversight. Public-sector automated decision-making, such as HMRC's systems, is also drawing increasing regulatory scrutiny.

The European AI Office

Established within the European Commission, the European AI Office plays a coordinating role and has direct powers over general-purpose AI models. It can:

  • Assess the compliance of general-purpose AI models
  • Request corrective measures from providers
  • Impose fines for violations relating to general-purpose models

The importance of proving compliance

Beyond fines, the AI Act mechanism rests on a fundamental principle: the burden of proof lies with the business. It is not for the authority to demonstrate that you are non-compliant — it is for you to demonstrate that you are compliant.

What “proving compliance” means in practice

  1. Technical documentation: for each high-risk AI system, a complete dossier describing the system, its purposes, its performance, its limitations and the training data used
  2. Compliance records: history of conformity assessments, internal audits and updates
  3. Usage logs: automated logs tracking decisions made by AI systems (minimum retention of six months)
  4. Training evidence: certificates, assessment results, participation rates — demonstrating that staff have achieved the level of literacy required by Article 4
  5. Reporting procedures: documented mechanisms allowing users to report malfunctions or problematic outputs

A measurable and traceable literacy score for each staff member constitutes particularly strong evidence for Article 4 compliance. Similarly, a complete record of completed training, with dates, content and results, allows you to prove compliance objectively.
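To make this record-keeping concrete, here is one possible shape for such a training register (a hypothetical sketch — the field names and the sample entry are ours; the AI Act does not mandate any particular format):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TrainingRecord:
    # Illustrative fields: who was trained, on what, when, with what result
    staff_member: str
    course: str
    completed_on: date
    score_pct: int  # assessment result, 0-100

@dataclass
class LiteracyRegister:
    records: list[TrainingRecord] = field(default_factory=list)

    def evidence_for(self, person: str) -> list[TrainingRecord]:
        """Dated, scored evidence for one person -- the kind of objective
        proof of Article 4 compliance the text above recommends keeping."""
        return [r for r in self.records if r.staff_member == person]

register = LiteracyRegister()
register.records.append(
    TrainingRecord("A. Martin", "AI Act fundamentals", date(2025, 6, 12), 87)
)
print(len(register.evidence_for("A. Martin")))  # 1
```

The point is not the code itself but the properties it captures: every entry is dated, attributable to a person, and carries a measurable result — exactly what an authority would ask to see.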

The aggravating factor: no measures at all

In the event of an inspection, the most unfavourable situation is not having an imperfect compliance programme — it is having none at all. Authorities will take account of the efforts made by the business. Having initiated a structured approach, even an incomplete one, is always preferable to total inaction.


The risk timeline

Penalties apply progressively, in line with the entry into force of the various obligations:

| Date | Obligations in force | Applicable penalties |
| --- | --- | --- |
| 2 February 2025 | Prohibited practices (Article 5) | Up to EUR 35 million / 7% |
| 2 August 2025 | AI literacy (Article 4) + general-purpose models | Up to EUR 15 million / 3% |
| 2 August 2026 | High-risk AI systems | Up to EUR 15 million / 3% |
| 2 August 2027 | High-risk systems integrated into regulated products | Up to EUR 15 million / 3% |

The first two deadlines have already passed. Businesses that have not yet taken action on prohibited practices and AI literacy are already in the risk zone.
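The staggered schedule lends itself to a simple check of which obligations already apply on a given date (a small illustration; the dates follow the regulation's application schedule, and the labels are our shorthand):

```python
from datetime import date

# Application milestones of the AI Act's staggered schedule
MILESTONES = [
    (date(2025, 2, 2), "Prohibited practices (Article 5)"),
    (date(2025, 8, 2), "AI literacy (Article 4) + general-purpose models"),
    (date(2026, 8, 2), "High-risk AI systems"),
    (date(2027, 8, 2), "High-risk systems in regulated products"),
]

def obligations_in_force(today: date) -> list[str]:
    """Return the obligations whose application date has already passed."""
    return [label for d, label in MILESTONES if today >= d]

print(obligations_in_force(date(2025, 9, 1)))
# ['Prohibited practices (Article 5)', 'AI literacy (Article 4) + general-purpose models']
```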

Beyond fines: indirect risks

Financial penalties are only the visible part. Non-compliance with the AI Act exposes businesses to other equally significant risks:

  • Reputational risk: being publicly identified as non-compliant with AI regulation sends a disastrous signal to customers, partners and investors
  • Commercial risk: compliant businesses will require their suppliers and partners to be compliant too — a cascade effect comparable to that of the GDPR
  • Operational risk: authorities can order the withdrawal from the market of a non-compliant AI system or prohibit its use — potentially paralysing critical business processes
  • Litigation risk: individuals affected by a non-compliant AI system may bring legal proceedings, with the associated legal and compensation costs

What this means for you

The AI Act introduces the heaviest penalties in European digital law: EUR 35 million or 7% of worldwide turnover for prohibited practices, EUR 15 million for high-risk systems and failure to train staff. The key to minimising risk: prove compliance through rigorous documentation, tracked training and measurable assessments. The first deadlines have already passed — every day without action increases exposure.