The AI Act classifies AI systems into four risk levels. Recruitment, credit, health: find out whether your AI tools are considered high-risk.
The European Regulation on Artificial Intelligence (Regulation 2024/1689) rests on a fundamental principle: obligations are proportionate to the level of risk. The more an AI system can affect people’s lives, health, fundamental rights or safety, the stricter the requirements.
This graduated approach sets the AI Act apart from more binary regulations. It avoids stifling innovation in low-risk uses while firmly regulating the most sensitive applications.
Recital 26 — Regulation (EU) 2024/1689
AI systems presenting a high level of risk to the health, safety or fundamental rights of natural persons should be subject to strict requirements before being placed on the Union market.
Certain uses of AI are outright banned (Article 5). Social scoring, subliminal manipulation, exploitation of people’s vulnerabilities and real-time biometric identification in public spaces (with limited exceptions) have been prohibited since 2 February 2025.
This is the category affecting the greatest number of businesses. High-risk AI systems are defined in Articles 6 and 7 and listed in detail in Annex III of the regulation. The associated obligations are substantial: conformity assessment, technical documentation, human oversight, risk management, data governance.
Systems such as chatbots, content generators (deepfakes, text) or emotion-recognition systems not covered by the prohibition are subject to transparency obligations. Users must be informed that they are interacting with an AI or that content has been generated by an AI.
The vast majority of AI systems fall into this category: spam filters, product recommendations, spell checkers. The AI Act imposes no specific constraints on them, although the Article 4 obligation on AI literacy still applies to all deployers.
Annex III of the regulation lists the domains in which an AI system is considered high-risk. Here are the concrete cases relevant to most businesses.
This is the area that catches the most organisations off guard. Annex III, point 4 covers AI systems used in:
Annex III, point 4 — Regulation (EU) 2024/1689
Employment, management of workers and access to self-employment, in particular for the recruitment and selection of persons, for making decisions concerning the terms of the work-related relationship, promotion and termination, and for the allocation of tasks, monitoring or evaluation of persons in work-related contractual relationships.
Specifically, CV-screening and candidate-ranking tools, recruitment chatbots that filter applicants, software allocating tasks or monitoring workers’ performance, and systems informing promotion or termination decisions are all considered high-risk.
If your HR department uses an automated CV-sorting tool or a recruitment chatbot, it is likely a high-risk system within the meaning of the AI Act.
Annex III, point 5(b) covers AI systems used to evaluate the creditworthiness of natural persons or to establish their credit score, with the exception of systems used to detect financial fraud.
Any bank or insurer using an AI model to decide whether to grant or refuse credit, or to set a premium for life or health insurance (covered by point 5(c)), is affected. Providers and deployers established outside the EU are also in scope when the output of their systems is used in the Union (Article 2). Fraud detection is explicitly excluded from the high-risk category, but beware: creditworthiness scoring that goes beyond simple fraud detection does fall within it.
Health follows a different route: AI systems that are medical devices, or safety components of medical devices, under European medical-device rules are high-risk under Article 6(1), which refers to the Union harmonisation legislation listed in Annex I, including the Medical Devices Regulation (EU) 2017/745. This includes, for example, diagnostic-imaging analysis software and AI-based clinical decision-support tools.
Annex III, point 3 covers AI systems used in education and vocational training: determining access or admission, evaluating learning outcomes, assessing the level of education an individual will receive, and monitoring or detecting prohibited behaviour during exams.
Annex III, points 6, 7 and 8 cover law enforcement, migration, asylum and border-control management, and the administration of justice and democratic processes, respectively.
Annex III, point 2 covers safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas, heating and electricity.
Annex III, point 7 includes systems used for assessing the risk of irregular migration, examining visa and residence-permit applications, and identifying persons in the context of migration.
If one or more of your AI systems is classified as high-risk, here is what the regulation requires:
Before being placed on the market or put into service, the system must undergo a conformity assessment covering risk management (Article 9), data and data governance (Article 10), technical documentation (Article 11), record-keeping (Article 12), transparency and information for deployers (Article 13), human oversight (Article 14), and accuracy, robustness and cybersecurity (Article 15).
Businesses that use high-risk AI systems (deployers) have their own obligations under Article 26: operating the system in accordance with the provider’s instructions for use, assigning human oversight to competent and trained staff, ensuring that input data under their control is relevant, monitoring the system’s operation and reporting serious incidents, and retaining the automatically generated logs.
The first step towards compliance is knowing exactly which AI systems you use and in which category they fall.
List every tool incorporating AI in your organisation. Do not forget AI features embedded in your everyday software (CRM, office suites, email), generative AI used informally by teams, and AI components bundled into products supplied by your vendors.
For each tool identified, determine its risk category under the AI Act, your role with respect to it (provider or deployer), and whether its use case appears in Annex III.
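By way of illustration, here is a minimal Python sketch of such an inventory with a first-pass triage. The AISystem fields, the ANNEX_III_DOMAINS mapping, the vendor names and the classify helper are illustrative assumptions, not an official taxonomy; only a proper legal analysis of the regulation’s text is authoritative.

```python
from dataclasses import dataclass

# Simplified mapping of use-case domains to Annex III entries.
# Illustrative only: real classification requires legal analysis.
ANNEX_III_DOMAINS = {
    "recruitment": "Annex III, point 4",
    "worker management": "Annex III, point 4",
    "credit scoring": "Annex III, point 5(b)",
    "education": "Annex III, point 3",
    "critical infrastructure": "Annex III, point 2",
}

@dataclass
class AISystem:
    name: str       # e.g. "CV screening tool"
    vendor: str     # who supplies the system
    use_case: str   # the domain in which it is deployed
    role: str       # your role: "provider" or "deployer"

def classify(system: AISystem) -> str:
    """First-pass triage: flag systems whose use case appears in Annex III."""
    if system.use_case in ANNEX_III_DOMAINS:
        return f"high-risk candidate ({ANNEX_III_DOMAINS[system.use_case]})"
    return "not flagged: review manually for limited or minimal risk"

inventory = [
    AISystem("CV screening tool", "HRSoft (hypothetical)", "recruitment", "deployer"),
    AISystem("Spam filter", "MailCo (hypothetical)", "email filtering", "deployer"),
]

for s in inventory:
    print(f"{s.name}: {classify(s)}")
```

A flagged entry is a candidate for legal review, not a final classification; the point is simply to make the inventory explicit and queryable.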
For each high-risk system, compare your current situation against the requirements of Articles 9 to 15 and Article 26. Identify the gaps and prioritise corrective actions.
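As a rough illustration, the sketch below expresses this gap analysis as a set difference between the required controls (one label per article) and those assumed to be already in place; the control labels and the implemented set are hypothetical.

```python
# Provider-side requirements, one entry per article (Articles 9 to 15).
REQUIRED_CONTROLS = {
    "risk management (Art. 9)",
    "data and data governance (Art. 10)",
    "technical documentation (Art. 11)",
    "record-keeping (Art. 12)",
    "transparency to deployers (Art. 13)",
    "human oversight (Art. 14)",
    "accuracy, robustness, cybersecurity (Art. 15)",
}

# Hypothetical current state for one high-risk system.
implemented = {
    "technical documentation (Art. 11)",
    "human oversight (Art. 14)",
}

# The gap is simply what is required but not yet in place.
for control in sorted(REQUIRED_CONTROLS - implemented):
    print(f"missing: {control}")
```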
Build a compliance file for each system, including its classification, technical documentation, assessments carried out and control measures in place. This structured documentation will be your reference in the event of an audit.
For each high-risk system, those responsible for human oversight must have the necessary competencies (Article 14). This aligns with the Article 4 obligation on AI literacy, but with a higher standard: these individuals must understand the specific capabilities and limitations of the system they oversee.
General-purpose AI models (such as GPT-4, Claude, Gemini) are subject to specific provisions (Articles 51 to 56). When a general-purpose model is integrated into an AI system classified as high-risk, the high-risk obligations apply to the system as a whole — even if the underlying model is, in itself, a generalist tool.
This means that using ChatGPT to respond to candidates in a recruitment process can turn a “generalist” tool into a component of a high-risk system.
What this means for you
Classifying your AI systems is not a theoretical exercise — it is the cornerstone of your AI Act compliance. If you use AI in recruitment, performance evaluation, credit scoring or diagnostic support, you are most likely operating high-risk systems. Each affected system requires a conformity assessment, technical documentation, effective human oversight and trained personnel to ensure it. Start with the inventory. Today.