AI system classification: are you affected by the high-risk category?

The AI Act classifies AI systems into four risk levels. Recruitment, credit, health: find out whether your AI tools are considered high-risk.

The heart of the framework: a risk-based approach

The European Regulation on Artificial Intelligence (Regulation 2024/1689) rests on a fundamental principle: obligations are proportionate to the level of risk. The more an AI system can affect people’s lives, health, fundamental rights or safety, the stricter the requirements.

This graduated approach sets the AI Act apart from more binary regulations. It avoids stifling innovation in low-risk uses while firmly regulating the most sensitive applications.

Recital 26, Regulation (EU) 2024/1689

AI systems presenting a high level of risk to the health, safety or fundamental rights of natural persons should be subject to strict requirements before being placed on the Union market.

The four risk levels

1. Unacceptable risk — Prohibited systems

Certain uses of AI are outright banned (Article 5). Social scoring, subliminal manipulation, exploitation of people’s vulnerabilities and real-time remote biometric identification in publicly accessible spaces (with limited exceptions) have been prohibited since 2 February 2025.

2. High risk — Systems subject to reinforced obligations

This is the category affecting the greatest number of businesses. High-risk AI systems are defined in Articles 6 and 7 and listed in detail in Annex III of the regulation. The associated obligations are substantial: conformity assessment, technical documentation, human oversight, risk management, data governance.

3. Limited risk — Transparency obligations

Systems such as chatbots or content generators (deepfakes, text) are subject to transparency obligations: users must be informed that they are interacting with an AI or that content has been generated by an AI. Note that emotion-recognition systems, where not prohibited outright, are both subject to these transparency obligations and listed as high-risk (Annex III, point 1(c)).

4. Minimal risk — No specific obligations

The vast majority of AI systems fall here: spam filters, product recommendations, spell-checkers. The AI Act imposes no specific constraints on them, although the Article 4 obligation on AI literacy still applies to all providers and deployers.
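To fix the taxonomy in mind, here is a minimal sketch in Python that models the four tiers and maps a few of the example systems mentioned above onto them. The mapping is illustrative only, not a legal determination.

```python
# Illustrative sketch: the AI Act's four risk tiers as a simple enum,
# with hypothetical example systems mapped onto them.
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "reinforced obligations (Articles 6-7, Annex III)"
    LIMITED = "transparency obligations (Article 50)"
    MINIMAL = "no specific obligations (Article 4 literacy still applies)"

# Example mapping, drawn from the categories described above.
EXAMPLES = {
    "social scoring system": RiskLevel.UNACCEPTABLE,
    "CV screening algorithm": RiskLevel.HIGH,
    "customer-facing chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

for system, level in EXAMPLES.items():
    print(f"{system}: {level.name} -> {level.value}")
```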

Annex III: high-risk systems, sector by sector

Annex III of the regulation lists the domains in which an AI system is considered high-risk. Here are the concrete cases relevant to most businesses.

Human resources and recruitment

This is the area that catches the most organisations off guard. Annex III, point 4 covers AI systems used in:

Annex III, point 4, Regulation (EU) 2024/1689

Employment, management of workers and access to self-employment, in particular for the recruitment and selection of persons, for making decisions concerning the terms of the work-related relationship, promotion and termination, and for the allocation of tasks, monitoring or evaluation of persons in work-related contractual relationships.

Specifically, the following are considered high-risk:

  • Candidate scoring: any algorithm that automatically ranks, rates or filters applications
  • Automated CV analysis: tools that pre-select profiles based on automatic criteria
  • Pre-qualification chatbots: conversational assistants that evaluate candidates during the recruitment process
  • Performance evaluation systems: AI tools that analyse employee productivity or behaviour
  • Matching tools: algorithms that match profiles to positions

If your HR department uses an automated CV-sorting tool or a recruitment chatbot, it is likely a high-risk system within the meaning of the AI Act.

📄AI for HR: a complete guide to transforming HR functions

Finance and insurance

Annex III, points 5(b) and 5(c) cover AI systems used for:

  • Credit scoring: models that assess the creditworthiness of natural persons (with the exception of fraud detection)
  • Risk assessment in life and health insurance
  • Automated pricing based on individual profiles

Any bank or insurer using an AI model to decide whether to grant or refuse credit, or to set a life or health insurance premium, is affected. Fraud detection is explicitly excluded from the high-risk category, but beware: scoring systems that go beyond simple fraud detection may fall within it.

Healthcare

Healthcare AI is mostly caught through a different route: under Article 6(1), AI systems that are medical devices, or safety components of medical devices, within the meaning of the European medical-device regulations are automatically high-risk. Triage of emergency calls also appears in Annex III, point 5(d). In practice, this includes:

  • Diagnostic support: algorithms that analyse medical images or propose diagnoses
  • Automated triage: systems that direct patients towards care pathways
  • Monitoring systems: AI-enabled connected devices that track vital parameters
  • Prescribing support: tools that suggest treatments

Education and vocational training

Annex III, point 3 covers AI systems used for:

  • Admissions to educational establishments
  • Automated marking of examinations and assessments
  • Assessment of the appropriate level of education for an individual
  • Monitoring for prohibited conduct during examinations (automated invigilation)

Justice and law enforcement

Annex III, points 6 and 8 cover:

  • Recidivism risk assessment: systems used to evaluate the likelihood of a person re-offending
  • Polygraphs and emotion-detection tools during interrogations
  • Assessment of the reliability of evidence
  • Profiling in the context of crime prevention and detection

Critical infrastructure

Annex III, point 2 covers safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas, heating and electricity.

Migration and border control

Annex III, point 7 includes systems used for assessing the risk of irregular migration, examining visa and residence-permit applications, and identifying persons in the context of migration.

Concrete obligations for high-risk systems

If one or more of your AI systems is classified as high-risk, here is what the regulation requires:

Conformity assessment (Articles 9 to 15)

Before being placed on the market or put into service, the system must undergo a conformity assessment (Article 43) demonstrating that it meets the following requirements:

  1. Risk management system (Article 9): identification, analysis, estimation and mitigation of risks throughout the system’s lifecycle
  2. Data governance (Article 10): training datasets must be relevant, representative and as free from errors as possible
  3. Technical documentation (Article 11): a detailed description of the system, its purposes, performance and limitations
  4. Record-keeping (Article 12): automatic logging of events (logs) throughout the system’s operation
  5. Transparency and information (Article 13): clear instructions for deployers
  6. Human oversight (Article 14): the system must be designed to allow effective oversight by natural persons
  7. Accuracy, robustness and cybersecurity (Article 15)
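Item 4 above, record-keeping, is the most directly codeable of these requirements. Here is a minimal sketch of the kind of structured, timestamped event logging Article 12 points towards; the schema and the `log_event` helper are illustrative assumptions, not anything the regulation prescribes.

```python
# Sketch of Article 12-style event logging for a high-risk AI system.
# The record schema is an assumption; the regulation requires automatic
# recording of events, not this exact structure.
import json
from datetime import datetime, timezone

def log_event(logfile, system_id: str, event_type: str, details: dict) -> None:
    """Append one timestamped, machine-readable event record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,  # e.g. "inference", "override", "error"
        "details": details,
    }
    logfile.write(json.dumps(record) + "\n")

with open("ai_system_events.jsonl", "a", encoding="utf-8") as f:
    log_event(f, "cv-screening-v2", "inference",
              {"candidate_ref": "anon-123", "score": 0.72,
               "human_review_required": True})
```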

Specific obligations for deployers (Article 26)

Businesses that use high-risk AI systems (deployers) have their own obligations:

  • Use the system in accordance with the provider’s instructions
  • Ensure human oversight by competent and trained individuals
  • Monitor the system’s operation and report any malfunctions
  • Carry out a fundamental-rights impact assessment (for certain deployers)
  • Retain automatically generated logs for at least six months
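The six-month retention floor in the last bullet can be enforced mechanically. A sketch, assuming logs are stored as files in a `logs/` directory; the paths and threshold are assumptions to adapt to your own setup.

```python
# Sketch: flag log files as purgeable only once they are past the
# six-month retention floor of Article 26. Paths are assumptions.
from datetime import datetime, timedelta, timezone
from pathlib import Path

RETENTION = timedelta(days=183)  # at least six months

def purgeable(log_path: Path, now: datetime) -> bool:
    """True only if the file is safely past the retention floor."""
    modified = datetime.fromtimestamp(log_path.stat().st_mtime, tz=timezone.utc)
    return now - modified > RETENTION

now = datetime.now(timezone.utc)
for path in Path("logs").glob("*.jsonl"):
    if purgeable(path, now):
        print(f"{path} is past the retention floor and may be deleted")
```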
📄AI risk in business: how to identify and manage effectively

How to audit your AI tools

The first step towards compliance is knowing exactly which AI systems you use and in which category they fall.

1. Draw up a complete inventory

List every tool incorporating AI in your organisation. Do not forget:

  • Tools embedded in your existing software (CRM, ERP, HRIS)
  • SaaS subscriptions with AI features
  • Informal use by staff (AI assistants, generation tools)
  • Internal developments
  • Automated decision-making or scoring tools used in regulated processes, if applicable
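To make the inventory actionable, each entry should capture the fields you will need at the classification step. A minimal sketch; the field names are assumptions, not a prescribed schema.

```python
# Sketch of an AI-system inventory record. Fields are illustrative;
# adapt them to your own governance tooling.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    vendor: str                # or "internal" for in-house builds
    business_function: str     # e.g. "HR", "credit", "marketing"
    purpose: str               # what the system actually decides or produces
    affects_natural_persons: bool
    annex_iii_candidate: str   # e.g. "point 4 - employment", or "none"
    owner: str                 # the person accountable for this entry
    notes: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="CV pre-screening module",
        vendor="ExampleHRTech",  # hypothetical vendor
        business_function="HR",
        purpose="Ranks incoming applications against job criteria",
        affects_natural_persons=True,
        annex_iii_candidate="point 4 - employment",
        owner="Head of Recruitment",
    ),
]
print(f"{len(inventory)} system(s) inventoried")
```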

2. Classify each system

For each tool identified, determine:

  • Is it an AI system within the meaning of Article 3(1) of the regulation?
  • Does it fall within one of the categories in Annex III?
  • What is its risk level?
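These three questions can be turned into a rough triage helper, sketched below. It captures the order of the reasoning only; the actual determination under Articles 6 and 7 needs case-by-case legal review.

```python
# Sketch: first-pass triage for one inventoried tool. Deliberately
# simplified -- it does not replace the Article 6/7 analysis.

def triage(is_ai_system: bool, prohibited_practice: bool,
           annex_iii_match: bool, transparency_use: bool) -> str:
    """Return a rough risk classification for one tool."""
    if not is_ai_system:
        return "out of scope: not an AI system under Article 3(1)"
    if prohibited_practice:
        return "unacceptable risk: prohibited (Article 5)"
    if annex_iii_match:
        return "high-risk candidate: run the full Article 6/7 assessment"
    if transparency_use:
        return "limited risk: transparency obligations (Article 50)"
    return "minimal risk: no specific obligations (AI literacy still applies)"

print(triage(is_ai_system=True, prohibited_practice=False,
             annex_iii_match=True, transparency_use=False))
```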

3. Assess the compliance gap

For each high-risk system, compare your current situation against the requirements of Articles 9 to 15 and Article 26. Identify the gaps and prioritise corrective actions.
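In its simplest form, the gap assessment is a comparison between the required controls and the ones you already have. A sketch, with a deliberately condensed control list standing in for the full Articles 9 to 15 requirements:

```python
# Sketch: gap assessment against the Articles 9-15 requirements.
# The control list is condensed for illustration, not exhaustive.
REQUIRED_CONTROLS = {
    "risk management system (Art. 9)",
    "data governance (Art. 10)",
    "technical documentation (Art. 11)",
    "event logging (Art. 12)",
    "instructions for deployers (Art. 13)",
    "human oversight (Art. 14)",
    "accuracy, robustness, cybersecurity (Art. 15)",
}

# What a hypothetical HR screening tool has in place today.
in_place = {
    "technical documentation (Art. 11)",
    "event logging (Art. 12)",
}

gaps = sorted(REQUIRED_CONTROLS - in_place)
print(f"{len(gaps)} gap(s) to close:")
for gap in gaps:
    print(f"  - {gap}")
```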

4. Document and trace

Build a compliance file for each system, including its classification, technical documentation, assessments carried out and control measures in place. This structured documentation will be your reference in the event of an audit.

5. Train the users

For each high-risk system, the people assigned to human oversight must have the necessary competence, training and authority (Articles 14 and 26). This aligns with the Article 4 obligation on AI literacy, but with a higher standard: these individuals must understand the specific capabilities and limitations of the system they oversee.

The case of general-purpose AI systems

General-purpose AI models (such as GPT-4, Claude, Gemini) are subject to specific provisions (Articles 51 to 56). When a general-purpose model is integrated into an AI system classified as high-risk, the high-risk obligations apply to the system as a whole — even if the underlying model is, in itself, a generalist tool.

This means that using ChatGPT to respond to candidates in a recruitment process can turn a “generalist” tool into a component of a high-risk system.

What this means for you

Classifying your AI systems is not a theoretical exercise — it is the cornerstone of your AI Act compliance. If you use AI in recruitment, performance evaluation, credit scoring or diagnostic support, you are most likely operating high-risk systems. Each affected system requires a conformity assessment, technical documentation, effective human oversight and trained personnel to ensure it. Start with the inventory. Today.