Prohibited AI practices: what your business can no longer do

The AI Act bans certain uses of AI from February 2025. Social scoring, manipulation, emotion recognition: the full picture.

The red lines of artificial intelligence

Before even addressing high-risk systems, transparency obligations or AI literacy, the European Regulation on Artificial Intelligence (Regulation 2024/1689) draws absolute red lines. Article 5 lists the AI practices that are outright prohibited in the European Union.

These prohibitions have been in force since 2 February 2025. These are not future deadlines: they apply now.

Article 5(1), Regulation (EU) 2024/1689

The following artificial intelligence practices shall be prohibited: […] the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques […] or purposefully manipulative or deceptive techniques, with the objective or the effect of materially distorting the behaviour of a person […] in a manner that causes or is reasonably likely to cause that person or another person significant harm.

The six categories of prohibited practices

1. Social scoring

The AI Act prohibits systems that evaluate or classify natural persons on the basis of their social behaviour or personal characteristics, where that score leads to detrimental treatment in contexts unrelated to the data collection, or disproportionate to the behaviour in question.

Important: contrary to common belief, this prohibition does not only target governments. It also applies to private actors. A business that built a “reliability score” for its customers by aggregating payment data, online behaviour and customer-service interactions — and used that score to refuse services — would be in breach.

Concrete examples of practices now illegal:

  • An insurer aggregating social-media data to assess a customer’s “behavioural risk profile”
  • A landlord using a score combining payment history, online activity and geolocation data to filter tenants
  • An employer building an automated “engagement score” that conditions access to benefits
  • A bank aggregating behavioural data from multiple contexts to build a customer “reliability score” used to restrict services

2. Subliminal manipulation and deceptive techniques

Any AI system designed to alter a person’s behaviour without their awareness is prohibited. This covers subliminal techniques (stimuli imperceptible to conscious awareness) and deliberately manipulative or deceptive techniques.

Examples:

  • A system that adapts an e-commerce interface to exploit cognitive biases identified by AI (AI-powered dark patterns)
  • An automated negotiation tool that analyses micro-expressions in real time to manipulate the other party
  • A dynamic-pricing system that exploits the detected emotional state of the user

3. Exploitation of vulnerabilities

AI systems that exploit vulnerabilities related to age, disability or social or economic situation to materially distort a person’s behaviour are prohibited.

Examples:

  • A commercial chatbot specifically targeting elderly people with persuasion techniques adapted to their cognitive vulnerabilities
  • A targeted advertising system aimed at people in financial distress to offer them consumer credit
  • An online game using AI to identify and exploit addictive behaviours in minors

4. Real-time biometric identification in public spaces

The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes is prohibited, with three strictly circumscribed exceptions:

  • The targeted search for victims of kidnapping, trafficking in human beings or sexual exploitation
  • The prevention of a specific and imminent terrorist threat
  • The location or identification of a person suspected of having committed certain serious offences

Even in these cases, use requires prior authorisation by a judicial authority or an independent administrative authority, and is subject to strict safeguards.

5. Emotion recognition in the workplace and in education

Article 5(1), point (f), Regulation (EU) 2024/1689

[Prohibited is] the placing on the market, the putting into service or the use of AI systems to infer emotions of a natural person in the areas of the workplace and education, except where the use of the AI system is intended to be put in place or placed on the market for medical or safety reasons.

This prohibition is particularly relevant for businesses. It covers:

  • Emotion-analysis tools in video conferencing: systems that analyse participants’ facial expressions during meetings
  • Emotional-surveillance systems: cameras or software that detect stress, frustration or disengagement among employees
  • Emotion analysis in training: tools that measure learners’ emotional state to adapt content

The only exceptions: uses for medical purposes (e.g. detecting pain in non-communicative patients) or safety purposes (e.g. detecting fatigue in professional drivers).

6. Building facial databases through scraping

The AI Act prohibits the creation or expansion of facial-recognition databases through the untargeted collection of images from the internet or CCTV footage. This is a direct response to the practices of companies such as Clearview AI, which had built a database of several billion faces by scraping public photos.

📄AI risk in business: how to identify and manage effectively

How to check whether your business is in breach

Most businesses instinctively think these prohibitions do not concern them. But certain practices, particularly in HR and marketing, can come dangerously close.

Verification checklist

Ask yourself these questions:

  • Scoring and classification: Do you assign automated scores to individuals (customers, employees, candidates) that combine data from different contexts?
  • Persuasion and personalisation: Do your recommendation or marketing systems exploit identified vulnerabilities (age, financial situation, emotional state)?
  • Emotional surveillance: Do you use tools that analyse the facial expressions, voice or behaviour of your staff for purposes other than medical or safety?
  • Biometrics: Do you use facial recognition on your premises? If so, under what conditions?
  • Facial data: Have your AI suppliers built their training databases by scraping images from the internet?

If the answer to any of these questions is “yes” or “possibly”, a thorough analysis is required.
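For teams that want to run this check systematically (for example across business units), the checklist above can be sketched as a small triage helper. This is an illustrative sketch, not legal advice: the question texts paraphrase the article, and the area names and function are hypothetical.

```python
# Illustrative triage sketch of the Article 5 verification checklist.
# Any "yes" or "possibly" answer flags the area for thorough legal analysis.

CHECKLIST = {
    "scoring": "Automated scores on individuals combining data from different contexts?",
    "persuasion": "Recommendation/marketing systems exploiting identified vulnerabilities?",
    "emotion": "Analysis of staff expressions, voice or behaviour beyond medical/safety uses?",
    "biometrics": "Facial recognition used on your premises?",
    "facial_data": "AI suppliers trained on facial images scraped from the internet?",
}

def areas_needing_review(answers: dict[str, str]) -> list[str]:
    """Return the checklist areas answered 'yes' or 'possibly'."""
    return [area for area, answer in answers.items()
            if answer.strip().lower() in {"yes", "possibly"}]

# Example self-assessment for one business unit:
answers = {"scoring": "no", "persuasion": "possibly", "emotion": "no",
           "biometrics": "yes", "facial_data": "no"}
print(areas_needing_review(answers))  # ['persuasion', 'biometrics']
```

A flagged area does not mean the practice is prohibited, only that it sits close enough to an Article 5 red line to warrant a documented legal assessment.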

The role of training

Staff awareness of these prohibitions is essential. The AI literacy obligation (Article 4) necessarily includes understanding what is prohibited. A staff member who does not know that using emotion recognition in a professional context is illegal cannot report non-compliant use.

Training your teams on these prohibitions is not optional: it is a precondition for any responsible AI governance.

📄AI Act Article 4: the AI training obligation explained

Penalties

Prohibited practices are subject to the highest tier of penalties under the AI Act: up to EUR 35 million or 7% of annual worldwide turnover, whichever is higher. These are the heaviest fines ever set by European digital regulation — higher than those under the GDPR.
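The "whichever is higher" rule means the EUR 35 million figure acts as a floor, not a ceiling, for large companies. A minimal arithmetic sketch (not legal advice; the function name is illustrative):

```python
# Upper bound of the fine for a prohibited practice under the AI Act:
# the HIGHER of EUR 35 million and 7% of total worldwide annual turnover.

FLAT_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07

def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Maximum fine for an Article 5 infringement, in euros."""
    return max(FLAT_CAP_EUR, TURNOVER_RATE * annual_worldwide_turnover_eur)

# EUR 200m turnover: 7% = EUR 14m, so the EUR 35m floor applies.
print(max_fine_eur(200_000_000))    # 35000000
# EUR 1bn turnover: 7% = EUR 70m, which exceeds the floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

By comparison, the GDPR's top tier is EUR 20 million or 4% of worldwide turnover.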

What this means for you

The prohibitions in Article 5 have been in force since 2 February 2025. Social scoring, AI-driven manipulation, emotion recognition in the workplace: these practices are now illegal, with penalties of up to EUR 35 million. The challenge for businesses is twofold: verify immediately that no AI tool in use crosses these red lines, and train your teams to recognise prohibited uses before an incident occurs.