The AI Act bans certain uses of AI from February 2025. Social scoring, manipulation, emotion recognition: the full picture.
Before even addressing high-risk systems, transparency obligations or AI literacy, the European Regulation on Artificial Intelligence (Regulation 2024/1689) draws absolute red lines. Article 5 lists the AI practices that are outright prohibited in the European Union.
These prohibitions have been in force since 2 February 2025. These are not future deadlines: they apply now.
Article 5, paragraph 1 — Regulation (EU) 2024/1689
The following artificial intelligence practices shall be prohibited: […] the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques […] or purposefully manipulative or deceptive techniques, with the objective or the effect of materially distorting the behaviour of a person […] in a manner that causes or is reasonably likely to cause that person or another person significant harm.
The AI Act prohibits systems that evaluate or classify natural persons on the basis of their social behaviour or personal characteristics, where that score leads to detrimental treatment in contexts unrelated to the data collection, or disproportionate to the behaviour in question.
Important: contrary to common belief, this prohibition does not only target governments. It also applies to private actors. A business that built a “reliability score” for its customers by aggregating payment data, online behaviour and customer-service interactions — and used that score to refuse services — would be in breach.
Concrete examples of practices now illegal:
Any AI system designed to alter a person’s behaviour without their awareness is prohibited. This covers subliminal techniques (stimuli imperceptible to conscious awareness) and deliberately manipulative or deceptive techniques.
Examples: audio or visual stimuli inserted below the threshold of conscious perception to steer purchasing decisions, or a chatbot that deceives users about its nature in order to push them towards choices they would not otherwise have made.
AI systems that exploit vulnerabilities related to age, disability or social or economic situation to materially distort a person’s behaviour are prohibited.
Examples: a connected toy with a voice assistant that encourages children to adopt dangerous behaviour, or marketing that deliberately targets people in financial distress with predatory offers.
The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes is prohibited, with three strictly circumscribed exceptions: the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation, and the search for missing persons; the prevention of a specific, substantial and imminent threat to the life or physical safety of persons, or of a genuine and present or genuinely foreseeable threat of a terrorist attack; and the localisation or identification of a person suspected of certain serious criminal offences.
Even in these cases, use requires prior authorisation by a judicial authority or an independent administrative authority, and is subject to strict safeguards.
Article 5, paragraph 1, point f — Regulation (EU) 2024/1689
[Prohibited is] the placing on the market, the putting into service or the use of AI systems to infer emotions of a natural person in the areas of the workplace and education, except where the use of the AI system is intended to be put in place or placed on the market for medical or safety reasons.
This prohibition is particularly relevant for businesses. It covers, for example, AI that analyses candidates' emotions during video interviews, tools that monitor employees' mood or stress through webcam or voice analysis, and systems that track pupils' attention in the classroom.
The only exceptions: uses for medical purposes (e.g. detecting pain in non-communicative patients) or safety purposes (e.g. detecting fatigue in professional drivers).
The AI Act prohibits the creation or expansion of facial-recognition databases through the untargeted collection of images from the internet or CCTV footage. This is a direct response to the practices of companies such as Clearview AI, which had built a database of several billion faces by scraping public photos.
Most businesses instinctively think these prohibitions do not concern them. But certain practices, particularly in HR and marketing, can come dangerously close.
Ask yourself these questions: does any tool in use score customers or employees on the basis of their behaviour or personal characteristics? Does any system analyse the emotions of employees, candidates or students? Do your marketing tools target vulnerabilities linked to age, disability or economic situation? Does any tool build on or feed a facial-recognition database assembled by scraping?
If the answer to any of these questions is “yes” or “possibly”, a thorough analysis is required.
Staff awareness of these prohibitions is essential. The AI literacy obligation (Article 4) necessarily includes understanding what is prohibited. A staff member who does not know that using emotion recognition in a professional context is illegal cannot report non-compliant use.
Training your teams on these prohibitions is not optional — it is a precondition for any responsible AI governance. In the UK, the ICO and the UK AI Safety Institute have reinforced this message, emphasising that organisations must equip their workforce to identify and report non-compliant AI uses.
Prohibited practices are subject to the highest tier of penalties under the AI Act: up to EUR 35 million or 7% of annual worldwide turnover, whichever is higher. These are the heaviest fines ever set by European digital regulation, higher than those under the GDPR.
What this means for you
The prohibitions in Article 5 have been in force since 2 February 2025. Social scoring, AI-driven manipulation, emotion recognition in the workplace: these practices are now illegal, with penalties of up to EUR 35 million. The challenge for businesses is twofold: verify immediately that no AI tool in use crosses these red lines, and train your teams to recognise prohibited uses before an incident occurs.