The European Data Protection Board’s 2025 enforcement report recorded over €2.1 billion in GDPR fines, a growing share of which involves AI systems processing personal data without adequate safeguards. For data controllers deploying AI, GDPR compliance is not optional, and it is not something that can be bolted on after deployment.
This guide covers what data controllers need to know about AI GDPR compliance: lawful basis selection, Data Protection Impact Assessments, automated decision-making under Article 22, data subject rights, and vendor management for third-party AI tools.
Key takeaways
- Data controllers bear primary GDPR responsibility for AI systems — even when the AI is provided by a third-party vendor
- Each AI processing activity requires its own documented lawful basis under Article 6
- DPIAs are mandatory for most AI systems and must be completed before processing begins
- Article 22 restricts fully automated decisions with legal or significant effects — human oversight is often required
- Vendor contracts must include GDPR-compliant data processing agreements with clear allocation of responsibilities
Lawful basis for AI processing: getting it right
Every AI system that processes personal data needs a lawful basis under GDPR Article 6. The choice matters because it determines what obligations you carry and what rights data subjects can exercise.
Legitimate interest
For most enterprise AI use cases — fraud detection, customer service automation, internal analytics — legitimate interest (Article 6(1)(f)) is the most practical basis. But it requires a documented Legitimate Interest Assessment (LIA) covering three tests:
- Purpose test. What specific business interest does the AI processing serve?
- Necessity test. Is AI processing genuinely necessary for that purpose, or could you achieve the same result with less intrusive means?
- Balancing test. Are your interests overridden by the data subjects’ rights, interests, and reasonable expectations? If they are, legitimate interest fails.
The balancing test is where most organisations stumble. If your AI system profiles employees or customers in ways they would not reasonably expect, legitimate interest is unlikely to hold up under scrutiny.
Consent
Consent works well for specific, bounded AI processing — for example, opting into an AI-powered recommendation engine. It does not work well for AI training on large datasets, where obtaining and managing specific, informed consent from every data subject is impractical.
Remember: under GDPR, consent must be freely given, specific, informed, and unambiguous. “We may use AI” buried in a privacy policy does not meet that standard.
Contract performance and legal obligation
Where AI processing is necessary to perform a contract (e.g., AI-assisted underwriting for an insurance policy the data subject has applied for) or to comply with a legal obligation (e.g., AI-driven anti-money laundering screening), these bases may apply. The key word is “necessary” — not merely convenient or efficient.
Map every AI system in your organisation to a specific lawful basis. Do not apply a blanket legitimate interest claim across all AI processing. Regulators expect granular, documented analysis for each processing activity. See our guide to AI governance frameworks for structuring this process.
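One lightweight way to keep that mapping auditable is a simple processing-activity register with an automated completeness check. The sketch below is illustrative Python under stated assumptions, not a compliance tool: the activity names, basis labels, and LIA reference field are hypothetical.

```python
from dataclasses import dataclass

# The six Article 6(1) lawful bases, as shorthand labels (our naming convention)
LAWFUL_BASES = {
    "consent", "contract", "legal_obligation",
    "vital_interests", "public_task", "legitimate_interest",
}

@dataclass
class ProcessingActivity:
    """One AI processing activity and its documented Article 6 basis."""
    name: str
    lawful_basis: str
    lia_reference: str = ""  # link to the documented LIA, if basis is legitimate interest

def validate(register: list) -> list:
    """Return a list of gaps: unknown bases, or legitimate interest without an LIA."""
    gaps = []
    for activity in register:
        if activity.lawful_basis not in LAWFUL_BASES:
            gaps.append(f"{activity.name}: unknown basis '{activity.lawful_basis}'")
        elif activity.lawful_basis == "legitimate_interest" and not activity.lia_reference:
            gaps.append(f"{activity.name}: legitimate interest claimed without a documented LIA")
    return gaps

register = [
    ProcessingActivity("fraud-detection-model", "legitimate_interest", "LIA-2025-014"),
    ProcessingActivity("recommendation-engine", "legitimate_interest"),  # missing LIA
]
print(validate(register))
```

Even a register this simple makes the regulator’s expectation concrete: every activity has its own row, and a blanket claim shows up immediately as a gap.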
€2.1B
in GDPR fines recorded in 2025, with AI-related enforcement actions rising sharply
Source: EDPB Annual Enforcement Report, 2025
Data Protection Impact Assessments for AI
Under Article 35, a DPIA is required when processing is likely to result in a high risk to the rights and freedoms of individuals. AI systems almost always meet this threshold due to their use of innovative technology, automated decision-making, or large-scale data processing.
When a DPIA is required
The ICO and EDPB have made clear that a DPIA is mandatory for AI systems that involve:
- Automated decision-making with legal or significant effects
- Systematic monitoring or profiling of individuals
- Processing of special category data (health, biometrics, race)
- Large-scale processing of personal data
- Combining datasets from multiple sources
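A screening step over these triggers can be encoded as a pre-deployment gate. The following is an illustrative Python sketch of the five criteria listed above, not ICO or EDPB tooling; the field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Screening answers for one AI system, mirroring the DPIA trigger list."""
    automated_decisions_with_significant_effects: bool = False
    systematic_monitoring_or_profiling: bool = False
    special_category_data: bool = False
    large_scale_processing: bool = False
    combines_multiple_datasets: bool = False

def dpia_required(profile: AISystemProfile) -> bool:
    """Any single trigger is enough to require a DPIA before processing starts."""
    return any(vars(profile).values())

chatbot = AISystemProfile(large_scale_processing=True)
print(dpia_required(chatbot))
```

The value of a gate like this is procedural: it forces the screening questions to be answered, and recorded, before deployment rather than after.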
What a good AI DPIA covers
A DPIA for an AI system should go beyond a standard data protection assessment:
Data flows and sources. Where does the personal data come from? How is it pre-processed? What data enters the model, and what outputs are generated?
Proportionality. Could the objective be achieved with less personal data, with anonymised data, or without AI at all?
Bias and accuracy risks. AI systems can produce discriminatory outcomes or inaccurate results about identifiable individuals. Your DPIA must assess these risks and document mitigations — regular bias audits, accuracy monitoring, human review processes.
Transparency measures. How will data subjects be informed about AI processing? Can you explain, in plain language, how the AI system uses their data?
Residual risk. After mitigations, what risks remain? If residual risk is high, you must consult your supervisory authority before processing begins.
Conduct DPIAs before deployment, not after. A retrospective DPIA is better than none, but regulators view it as evidence that data protection was an afterthought. For a structured approach to AI risk assessment, integrate your DPIA process with your broader risk framework.
Automated decision-making under Article 22
Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal effects or similarly significantly affect them. This is one of the most practically consequential provisions for AI deployments.
What triggers Article 22
Article 22 applies when both of the following are met:
- The decision is based solely on automated processing, including profiling, with no meaningful human involvement
- The decision produces legal effects (e.g., denial of credit, termination of a contract) or similarly significantly affects the individual (e.g., differential pricing, recruitment rejection)
Common AI systems caught by Article 22 include credit scoring algorithms, automated recruitment screening, insurance pricing engines, and AI-powered HR tools that make performance-based decisions.
Compliance requirements
Where Article 22 applies, organisations must:
- Inform data subjects that automated decision-making is taking place, including meaningful information about the logic involved
- Provide a mechanism for human review — a qualified person must be able to review and override automated decisions
- Allow data subjects to contest decisions and express their point of view
- Implement safeguards against bias and error in the automated system
“Meaningful human involvement” is the critical test. A human rubber-stamping automated decisions does not satisfy Article 22. The reviewer must have the authority, competence, and information to genuinely evaluate and override the AI’s output. The ICO has emphasised this point in its AI and data protection guidance.
Data subject rights in AI systems
GDPR data subject rights apply in full to AI processing, but they create unique technical and operational challenges.
Subject Access Requests (SARs). Data subjects can request all personal data held about them — including inferences drawn by AI systems, profiling categories assigned, and scores generated. Your SAR process must cover AI outputs, not just input data.
Right to rectification. If an AI system generates inaccurate information about an identifiable individual — a wrong credit score, an incorrect risk classification — the data subject can demand correction. This requires mechanisms to update or override AI outputs.
Right to erasure. Deleting personal data from a trained model is technically complex. Where data has been incorporated into model weights, full erasure may require retraining. Document your approach and be transparent about limitations.
Right to object. Data subjects can object to processing based on legitimate interest, including AI profiling. You must stop processing unless you can demonstrate compelling legitimate grounds that override the data subject’s interests.
For organisations managing AI compliance at enterprise scale, building automated workflows for these rights requests is essential.
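As a sketch of what such a workflow might look like, the routing below maps each request type to a handling queue and escalates erasure requests that touch model training data for manual review. The queue names and the `touches_training_data` flag are hypothetical, offered only to illustrate the design.

```python
from dataclasses import dataclass

@dataclass
class RightsRequest:
    request_type: str          # "access", "rectification", "erasure", "objection"
    data_subject_id: str
    touches_training_data: bool = False  # has the data been incorporated into model weights?

def route(request: RightsRequest) -> str:
    """Return the handling queue for a request (illustrative queue names)."""
    if request.request_type == "erasure" and request.touches_training_data:
        # Full erasure may require retraining; escalate rather than auto-delete.
        return "manual-review"
    queues = {
        "access": "sar-export",          # export must include AI inferences and scores
        "rectification": "output-correction",
        "erasure": "deletion-pipeline",
        "objection": "li-reassessment",  # re-run the balancing test or stop processing
    }
    return queues.get(request.request_type, "triage")

print(route(RightsRequest("erasure", "ds-102", touches_training_data=True)))
```

The key design point is the escalation branch: automation handles the routine cases, while the technically hard case (erasure from model weights) is surfaced to a human instead of silently closed.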
340%
increase in AI-related data subject complaints to EU supervisory authorities between 2023 and 2025
Source: EDPB AI Enforcement Trends Report, 2025
Vendor management for third-party AI
Most organisations do not build their own AI systems. They procure them — from cloud AI platforms, SaaS tools with embedded AI, and specialist AI vendors. As a data controller, your GDPR obligations do not diminish because a vendor processes the data.
Data processing agreements
Every third-party AI vendor processing personal data on your behalf must have a GDPR-compliant Data Processing Agreement (DPA) under Article 28. Key provisions to negotiate:
- Purpose limitation. The vendor may only process personal data for your specified purposes — not for training their own models on your data
- Sub-processors. The vendor must disclose all sub-processors and obtain your approval before engaging new ones
- Data location. Where is personal data processed and stored? International transfers require appropriate safeguards (Standard Contractual Clauses, adequacy decisions)
- Audit rights. You must have the right to audit the vendor’s data protection practices
- Breach notification. The vendor must notify you of data breaches without undue delay
Due diligence questions for AI vendors
Before onboarding any AI vendor that will process personal data, ask:
- Does the vendor use customer data to train or improve their AI models? If so, can you opt out?
- Where is data processed, and are appropriate transfer mechanisms in place?
- Can the vendor support your DPIA process with technical documentation?
- How does the vendor handle data subject rights requests?
- What security measures protect personal data in transit and at rest?
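These questions can be tracked as a structured per-vendor checklist so nothing is silently skipped during onboarding. A minimal illustrative sketch follows; the question keys are our own shorthand, not a standard.

```python
# Shorthand keys for the five due diligence questions above (illustrative naming)
DUE_DILIGENCE_QUESTIONS = [
    "no_training_on_customer_data_or_opt_out",
    "transfer_mechanisms_in_place",
    "provides_dpia_documentation",
    "handles_rights_requests",
    "encrypts_in_transit_and_at_rest",
]

def outstanding_items(answers: dict) -> list:
    """Return the questions a vendor has not yet satisfied (missing or False)."""
    return [q for q in DUE_DILIGENCE_QUESTIONS if not answers.get(q, False)]

vendor_answers = {
    "transfer_mechanisms_in_place": True,
    "encrypts_in_transit_and_at_rest": True,
}
print(outstanding_items(vendor_answers))
```

An empty result becomes the onboarding gate: a vendor with outstanding items is not yet cleared to process personal data on your behalf.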
The biggest GDPR risk from AI is often not your sanctioned tools — it is shadow AI. Employees using unapproved AI tools and pasting personal data into public LLMs creates uncontrolled data processing with no lawful basis, no DPA, and no DPIA. An AI policy and training programme are your first line of defence.
Bringing it together: a controller’s action plan
For data controllers building AI GDPR compliance into their organisations:
- Map your AI landscape. Identify every AI system that processes personal data — including embedded AI in SaaS tools
- Assign lawful bases. Document a specific lawful basis for each AI processing activity
- Conduct DPIAs. Complete assessments for all high-risk AI processing before deployment
- Audit Article 22 exposure. Identify where fully automated decisions affect individuals and implement human review
- Update privacy notices. Ensure transparency about AI processing across all touchpoints
- Review vendor contracts. Confirm DPAs are in place and fit for purpose for AI processing
- Build rights processes. Establish workflows to handle SARs, rectification, erasure, and objection requests for AI systems
- Train your teams. Ensure everyone handling personal data in AI systems understands their GDPR obligations — from developers to business users
For a broader view of how the EU AI Act interacts with GDPR, and how to build a unified governance framework, see our dedicated guides.
Prepare your teams for AI and data protection with Brain
GDPR compliance depends on the people who handle personal data every day. Brain delivers practical training on AI and data protection — covering lawful basis selection, DPIA processes, automated decision-making rights, vendor due diligence, and responsible AI use. Modules built for data controllers navigating AI GDPR requirements, with completion tracking and audit-ready reporting.
Explore our plans to find the right fit.
Related articles
AI and GDPR: Compliance Checklist for 2026
Stay compliant when deploying AI under GDPR and UK GDPR. Covers lawful basis, DPIAs, automated decisions, and data subject rights.
AI Data Privacy: GDPR & AI Act Compliance Guide (2026)
Align AI with data protection law. Covers DPIA triggers, consent requirements, data minimisation, employee data risks, and vendor due diligence.
AI Compliance Automation: Cut Costs + Reduce Risk
Automate regulatory compliance with AI — cut costs, reduce manual errors and lower risk. Tools, frameworks and implementation strategies.