In December 2024, the Italian data protection authority (Garante) fined OpenAI €15 million for GDPR violations related to ChatGPT — including failure to identify a lawful basis for processing personal data used in training and inadequate transparency about data processing. It was not the first enforcement action linking AI to GDPR, and it will not be the last.
The intersection of AI and GDPR is where two of the most consequential regulatory frameworks of our era collide. GDPR was designed to protect personal data in a pre-AI world. AI systems are designed to consume, process, and generate insights from data at a scale GDPR’s drafters could barely have imagined. The tension is real, but it is navigable — with the right approach.
Key takeaways
- Every AI system processing personal data must have a valid lawful basis under GDPR — legitimate interest or consent are most common
- Data Protection Impact Assessments (DPIAs) are mandatory for AI systems that pose high risks to individuals
- Automated decision-making with legal or significant effects triggers specific rights under Article 22
- The EU AI Act and GDPR are complementary frameworks — compliance with one does not guarantee compliance with the other
Why AI creates GDPR challenges
GDPR rests on principles that AI systems inherently test:
- Purpose limitation — personal data should be collected for specified purposes. AI models can repurpose data in ways that stretch original consent or legal basis.
- Data minimisation — organisations should process only the minimum data necessary. Large language models and other AI systems are trained on vast datasets, often including personal data far beyond what any single purpose would justify.
- Transparency — individuals should know how their data is being processed. Many AI models are opaque — even their developers cannot fully explain individual outputs.
- Accuracy — personal data must be accurate and kept up to date. AI systems can generate inaccurate information about identifiable individuals (hallucinations).
- Storage limitation — data should not be kept longer than necessary. AI training data is often retained indefinitely within model weights.
These are not reasons to avoid AI. They are reasons to implement structured compliance processes before deploying AI systems that handle personal data.
€15M
fine issued to OpenAI by Italy's Garante for GDPR violations related to ChatGPT
Source: Garante per la protezione dei dati personali, December 2024
Establishing a lawful basis for AI processing
Under GDPR Article 6, every processing activity requires a lawful basis. For AI systems, the most relevant are:
Consent (Article 6(1)(a))
Consent is conceptually the most straightforward lawful basis but the most difficult to maintain at scale. For AI, consent must be:
- Specific — covering the specific AI processing activity, not just general data collection
- Informed — the data subject must understand that their data will be processed by AI, how, and for what purpose
- Freely given — no imbalance of power that vitiates genuine choice
- Withdrawable — and withdrawal must be as easy as giving consent
Practical challenges: For AI training data, obtaining specific, informed consent from millions of data subjects is often impractical. For AI inference (using personal data as input to an AI system), consent is more feasible but requires clear communication about AI processing.
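The four consent criteria above lend themselves to a simple validity check at inference time. The sketch below is a hypothetical illustration — the field names and record structure are assumptions, not a prescribed schema:

```python
# Hypothetical sketch: checking a consent record against the GDPR criteria
# listed above (specific, informed, freely given, withdrawable).
def consent_valid(record: dict) -> bool:
    required = (
        "specific_to_ai_processing",  # covers this AI activity, not general collection
        "informed_of_ai_use",         # subject told data is processed by AI, and why
        "freely_given",               # no power imbalance vitiating choice
        "withdrawal_mechanism",       # withdrawal as easy as giving consent
    )
    return all(record.get(flag, False) for flag in required) \
        and not record.get("withdrawn", False)

record = {
    "specific_to_ai_processing": True,
    "informed_of_ai_use": True,
    "freely_given": True,
    "withdrawal_mechanism": True,
    "withdrawn": False,
}
```

A check like this only verifies that consent was recorded correctly; it does not substitute for the underlying consent flow being genuinely specific and informed.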
Legitimate interest (Article 6(1)(f))
Legitimate interest is the most commonly used basis for AI processing. It requires a three-part test:
- Purpose test — identify the legitimate interest being pursued (e.g., fraud detection, customer service improvement, operational efficiency)
- Necessity test — demonstrate that AI processing is necessary to achieve the purpose (not just convenient)
- Balancing test — weigh the organisation’s interest against the rights and freedoms of data subjects
Document your Legitimate Interest Assessment (LIA). This documentation is essential for demonstrating compliance and is often the first thing regulators request during an investigation.
Contract performance (Article 6(1)(b))
Where AI processing is necessary to perform a contract with the data subject — for example, AI-powered credit assessment as part of a lending application — contract performance may apply. But it must be genuinely necessary, not merely helpful.
Legal obligation (Article 6(1)(c))
Where AI processing is required by law — for example, AI-assisted anti-money laundering screening in financial services — legal obligation provides the basis.
Do not rely on a single lawful basis for all AI processing across your organisation. Different AI systems processing different data for different purposes will require different lawful bases. Map each AI system to its specific lawful basis and document the analysis.
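The mapping exercise above can be sketched as a simple register. The structure and field names here are hypothetical — a minimal illustration of one-system-one-basis record-keeping, not a prescribed format:

```python
from dataclasses import dataclass

# The four Article 6 bases most relevant to AI, per the sections above.
LAWFUL_BASES = {"consent", "legitimate_interest", "contract", "legal_obligation"}

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    lawful_basis: str   # must be one of LAWFUL_BASES
    analysis_ref: str   # e.g. reference to the documented LIA or consent flow

    def validate(self) -> None:
        # Fail fast on any system with an unmapped or misclassified basis.
        if self.lawful_basis not in LAWFUL_BASES:
            raise ValueError(
                f"{self.name}: '{self.lawful_basis}' is not a recognised lawful basis"
            )

# Hypothetical entries: each AI system gets its own basis and its own analysis.
register = [
    AISystemRecord("fraud-detection", "fraud prevention",
                   "legitimate_interest", "LIA-2025-007"),
    AISystemRecord("aml-screening", "anti-money laundering checks",
                   "legal_obligation", "AML-mapping-v2"),
]

for record in register:
    record.validate()
```

The point of the `analysis_ref` field is the documentation requirement noted earlier: the lawful basis and the supporting assessment should be retrievable together.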
Data Protection Impact Assessments for AI
Under GDPR Article 35, a Data Protection Impact Assessment (DPIA) is mandatory when processing is “likely to result in a high risk to the rights and freedoms of natural persons.” The ICO’s guidance identifies several indicators that trigger a DPIA requirement, and most AI systems meet at least one:
- Automated decision-making with legal or significant effects
- Systematic monitoring of individuals
- Processing of special category data (health, biometrics, political opinions)
- Large-scale processing of personal data
- Innovative technology — most AI systems fall within this category
- Profiling — creating profiles of individuals based on their data
Conducting a DPIA for AI systems
A DPIA for an AI system should cover:
1. Description of processing. What personal data does the AI system process? What is the source? How is it used? What outputs does it produce?
2. Purpose and necessity. Why is AI processing necessary? Could the purpose be achieved with less intrusive means?
3. Risk identification. What risks does the AI system pose to data subjects? Consider:
- Inaccuracy (hallucinations, wrong predictions)
- Bias (discriminatory outcomes)
- Opacity (inability to explain decisions)
- Data breach (unauthorised access to training data or outputs)
- Function creep (data used for purposes beyond original scope)
4. Risk mitigation. What measures are in place to address identified risks? Document technical and organisational measures.
5. Residual risk assessment. After mitigation, what residual risks remain? Are they acceptable?
6. DPO consultation. Your Data Protection Officer must be consulted during the DPIA. If residual risks are high, consultation with the supervisory authority (ICO in the UK) is required before processing begins.
71%
of organisations deploying AI have not conducted a DPIA for their AI systems
Source: IAPP Privacy and AI Governance Report, 2025
Automated decision-making: Article 22
GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing — including profiling — which produce legal effects or similarly significantly affect them.
When Article 22 applies to AI:
- AI-powered recruitment tools that automatically reject candidates
- Credit scoring algorithms that deny loan applications without human review
- Insurance pricing systems that set premiums based entirely on automated profiling
- Employee performance systems that trigger disciplinary actions automatically
What Article 22 requires:
- Right to human intervention — the data subject can request that a human reviews the automated decision
- Right to express their point of view — the data subject can provide additional information or context
- Right to contest the decision — the data subject can challenge the outcome
- Right to meaningful information — the data subject must be told about “the logic involved, as well as the significance and the envisaged consequences” of automated processing
Exceptions: Article 22 allows automated decision-making where it is necessary for a contract, authorised by law, or based on explicit consent. But even where exceptions apply, the rights to information, human intervention, and contestation remain.
The UK GDPR retains Article 22, but the Data (Use and Access) Act 2025 has introduced modifications. Under the UK framework, “meaningful human involvement” in AI-assisted decisions may remove the processing from Article 22’s scope — but the threshold for “meaningful” involvement is being tested and clarified by the ICO.
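The Article 22 trigger described above — solely automated, with legal or similarly significant effects — can be enforced as a gate in the decision pipeline. This is a hypothetical sketch; the class names and routing logic are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str
    solely_automated: bool     # no meaningful human involvement
    significant_effect: bool   # e.g. loan denial, automatic job rejection

def requires_human_review(d: Decision) -> bool:
    # The Article 22 condition: solely automated AND legally/similarly
    # significant. Either condition alone does not trigger the gate.
    return d.solely_automated and d.significant_effect

review_queue = []

def apply_decision(d: Decision) -> str:
    if requires_human_review(d):
        review_queue.append(d)  # held for human intervention before taking effect
        return "pending_review"
    return d.outcome

result = apply_decision(
    Decision("subject-42", "loan_denied",
             solely_automated=True, significant_effect=True)
)
```

A gate like this supports the right to human intervention, but the other Article 22 obligations — meaningful information about the logic involved, and the ability to contest — still need their own mechanisms.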
Data subject rights in an AI context
GDPR data subject rights apply fully to AI processing. Organisations must be prepared to respond to:
- Subject Access Requests (SARs) — providing all personal data held, including data processed by AI systems, inferences drawn, and profiling categories assigned
- Right to rectification — correcting inaccurate personal data, including inaccurate AI outputs about identifiable individuals
- Right to erasure — deleting personal data from AI systems, including (where technically feasible) from training data
- Right to restriction — limiting AI processing of personal data while accuracy or legal basis is disputed
- Right to data portability — providing personal data in a structured format for transfer
The technical challenge is significant. Removing personal data from a trained AI model is not as simple as deleting a database record. Organisations must develop processes for handling data subject requests that account for AI’s technical architecture.
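One practical mitigation for that challenge is recording provenance at ingestion time: if you know which systems and training datasets a subject's data flowed into, you can at least scope an access or erasure request correctly. A minimal hypothetical sketch, with assumed identifiers:

```python
from collections import defaultdict

# Hypothetical provenance index: for each data subject, record every
# destination their personal data flows into, including training datasets.
provenance: defaultdict = defaultdict(set)

def record_ingestion(subject_id: str, destination: str) -> None:
    provenance[subject_id].add(destination)

# Recorded at the point data enters each system, not reconstructed later.
record_ingestion("subject-42", "crm-db")
record_ingestion("subject-42", "churn-model-training-set-v3")

def request_scope(subject_id: str) -> set:
    # Everywhere this subject's data landed: the starting point for a SAR
    # or erasure request, including datasets already used for training.
    return set(provenance.get(subject_id, set()))
```

An index like this tells you where the data went; it does not by itself remove personal data from model weights, which may require retraining or machine-unlearning techniques where erasure is technically feasible.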
The GDPR and EU AI Act intersection
The EU AI Act and GDPR are complementary, not interchangeable. The AI Act governs the safety and trustworthiness of AI systems. GDPR governs the processing of personal data. An AI system can be fully compliant with the EU AI Act and still violate GDPR — and vice versa.
Key areas of interaction:
- High-risk AI systems under the EU AI Act that process personal data must comply with both the Act’s data governance requirements (Article 10) and GDPR
- AI literacy under Article 4 should include GDPR awareness for any staff using AI to process personal data
- Transparency requirements under both frameworks must be coordinated — disclosure about AI processing and disclosure about personal data processing
- Risk assessment processes under the AI Act and DPIA requirements under GDPR should be integrated, not duplicated
Organisations need a unified governance approach that addresses both frameworks together.
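One way to avoid duplicating the two assessments is a single governance record per AI system that tracks its AI Act classification and its GDPR artefacts side by side. The structure below is a hypothetical sketch, not a prescribed register format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GovernanceRecord:
    system: str
    ai_act_risk: str               # assumed labels: "high", "limited", "minimal"
    processes_personal_data: bool
    dpia_ref: Optional[str] = None # reference to the DPIA, if conducted

    def gaps(self) -> list:
        # Cross-framework check: a high-risk system processing personal
        # data should have a DPIA on record (GDPR Article 35 alongside
        # AI Act data governance duties).
        missing = []
        if (self.ai_act_risk == "high"
                and self.processes_personal_data
                and self.dpia_ref is None):
            missing.append("DPIA required but not recorded")
        return missing

rec = GovernanceRecord("cv-screening", "high", processes_personal_data=True)
```

Running `gaps()` across the register surfaces systems where one framework's assessment exists but the other's is missing — the duplication-versus-integration problem the section describes.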
Practical compliance checklist
For each AI system that processes personal data:
- Identified and documented the lawful basis for processing
- Conducted a DPIA (or documented why one is not required)
- Consulted the DPO
- Assessed whether Article 22 automated decision-making provisions apply
- Implemented mechanisms for human review where required
- Updated privacy notices to include AI processing information
- Established processes for handling SARs related to AI processing
- Documented data sources, retention periods, and sharing arrangements
- Assessed and mitigated bias and accuracy risks
- Implemented data minimisation — processing only what is necessary
- Established processes for data subject rights (rectification, erasure, restriction)
- Reviewed vendor contracts for GDPR-compliant data processing agreements
Integrate your GDPR AI compliance with your shadow AI discovery process. The biggest GDPR risk from AI is not the systems you know about — it is the unapproved tools where employees are pasting personal data without any of these controls in place.
Test your AI GDPR knowledge
Train your teams on AI and data protection with Brain
GDPR compliance is only as strong as the people handling the data. Brain delivers practical training on AI and data protection — covering lawful basis, data handling, automated decision-making rights, and responsible AI use. Modules designed for the intersection of AI and GDPR, with completion tracking and audit-ready documentation.
Explore our plans to get started.
Related articles
AI Copyright for Business: 7 IP Risks to Address Now
Who owns AI-generated content? Navigate copyright, training data rights, fair use, and employee IP policies with this practical compliance framework.
AI Data Governance: 6 Pillars for Trustworthy AI (2026)
Build trustworthy AI with proper data quality, lineage, privacy, and consent frameworks. Includes EU AI Act data requirements and compliance steps.