Every AI system is, at its core, a data-processing system. Whether your teams use a chatbot to draft client communications, an analytics engine to forecast demand, or an automated workflow to screen CVs, personal data is almost certainly involved. And where personal data is involved, data protection law applies — regardless of how innovative the technology might be.
The challenge is that AI data privacy does not fit neatly into the frameworks organisations built for traditional software. AI tools ingest data in unpredictable ways, retain it in opaque pipelines, and generate outputs that may themselves constitute new personal data. This guide sets out what every organisation needs to know.
Key takeaways
- GDPR and the EU AI Act create overlapping obligations — organisations must comply with both when deploying AI that processes personal data
- Data Protection Impact Assessments (DPIAs) are legally required for most AI systems and must be completed before deployment, not after
- Consent is rarely the right lawful basis for AI processing — legitimate interest requires a documented balancing test
- Vendor due diligence is now a regulatory expectation, not just a procurement best practice
Where GDPR and the EU AI Act intersect
GDPR has governed personal data processing since 2018. The EU AI Act adds a new regulatory layer specifically targeting AI systems. Organisations deploying AI must now navigate both frameworks simultaneously.
The key areas of overlap:
- Risk classification. The AI Act categorises systems by risk level. High-risk AI systems — those used in employment, credit scoring, education, or law enforcement — face the strictest requirements under both the AI Act and GDPR.
- Transparency. GDPR requires organisations to tell individuals how their data is used. The AI Act adds specific transparency obligations for AI systems, including disclosure that content was AI-generated and technical documentation requirements.
- Human oversight. Both frameworks require meaningful human involvement in consequential decisions. GDPR Article 22 gives individuals the right not to be subject to solely automated decisions that produce legal or similarly significant effects. The AI Act mandates human oversight mechanisms for high-risk systems.
- Accountability. GDPR requires documented processing records. The AI Act requires conformity assessments, logging, and ongoing monitoring. Together, they demand a comprehensive governance structure.
68% of European organisations report gaps in their ability to comply with both GDPR and the EU AI Act simultaneously (Source: IAPP AI Governance Report, 2025).
For organisations subject to UK AI regulation, the picture differs slightly — the UK has not adopted the EU AI Act — but the data protection obligations under UK GDPR remain equally demanding.
DPIAs for AI: when and how
A Data Protection Impact Assessment is not optional for most AI deployments. Under GDPR Article 35, a DPIA is mandatory when processing is likely to result in a high risk to individuals’ rights and freedoms. AI systems routinely meet this threshold.
When a DPIA is required
You almost certainly need a DPIA if your AI system involves any of the following (a simple screening sketch follows the list):
- Large-scale processing of personal data — most enterprise AI tools qualify by default
- Automated decision-making with legal or similarly significant effects — recruitment screening, credit decisions, insurance pricing
- Systematic monitoring — employee productivity tools, behavioural analytics, AI-powered CCTV
- New technologies — regulators explicitly identify AI as triggering this criterion
- Special category data — health, biometric, ethnic, or political data processed by AI systems
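As a rough illustration, these trigger criteria can be encoded as a screening step in an AI intake workflow. This is a minimal sketch with hypothetical field names, not a substitute for legal analysis:

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Intake answers for a proposed AI deployment (hypothetical fields)."""
    large_scale_processing: bool
    automated_significant_decisions: bool
    systematic_monitoring: bool
    novel_technology: bool
    special_category_data: bool

def dpia_required(profile: AISystemProfile) -> bool:
    """Flag a DPIA if any trigger criterion applies. This screen only
    rules systems in; it never rules a DPIA out on its own."""
    return any([
        profile.large_scale_processing,
        profile.automated_significant_decisions,
        profile.systematic_monitoring,
        profile.novel_technology,
        profile.special_category_data,
    ])

# Example: an AI CV-screening tool triggers several criteria at once.
cv_screener = AISystemProfile(
    large_scale_processing=True,
    automated_significant_decisions=True,
    systematic_monitoring=False,
    novel_technology=True,
    special_category_data=False,
)
assert dpia_required(cv_screener)
```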
How to conduct a DPIA for AI
A meaningful DPIA goes beyond a compliance tick-box. For AI systems, it should address:
- Data flows. Map exactly what personal data enters the system, where it is stored, who can access it, and whether it is transferred internationally. AI pipelines are often more complex than traditional software — data may flow through multiple models, APIs, and cloud environments. (A flow-register sketch follows this list.)
- Necessity and proportionality. Document why AI is necessary for this purpose. Could the same outcome be achieved with less data or a less intrusive approach?
- Risk identification. Consider bias and discrimination, inaccurate outputs, data breaches, function creep, and loss of individual control. For a deeper framework, see our AI risk assessment guide.
- Mitigation measures. Technical controls (anonymisation, access restrictions, audit logs), governance processes (human review, escalation procedures), and organisational measures (training, policies, accountability structures).
- Ongoing review. A DPIA is not a one-off document. AI systems evolve — models are updated, use cases expand, data sources change. Your DPIA must be reviewed regularly.
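To make the data-flow step concrete, one option is to keep a machine-readable flow register alongside the DPIA, so transfers and access rights can be checked programmatically. The schema and field names below are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    """One hop of personal data through an AI pipeline (illustrative schema)."""
    source: str                  # where the data originates
    destination: str             # model, API, or store it flows to
    data_categories: list[str]   # e.g. ["name", "employment history"]
    access_roles: list[str]      # who can see the data at this hop
    region: str                  # hosting jurisdiction
    transfer_mechanism: str | None = None  # e.g. "SCCs" if data leaves the UK/EU

flows = [
    DataFlow(
        source="HR applicant database",
        destination="CV-screening model API",
        data_categories=["name", "employment history"],
        access_roles=["recruitment team"],
        region="EU",
    ),
    DataFlow(
        source="CV-screening model API",
        destination="vendor analytics store",
        data_categories=["candidate scores"],
        access_roles=["vendor support staff"],
        region="US",
        transfer_mechanism=None,  # gap: no safeguard documented yet
    ),
]

# Surface any flow leaving the UK/EU without a documented transfer mechanism.
gaps = [f for f in flows if f.region not in ("EU", "UK") and not f.transfer_mechanism]
for f in gaps:
    print(f"Transfer gap: {f.source} -> {f.destination} ({f.region})")
```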
Conducting a DPIA after deploying an AI system does not satisfy GDPR. The assessment must be completed before processing begins. Organisations that deploy first and assess later are already non-compliant — and regulators are increasingly willing to enforce on this point.
Consent, legitimate interest, and lawful basis
Every use of AI that processes personal data requires a lawful basis under GDPR Article 6. Getting this right is one of the most common stumbling blocks.
Why consent is rarely appropriate for AI
Consent must be freely given, specific, informed, and unambiguous. For AI processing, these conditions are difficult to meet:
- Specificity. AI tools often process data in ways that are hard to describe with precision. Broad consent (“we may use your data in AI systems”) is not valid GDPR consent.
- Informed. Meaningful consent requires individuals to understand what they are consenting to. The complexity of AI processing makes genuine informed consent challenging.
- Freely given. In employment contexts, consent is almost never freely given due to the power imbalance between employer and employee. Using consent as the lawful basis for AI processing of employee data is not advisable.
- Withdrawal. Individuals can withdraw consent at any time. For AI systems where data has already been used for training or inference, implementing withdrawal is technically complex.
Legitimate interest: the balancing test
Most organisations rely on legitimate interest (Article 6(1)(f)) for AI processing in business contexts. This requires a documented three-part assessment:
- Purpose test. Identify the specific legitimate interest — “improving efficiency” is too vague; “reducing invoice processing time by automating data extraction from supplier invoices” is specific enough.
- Necessity test. Demonstrate that the AI processing is genuinely necessary to achieve that interest and that no less intrusive alternative exists.
- Balancing test. Weigh the organisation’s interest against the data subjects’ rights and freedoms. Consider the nature of the data, the expectations of the individuals, the impact on them, and the safeguards in place.
This assessment must be documented. Regulators expect to see it upon request. For more on building a compliance framework, see our AI governance guide.
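One lightweight way to keep that documentation auditable is to record each assessment as structured data that can be produced on request. A minimal sketch with hypothetical fields, mirroring the three tests above:

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class LegitimateInterestAssessment:
    """Documented three-part assessment for one AI processing activity."""
    processing_activity: str
    purpose: str        # the specific interest, not "improving efficiency"
    necessity: str      # why no less intrusive alternative suffices
    balancing: str      # data subjects' rights weighed against the interest
    safeguards: list[str]
    reviewed_on: date

lia = LegitimateInterestAssessment(
    processing_activity="Automated data extraction from supplier invoices",
    purpose="Reduce invoice processing time via automated field extraction",
    necessity="Manual keying was slower and more error-prone in a pilot",
    balancing="Low-risk business contact data, within suppliers' expectations",
    safeguards=["human review of extractions", "access logging", "90-day retention"],
    reviewed_on=date(2025, 1, 15),
)

# Serialise into the processing records a regulator may ask to see.
print(json.dumps(asdict(lia), default=str, indent=2))
```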
Data minimisation in practice
GDPR’s data minimisation principle — processing only what is adequate, relevant, and necessary — sits in direct tension with AI’s appetite for data. More data typically means better model performance. But “better performance” is not a legal basis for processing unnecessary personal data.
82% of employees who use AI tools at work have entered personal data (their own or others') into an AI system without checking whether it was necessary (Source: Brain AI Readiness Survey, 2026).
Practical data minimisation measures for AI include the following (a sketch of the first two measures follows the list):
- Input controls. Configure AI tools to reject or flag personal data that is not required for the task. Some enterprise AI platforms allow administrators to set data input policies.
- Pseudonymisation. Replace identifiable data with pseudonyms before AI processing where the task does not require real identities.
- Aggregation. Use aggregated or statistical data rather than individual-level data where possible — particularly for analytics and reporting use cases.
- Access restrictions. Limit which teams and individuals can input personal data into AI tools. Not everyone needs access to the full dataset.
- Output review. Check AI outputs for personal data that should not have been included — AI systems can surface data that was not explicitly requested.
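As an illustration of input controls and pseudonymisation, a pre-processing filter can catch obvious identifiers and replace them with stable pseudonyms before a prompt ever reaches an AI tool. This minimal sketch uses a regular expression for e-mail addresses and HMAC-based tokens; a real deployment would rely on a dedicated PII-detection library and a properly managed key:

```python
import hashlib
import hmac
import re

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: held in a KMS, not in code

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymise(value: str) -> str:
    """Deterministic pseudonym: the same input always yields the same token,
    so downstream analysis still works without exposing the identity."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"<pseudonym:{digest[:12]}>"

def filter_prompt(prompt: str) -> str:
    """Replace e-mail addresses with pseudonyms before the prompt
    leaves the organisation's boundary."""
    return EMAIL_RE.sub(lambda m: pseudonymise(m.group()), prompt)

print(filter_prompt("Summarise the complaint from jane.doe@example.com"))
# -> Summarise the complaint from <pseudonym:...>
```

A deterministic pseudonym (rather than a random one) lets the same individual be referenced consistently across prompts without revealing who they are, preserving most analytical value while reducing exposure.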
Employee data: the highest-risk area
AI processing of employee data is where data privacy risks are most acute. Employees have limited ability to opt out, power imbalances make consent problematic, and the potential for harm — to careers, wellbeing, and dignity — is significant.
Key risk areas include:
- Productivity monitoring. AI tools that track keystrokes, screen activity, or application usage create extensive personal data profiles. These require rigorous DPIAs and transparent communication.
- Performance analytics. AI systems that score or rank employee performance based on data analysis raise concerns about accuracy, bias, and fairness.
- Recruitment screening. AI tools that filter CVs or assess candidates must be tested for bias and comply with automated decision-making provisions.
- Sentiment analysis. Analysing employee communications for sentiment or engagement is particularly intrusive and difficult to justify under data minimisation principles.
Organisations must ensure that employees know what AI tools are in use, what data is processed, and how decisions based on that data are made. For a broader perspective on preparing teams, see our AI training guide.
Vendor due diligence for AI tools
When your organisation uses a third-party AI tool, you remain responsible for the personal data processed through it. Vendor due diligence is not just good procurement practice — it is a regulatory obligation.
What to assess
Before deploying any third-party AI tool that will process personal data, assess the following (a simple tracking sketch follows the list):
- Data processing agreement. Ensure a compliant DPA is in place that covers AI-specific processing activities, data retention, sub-processors, and international transfers.
- Data residency. Understand where data is stored and processed. AI tools often route data through multiple jurisdictions. If data leaves the UK or EU, appropriate transfer mechanisms (adequacy decisions, Standard Contractual Clauses) must be in place.
- Model training. Clarify whether the vendor uses your data to train its models. Many AI providers do by default unless you opt out. This constitutes a separate processing activity that requires its own lawful basis.
- Security measures. Assess the vendor’s technical and organisational security measures. AI systems can be targets for data extraction attacks, prompt injection, and model manipulation.
- Sub-processors. AI tools frequently rely on third-party infrastructure and APIs. Map the full chain of data processors and ensure each link meets GDPR standards.
- Incident response. Confirm the vendor’s breach notification process aligns with GDPR’s 72-hour reporting requirement.
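These checks can be tracked as a machine-readable record so that unresolved gaps are visible before sign-off. The schema and thresholds below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class VendorAssessment:
    """Due-diligence status for one AI vendor (illustrative schema)."""
    vendor: str
    dpa_signed: bool
    data_regions: list[str]
    trains_on_customer_data: bool   # verify the opt-out; do not assume it
    subprocessors_mapped: bool
    breach_notification_hours: int  # how fast the vendor notifies you of a breach

def approval_blockers(a: VendorAssessment) -> list[str]:
    """Issues that must be resolved before the tool goes live."""
    blockers = []
    if not a.dpa_signed:
        blockers.append("no compliant DPA in place")
    if a.trains_on_customer_data:
        blockers.append("vendor trains on customer data without a separate lawful basis")
    if not a.subprocessors_mapped:
        blockers.append("sub-processor chain not fully mapped")
    if a.breach_notification_hours > 24:  # illustrative internal target that
        # leaves time to meet GDPR's own 72-hour window
        blockers.append("vendor breach notification too slow for the 72-hour window")
    return blockers
```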
The EU AI Act introduces additional obligations for providers and deployers of AI systems. If your vendor provides a high-risk AI system, they must supply conformity documentation, maintain logging, and support your oversight obligations. Factor these requirements into vendor contracts now — retrofitting them later is significantly more difficult. See our GDPR and AI compliance guide for detailed contractual guidance.
Building an AI data privacy framework
AI data privacy is not a single project — it is an ongoing programme. Organisations that treat it as a one-off compliance exercise will fall behind as regulations evolve and AI capabilities expand.
A robust framework includes:
- Policy. A clear AI policy that addresses data protection obligations, acceptable use, and escalation procedures
- Training. Regular, practical training so that every employee who uses AI tools understands their data protection responsibilities — not just the legal team
- Governance. A defined governance structure with clear accountability for AI data protection decisions
- Assessment. Systematic DPIAs for all AI systems, reviewed at regular intervals and when systems change
- Monitoring. Ongoing monitoring of AI tool usage, data flows, and compliance — including detection of shadow AI use (a sketch follows this list)
- Vendor management. Continuous due diligence on AI vendors, not just at procurement stage
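For the monitoring element, one pragmatic starting point is scanning outbound proxy or DNS logs for traffic to known AI services that have not been through due diligence. A rough sketch; the domain lists and log format are assumptions to adapt to your environment:

```python
# Domains of AI services to watch for; maintain this list as new tools emerge.
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}
APPROVED_AI_DOMAINS = {"api.openai.com"}  # tools that passed vendor due diligence

def shadow_ai_hits(proxy_log_lines: list[str]) -> list[str]:
    """Return log lines showing traffic to AI services that were never approved.
    Assumes each line contains the destination hostname in plain text."""
    unapproved = KNOWN_AI_DOMAINS - APPROVED_AI_DOMAINS
    return [
        line for line in proxy_log_lines
        if any(domain in line for domain in unapproved)
    ]

sample = [
    "2025-06-01T09:12:03 alice -> api.openai.com POST /v1/chat/completions",
    "2025-06-01T09:14:47 bob -> api.anthropic.com POST /v1/messages",
]
for hit in shadow_ai_hits(sample):
    print("shadow AI candidate:", hit)
```

A hit is a conversation starter, not an accusation: the goal is to route the tool through assessment, not to sanction the employee.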
Get your teams ready with Brain
Brain helps organisations build practical AI data privacy awareness across every team. Our modules cover GDPR obligations for AI, DPIA processes, shadow AI risks, responsible data handling, and vendor assessment — with completion tracking for compliance documentation.
Whether you are preparing for AI Act compliance or strengthening your existing data protection practices, Brain gets your organisation ready.