A mid-sized professional services firm in Birmingham rolls out ChatGPT Enterprise to its 300 staff. The IT team configures the deployment. The legal team reviews the data processing agreement and the terms of service. Management sends an email saying the tool is available.
Within a week, a consultant pastes a client’s personal data into the tool to generate a report summary. An HR manager uploads a performance review. A partner dictates confidential case notes. Nobody has conducted a Data Protection Impact Assessment. Nobody has updated the firm’s privacy notices. Nobody has assessed the lawful basis for processing.
The firm has just created multiple potential GDPR violations — not through malice or negligence, but through the gap between deploying an AI tool and understanding its data protection implications.
This gap is the defining challenge of AI and data privacy. And it is far more common than most organisations realise.
Key takeaways
- AI tools that process personal data trigger GDPR obligations — including lawful basis, transparency, and data minimisation
- Data Protection Impact Assessments (DPIAs) are legally required for most AI systems that process personal data
- The ICO has issued specific guidance on AI and data protection, with enforcement actions increasing
- Employee monitoring via AI raises particular data protection concerns that require careful legal analysis
How AI creates data protection challenges
The personal data problem
AI tools process data. When that data is — or contains — personal data, GDPR applies. The challenge is that personal data enters AI systems in ways organisations often do not anticipate:
- Direct input. Employees paste personal data into AI tools — client names, email addresses, financial details, health information, performance data
- Training data. AI models may have been trained on datasets containing personal data, raising questions about lawful basis and data subject rights
- Output generation. AI may generate content that contains or reveals personal data — even data that was not explicitly provided as input
- Inference. AI can infer personal information — health conditions, financial status, political views — from non-personal data, creating new personal data through analysis
This last point is particularly important. GDPR applies not just to data that is obviously personal, but to any data from which an individual can be identified — directly or indirectly. AI’s ability to make inferences means that data which appears anonymised may, in practice, become personal data through AI analysis.
73% of organisations using AI tools have not conducted a DPIA for at least one AI system processing personal data (source: ICO Technology Survey, 2025).
The transparency challenge
GDPR requires that individuals know how their data is being processed. When AI is involved, this creates specific transparency challenges:
- Automated decision-making. Article 22 of GDPR gives individuals the right not to be subject to decisions based solely on automated processing that significantly affects them. AI-driven decisions about credit, employment, insurance, or access to services may trigger this provision.
- Meaningful information. When automated decision-making applies, organisations must provide “meaningful information about the logic involved.” Explaining how a complex AI model reaches a decision is not straightforward — the “black box” problem is not just a technical challenge, it is a legal one.
- Privacy notices. Existing privacy notices rarely cover AI processing adequately. If your organisation is using AI to process personal data, your privacy notice almost certainly needs updating.
GDPR principles and AI: a practical mapping
Lawful basis
Every processing activity requires a lawful basis under GDPR Article 6. For AI processing, the most commonly relied-upon bases are:
Legitimate interest (Article 6(1)(f)). The most commonly cited basis for AI processing in business contexts. But legitimate interest is not a blanket justification — it requires a three-part test: identifying the legitimate interest, demonstrating necessity, and conducting a balancing test against the data subject’s rights and freedoms.
Consent (Article 6(1)(a)). Appropriate where individuals have genuine choice and control. Rarely suitable for employee data processing (due to power imbalance) or for AI tools where the full scope of processing is difficult to explain meaningfully.
Contract performance (Article 6(1)(b)). May apply where AI processing is genuinely necessary to perform a contract — for example, AI-powered fraud detection in payment processing.
For special category data (health, biometric, ethnic origin, political opinions), the additional conditions under Article 9 apply, and the threshold is significantly higher.
The ICO has been explicit: “We do not accept that legitimate interest can be used as a default basis for all AI processing.” Each use case must be assessed individually, with a documented Legitimate Interest Assessment that demonstrates genuine balancing of interests.
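A documented Legitimate Interest Assessment can be captured as a simple structured record. The sketch below is illustrative only: the field names are assumptions, not an ICO schema, and a real LIA is a reasoned document, not a pass/fail flag.

```python
from dataclasses import dataclass

@dataclass
class LegitimateInterestAssessment:
    """Illustrative record of the ICO's three-part legitimate interest test."""
    purpose: str               # Part 1: what is the legitimate interest?
    necessity_rationale: str   # Part 2: why is this processing necessary for it?
    balancing_outcome: str     # Part 3: reasoning from the balancing test
    balancing_favours_processing: bool

    def is_complete(self) -> bool:
        # All three parts must be documented before the basis can be relied on.
        return all([self.purpose.strip(),
                    self.necessity_rationale.strip(),
                    self.balancing_outcome.strip()])

    def can_rely_on(self) -> bool:
        return self.is_complete() and self.balancing_favours_processing
```

A record like this makes it easy to show, per use case, that the balancing was actually done rather than assumed.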
Data minimisation
GDPR requires that personal data processing is adequate, relevant, and limited to what is necessary. AI tools, by design, often want more data rather than less — more training data, more context, more input to generate better outputs.
This creates a direct tension. Organisations must:
- Configure AI tools to process the minimum personal data necessary for the specific purpose
- Strip or pseudonymise personal data before inputting it into AI tools where possible
- Implement technical controls that prevent unnecessary data from being processed
- Review AI tool configurations regularly to ensure data minimisation is maintained
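The second step above, stripping or pseudonymising data before it reaches an AI tool, can be automated at the point of input. A minimal sketch, using a single email pattern that is illustrative only; real deployments need far broader coverage (names, IDs, free-text PII):

```python
import re

# Matches common email addresses; one pattern only, for illustration.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymise(text: str) -> tuple[str, dict[str, str]]:
    """Swap emails for stable placeholders before text is sent to an AI tool.

    The mapping stays local, so AI outputs can be re-identified in-house
    without the provider ever receiving the underlying personal data.
    """
    mapping: dict[str, str] = {}

    def replace(match: re.Match) -> str:
        value = match.group(0)
        if value not in mapping:
            mapping[value] = f"[EMAIL_{len(mapping) + 1}]"
        return mapping[value]

    return EMAIL_RE.sub(replace, text), mapping
```

Keeping the placeholder mapping on your own systems, rather than in the prompt, is itself a data minimisation measure.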
Purpose limitation
Personal data must be processed only for specified, explicit, and legitimate purposes. Using personal data collected for one purpose (e.g., customer service) in an AI system for a different purpose (e.g., marketing analytics) breaches this principle unless the new purpose is compatible with the original one or a fresh lawful basis is established.
Storage limitation
AI tools may retain data — inputs, outputs, conversation logs — for longer than necessary. Organisations must understand and control how long AI tools store personal data, and ensure that retention aligns with their data retention policies.
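Where your organisation keeps its own copies of AI inputs and outputs, retention can be enforced with a periodic sweep. A sketch under stated assumptions: the 30-day window and the log record shape are both illustrative and should be aligned with your retention policy and the vendor's own retention settings.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Assumed retention window; set this from your data retention policy.
RETENTION = timedelta(days=30)

def purge_expired(logs: list[dict], now: Optional[datetime] = None) -> list[dict]:
    """Keep only AI conversation log entries still inside the retention window."""
    now = now or datetime.now(timezone.utc)
    return [entry for entry in logs if now - entry["created_at"] <= RETENTION]
```

Note this only addresses copies you control; the vendor-side retention of prompts and outputs must be checked in the data processing agreement.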
Data Protection Impact Assessments (DPIAs)
When are DPIAs required?
Under Article 35 of GDPR, a DPIA is mandatory when processing is likely to result in a “high risk” to individuals’ rights and freedoms. The ICO’s guidance makes clear that this includes:
- Large-scale processing of personal data — most enterprise AI deployments qualify
- Automated decision-making with legal or significant effects — AI-driven recruitment, credit scoring, insurance pricing
- Systematic monitoring — AI-powered employee monitoring, CCTV analytics, behaviour tracking
- New technologies — the ICO specifically identifies AI as a “new technology” that typically triggers the DPIA requirement
In practice, if your AI tool processes personal data at any meaningful scale, a DPIA is almost certainly required.
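The trigger list above lends itself to a simple screening step in an AI intake process. The questions below paraphrase the ICO triggers and are not a substitute for legal advice; answering "yes" to any one of them means a DPIA is almost certainly required.

```python
# Screening questions paraphrasing the ICO's DPIA trigger list (illustrative).
DPIA_TRIGGERS = {
    "large_scale": "Does the system process personal data at scale?",
    "automated_decisions": "Does it make decisions with legal or similarly significant effects?",
    "systematic_monitoring": "Does it systematically monitor individuals?",
    "new_technology": "Does it use new technology such as AI?",
}

def dpia_required(answers: dict[str, bool]) -> bool:
    """A single positive trigger is enough to require a DPIA."""
    return any(answers.get(key, False) for key in DPIA_TRIGGERS)
```

Building this into procurement or tool-approval workflows catches the common failure mode: deployment first, DPIA never.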
40% increase in ICO enforcement actions related to AI and automated decision-making between 2024 and 2025 (source: ICO Annual Report, 2025).
What a DPIA for AI should cover
A DPIA for an AI system should address:
- Description of processing. What data is processed, how, by whom, and for what purpose. Include data flows — where does data go, who has access, is it transferred internationally?
- Necessity and proportionality. Why AI is necessary for this purpose, and whether the same outcome could be achieved with less data or less intrusive means.
- Risk identification. What risks does the AI processing create for data subjects? Consider accuracy, bias, discrimination, data breaches, function creep, and loss of control.
- Risk mitigation. What measures are in place to address identified risks? Technical controls, governance processes, human oversight, audit mechanisms.
- Consultation. Where risks remain high after mitigation, consultation with the ICO may be required before processing begins.
For a broader view of AI risk assessment beyond data protection, see our AI risk assessment guide.
Employee monitoring and AI
AI-powered employee monitoring is one of the most sensitive areas of AI and data privacy. The ICO has issued specific guidance on monitoring at work, and the intersection with AI raises particular concerns.
What constitutes AI employee monitoring?
- Productivity tracking tools that use AI to analyse work patterns
- Email and communication monitoring with AI-powered content analysis
- Keystroke logging and screen capture with AI analysis
- AI-driven performance scoring and management
- Sentiment analysis of employee communications
- AI-powered video surveillance and behaviour analysis
ICO position
The ICO’s Employment Practices Code and subsequent AI guidance make clear that:
- Employee monitoring must be proportionate — monitoring everything because you can is not a lawful approach
- Transparency is essential — employees must know they are being monitored, what data is collected, and how it is used
- DPIAs are required for most AI-powered monitoring
- Covert monitoring is justified only in exceptional circumstances (suspected criminal activity)
- Workers have a reasonable expectation of privacy even in the workplace
The ICO has signalled that AI-powered employee monitoring is a priority area for enforcement. Organisations deploying productivity tracking, communication monitoring, or performance scoring AI should ensure they have conducted a thorough DPIA and can demonstrate proportionality and transparency.
Practical recommendations
- Conduct a DPIA before deploying any AI monitoring tool
- Consult with employee representatives
- Be transparent about what is monitored and why
- Implement data minimisation — monitor only what is genuinely necessary
- Provide employees with access to data held about them
- Review monitoring regularly to ensure ongoing proportionality
ICO guidance on AI
The ICO has published extensive guidance on AI and data protection. Key documents include:
- Guidance on AI and Data Protection — the ICO’s comprehensive framework for applying GDPR to AI systems
- Explaining Decisions Made with AI — guidance on transparency and explainability obligations
- Guidance on Monitoring at Work — specific provisions for AI-powered employee monitoring
- Regulatory sandbox outcomes — published findings from organisations that tested AI approaches in the ICO’s regulatory sandbox
The ICO has also established an AI and Digital team focused specifically on AI-related data protection issues, signalling that enforcement in this area will intensify.
Cross-border considerations
AI tools are typically provided by companies outside the UK — predominantly US-based. This raises international data transfer issues under UK GDPR:
- Adequacy decisions. The UK has granted adequacy to EU/EEA countries and several others, but the US is not covered by a blanket adequacy decision. The UK-US Data Bridge provides a mechanism, but organisations must verify that their specific AI provider is certified.
- Standard Contractual Clauses. Where adequacy does not apply, SCCs are the primary transfer mechanism. Data processing agreements with AI providers must include appropriate SCCs.
- Transfer Risk Assessments. Organisations must assess whether the legal framework in the recipient country provides adequate data protection. For US-based AI providers not certified under the Data Bridge, a Transfer Risk Assessment is required.
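The order of checks described above (adequacy first, then Data Bridge certification, then SCCs plus a Transfer Risk Assessment) can be sketched as a decision function. The country list is an illustrative subset, not a complete statement of UK adequacy regulations:

```python
# Illustrative subset of destinations covered by UK adequacy; verify the
# current list with the ICO before relying on it.
ADEQUATE_DESTINATIONS = {"EU/EEA", "Japan", "South Korea"}

def transfer_mechanism(destination: str, data_bridge_certified: bool = False) -> str:
    """Pick a UK GDPR transfer mechanism for a given AI provider location."""
    if destination in ADEQUATE_DESTINATIONS:
        return "adequacy decision (no further mechanism needed)"
    if destination == "US" and data_bridge_certified:
        return "UK-US Data Bridge"
    return "SCCs + Transfer Risk Assessment"
```

The practical point: "our provider is in the US" is the start of the analysis, not the end; certification status changes which mechanism and which paperwork applies.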
For organisations also subject to the EU AI Act, additional obligations apply. See our guide to the EU AI Act and our AI and GDPR compliance guide.
Manage AI data privacy with Brain
Brain is the AI training platform that helps organisations build data protection awareness across teams using AI tools. Practical modules covering GDPR obligations, DPIA requirements, shadow AI risks, and responsible data handling — with completion tracking for governance documentation.
Whether you need to prepare teams for AI adoption across the workplace or ensure compliance with ICO guidance, Brain gets your teams ready. Explore our plans to get started.