A marketing manager pastes customer data into ChatGPT to draft a campaign. A finance analyst uploads a confidential spreadsheet to an AI tool to generate a summary. A recruiter uses an AI screening tool they found online. None of these tools were approved by IT. None were risk-assessed. None comply with your data protection policies.
This is shadow AI. And it’s happening in your organisation right now.
Key takeaways
- Shadow AI is the use of AI tools by employees without organisational approval or oversight
- It creates data privacy, security, compliance, and intellectual property risks
- An estimated 60% of enterprise AI usage is shadow AI
- The solution is not to ban AI — it's to provide approved tools, training, and clear policies
Why shadow AI is different from shadow IT
Shadow IT — employees using unapproved software — has been a challenge for decades. But shadow AI is fundamentally different, and far more dangerous, for three reasons:
1. The data exposure is immediate. When an employee pastes text into ChatGPT, that data is processed by a third party. Unlike an unapproved app sitting on a laptop, the exposure happens the instant the data leaves your network, and it cannot be undone.
2. The outputs can be wrong. Shadow IT gives you unapproved software that still works correctly. Shadow AI gives you tools that hallucinate — and employees who don’t know how to verify the output.
3. The regulatory implications are new. The EU AI Act creates specific obligations around AI use. Unmanaged AI usage means unmanaged compliance risk.
60%
of AI usage in enterprises is estimated to be shadow AI — tools used without IT knowledge or approval
Source: Gartner, 2025
The real risks
Data leakage
Employees routinely paste sensitive information into AI tools: client data, financial figures, strategic documents, source code, personal data covered by GDPR. Once submitted, this data may be used for model training, stored on external servers, or accessed by the AI provider’s staff.
Compliance violations
Under GDPR, sending personal data to an unapproved AI tool can constitute unlawful processing, and where the provider is an unauthorised recipient, a reportable personal data breach. Under the EU AI Act, Article 4 requires organisations to ensure their staff have adequate AI literacy, an obligation that unmanaged AI use makes impossible to evidence. Under sectoral regulation (PRA, FCA, BaFin), uncontrolled AI usage in regulated processes can trigger supervisory action.
Hallucination-driven errors
When employees use AI without training, they’re more likely to trust incorrect outputs: a legal team drafting contracts without understanding hallucination risk, a finance team circulating AI-generated figures without verification. The liability sits with the organisation, not the AI tool.
Shadow AI isn’t a technology problem. It’s a people problem. Employees use unapproved AI because they want to be more productive and don’t have approved alternatives.
How to manage shadow AI
Banning AI tools doesn’t work. Employees will find workarounds — personal devices, personal accounts, browser-based tools. Instead, take a structured approach:
1. Discover. Audit what AI tools are already in use across the organisation. Network monitoring, browser extension audits, and anonymous surveys all help (a minimal log-scanning sketch follows this list). The goal isn’t punishment; it’s visibility.
2. Provide approved alternatives. If employees are using ChatGPT because they need an AI writing assistant, give them an approved one with enterprise data controls (ChatGPT Enterprise, Copilot, Claude for Work).
3. Train your people. Most shadow AI happens because employees don’t understand the risks. Training on data handling, hallucination recognition, and responsible AI use removes most of that risk at the source.
4. Set clear policies. Publish an AI acceptable use policy. Make it short, practical, and easy to follow. Define what data can and cannot be used with AI tools; the second sketch below shows one machine-readable form.
5. Monitor continuously. Shadow AI isn’t a one-time audit. New tools appear weekly. Build ongoing visibility into AI usage patterns.
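As a starting point for steps 1 and 5, here is a minimal sketch that scans a web-proxy log export for traffic to known generative-AI domains. The domain list and the CSV log format (timestamp, user, domain columns) are assumptions, not a standard; adapt both to whatever your proxy or CASB actually exports.

```python
# Minimal shadow-AI discovery sketch: count requests to known generative-AI
# domains in a proxy log export. Domain list and log schema are assumptions.
import csv
from collections import Counter

# Hypothetical starter list -- extend it from your CASB or threat-intel feed.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "perplexity.ai", "poe.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per (user, domain) in a CSV proxy log with a header
    row of timestamp,user,domain (an assumed export format)."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower().strip()
            # Match the domain itself or any subdomain of it.
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], domain)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in scan_proxy_log("proxy_export.csv").most_common(20):
        print(f"{user:20} {domain:25} {count:5d} requests")
```

Run something like this against a daily export and the heaviest users tell you which approved alternative to prioritise in step 2.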
Start with your highest-risk teams: those handling personal data (HR, customer service), financial data (finance, accounting), or legal data (legal, compliance). These are where shadow AI creates the greatest exposure.
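To make step 4 enforceable for those high-risk teams, the acceptable use policy can be expressed as data rather than prose. The sketch below is one hypothetical shape: the team names and data categories are illustrative, not a standard taxonomy, and a real deployment would load them from a reviewed config file rather than hard-code them.

```python
# Sketch of a machine-readable acceptable-use policy: which data categories
# each team may submit to approved AI tools. Names are illustrative only.
POLICY = {
    "marketing":   {"public", "internal"},
    "hr":          {"public"},             # personal data stays out by default
    "finance":     {"public"},             # no financial figures
    "engineering": {"public", "internal"}, # no confidential source code
}

def may_use_ai(team: str, data_category: str) -> bool:
    """Return True if the policy allows this team to submit this data
    category to an approved AI tool. Unknown teams are denied."""
    return data_category in POLICY.get(team, set())

assert may_use_ai("marketing", "internal")
assert not may_use_ai("hr", "personal")  # GDPR-covered data blocked
```

A default-deny lookup like this is deliberately conservative: a team that hasn’t had its data classified gets no AI access until someone does the classification.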
How Brain helps
Brain trains your entire workforce to use AI responsibly. Employees learn to recognise shadow AI risks, handle data safely, identify hallucinations, and follow your organisation’s AI policy — through practical, role-based modules that take minutes, not days.
The result: fewer shadow AI incidents, documented compliance with Article 4 of the EU AI Act, and a workforce that’s productive with AI instead of dangerous with it.
78%
reduction in shadow AI incidents reported by organisations after deploying Brain