Your organisation almost certainly has employees using AI tools without approval. They are pasting customer data into ChatGPT, uploading spreadsheets to AI summarisers, and running code through AI assistants — all outside your security perimeter. This is shadow AI, and no amount of verbal warnings will stop it. You need a written, enforceable policy.
The problem? Most shadow AI policies either gather dust or drive usage underground. This guide gives you a template that balances control with practicality — one your employees will actually follow.
Key takeaways
- Shadow AI policies fail when they ban AI outright — employees simply find workarounds
- An effective policy needs 8 core sections covering scope, classification, approved tools, and enforcement
- Policy alone is not enough — you need training, monitoring, and approved alternatives
- Aligning your policy with the EU AI Act and GDPR protects against regulatory exposure
Why most shadow AI policies fail
Organisations typically respond to shadow AI in one of two ways: they ignore it entirely, or they issue a blanket ban. Both approaches fail.
The ignore approach leaves the organisation exposed. Without a policy, employees assume AI usage is permitted. Data leaks happen. GDPR violations accumulate. When regulators come asking, there is no documented governance to point to.
The ban approach drives usage underground. Research consistently shows that prohibiting AI tools does not reduce usage — it simply pushes it onto personal devices and accounts where the organisation has zero visibility. Shadow AI becomes invisible shadow AI, which is worse.
75% of knowledge workers report they would continue using AI tools even if their organisation banned them (source: Microsoft Work Trend Index 2025).
The only approach that works is a structured policy that acknowledges reality: your people want to use AI, and your job is to make that safe rather than impossible.
What is a shadow AI policy?
A shadow AI policy is a formal document that defines how employees may and may not use AI tools within the organisation. Unlike a general AI governance framework, a shadow AI policy specifically addresses the gap between sanctioned and unsanctioned AI usage.
It answers three questions:
- What AI tools are approved, and for what purposes?
- What data can and cannot be processed through AI tools?
- What happens when someone violates the policy?
A good shadow AI policy does not read like a legal document. It reads like a practical guide that helps employees make the right decision in the moment.
The 8-section shadow AI policy template
Here is the structure every shadow AI policy needs. Adapt the detail to your organisation, but do not skip any section.
1. Purpose and scope
State why the policy exists and who it applies to. Be explicit: this covers all employees, contractors, consultants, and temporary staff. It covers all AI tools — not just ChatGPT, but image generators, code assistants, AI-powered browser extensions, and any tool that uses machine learning to process inputs.
2. Definitions
Define key terms: shadow AI, approved AI tools, generative AI, AI-assisted decision-making. Do not assume your workforce knows what these mean. If you need a primer, point them to your AI training programme.
3. Data classification for AI usage
This is the most critical section. Create a clear, simple classification:
- Never share with AI tools: Personal data (GDPR-covered), financial results before publication, client confidential data, source code, trade secrets, authentication credentials.
- May share with approved tools only: Internal documents, aggregated statistics, anonymised datasets, publicly available information.
- Freely usable: Published content, general knowledge queries, publicly available data.
If employees have to think hard about whether a piece of data is safe to share, the classification is too complex. Keep it to three tiers maximum.
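If you want the tiers to drive tooling as well as behaviour, they can be encoded as data. Here is a minimal sketch in Python, assuming a keyword-based first pass (the tier names, trigger lists, and classify helper are illustrative, not a production classifier):

```python
from enum import Enum

class DataTier(Enum):
    NEVER_SHARE = "never share with AI tools"
    APPROVED_ONLY = "approved tools only"
    FREELY_USABLE = "freely usable"

# Illustrative keyword triggers for a first-pass check; a real deployment
# would rely on your DLP platform's detectors, not keyword matching.
TIER_TRIGGERS = {
    DataTier.NEVER_SHARE: ["customer", "password", "api key", "salary", "source code"],
    DataTier.APPROVED_ONLY: ["internal", "aggregated", "anonymised"],
}

def classify(text: str) -> DataTier:
    """Return the most restrictive tier triggered by the text."""
    lowered = text.lower()
    for tier in (DataTier.NEVER_SHARE, DataTier.APPROVED_ONLY):
        if any(trigger in lowered for trigger in TIER_TRIGGERS[tier]):
            return tier
    return DataTier.FREELY_USABLE

print(classify("internal report with aggregated statistics"))  # DataTier.APPROVED_ONLY
```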
4. Approved tools and usage guidelines
Maintain a living list of approved AI tools, updated at least quarterly. For each tool, specify:
- What it is approved for (e.g., “writing assistance”, “code review”, “data analysis”)
- What data classification levels it can handle
- Whether enterprise data protections are enabled (e.g., opt-out of training)
- Who has access
This section should reference your broader AI policy if one exists.
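One way to keep the list living rather than static is to store it as structured data that both the policy page and your technical controls can read. A minimal sketch, assuming a flat register (the ApprovedTool fields and the ExampleAssistant entry are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    name: str
    approved_uses: list[str]   # e.g. ["writing assistance", "code review"]
    max_data_tier: str         # highest classification tier it may handle
    training_opt_out: bool     # enterprise data protections enabled?
    access_group: str          # who has access

# Illustrative entry only; maintain the real register in your governance
# tooling and review it at least quarterly.
APPROVED_TOOLS = [
    ApprovedTool(
        name="ExampleAssistant",  # hypothetical tool
        approved_uses=["writing assistance"],
        max_data_tier="approved tools only",
        training_opt_out=True,
        access_group="all-staff",
    ),
]
```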
5. Prohibited activities
Be specific about what is not permitted. Vague prohibitions like “do not misuse AI” are unenforceable. Instead, list concrete examples:
- Uploading client data to any non-approved AI tool
- Using AI to make hiring, lending, or insurance decisions without human oversight
- Bypassing data controls by using personal accounts or devices
- Using AI-generated content without disclosure where required
6. Risk assessment process
Define how new AI tools get evaluated and approved. Employees need a clear path to request a new tool — otherwise they will just use it without asking. Your AI risk assessment process should take days, not months.
Create a simple request form: tool name, intended use, data types involved, number of users. If you can evaluate and respond within two weeks, employees are far less likely to go rogue.
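Since the form has only four fields, it can double as a schema that tracks the two-week target automatically. A minimal sketch (ToolRequest and response_deadline are hypothetical names, not an existing system):

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ToolRequest:
    """The four request-form fields described above."""
    tool_name: str
    intended_use: str
    data_types: list[str]  # classification tiers involved
    user_count: int
    submitted: date = field(default_factory=date.today)

def response_deadline(request: ToolRequest, sla_days: int = 14) -> date:
    """Two-week response target, per the guidance above."""
    return request.submitted + timedelta(days=sla_days)

req = ToolRequest("ExampleAssistant", "meeting summaries", ["internal documents"], 12)
print(response_deadline(req))
```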
7. Compliance and regulatory alignment
Map your policy to the regulatory frameworks that apply to your organisation:
- EU AI Act — Article 4 obligations require AI literacy across the organisation. Your policy must demonstrate governance of AI usage.
- GDPR — Any AI processing of personal data requires a lawful basis, data processing agreement, and potentially a DPIA.
- Industry regulations — Financial services, healthcare, and legal sectors have additional requirements. Reference the NIST AI Risk Management Framework or ISO/IEC 42001 if your organisation follows these standards.
8. Enforcement and consequences
Define what happens when the policy is violated. Graduated consequences work best:
- First violation (unintentional): Mandatory AI awareness training session.
- Repeated violations: Escalation to management, restricted AI access.
- Serious breach (data leak): Disciplinary process, potential regulatory notification.
The goal is not punishment — it is behaviour change. Most violations stem from ignorance, not malice.
A policy without consequences is a suggestion. A policy with disproportionate consequences drives behaviour underground. Calibrate your enforcement to encourage transparency, not secrecy.
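If your access-management tooling needs to apply the ladder consistently, the mapping can be encoded directly. A minimal sketch, assuming the three levels above (the Violation and CONSEQUENCES names are illustrative):

```python
from enum import IntEnum

class Violation(IntEnum):
    FIRST_UNINTENTIONAL = 1
    REPEATED = 2
    SERIOUS_BREACH = 3

# Graduated consequences from the list above; wording is illustrative.
CONSEQUENCES = {
    Violation.FIRST_UNINTENTIONAL: "mandatory AI awareness training session",
    Violation.REPEATED: "escalation to management, restricted AI access",
    Violation.SERIOUS_BREACH: "disciplinary process, potential regulatory notification",
}

print(CONSEQUENCES[Violation.REPEATED])
```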
How to enforce your shadow AI policy
Writing the policy is the easy part. Making it stick requires three things:
Training at scale
Every employee needs to understand the policy — not just read it. Role-based training that shows a marketer, a finance analyst, and an HR professional what the policy means for their specific workflows is vastly more effective than a generic compliance slide deck. This is exactly what AI readiness programmes are designed to deliver.
Continuous monitoring
Deploy technical controls alongside the policy:
- Network monitoring to detect traffic to known AI services
- Browser extension audits to identify AI-powered plugins
- DLP (Data Loss Prevention) rules updated for AI-specific patterns (a minimal sketch follows this list)
- Regular anonymous surveys to get an honest picture of actual AI usage
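To make the DLP bullet concrete, here is a minimal sketch of an AI-specific rule, assuming a simple domain-plus-pattern match (the domain set and regexes are illustrative starting points; tune the real rules in your DLP platform):

```python
import re

# Known AI service domains to watch in outbound traffic; extend this set
# from your own network logs -- it is illustrative, not exhaustive.
AI_SERVICE_DOMAINS = {"chatgpt.com", "claude.ai", "gemini.google.com"}

# Hypothetical AI-specific DLP patterns: credentials and bulk personal
# data are the highest-risk things to see inside a prompt.
DLP_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),                    # credentials
    re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),  # email addresses
]

def flag_outbound(domain: str, payload: str) -> bool:
    """Flag traffic to a known AI service whose payload matches a DLP pattern."""
    return domain in AI_SERVICE_DOMAINS and any(p.search(payload) for p in DLP_PATTERNS)

print(flag_outbound("chatgpt.com", "summarise: contact jane.doe@example.com"))  # True
```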
Monitoring without training creates a surveillance culture. Training without monitoring creates a false sense of security. You need both.
Regular policy reviews
AI moves fast. A policy written in January may be outdated by April. Review your shadow AI policy quarterly. Track which tools employees are requesting, which violations are occurring, and whether the approved tools list meets actual needs.
Organisations that combine policy with hands-on training reduce shadow AI incidents three times faster than those relying on policy alone.
From policy to culture
The ultimate goal is not a document — it is a culture where employees instinctively make good decisions about AI usage. That requires:
- Leadership buy-in — executives must visibly follow the same policy
- Approved tools that are genuinely useful — if the approved tools are worse than ChatGPT, employees will use ChatGPT
- Ongoing education — the AI landscape changes monthly, and your workforce’s AI competencies need to keep pace
- Psychological safety — employees should feel comfortable reporting shadow AI usage without fear of punishment
How Brain helps
Brain trains your entire workforce to understand and follow your shadow AI policy — through practical, role-based exercises that take minutes, not days. Employees learn to classify data correctly, choose approved tools, spot shadow AI risks in the enterprise, and respond to real-world scenarios specific to their role.
The result: a policy that lives in practice, not just on paper. Documented compliance with the EU AI Act. And a workforce that is productive with AI instead of dangerous with it.