In February 2024, Air Canada was ordered by a Canadian tribunal to honour a refund policy that never existed, fabricated by its customer-facing chatbot. The airline argued the chatbot was a “separate legal entity.” The tribunal disagreed. The cost was modest. The reputational damage was not.
This is what happens when AI systems are deployed without proper risk assessment. The technology works — until it does not — and without a structured risk evaluation, nobody knows where the failures will come from or how severe they will be.
AI risk assessment is not optional. Under the EU AI Act, providers of high-risk AI systems must establish a risk management system (Article 9). But even for lower-risk systems, a structured risk assessment is the foundation of responsible AI deployment and effective AI governance.
Key takeaways
- AI risk assessment is a structured process to identify, evaluate, and mitigate risks across the AI lifecycle
- The EU AI Act mandates risk management systems for high-risk AI — but all AI deployments benefit from assessment
- Risk categories span technical, legal, ethical, operational, and reputational domains
- A scoring matrix combining likelihood and impact enables prioritisation and resource allocation
Why traditional risk frameworks fall short
Most organisations already have risk management frameworks. The problem is that they were designed for predictable, deterministic systems. AI introduces risks that traditional frameworks do not capture:
- Non-determinism — the same input can produce different outputs at different times
- Opacity — many AI models cannot fully explain their reasoning
- Data dependency — AI risk is inseparable from data quality, bias, and provenance
- Emergent behaviour — AI systems can produce unexpected outputs, particularly when interacting with real-world complexity
- Rapid evolution — the risk profile of an AI system changes as models are updated, fine-tuned, or retrained
You do not need to abandon your existing risk framework. But you do need to extend it with AI-specific risk categories and assessment criteria.
44%
of organisations using AI have experienced at least one negative consequence from AI, including reputational harm
Source: IBM Global AI Adoption Index, 2024
The AI risk assessment methodology
Step 1: Inventory and scope
Before you can assess risk, you need to know what you are assessing. Build or update your AI inventory:
- What AI systems are in use? Include third-party tools, embedded AI features in existing software, and internally developed models
- What is each system’s purpose? Document the business function, intended users, and decision scope
- What data does each system process? Classify by sensitivity — personal data, confidential business data, public data
- Who is affected by each system’s outputs? Employees, customers, the public, vulnerable groups
Do not forget shadow AI. Survey teams to discover AI tools in use that IT has not sanctioned. In most organisations, the shadow AI estate is larger than the official one.
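A structured record beats a spreadsheet here, because the inventory feeds every later step. As a minimal sketch, the field names below are illustrative rather than a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the AI inventory (illustrative fields only)."""
    name: str
    purpose: str                      # business function and decision scope
    owner: str                        # accountable team or role
    vendor: str | None = None         # None for internally developed models
    data_sensitivity: str = "public"  # e.g. "personal", "confidential", "public"
    affected_parties: list[str] = field(default_factory=list)
    sanctioned: bool = True           # False flags shadow AI found in surveys

inventory = [
    AISystemRecord(
        name="Support chatbot",
        purpose="Answer customer refund and booking queries",
        owner="Customer Service",
        vendor="ExampleVendor",       # hypothetical vendor name
        data_sensitivity="personal",
        affected_parties=["customers"],
    ),
]
```

The `sanctioned` flag matters: shadow AI discovered through surveys belongs in the same inventory as approved systems, not a separate list.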
Step 2: Classify risk level
The EU AI Act provides a useful four-tier classification:
- Unacceptable risk — banned outright (social scoring, subliminal manipulation, certain biometric uses)
- High risk — AI systems used in employment, education, credit, essential services, law enforcement, migration, or critical infrastructure
- Limited risk — AI systems that interact with people or generate content (transparency obligations)
- Minimal risk — most other AI applications (no specific obligations beyond Article 4 AI literacy)
Classify each AI system in your inventory. Be conservative — if there is ambiguity about whether a system is high-risk, treat it as high-risk until you have evidence otherwise.
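The conservative default can be made explicit in tooling. A sketch of that logic, with an abbreviated and illustrative list of Annex III contexts; real classification needs legal review of the full Act, and any banned practice should be screened out before this step:

```python
# Abbreviated, illustrative subset of the Act's high-risk use contexts
HIGH_RISK_CONTEXTS = {
    "employment", "education", "credit", "essential_services",
    "law_enforcement", "migration", "critical_infrastructure",
}

def classify(use_contexts: set[str], interacts_with_people: bool) -> str:
    if use_contexts & HIGH_RISK_CONTEXTS:
        return "high"
    if interacts_with_people:
        return "limited"   # transparency obligations apply
    return "minimal"

def classify_conservative(use_contexts: set[str] | None,
                          interacts_with_people: bool) -> str:
    """Unknown or undocumented use contexts default to high risk."""
    if use_contexts is None:
        return "high"
    return classify(use_contexts, interacts_with_people)
```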
Step 3: Identify risks by category
For each AI system, systematically assess risks across five categories:
Technical risks
- Model accuracy and reliability — what is the error rate, and what happens when errors occur?
- Model drift — does performance degrade over time as real-world conditions change?
- Adversarial vulnerability — can the system be manipulated through crafted inputs?
- Integration risks — how does the AI system interact with other systems, and what are the failure modes?
- Scalability — does the system behave differently at scale than in testing?
Legal and regulatory risks
- EU AI Act compliance — does the system meet the obligations for its risk classification?
- GDPR compliance — is personal data processing lawful, with valid legal basis and appropriate safeguards?
- Sector-specific regulation — does the FCA, ICO, or other sector regulator impose additional requirements?
- Contractual obligations — do client contracts restrict AI use or require disclosure?
- Liability — who is responsible if the AI system causes harm?
Ethical risks
- Bias and discrimination — does the system produce different outcomes for different demographic groups?
- Transparency — can users understand how the system reaches its outputs?
- Autonomy — does the system respect human decision-making authority?
- Privacy — does the system collect, infer, or expose information beyond its stated purpose?
Operational risks
- Vendor dependency — what happens if the AI vendor changes terms, increases prices, or ceases operation?
- Skill gaps — do the people operating the system understand its limitations?
- Process disruption — what happens if the AI system fails?
- Shadow AI — are employees using unsanctioned alternatives because the approved system is inadequate?
Reputational risks
- Public trust — would customers or the public object to how this AI system is being used?
- Media exposure — if this system’s operation were reported in the press, would it withstand scrutiny?
- Stakeholder confidence — would investors, regulators, or partners have concerns?
Do not assess risks in isolation. AI risks interact with each other. A technically accurate system can still create reputational risk if it operates opaquely. A legally compliant system can still create operational risk if nobody understands how to use it properly.
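One way to keep those interactions visible is to record each identified risk with its category and explicit links to related risks in other categories. A minimal sketch, with hypothetical example values:

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    TECHNICAL = "technical"
    LEGAL = "legal"
    ETHICAL = "ethical"
    OPERATIONAL = "operational"
    REPUTATIONAL = "reputational"

@dataclass
class Risk:
    system: str          # which inventory entry this risk belongs to
    category: Category
    description: str
    related: list[str]   # interacting risks in other categories

bias_risk = Risk(
    system="CV screening tool",
    category=Category.ETHICAL,
    description="Lower shortlist rates for one demographic group",
    related=["reputational: press scrutiny of hiring outcomes"],
)
```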
Step 4: Score and prioritise
Use a scoring matrix that combines likelihood and impact:
| | Low impact (1) | Medium impact (2) | High impact (3) | Critical impact (4) |
|---|---|---|---|---|
| Very likely (4) | 4 | 8 | 12 | 16 |
| Likely (3) | 3 | 6 | 9 | 12 |
| Possible (2) | 2 | 4 | 6 | 8 |
| Unlikely (1) | 1 | 2 | 3 | 4 |
Impact criteria:
- Low — minor inconvenience, easily corrected, no regulatory or financial consequence
- Medium — noticeable disruption, moderate financial cost, potential regulatory inquiry
- High — significant harm to individuals or the organisation, regulatory action, material financial loss
- Critical — severe harm, regulatory penalties, existential reputational damage
Risk tolerance thresholds:
- 1–3: Accept with monitoring
- 4–6: Mitigate with defined controls
- 8–9: Mitigate urgently, escalate to senior management
- 12–16: Unacceptable — stop or fundamentally redesign the AI system
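The matrix reduces to a multiplication, and the thresholds to a lookup. A sketch of the scoring and banding logic above:

```python
def score(likelihood: int, impact: int) -> int:
    """Likelihood and impact are each rated 1-4, per the matrix above."""
    assert 1 <= likelihood <= 4 and 1 <= impact <= 4
    return likelihood * impact

def action(risk_score: int) -> str:
    """Map a score to the tolerance bands defined above."""
    if risk_score <= 3:
        return "accept with monitoring"
    if risk_score <= 6:
        return "mitigate with defined controls"
    if risk_score <= 9:
        return "mitigate urgently, escalate to senior management"
    return "unacceptable: stop or fundamentally redesign"

# Example: a likely (3) failure with high impact (3) scores 9 and escalates.
print(action(score(3, 3)))
```

Note that products of two 1-4 ratings can never yield 7, 10, or 11, which is why the bands jump from 4-6 to 8-9 to 12-16.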
Step 5: Define mitigations
For each risk above your acceptance threshold, define specific mitigation measures:
- Technical controls — accuracy monitoring, drift detection, adversarial testing, fallback mechanisms
- Process controls — human-in-the-loop review, output verification procedures, approval workflows
- Governance controls — policy requirements, training mandates, audit schedules
- Contractual controls — vendor SLAs, data processing agreements, liability clauses
Document the residual risk after mitigation. If residual risk remains above your tolerance threshold, the AI system should not be deployed.
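A sketch of how that deploy decision can be made explicit, recording residual risk on the same scoring scale; the threshold value and control labels are illustrative:

```python
from dataclasses import dataclass

TOLERANCE_THRESHOLD = 6  # illustrative: scores above this need mitigation

@dataclass
class Mitigation:
    control: str        # e.g. "human-in-the-loop review"
    control_type: str   # "technical", "process", "governance", "contractual"

@dataclass
class AssessedRisk:
    description: str
    inherent_score: int           # likelihood x impact before controls
    mitigations: list[Mitigation]
    residual_score: int           # re-scored after controls are applied

    def deployable(self) -> bool:
        # If residual risk stays above tolerance, the system should not ship.
        return self.residual_score <= TOLERANCE_THRESHOLD
```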
35%
of high-risk AI systems deployed in the EU will require conformity assessment by August 2026
Source: European Commission AI Act Impact Assessment, 2024
Step 6: Monitor and review
AI risk assessment is not a one-time exercise. Establish ongoing monitoring:
- Continuous monitoring — automated performance tracking, accuracy metrics, anomaly detection
- Periodic review — quarterly reassessment of risk scores and mitigation effectiveness
- Trigger-based review — reassessment following AI incidents, model updates, regulatory changes, or significant changes in use patterns
- Annual comprehensive review — full reassessment aligned with your AI governance review cycle
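Trigger-based review is the piece most easily automated. A minimal sketch, assuming hypothetical event names; the point is that incidents and model updates reopen the assessment immediately rather than waiting for the quarterly cycle:

```python
# Hypothetical trigger events that should reopen a risk assessment
REVIEW_TRIGGERS = {
    "ai_incident",
    "model_update",
    "regulatory_change",
    "usage_pattern_shift",
}

def needs_reassessment(event: str, days_since_last_review: int) -> bool:
    """Reassess on any trigger event, or when the quarterly cycle lapses."""
    return event in REVIEW_TRIGGERS or days_since_last_review >= 90
```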
Connecting to the trustworthy AI framework
AI risk assessment is one component of a broader approach to trustworthy AI. The EU High-Level Expert Group’s seven requirements for trustworthy AI (human agency and oversight, technical robustness, privacy and data governance, transparency, fairness, societal wellbeing, and accountability) provide the ethical lens through which risks should be evaluated.
A technically sound risk assessment that ignores fairness or transparency is incomplete. Embed trustworthy AI principles into your risk criteria, not as a separate exercise.
Start with your highest-exposure AI systems — those that process personal data, influence consequential decisions, or are customer-facing. Get the methodology right on these systems, then scale across the portfolio.
Build AI risk awareness across your organisation with Brain
Risk assessment is only as good as the people conducting it. Brain trains your teams to identify, evaluate, and manage AI risks — practical modules covering hallucination detection, data handling, bias awareness, and regulatory obligations. Documented, tracked, and aligned with EU AI Act requirements.
Explore our plans to get started.