In May 2023, a New York lawyer submitted a brief containing six fabricated case citations — all generated by ChatGPT. The judge was unimpressed. The lawyer was sanctioned. The case became the most-cited example of what happens when legal professionals use AI without understanding its limitations.
Two years on, the legal profession has not retreated from AI. It has accelerated towards it. But the lawyers getting it right are those who treat AI as a powerful, unreliable assistant that requires supervision — not a substitute for legal judgement.
Key takeaways
- AI is being used in contract review, legal research, due diligence, and document management — with measurable productivity gains
- The SRA expects solicitors to understand the AI tools they use and remain responsible for their outputs
- Hallucination risk in legal AI is a professional conduct issue, not just an accuracy problem
- The Law Society recommends a risk-based approach: assess each use case individually, do not adopt blanket bans
Where AI is delivering value in legal practice
Contract review and analysis
Contract review is the most mature and widely adopted AI use case in legal. The productivity gains are substantial and well-documented.
Automated clause extraction. AI tools like Kira Systems, Luminance, and iManage Extract identify and extract key clauses — change of control, indemnity, limitation of liability, termination, assignment — from large volumes of contracts in minutes rather than days.
Risk identification. AI flags unusual, missing, or non-standard clauses against a baseline of expected terms. This is particularly valuable in M&A due diligence, where legal teams must review hundreds or thousands of contracts under time pressure.
Comparison and benchmarking. AI compares contract terms against precedent banks, market standards, and regulatory requirements, highlighting deviations that require human attention.
Thomson Reuters reports that AI-assisted contract review reduces review time by 60-80% for routine contracts, while improving consistency and reducing the risk of missed clauses.
60-80%
reduction in contract review time when AI tools assist with clause extraction and risk identification
Source: Thomson Reuters Legal Technology Report, 2025
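The extraction step these tools perform can be illustrated with a deliberately simplified sketch. Commercial platforms such as Kira and Luminance use trained models rather than pattern lists; the keyword patterns below are illustrative assumptions only, chosen to show the shape of the task (find clause types, return the triggering text for human review):

```python
import re

# Hypothetical clause categories and trigger phrases. Real tools use
# trained models; these regexes are illustrative assumptions only.
CLAUSE_PATTERNS = {
    "change of control": r"change\s+of\s+control",
    "indemnity": r"indemnif(y|ies|ication)",
    "limitation of liability": r"limitation\s+of\s+liability",
    "termination": r"terminat(e|ion)",
    "assignment": r"\bassign(ment)?\b",
}

def find_clauses(contract_text: str) -> dict[str, list[str]]:
    """Return each detected clause type with the sentences that triggered it."""
    # Naive sentence split on full stops and semicolons.
    sentences = re.split(r"(?<=[.;])\s+", contract_text)
    hits: dict[str, list[str]] = {}
    for label, pattern in CLAUSE_PATTERNS.items():
        matched = [s for s in sentences if re.search(pattern, s, re.IGNORECASE)]
        if matched:
            hits[label] = matched
    return hits

sample = (
    "The Supplier shall indemnify the Customer against third-party claims. "
    "Either party may terminate this Agreement on 30 days' notice; "
    "neither party may assign this Agreement without consent."
)
print(sorted(find_clauses(sample)))  # ['assignment', 'indemnity', 'termination']
```

The point of the structure, which real tools share, is that the output is not a verdict but a set of flagged passages routed to a lawyer for judgement.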
Legal research
Legal research was the first area where AI tools showed clear promise — and also where the risks became most visible.
Case law analysis. AI tools search and analyse case law more comprehensively than traditional keyword searches, identifying relevant precedents across jurisdictions. Tools like vLex Vincent AI, Casetext (now part of Thomson Reuters), and Lexis+ AI go beyond simple search to provide analysis and summaries.
Regulatory monitoring. AI tracks regulatory changes across jurisdictions and alerts legal teams to developments relevant to their clients or organisation. For firms advising on EU AI Act compliance, this is increasingly essential.
Legislative analysis. AI summarises complex legislation, identifies key obligations, and maps them to organisational activities. This is valuable for in-house teams managing compliance across multiple regulatory frameworks.
Every legal AI research tool carries hallucination risk. AI can and does fabricate case citations, misstate holdings, and conflate different legal principles. Human verification of every AI-generated legal research output is a professional obligation, not a best practice.
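One way to operationalise that obligation is to treat every citation in AI output as unverified until a human has checked it against an authoritative database. The sketch below is a hypothetical workflow, not a product: the regex is a rough heuristic for "Party v Party" case names, and a real pipeline would use a proper citation parser. The example citation Varghese v China Southern Airlines is one of the fabricated cases from the 2023 New York matter.

```python
import re

# Heuristic pattern for "Party v Party" case names. A real workflow would
# use a dedicated citation parser; the principle is the same either way:
# every citation an AI produces starts life as UNVERIFIED.
CASE_RE = re.compile(
    r"\b[A-Z][A-Za-z&'-]*(?: [A-Z][A-Za-z&'-]*)*"
    r" v\.? "
    r"[A-Z][A-Za-z&'-]*(?: [A-Z][A-Za-z&'-]*)*"
)

def verification_queue(ai_output: str) -> list[dict]:
    """List every case citation found, flagged for mandatory human check."""
    return [
        {"citation": m.group(0), "status": "UNVERIFIED"}
        for m in CASE_RE.finditer(ai_output)
    ]

draft = (
    "As held in Varghese v China Southern Airlines, the limitation "
    "period may be tolled. See also Donoghue v Stevenson."
)
for item in verification_queue(draft):
    print(item)
```

Nothing leaves the queue until a named lawyer has confirmed the case exists and says what the draft claims it says, which is exactly the supervision standard the SRA applies to work delegated to a junior.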
Due diligence
AI has transformed the economics and speed of due diligence, particularly in M&A transactions.
Document review. AI categorises, prioritises, and extracts information from thousands of documents in virtual data rooms. What once required armies of junior lawyers working around the clock can now be completed in a fraction of the time.
Risk flagging. AI identifies potential issues — litigation exposure, regulatory non-compliance, material contract risks, intellectual property concerns — and escalates them for human review.
Reporting. AI generates structured due diligence reports from reviewed materials, reducing the time between review completion and client delivery.
Allen & Overy (now A&O Shearman) was among the first Magic Circle firms to deploy AI at scale through its Harvey AI platform. Clifford Chance, Linklaters, and Freshfields have followed with their own implementations.
Document management and knowledge
Precedent retrieval. AI searches firm knowledge bases to surface relevant precedents, templates, and prior advice. This addresses one of the oldest problems in legal practice: the partner who has done this transaction before but cannot find the documents.
Document drafting assistance. AI generates first drafts of routine documents — NDAs, board minutes, standard correspondence — saving time on work that adds limited value but still requires legal accuracy.
The regulatory and professional landscape
SRA guidance
The Solicitors Regulation Authority (SRA) has not banned or restricted AI use. Its position, set out in guidance updated in 2025, is clear:
- Competence. Solicitors must understand the AI tools they use well enough to supervise them effectively (SRA Competence Statement, paragraph 2).
- Responsibility. The solicitor remains personally responsible for the accuracy and quality of work, regardless of whether AI was involved in producing it.
- Client confidentiality. Solicitors must ensure that AI tools do not compromise client confidentiality. This means understanding where data goes, how it is processed, and whether it is retained.
- Supervision. AI outputs must be supervised to the same standard as work delegated to a junior lawyer.
78%
of UK law firms report using or piloting AI tools, but only 31% have a formal AI governance policy
Source: Law Society Technology & AI Survey, 2025
Law Society position
The Law Society of England and Wales has taken a pragmatic position. Its 2025 report on AI in legal practice recommends:
- A risk-based approach to AI adoption — not blanket bans or blanket adoption
- Training for all legal professionals who use AI tools
- Governance frameworks that address AI-specific risks alongside existing information governance
- Transparency with clients about AI usage where material
The report explicitly acknowledges that firms that fail to adopt AI risk becoming uncompetitive — and that responsible adoption is both a professional and commercial imperative.
EU AI Act implications
For UK firms advising EU clients or operating in the EU, the EU AI Act creates additional obligations. AI systems used in the administration of justice or legal interpretation are classified as high-risk, requiring compliance documentation, risk management, and human oversight.
Even UK-only firms should understand the Act’s implications — many expect UK regulation to follow a similar direction. Our guide to AI governance frameworks covers the broader compliance landscape.
Risks that legal teams must address
Hallucinations and accuracy
The defining risk of AI in legal practice. Large language models do not “know” the law. They predict likely text sequences. This means they can — and routinely do — fabricate case citations, misstate legal principles, and produce plausible-sounding but entirely wrong analysis.
The professional consequences are severe. The New York sanctions were only the beginning. Firms must establish mandatory verification protocols for all AI-generated legal content.
Client confidentiality
Submitting client information to AI tools raises immediate confidentiality concerns. Questions legal teams must answer:
- Is client data sent to external servers?
- Is it used to train models?
- Who at the AI provider can access it?
- Is it stored, and for how long?
- Does this comply with retainer terms and professional obligations?
Enterprise AI deployments with data isolation (such as Azure OpenAI Service or private instances) address some of these concerns but require careful data privacy assessment.
The SRA has been clear: using a free-tier AI tool to process client data is likely a breach of professional obligations. Enterprise-grade tools with appropriate data protection guarantees are the minimum standard.
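Alongside tool selection, some teams add a pre-submission redaction step so obvious client identifiers never leave the firm in the first place. The sketch below is a minimal illustration under stated assumptions: the patterns and the `redact` helper are hypothetical, and a real deployment would rely on named-entity recognition, matter-number patterns, and DLP tooling rather than two regexes.

```python
import re

# Minimal illustration of pre-submission redaction: strip obvious client
# identifiers before text leaves the firm. Real deployments need far more
# (named-entity recognition, matter-number patterns, DLP tooling).
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "[PHONE]": re.compile(r"\+?\d[\d \-()]{8,}\d"),
}

def redact(text: str, client_names: list[str]) -> str:
    """Replace emails, phone numbers, and known client names with placeholders."""
    # Generic patterns first, then known client names (case-insensitive).
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    return text

note = "Email j.smith@acmeco.com or call +44 20 7946 0958 re Acme Ltd dispute."
print(redact(note, ["Acme Ltd"]))
# Email [EMAIL] or call [PHONE] re [CLIENT] dispute.
```

Redaction reduces exposure but does not remove the need for the contractual and technical guarantees above: context can still identify a client even when names are stripped.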
Bias in legal AI
AI tools trained on historical legal data may embed existing biases — in sentencing patterns, risk assessments, or outcome predictions. For legal teams using AI in advisory or decision-support roles, bias awareness and monitoring are essential.
Professional indemnity insurance
Firms should confirm with their PI insurers that AI-assisted work is covered under existing policies. Some insurers are beginning to ask about AI governance practices as part of renewal assessments.
Getting started responsibly
- Audit current usage. Map where AI is already being used — formally and informally. Shadow AI is prevalent in legal teams.
- Establish an AI policy. Define approved tools, acceptable use cases, supervision requirements, and data handling rules. See our AI policy template guide.
- Train your people. Every lawyer and support professional using AI needs training on capabilities, limitations, and professional obligations.
- Start with low-risk use cases. Document review, research support, and drafting assistance — with mandatory human verification.
- Build governance gradually. Develop your AI governance framework in line with the complexity of your AI adoption.
Build AI-competent legal teams with Brain
Brain is the AI training platform that helps legal teams develop the AI competency the SRA expects and the market demands. Its role-specific modules cover AI fundamentals, hallucination awareness, data protection, and professional conduct obligations, with completion tracking for compliance documentation.
Whether you are preparing your firm for AI governance requirements or building practical AI skills across the team, Brain gets your people ready. Explore our plans to get started.
Related articles
AI for Finance: FP&A, Audit & Compliance Guide 2026
Improve forecasting and audit quality with AI. Covers FP&A automation, fraud detection, compliance monitoring, with FCA references and Big 4 examples.
AI for CFOs: Budget, Forecast & Govern AI Investments
Quantify AI ROI, manage risk, and report to the board with confidence. A strategic AI guide for finance leaders and CFOs.
AI for HR Teams: Recruitment & L&D Guide (UK, 2026)
Hire smarter and personalise learning with AI. Covers recruitment, employee engagement, L&D, and workforce planning with UK examples and CIPD research.