In February 2023, the US Copyright Office ruled that images created by the AI tool Midjourney could not be copyrighted because they were not the product of human authorship. In November 2023, the Beijing Internet Court in China reached the opposite conclusion, granting copyright to an AI-generated image on the basis that the user's creative choices in prompting constituted authorship. In December 2024, a UK tribunal began hearing arguments about whether AI training on copyrighted works constitutes fair dealing.
Welcome to the most uncertain area of intellectual property law in a generation. For businesses using AI to generate content, make decisions, or develop products, the copyright landscape is a minefield of unresolved questions, conflicting jurisdictions, and active litigation.
This article does not pretend those questions are settled. What it does is explain the current state of play, identify the risks, and provide practical guidelines for navigating AI copyright issues in your organisation.
Key takeaways
- AI-generated content is not clearly protectable by copyright in most jurisdictions — the law is unsettled
- Training AI on copyrighted works is the subject of major ongoing litigation worldwide
- The EU AI Act requires transparency about copyrighted training data for general-purpose AI models
- Businesses should implement clear IP policies covering AI inputs, outputs, and disclosure
The ownership question: who owns AI outputs?
The fundamental copyright question is deceptively simple: if an AI system generates a piece of text, an image, code, or a design — who owns it?
Current legal positions by jurisdiction:
United Kingdom. UK copyright law (Copyright, Designs and Patents Act 1988, Section 9(3)) is unusual in that it provides for copyright in “computer-generated” works. The author is deemed to be “the person by whom the arrangements necessary for the creation of the work are undertaken.” This potentially covers AI-generated outputs, but the provision was drafted in the 1980s for much simpler computer programs. How it applies to generative AI is untested in UK courts and the subject of active legal debate.
European Union. EU copyright law generally requires human intellectual creation. The Court of Justice of the European Union (CJEU) has consistently held that copyright requires “the author’s own intellectual creation” — the expression of free and creative choices. Purely AI-generated content without significant human creative input is unlikely to qualify.
United States. The US Copyright Office has issued guidance (2023) stating that works generated by AI without human authorship are not copyrightable. However, works that combine AI-generated elements with sufficient human creative expression may qualify for partial protection. The key question is the degree of human creative control.
$2bn+
estimated value of AI copyright litigation currently pending in US courts alone
Source: Stanford HAI AI Index Report, 2025
The training data question: can AI learn from copyrighted works?
This is where the money is. Multiple major lawsuits are testing whether AI companies can train models on copyrighted works without permission or payment:
Key cases:
- The New York Times v. OpenAI and Microsoft (filed December 2023) — the Times alleges that ChatGPT was trained on millions of its copyrighted articles and can reproduce them substantially. OpenAI argues fair use.
- Getty Images v. Stability AI (filed 2023, UK and US) — Getty alleges Stable Diffusion was trained on 12 million copyrighted images without licence or consent.
- Authors Guild v. OpenAI (filed 2023) — representing thousands of authors whose books were allegedly used as training data.
- Concord Music v. Anthropic (filed October 2023) — music publishers alleging AI models reproduce copyrighted lyrics.
None of these cases has produced a final ruling. The outcomes will shape AI copyright law for decades.
The legal arguments:
For AI companies: Training is transformative fair use (US) or fair dealing (UK). AI models learn patterns and principles from training data, similar to how humans learn from reading. The outputs are new works, not copies.
For rights holders: Training involves making copies of copyrighted works, which requires permission. AI outputs that compete with the original works cause economic harm. The scale of copying — billions of works — exceeds any reasonable interpretation of fair use.
Do not assume that because these cases are unresolved, there is no risk. Using AI tools that were trained on copyrighted data exposes your organisation to potential claims — particularly if AI outputs closely resemble existing copyrighted works. The risk is real and present, even without legal certainty.
What the EU AI Act says about copyright
The EU AI Act addresses AI copyright primarily through obligations on providers of general-purpose AI (GPAI) models:
- Article 53 requires GPAI providers to draw up and make publicly available a “sufficiently detailed summary about the content used for training,” including information about copyrighted training data
- Copyright opt-out: The EU’s Digital Single Market Directive (2019/790) permits text and data mining (TDM) of copyrighted works under two exceptions: Article 3 covers TDM for scientific research, while Article 4 extends TDM to other purposes, including commercial use — but only where the rights holder has not expressly reserved their rights (opted out). The AI Act requires GPAI providers to respect these opt-outs when training models
For UK organisations, the copyright landscape is less codified. The UK’s Intellectual Property Office (IPO) consulted on an expanded TDM exception for AI training in 2022 but shelved the proposal after significant opposition from creative industries. The current legal position in the UK remains governed by the existing (narrow) fair dealing exceptions.
Practical risks for businesses
Risk 1: your AI outputs may not be protectable
If you use AI to generate marketing copy, design assets, code, or reports, that content may not be protectable by copyright. This means:
- Competitors could freely copy AI-generated content you publish
- You cannot enforce intellectual property rights over purely AI-generated outputs
- The commercial value of AI-generated content as an asset is uncertain
Mitigation: Ensure significant human creative input in any AI-assisted work that you intend to protect as intellectual property. Document the human contribution.
Risk 2: AI outputs may infringe existing copyrights
Generative AI can produce outputs that are substantially similar to copyrighted works in its training data. If your organisation publishes or commercialises such outputs, you could face infringement claims.
Mitigation: Review AI-generated content for similarity to known copyrighted works. Implement an AI output review process, particularly for visual content, marketing copy, and code.
Risk 3: your input data may create liability
If you fine-tune AI models or use retrieval-augmented generation (RAG) with copyrighted material, you may be creating derivative works or enabling copyright infringement.
Mitigation: Use only properly licensed data for fine-tuning and RAG. Maintain clear documentation of data sources and licences.
Risk 4: contractual exposure
Client contracts, licensing agreements, and terms of service may restrict the use of AI-generated content or require disclosure. Breach of these obligations can create liability even where copyright law itself is unclear.
Mitigation: Review existing contracts for AI-related obligations. Update standard terms to address AI use.
61%
of organisations have no clear policy on intellectual property rights for AI-generated content
Source: WIPO AI and IP Survey, 2025
Practical guidelines for your organisation
1. Establish an AI IP policy
Your AI policy should include explicit provisions on intellectual property:
- Define how AI-generated content is classified (fully AI-generated, AI-assisted with human input, human-created with AI tools)
- Specify IP ownership for each category
- Require documentation of human creative contribution for content intended to be IP-protected
- Restrict the use of copyrighted third-party material as AI inputs without legal review
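The three content categories above can be encoded directly in internal tooling so that each work product carries its classification from creation onwards. The sketch below is illustrative only — the category names and the documentation rule are assumptions drawn from the policy outline, not established legal terms.

```python
from enum import Enum

class AIContentClass(Enum):
    """Illustrative content categories mirroring the policy outline.
    These names are assumptions for this sketch, not legal terms."""
    FULLY_AI_GENERATED = "fully_ai_generated"    # no human creative input
    AI_ASSISTED = "ai_assisted"                  # AI output shaped by human creative input
    HUMAN_WITH_AI_TOOLS = "human_with_ai_tools"  # human-created, AI used as a tool

def requires_contribution_log(category: AIContentClass) -> bool:
    """Per the policy sketch: any work with human creative involvement that
    may be claimed as IP must carry documentation of that contribution."""
    return category in (
        AIContentClass.AI_ASSISTED,
        AIContentClass.HUMAN_WITH_AI_TOOLS,
    )
```

Tagging content at the point of creation, rather than retrospectively, makes the documentation requirement in guideline 2 enforceable rather than aspirational.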
2. Document the human contribution
For any AI-assisted work that may need copyright protection:
- Record the prompts, creative direction, and editorial decisions made by the human creator
- Document revisions, selections, and modifications made to AI outputs
- Maintain version history showing the evolution from AI output to final work
- Ensure the human contribution is substantial and creative, not merely mechanical
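A contribution log of this kind can be as simple as an append-only record per work product. The following is a minimal sketch; the field names and the `ai_generation` action label are assumptions chosen for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContributionRecord:
    """One documented intervention in an AI-assisted work product."""
    author: str
    action: str       # e.g. "prompt", "selection", "revision", "ai_generation"
    description: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class WorkProvenance:
    """Append-only version history for a single work product."""
    work_id: str
    records: list = field(default_factory=list)

    def log(self, author: str, action: str, description: str) -> None:
        self.records.append(ContributionRecord(author, action, description))

    def human_actions(self) -> list:
        """Entries other than the raw AI generation step — the evidence of
        substantial, creative human contribution."""
        return [r for r in self.records if r.action != "ai_generation"]
```

A record like this, kept from first prompt to final edit, is exactly the evidence of "free and creative choices" that the human-authorship test described earlier turns on.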
3. Audit your AI tools
Understand the copyright implications of the AI tools you use:
- What was the model trained on? Review the provider’s training data disclosures (required under EU AI Act Article 53 for GPAI models)
- What do the terms of service say about IP ownership of outputs?
- Does the provider offer indemnification for copyright infringement claims?
- What data do you input into the tool, and what rights does the provider claim over it?
4. Implement output review processes
Before publishing or commercialising AI-generated content:
- Check for substantial similarity to known copyrighted works
- Use plagiarism detection tools (noting their limitations with AI-generated content)
- Review visual content for watermarks, style replication, or recognisable elements from copyrighted sources
- Maintain records of the review process
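A first-pass similarity screen for text outputs can be automated. The sketch below uses Python's standard-library `difflib`; note that it measures surface-level character overlap only, so it will not catch paraphrase, translated copying, or visual/style similarity — it is a triage filter, not a clearance tool, and the threshold value is an assumption to be tuned per content type.

```python
import difflib

def similarity_ratio(candidate: str, reference: str) -> float:
    """Rough lexical similarity between an AI output and a known work,
    in the range 0.0 (no overlap) to 1.0 (identical)."""
    return difflib.SequenceMatcher(
        None, candidate.lower(), reference.lower()
    ).ratio()

def flag_for_review(candidate: str, known_works: dict,
                    threshold: float = 0.6) -> list:
    """Return IDs of known works similar enough to warrant human review.
    The 0.6 threshold is an illustrative assumption, not a legal standard."""
    return [
        work_id
        for work_id, text in known_works.items()
        if similarity_ratio(candidate, text) >= threshold
    ]
```

Anything this filter flags goes to human review; anything it clears still needs the manual checks listed above, since substantial similarity in the legal sense is a qualitative judgment.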
AI copyright law will continue to evolve rapidly. Build flexibility into your IP policies and review them at least twice yearly. What is acceptable practice today may become legally problematic tomorrow — and vice versa.
5. Disclose AI use appropriately
Transparency about AI use builds trust and reduces legal risk:
- Disclose AI use in content creation where required by regulation, contract, or professional standards
- Maintain internal records of all AI-assisted work products
- Be prepared to demonstrate the degree of human involvement if challenged
The governance connection
AI copyright risk is one dimension of broader AI governance. Your governance framework should integrate IP considerations into:
- AI procurement decisions (evaluating tools for IP risk)
- AI deployment approvals (assessing IP implications of specific use cases)
- Training programmes (ensuring employees understand IP risks when using AI)
- Incident response (handling potential infringement claims)
Navigate AI copyright with confidence
The legal landscape is uncertain, but inaction is not an option. Brain trains your teams to use AI responsibly — including understanding copyright risks, data handling obligations, and output verification. Practical modules tailored to your organisation’s AI use cases, with documented completion for compliance purposes.
Explore our plans to get started.