In March 2025, a US federal court ruled that AI-generated artwork cannot receive copyright protection without meaningful human authorship. Six months later, the European Patent Office confirmed that AI systems cannot be named as inventors on patent applications. Meanwhile, billions of dollars in training data lawsuits remain unresolved across multiple jurisdictions.
For businesses that use AI daily — to draft reports, generate images, write code, or summarise documents — these rulings are not abstract legal theory. They determine whether your AI outputs are legally protectable, whether your AI inputs create liability, and whether your employees understand the difference.
This guide covers what you need to know about AI copyright and intellectual property, and what you need to do about it.
Key takeaways
- AI-generated content without significant human input is unlikely to receive copyright protection in most jurisdictions
- Using copyrighted material as AI input (prompts, fine-tuning, RAG) can create infringement liability for your organisation
- Employee-created AI outputs need clear IP ownership rules in employment contracts and AI policies
- The EU AI Act introduces specific transparency obligations around copyrighted training data
- Practical policies and documentation processes reduce legal exposure even while the law remains unsettled
Who owns AI-generated content?
The central question in AI copyright is straightforward: when an AI system produces text, an image, code, or a design, who holds the intellectual property rights?
The answer depends on jurisdiction, the degree of human involvement, and the specific tool used — and in many cases, the answer is simply “nobody.”
United States. The US Copyright Office has maintained since 2023 that works produced by AI without human authorship are not copyrightable. Works combining AI-generated elements with substantial human creative expression may qualify for partial protection. The key test is whether a human made creative choices that shaped the final output.
European Union. EU copyright law requires “the author’s own intellectual creation” — the expression of free and creative choices by a natural person. Purely AI-generated content almost certainly falls outside this definition. The CJEU has been consistent: no human author, no copyright.
United Kingdom. UK law is unusual. Section 9(3) of the Copyright, Designs and Patents Act 1988 provides for copyright in “computer-generated” works, with authorship attributed to “the person by whom the arrangements necessary for the creation of the work are undertaken.” This provision — drafted decades before generative AI — potentially extends copyright protection to AI outputs. However, it has not been tested in court for modern generative AI, and its scope remains genuinely uncertain.
78% of in-house legal teams say their organisation lacks clear IP guidelines for AI-generated work products (Source: ACC Chief Legal Officers Survey, 2025).
What this means in practice
If your marketing team generates campaign visuals with AI, those assets may be freely copied by competitors. If your engineering team generates code with an AI assistant, the copyright status of that code is unclear. If your content team publishes AI-drafted articles, you may not be able to enforce exclusivity.
The practical response is not to stop using AI. It is to ensure that humans contribute meaningful creative input to any work product your organisation intends to protect — and to document that contribution.
Training data: the billion-dollar question
The largest AI copyright disputes are not about who owns AI outputs. They are about whether AI companies had the right to train on copyrighted works in the first place.
Major active cases include The New York Times v. OpenAI, Getty Images v. Stability AI, and the Authors Guild class action. None has produced a final ruling. The legal arguments on each side are substantive:
The fair use argument (AI companies). Training is transformative — the model learns patterns and statistical relationships, not specific works. The outputs compete in different markets from the training data. This is analogous to how humans learn by reading.
The infringement argument (rights holders). Training involves making unauthorised copies of copyrighted works at massive scale. AI outputs can substitute for and compete with original works, causing economic harm. The commercial purpose undermines any fair use claim.
For UK organisations, the fair dealing defence is narrower than US fair use. The UK Intellectual Property Office proposed an expanded text and data mining exception for AI in 2022 but withdrew it after creative industry opposition. The current position offers limited comfort to AI developers or users.
Your organisation does not need to be an AI developer to face training data liability. If you fine-tune models on copyrighted content, build RAG systems using proprietary data, or use AI tools whose training data provenance is uncertain, you carry exposure. Review your AI risk assessment accordingly.
The EU AI Act and copyright transparency
The EU AI Act introduces specific obligations relevant to AI copyright:
- Article 53 requires providers of general-purpose AI models to publish a sufficiently detailed summary of training data content, including information about copyrighted material used
- Copyright opt-out mechanism: The Digital Single Market Directive (2019/790) permits text and data mining unless rights holders explicitly opt out. The AI Act reinforces this — if a rights holder has reserved their rights, AI providers must respect that reservation
- Downstream obligations: Deployers of AI systems must understand what their tools were trained on, particularly when outputs may intersect with protected content
For UK businesses, the AI regulation landscape is less prescriptive but evolving. The UK government’s pro-innovation approach does not eliminate copyright risk — it simply means the rules are less codified and more dependent on existing case law.
Employee IP and AI: the overlooked risk
Most employment contracts and IP assignment clauses were drafted before generative AI. They typically assign to the employer all intellectual property created by the employee “in the course of employment.” But these clauses may not adequately cover scenarios where:
- An employee uses a personal AI account for work tasks — who owns the output?
- An AI tool’s terms of service claim rights over user inputs or outputs
- The AI-generated work cannot receive copyright protection, making the assignment clause meaningless
- An employee inadvertently inputs confidential company data into an external AI tool
Updating your employment framework
Your AI policy and employment contracts should address:
- Approved tools only. Employees must use sanctioned AI tools with reviewed terms of service. This reduces shadow AI risk.
- Input restrictions. Confidential data, client information, and proprietary content must not be entered into external AI tools without authorisation. Your data privacy framework should govern this.
- Output classification. Define categories — fully AI-generated, AI-assisted with human input, human-created with AI tools — and assign ownership and protection status to each.
- Documentation requirements. For work intended to be IP-protected, employees must document their creative contribution: prompts, editorial decisions, revisions, and selection choices. A minimal record format is sketched after this list.
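To make the last two items concrete, here is a minimal sketch in Python of what a classification-and-contribution record could look like. The category names, fields, and tool name are illustrative assumptions, not a prescribed standard; adapt them to your own policy.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class OutputClass(Enum):
    # Illustrative categories; align these with your own AI policy.
    FULLY_AI_GENERATED = "fully-ai-generated"            # likely not protectable
    AI_ASSISTED = "ai-assisted-with-human-input"         # protection depends on contribution
    HUMAN_WITH_AI_TOOLS = "human-created-with-ai-tools"  # strongest claim to protection


@dataclass
class ContributionRecord:
    # Documents the human creative contribution behind one work product.
    work_id: str
    classification: OutputClass
    tool_used: str
    author: str
    created: date
    prompts: list[str] = field(default_factory=list)
    editorial_decisions: list[str] = field(default_factory=list)


# Example: a marketing asset drafted with AI, then substantially reworked by a human
record = ContributionRecord(
    work_id="campaign-2026-q1-hero",      # hypothetical identifier
    classification=OutputClass.AI_ASSISTED,
    tool_used="approved-image-tool",      # hypothetical tool name
    author="j.smith",
    created=date(2026, 1, 15),
    prompts=["initial concept prompt"],
    editorial_decisions=["selected 1 of 12 variants", "recomposed layout", "rewrote tagline"],
)
```

Storing records like this alongside the work product gives you the documentation trail you will need if protectability is ever challenged.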
$2bn+ in AI copyright litigation is pending in US federal courts alone (Source: Stanford HAI AI Index Report, 2025).
Fair use and fair dealing: what businesses get wrong
Many organisations assume that internal use of AI tools is inherently low-risk because outputs are not published externally. This is incorrect for several reasons:
- Internal reports and analyses generated by AI may incorporate copyrighted material from training data, creating liability even if never published
- Client deliverables generated with AI assistance may breach contractual obligations requiring original work
- Code generated by AI may replicate copyrighted code from training data, potentially violating open-source licences or proprietary terms
- AI-generated content used in pitches or proposals may not be protectable, meaning competitors who see it can reuse it freely
Fair use (US) and fair dealing (UK) are defences, not permissions. They are fact-specific, unpredictable, and expensive to litigate. Do not build your AI strategy around the assumption that your use qualifies.
Build a simple decision tree for your teams: Is the AI output going to be published? Shared with clients? Used in a product? Protected as IP? Each “yes” triggers a review step. Train your employees on this process — it takes 15 minutes and prevents months of legal headaches.
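As a rough illustration, that decision tree can be expressed in a few lines of code. This is a minimal sketch assuming four yes/no trigger questions and illustrative step names; your own review steps will differ.

```python
def review_steps_required(published: bool, shared_with_clients: bool,
                          used_in_product: bool, protected_as_ip: bool) -> list[str]:
    # Each "yes" answer triggers a review step before the output moves forward.
    steps = []
    if published:
        steps.append("run a similarity/plagiarism check before publication")
    if shared_with_clients:
        steps.append("confirm contractual originality obligations are met")
    if used_in_product:
        steps.append("run a licence compliance check (especially for code)")
    if protected_as_ip:
        steps.append("document the human creative contribution")
    return steps


# An AI-assisted report that will be published and shared with a client:
for step in review_steps_required(published=True, shared_with_clients=True,
                                  used_in_product=False, protected_as_ip=False):
    print(step)
```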
Building a practical AI copyright policy
A robust approach to AI copyright does not require waiting for the law to settle. It requires clear internal processes that reduce risk regardless of how courts ultimately rule.
Step 1: Audit your current AI use
Map which teams use AI, which tools they use, and what outputs they generate. Your AI governance framework should capture this (a lightweight inventory sketch follows the list below). Pay particular attention to:
- Content creation (marketing, communications, social media)
- Software development (code generation, testing)
- Client-facing deliverables (reports, presentations, analysis)
- Internal documentation and knowledge management
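One lightweight way to capture this map is a shared inventory that legal, IT, and team leads can review together. The sketch below is illustrative only; the field names and tool names are assumptions to replace with your own.

```python
# A minimal AI-use inventory; extend the fields to match your governance framework.
ai_use_inventory = [
    {
        "team": "marketing",
        "tool": "image-generation-tool",  # hypothetical tool name
        "outputs": ["campaign visuals", "social media assets"],
        "external_facing": True,
        "ip_sensitive": True,
    },
    {
        "team": "engineering",
        "tool": "code-assistant",         # hypothetical tool name
        "outputs": ["production code", "tests"],
        "external_facing": False,
        "ip_sensitive": True,
    },
]

# Queue the riskiest entries for the terms-of-service review in Step 2.
for row in ai_use_inventory:
    if row["external_facing"] or row["ip_sensitive"]:
        print(f"Review terms for {row['tool']} (used by {row['team']})")
```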
Step 2: Review AI tool terms of service
Not all AI tools treat IP the same way. Some grant users full ownership of outputs. Others retain broad licences. Some disclaim all liability for infringement. Review and compare the terms for every AI tool your organisation uses.
Step 3: Establish output review processes
Before publishing or commercialising AI-generated content, implement a review step. Check for similarity to known copyrighted works. Use plagiarism detection tools. For code, run licence compliance checks. Document the review.
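The documentation step lends itself to light automation. Below is a minimal sketch, using only the Python standard library, of appending each review outcome to an audit log; the check names, work identifier, and file path are assumptions.

```python
import json
from datetime import datetime, timezone


def log_output_review(work_id: str, reviewer: str, checks: dict[str, bool],
                      log_path: str = "ai_output_reviews.jsonl") -> None:
    # Append one review record to a JSON Lines audit log.
    entry = {
        "work_id": work_id,
        "reviewer": reviewer,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "checks": checks,                  # each review step and its result
        "approved": all(checks.values()),  # approve only if every check passed
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


# Example: AI-generated code that passed the plagiarism scan but failed licence checks
log_output_review(
    work_id="feature-auth-module",  # hypothetical work identifier
    reviewer="j.smith",
    checks={"plagiarism_scan": True, "licence_compliance": False},
)
```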
Step 4: Train your teams
Copyright literacy is a core AI competency. Your employees do not need to become lawyers, but they do need to understand:
- Why AI outputs may not be copyright-protected
- What data they can and cannot use as AI inputs
- When to flag potential IP issues
- How to document human creative contribution
Step 5: Monitor and adapt
AI copyright law is evolving rapidly. Major court rulings, regulatory guidance, and new legislation are expected throughout 2026 and 2027. Assign responsibility for monitoring developments and updating your policies at least twice yearly. Your AI readiness assessment should include IP preparedness as a dimension.
The bottom line
AI copyright is not a problem you can solve once and forget. It is an ongoing governance responsibility that sits at the intersection of legal, operational, and strategic risk. The organisations that manage it well will be those that:
- Accept the uncertainty rather than ignoring it
- Build flexible policies that work regardless of how the law develops
- Train their people to make informed decisions about AI inputs and outputs
- Document everything
Brain helps organisations build AI-ready teams that understand not just how to use AI tools, but how to use them responsibly — including copyright, data handling, and compliance. Its practical training modules are tailored to your sector and risk profile.