The Ownership Question That Nobody Has Definitively Answered
Who owns the output of an AI system? This question has generated more legal uncertainty per dollar of economic activity than perhaps any other intellectual property issue in history. In 2026, the answer remains frustratingly nuanced: it depends on the jurisdiction, the degree of human involvement, the type of output, and increasingly, the specific AI system used. But the contours of the legal landscape are becoming clearer, and businesses that fail to understand them are operating with unquantified IP risk.
The economic stakes are staggering. Generative AI tools produced an estimated 12 billion pieces of content in 2025 — images, text, code, music, video. The market value of AI-generated content exceeded $47 billion. Yet the ownership status of much of this output remains legally ambiguous. Companies are building products, marketing campaigns, and even core business processes on AI-generated content without clear title to the underlying intellectual property. This is the equivalent of building a skyscraper on land with a disputed deed.
The U.S. Copyright Office Position: Human Authorship Required
The Thaler v. Perlmutter Legacy
The foundational U.S. case remains Thaler v. Perlmutter, where the D.C. District Court upheld the Copyright Office's refusal to register a work created autonomously by an AI system, a holding the D.C. Circuit affirmed in 2025. The court's reasoning was straightforward: copyright requires human authorship, and an AI system cannot be an author under the Copyright Act. This holding has been consistently applied and effectively settled the question of fully autonomous AI output — it is not copyrightable in the United States.
But Thaler addressed the easy case. The harder question — and the one dominating 2026 litigation — involves AI-assisted creation where a human provides substantial creative direction. The Copyright Office's February 2023 guidance on the Zarya of the Dawn registration established that AI-generated elements within a larger human-authored work are not individually copyrightable, but the human-authored selection, arrangement, and modification of those elements can be. This framework has been applied in over 200 registration decisions since, creating a body of administrative precedent that is remarkably consistent but difficult to apply in practice.
The Prompt Engineering Debate
The most contentious question in U.S. IP law right now is whether detailed prompt engineering constitutes sufficient human authorship to support copyright. The Copyright Office has consistently held that text prompts to an AI image generator do not constitute authorship of the resulting image, analogizing prompts to instructions given to a commissioned artist — the person giving instructions is not the author of the resulting work. However, several pending cases argue that highly specific, iterative prompt engineering involving dozens of refinement cycles constitutes a form of creative expression that should be recognized.
The practical reality in early 2026 is that pure AI-generated content receives no copyright protection in the U.S. Content that is substantially modified, curated, or arranged by humans after AI generation can receive protection for the human-contributed elements. The boundary between these categories is being litigated in real time across at least 15 active federal cases.
The European Approach: Diverging From the U.S.
The EU has taken a different path that may ultimately prove more commercially practical. Several EU member states are developing frameworks that recognize a form of related rights — not full copyright, but limited protection — for AI-generated content where a human or entity has made a substantial investment in the generation process. This approach mirrors the existing sui generis database right, which protects investment in data compilation regardless of the creativity involved.
The EU AI Act's transparency requirements add another dimension. AI-generated content must be labeled as such under the Act, creating a practical disclosure obligation that interacts with IP protection in complex ways. If you must disclose that content is AI-generated, and AI-generated content receives reduced or no IP protection, the disclosure obligation effectively limits your ability to protect that content. This interaction is likely intentional — European regulators are signaling that AI-generated content should be treated differently from human-created content in both regulatory and IP frameworks.
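In practice, teams subject to the Act's labeling requirement often attach machine-readable disclosure metadata to each asset. A minimal sketch of that idea, where the schema and every field name are illustrative assumptions rather than a format defined by the Act:

```python
import json
from datetime import datetime, timezone

def label_ai_content(asset_id: str, model: str, human_modified: bool) -> str:
    """Build an illustrative machine-readable disclosure label for an
    AI-generated asset. Field names are hypothetical, not mandated text."""
    label = {
        "asset_id": asset_id,
        "ai_generated": True,             # core disclosure: content is AI-generated
        "generation_model": model,        # which system produced the base content
        "human_modified": human_modified, # relevant to the IP analysis above
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label, indent=2)

print(label_ai_content("img-0042", "example-image-model-v3", human_modified=True))
```

A structured label like this serves double duty: it satisfies a disclosure obligation while also recording whether humans modified the output, which matters for the copyright analysis discussed above.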
The Training Data Question: Input Versus Output Rights
Parallel to the output ownership question is the input rights question: can AI companies use copyrighted works to train their models? This question is being litigated in several landmark cases, including the New York Times v. OpenAI action and the consolidated artist class actions against Stability AI, Midjourney, and others. The outcomes of these cases will reshape the economics of the entire AI industry.
The fair use defense is the central legal battleground. AI companies argue that training on copyrighted works is transformative use — the model learns patterns and relationships, not copying specific works. Copyright holders argue that AI models are sophisticated compression and recombination engines that can reproduce substantial portions of training data, making the use non-transformative. Early judicial signals are mixed, with different courts emphasizing different factors of the fair use analysis.
The economic implications are binary and massive. If training on copyrighted data is fair use, the current AI development paradigm continues largely unchanged. If it is not, AI companies face potential liability in the hundreds of billions and must either license training data at scale or develop models trained exclusively on licensed or public domain data. Several companies, including Adobe and Getty, have already built models trained on fully licensed datasets as a hedge against this uncertainty.
Practical Guidance for Businesses Using Generative AI
Until the legal landscape stabilizes, five practices reduce exposure:

1. Do not assume you own AI-generated content. Treat AI output as having uncertain IP status, and do not rely on it as the sole basis for competitive differentiation that depends on IP exclusivity.
2. Document human involvement meticulously. The more you can demonstrate human creative direction, selection, arrangement, and modification, the stronger your copyright claim for the overall work.
3. Review the terms of service for every AI tool you use. Some platforms claim rights to outputs generated using their systems, others explicitly disclaim any rights, and the variations matter enormously.
4. Consider the IP implications of your AI tool choices. Using models trained on potentially infringing data creates downstream liability risk; tools built on licensed training data, even if less capable, may be the safer choice for commercial applications.
5. Build your content strategy assuming some AI-generated elements may not be protectable. Layer human creativity on top of AI foundations so the protectable human elements provide competitive value even if the AI-generated base layer enters the public domain.
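The documentation recommendation in particular benefits from tooling. One way to capture the human-contribution trail is an append-only provenance log recording each prompt, selection, and edit. A minimal sketch, where the record structure, event types, and names are assumptions for illustration, not an established standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class ProvenanceEvent:
    """One step in the creation history of a work mixing AI output and
    human contribution. Actor and action vocabularies are illustrative."""
    actor: str    # "human" or "ai"
    action: str   # e.g. "prompt", "generate", "select", "edit", "arrange"
    detail: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class ProvenanceLog:
    work_id: str
    events: List[ProvenanceEvent] = field(default_factory=list)

    def record(self, actor: str, action: str, detail: str) -> None:
        self.events.append(ProvenanceEvent(actor, action, detail))

    def human_contribution_summary(self) -> dict:
        """Count human vs. AI steps, useful when demonstrating the
        selection, arrangement, and modification discussed above."""
        human = sum(1 for e in self.events if e.actor == "human")
        return {"human_steps": human, "ai_steps": len(self.events) - human}

log = ProvenanceLog("campaign-hero-image")
log.record("human", "prompt", "initial concept prompt, v1")
log.record("ai", "generate", "four candidate images")
log.record("human", "select", "chose candidate 3")
log.record("human", "edit", "recomposed background, adjusted palette")
print(log.human_contribution_summary())  # prints {'human_steps': 3, 'ai_steps': 1}
```

A timestamped log like this is exactly the kind of evidence of human creative direction, selection, and modification that the Copyright Office framework rewards, and it costs little to maintain alongside existing asset pipelines.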
Where This Is Heading
The IP framework for AI output is being built in real time through litigation, administrative decisions, and legislative proposals. By the end of 2026, we expect at least two significant federal court decisions that will clarify the prompt engineering authorship question and the training data fair use question. Businesses should monitor these cases closely and be prepared to adjust their AI content strategies based on the outcomes. The companies that navigate this uncertainty successfully will be those that maintain flexibility while building human creative value on top of AI capabilities.
