Introduction – AI Creativity Boom
Generative artificial intelligence has ushered in a creativity boom in recent years. Advanced AI systems now produce art, music, and literature that rival human creations, blurring the lines of authorship. For example, an image created with the Midjourney AI tool won a state fair art competition in 2022, demonstrating that AI-generated art can achieve recognition on par with human work. By 2025, these breakthroughs had unleashed a wave of IP disputes. Courts and policymakers worldwide are grappling with fundamental questions: Can AI-created works be protected by copyright? Who (if anyone) owns an AI’s inventions or branding? The capabilities of today’s generative AI models “raise significant questions about the nature and scope of human authorship,” sparking heated legal debates. As AI-generated content proliferates, a series of high-profile legal battles – spanning copyright, patents, and trademarks – is unfolding.
Copyright Law and AI-Generated Works
One of the thorniest issues is who owns the copyright (if any) in AI-generated works. Traditional copyright law requires a human author who contributes originality and creative input. In the United States, the U.S. Copyright Office has taken a firm stance that purely AI-generated material lacks the human authorship needed for protection. In a notable 2023 decision, the Office cancelled the registration covering a graphic novel’s AI-created images, emphasizing that the images were “not the product of human authorship.” Similarly, it refused to register an award-winning Midjourney-generated artwork on the same grounds. The U.S. Copyright Office’s recent guidance, published in early 2025, reaffirms that human creativity remains essential for copyright – works containing AI-generated content are copyrightable only if a human made sufficient creative contributions or arrangements in the final work.
Other jurisdictions are also wrestling with AI authorship. In the United Kingdom, copyright law has traditionally recognized a “computer-generated work” (one with no human author) by assigning authorship to the person who undertook the arrangements for its creation. However, the UK is reconsidering this approach in light of generative AI. The UK Government launched a consultation in December 2024 on “Copyright and Artificial Intelligence,” seeking to clarify how AI-generated content should be handled. Early indications suggest the UK may tighten requirements for human input, aligning more closely with the U.S. view that AI authorship per se is not protectable.

The European Union takes a similar position: copyright protects only human-created expressions. No EU law today recognizes an algorithm as the “author” of a work – any content autonomously produced by AI falls outside traditional copyright. Policy reviews in the EU have floated ideas ranging from doing nothing (relying on existing human-authorship doctrines) to creating new sui generis rights for AI outputs, but so far no European legislation grants copyright to AI-generated works. In practice, the emerging consensus in 2025 is that if a work lacks a human creative spark, it lacks copyright. As the U.S. Copyright Office put it, you cannot copyright a fully AI-generated work, but you may be able to protect your “sufficiently original selection and arrangement or modification” of AI material.
Patent Inventorship in the Age of AI
Another precedent-setting battle has played out in patent law: can an AI be listed as an inventor on a patent application? Inventorship has long been a human domain, but Dr. Stephen Thaler tested its limits by naming his AI system “DABUS” as the sole inventor on applications for two novel products. In the landmark Thaler v. Vidal case, courts resoundingly ruled that an AI cannot be a patent inventor under current law. The U.S. Court of Appeals for the Federal Circuit held in 2022 that U.S. patent statutes “unambiguously require” a human inventor, rejecting the DABUS applications since an AI is “not a person” under the Patent Act. In April 2023, the U.S. Supreme Court declined to hear Thaler’s appeal, letting stand the principle that only human beings qualify as inventors. Courts in other jurisdictions have reached the same conclusion. The UK Court of Appeal – and most recently the UK Supreme Court in 2023 – likewise found that AI cannot hold inventorship, affirming that patents must be attributed to human creators. (South Africa briefly granted a patent naming DABUS as inventor, but this was an outlier involving a non-examined registration system.)
The implications for innovation are hotly debated. Thaler and supporting academics argue that disallowing AI inventors “jeopardizes billions in… investments” and could stifle technological progress. They note that AI is increasingly used to autonomously generate solutions in fields from medicine to energy, and that rejecting AI-generated patents “curtails our patent system’s ability… to optimally stimulate innovation”. On the other hand, policymakers reason that the patent system’s incentive – a temporary monopoly in exchange for public disclosure – is meant for human inventors. If an AI truly works without any creative human input, some suggest that the person who deployed or trained the AI should be listed as the inventor (as a proxy), ensuring a responsible party holds the rights. For now, patent offices globally are sticking to a clear rule: AI can assist invention, but cannot be an inventor. This human-centric status quo preserves the traditional framework, at least until lawmakers decide whether new accommodations are needed for AI-driven innovation.
Trademarks and AI Branding
AI’s reach has extended to branding and marketing, introducing novel issues in trademark law. Companies can use generative AI to create logos, names, or slogans – but who is the creator, and could these AI-generated brands cause consumer confusion? Unlike copyright and patents, trademark ownership is tied to use in commerce rather than authorship, so an AI-generated mark can be registered by a human or company as owner. The challenge lies instead in ensuring that AI-devised marks are distinctive and do not infringe existing trademarks. As one analysis noted, AI systems can create logos or brand names that inadvertently resemble existing trademarks, leading to legal disputes – a risk heightened by the fact that these systems train on vast datasets of existing branding. For example, if an AI image generator unintentionally includes a well-known brand logo in a synthesized design, the result could constitute trademark infringement or dilution.

To mitigate these risks, human oversight in brand creation is crucial. Companies are advised to have human experts vet any AI-generated names or logos to catch potential conflicts; a cheap automated similarity screen (sketched below) can flag candidates for that review, but cannot replace it. Trademark law still requires that a mark be non-confusing and distinctive, criteria that an algorithm might not reliably evaluate. There have already been instances of AI-generated product packaging and characters including trademarked elements without authorization. These scenarios underscore that while AI can speed up creative design, it lacks the legal judgment to navigate trademark nuances. Moving forward, we may see guidance or best practices (if not new regulations) emphasizing that a human should always supervise AI branding efforts – both to ensure the output can be protected and to avoid inadvertently violating others’ rights. In sum, AI-generated trademarks are permissible, but only with careful human curation to prevent consumer confusion and maintain the integrity of brand identities.
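To make that pre-screen idea concrete, here is a minimal sketch using only Python’s standard library. The candidate names and the hard-coded “known marks” list are invented placeholders, and raw string similarity is no substitute for a proper trademark clearance search – it merely flags names for human and legal review.

```python
from difflib import SequenceMatcher

# Hypothetical watch-list; a real search would query trademark
# registers, not a hard-coded list of fictional marks.
KNOWN_MARKS = ["Acme", "Globex", "Initech"]

def similarity(a: str, b: str) -> float:
    """Rough string similarity in [0, 1] via difflib's ratio."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_candidates(names: list[str], threshold: float = 0.8) -> list[tuple[str, str, float]]:
    """Return (candidate, known_mark, score) pairs at or above the threshold."""
    hits = []
    for name in names:
        for mark in KNOWN_MARKS:
            score = similarity(name, mark)
            if score >= threshold:
                hits.append((name, mark, score))
    return hits

# Example: screen AI-generated brand ideas before human review.
print(flag_candidates(["Acmee", "Zephyrion"]))  # flags "Acmee" vs "Acme"
```

A screen like this only catches textual near-misses; visual similarity between logos, phonetic similarity, and likelihood-of-confusion analysis all require human and legal judgment.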
Case Studies in AI & IP
Several high-profile disputes illustrate how courts are tackling AI’s use of copyrighted data and the outputs of generative models. Two landmark battles in 2023–2025 involve the visual arts: Getty Images v. Stability AI and a class-action lawsuit by artists against AI image generators. In the Getty Images case, the stock photo giant sued Stability AI (creator of the Stable Diffusion image generator) for allegedly scraping over 12 million Getty photos without permission to train its AI. Getty argues this mass ingestion of copyrighted photographs (and even Getty’s watermark logo) infringes both copyright and trademark rights, and it seeks enormous damages for the unlicensed use. In early 2025, a UK High Court decision in the case refused to allow Getty’s lawsuit to proceed as a broad “representative action” on behalf of other photographers, signaling that such AI copyright claims may need to be handled in individualized suits. Nonetheless, the core allegations of infringement (including violation of a sui generis database right in Getty’s curated photo database) remain live in the UK litigation. A parallel Getty v. Stability AI lawsuit is also unfolding in the U.S. as of 2025, making this one of the first major legal tests of AI training data practices on both sides of the Atlantic.
Separately, groups of individual artists have taken on AI firms for using their works in training data without consent. In a notable U.S. case, Andersen v. Stability AI (which also names Midjourney and other AI companies as defendants), a group of illustrators claimed that the defendants infringed their copyrights by copying millions of online images to train generative models. In 2023, U.S. District Judge William Orrick confronted cutting-edge arguments in this class action. He found flaws in the claim that the AI output images themselves infringed specific artworks (because an AI-generated image is not an identical reproduction of any one painting) and dismissed that part of the case. However, the judge left intact the core allegation that the training process – the mass scraping and ingestion of copyrighted images – could violate the artists’ rights. This key distinction suggests that courts may treat the act of using protected works to train AI as a potential infringement, even if any given output is only loosely inspired by a training image. The artists have since filed an amended complaint with additional plaintiffs and details, and the case is ongoing.
Across these disputes, companies deploying generative AI have raised defenses like fair use (arguing that training on thousands of works transforms them into something new) or implied license, while rights-holders counter that unchecked data scraping turns AI models into “copyright-laundering devices” that offer the benefits of art without paying creators. No definitive rulings have emerged yet on these novel issues, but early court decisions show a cautious approach – allowing litigation over AI training practices to proceed, while scrutinizing claims about individual AI outputs. The resolution of cases like Getty v. Stability and the artists’ class action will likely shape how future AI models can be trained and whether new licensing systems or laws will be needed to balance innovation with creators’ rights.
Global Approaches to AI and IP Rights
Around the world, governments are adopting different strategies to address AI and IP rights, reflecting diverse legal traditions and policy priorities. In the European Union, regulators have been active in examining whether existing IP frameworks suffice. The EU’s 2019 Digital Single Market copyright directive introduced text and data mining (TDM) exceptions that allow AI training on copyrighted materials for certain purposes, but with limitations. For non-commercial research, data mining is permitted by law, while for other uses the EU allows an opt-out – rights holders can explicitly reserve their works from mining, effectively requiring a license for use. As generative AI exploded, EU policymakers began debating further measures. One idea has been to create a new sui generis right over valuable datasets used in AI training, akin to Europe’s existing database right. This would give content owners a stronger claim (beyond ordinary copyright) to control and monetize the use of their data in AI development. So far, no such right has been enacted, but the mere consideration signals the EU’s inclination to ensure creators are not left uncompensated in the AI age. The EU is also moving toward transparency obligations – under the AI Act adopted in 2024, providers of general-purpose AI models must publish a sufficiently detailed summary of the content used for training and respect rights holders’ opt-outs, reinforcing that scraping content without permission is discouraged in Europe.
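In practice, rights-holder opt-outs are often signaled in machine-readable form, commonly via robots.txt directives aimed at AI crawlers. The following is a minimal sketch of a first-pass compliance check, assuming a hypothetical crawler name (“ExampleAIBot”) and using only Python’s standard library; real compliance would also involve checking site terms, HTTP headers, and dedicated reservation protocols, none of which are inspected here.

```python
from urllib import robotparser
from urllib.parse import urlparse

def may_mine(url: str, user_agent: str = "ExampleAIBot") -> bool:
    """First-pass check of a site's robots.txt before fetching for TDM.

    An EU-style opt-out can also be expressed elsewhere (terms of
    service, headers, reservation files); this checks robots.txt only.
    """
    parts = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()  # fetch and parse the site's robots.txt
    except OSError:
        return False  # no signal retrievable; err on the side of caution

    return rp.can_fetch(user_agent, url)

# Example: only fetch pages the (hypothetical) crawler is not barred from.
if may_mine("https://example.com/gallery/image-123"):
    pass  # proceed to download under this minimal check
```

Defaulting to “do not mine” when no signal can be retrieved is a deliberately conservative choice here; a production crawler would log the failure and retry rather than silently skip.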
The United Kingdom initially charted a more permissive path post-Brexit, proposing in 2022 to allow broad text and data mining for any purpose without requiring licenses. However, pushback from creative industries led to a reversal. By December 2024, the UK government had shifted to a more balanced approach, launching a consultation on introducing an “EU-style” TDM exception with an opt-out for rights holders. The UK’s proposal aims to “enhance right holders’ control of their material and their ability to be remunerated” while still facilitating AI innovation. In essence, the UK is now converging with the EU view that creators should have some say (and potential compensation) when their works feed commercial AI models, rather than allowing unfettered scraping.
In contrast, Japan has embraced one of the world’s most AI-friendly copyright regimes. Japan’s Copyright Act was amended in 2018 (effective 2019) to permit very broad text and data mining, explicitly allowing the use of copyrighted works for machine learning for any purpose – including commercial AI training – as long as the use is for “information analysis”. There is no opt-out for rights holders in Japan, and even the use of illegally obtained works in training might be tolerated under the letter of the law. These expansive exceptions led some to dub Japan a “machine learning paradise,” since copyright cannot easily be used to block AI development. The only caveat is that using works “for enjoyment” (i.e. simply consuming or redistributing them) is not covered by the exception, and uses that “unjustly harm” a copyright owner’s legitimate interests could be excluded. Japan’s permissive stance reflects a policy choice to prioritize AI growth (seen as vital for an aging society and tech competitiveness) over traditional copyright control.
Other jurisdictions have taken varied approaches as well. For example, Singapore’s law mirrors Japan’s by allowing very broad AI text-and-data mining, while countries like Canada and Australia are still evaluating possible reforms. Meanwhile, WIPO (the World Intellectual Property Organization) has convened international discussions since 2019 to seek common ground on AI and IP issues. No global treaty exists yet, but a general consensus has emerged that purely AI-generated works – without human creative input – are not eligible for copyright protection. Debates persist on questions of data ownership and fair use in AI training, with approaches continuing to diverge across different countries. This patchwork of approaches suggests that creators and AI developers face a complex international landscape – what is legal for training an AI in one country (e.g. Japan) might be infringement in another (e.g. the EU). Going forward, we may see efforts to harmonize certain rules, at least via soft law or industry codes of practice, to support both global AI development and the protection of IP rights.
Future of IP in the AI Era
As AI capabilities continue to advance, legal systems are under pressure to adapt. The future of IP in the AI era will likely involve a mix of new reforms, industry practices, and perhaps international coordination. One widely discussed strategy is developing better licensing frameworks for AI training data. Instead of today’s “wild west” of web scraping, content owners and AI firms could strike deals – for example, collective licensing mechanisms where AI developers pay into a fund that compensates creators whose works are used to train models. This approach, analogous to music licensing for radio or streaming, could balance interests by allowing AI to learn from vast datasets while ensuring creators get paid. Some large platforms are already moving in this direction: for instance, major image libraries have partnered with AI developers to license photos for training, and discussions are underway about standard licensing terms for text and code datasets. Legal scholars also suggest that data transparency will be key – AI models might come with “nutrition labels” disclosing their training sources, enabling more accountability and easier licensing audits.
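As an illustration of what such a “nutrition label” disclosure might look like, here is a minimal, hypothetical record sketched in Python. The field names and values are invented for illustration and do not follow any adopted standard.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TrainingDataLabel:
    """Hypothetical machine-readable training-data disclosure."""
    model_name: str
    dataset_sources: list[str]   # e.g. licensed archives, public-domain scans
    licensed_fraction: float     # share of data under explicit license
    opt_outs_honored: bool       # whether rights-holder opt-outs were applied
    notes: str = ""

label = TrainingDataLabel(
    model_name="example-image-model-v1",  # invented name, for illustration
    dataset_sources=["licensed-stock-archive", "public-domain-scans"],
    licensed_fraction=0.85,
    opt_outs_honored=True,
    notes="Field names are illustrative, not an adopted standard.",
)

print(json.dumps(asdict(label), indent=2))  # emit the label as JSON
```

A standardized, machine-readable format along these lines is what would make licensing audits practical at scale, since auditors could compare declared sources against rights holders’ claims automatically.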
Technological measures may assist as well. Watermarking AI content is emerging as a tool to identify AI-generated works and trace their origin. If AI outputs are cryptographically watermarked, it could help detect when an artwork or audio was AI-made, which in turn aids enforcement of IP rules. (For instance, a watermark could indicate which model produced a piece and whether it had rights to the training data.) Major AI developers are researching watermarking and content authentication systems in response to concerns about copyright and misattribution. Likewise, fingerprinting of training data – recording unique signatures of works ingested by an AI – could allow future systems to check if a user’s prompt is asking for something too close to a known copyrighted work and then adjust or refuse the output. These technical solutions are still nascent but hold promise for mitigating AI’s IP risks without overly restrictive laws.
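To make the fingerprinting idea concrete, here is a minimal sketch using a simple average hash over images, with the Pillow imaging library assumed to be installed and the file paths and distance threshold as illustrative placeholders. Production systems use far more robust perceptual embeddings and cryptographic watermarks; this only demonstrates the matching concept.

```python
from PIL import Image  # Pillow, assumed installed

def average_hash(path: str, size: int = 8) -> list[int]:
    """Fingerprint an image as a tiny grayscale bitmask.

    Each bit records whether a downsampled pixel is brighter than the
    image's mean: a crude but illustrative perceptual fingerprint.
    """
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a: list[int], b: list[int]) -> int:
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# Flag an AI output that is suspiciously close to a fingerprinted work.
known = average_hash("training_work.png")     # placeholder path
candidate = average_hash("ai_output.png")     # placeholder path
if hamming(known, candidate) <= 5:            # small distance => near-duplicate
    print("Output resembles a fingerprinted work; review before release.")
```

Even this toy fingerprint survives resizing and mild recoloring, which hints at why registries of training-work signatures could let a model screen its own outputs before release.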
On the policy front, expect ongoing refinement of legal doctrines. Copyright offices and patent offices may issue further guidance (as the U.S. Copyright Office did with its 2025 reports), and courts will continue to set important precedents. Some legislatures are considering targeted updates – for example, clarifying how copyright law should treat AI-generated works with minimal human input, or whether inventorship and ownership rules need new provisions for AI-created inventions. Internationally, bodies like WIPO could eventually spur treaties or model laws to achieve more consistency. An agreement on baseline data mining exceptions, or on the status of AI-generated works, is conceivable if consensus builds around core principles.
AI’s rapid ascent as a creative force is testing the limits of intellectual property law. So far, the response has been to reinforce the importance of human ingenuity – from insisting on human authorship in copyright and inventorship in patents, to demanding human oversight in trademarks. Yet simply applying old doctrines to new technology will not resolve all tensions. The coming years will likely see a combination of reforms and strategies: more nuanced laws, innovative licensing schemes, and collaborative efforts across borders. By developing adaptive IP frameworks – ones that reward human creativity, accommodate AI’s analytical power, and protect creators’ interests – we can work toward the right balance. The legal battles of today are setting critical precedents, but they also point toward solutions that will define the future of AI, copyright, and creativity in our increasingly AI-driven world.