Grossman LLP | Copyright Office and Courts Continue to Wrestle With How Copyright Law Applies to AI Technology and AI-Generated Content
Art Law Blog
    03/11/2025
    A report from the U.S. Copyright Office and a recent federal court decision are adding to the already complex legal landscape in which artificial intelligence (AI) tools are rapidly evolving.  These new developments are relevant to anyone interested in how AI is changing the creative arts and copyright law.
     
    The Input Problem and the Output Problem
     
In previous blog posts (see here and here), we’ve written about two aspects of AI that pose unique copyright law questions: let’s call them the “input problem” and the “output problem.”  By “input problem,” we mean the fact that most AI tools must be “trained” on a huge amount of “input,” i.e., preexisting content authored by others, much of it copyrighted.  That content is gathered, stored, and run through computer models to help the AI tool “learn” how to respond to users’ prompts.  This poses a copyright law question: does that use of preexisting copyrighted content infringe the rights of the creators whose works were used for training?  Separately, by “output problem,” we mean the fact that AI tools, once trained, can respond to user prompts by generating novel content—but should those users then be able to obtain copyright protection for the creations the AI tool generated?  This blog post explores two recent developments that are grappling with these problems in real time.
     
    The Output Problem: New Guidance On Copyrightability of Works Created With The Assistance Of AI
     
A few weeks ago, the U.S. Copyright Office issued a report that seeks to provide guidance on the extent to which a work created with AI tools is eligible for copyright protection by the user.  The report, which followed a public comment period, reaffirms that copyright will not protect “purely AI-generated material, or material where there is insufficient human control over the expressive elements.”  But “[w]hether human contributions to AI-generated outputs are sufficient to constitute authorship must be analyzed on a case-by-case basis.”
     
The report seeks to place the questions regarding AI in the context of earlier case law.  For example, the report discusses decisions regarding the copyrightability of works created with machines (such as photographs), and concludes that the use of a technological tool does not per se foreclose copyright protection.  But the report also discusses legal standards regarding what kind of contribution is required for a person to be an “author” for copyright purposes (for example, case law has held that an organization commissioning a sculpture was not a co-author, even though it provided detailed suggestions and directions, because such contributions are not expression but “unprotectible ideas”).  From this, the report concludes that an AI system user’s contribution of simple prompts does not make the user the “author” of the system’s resulting output.  Prompts are essentially just instructions conveying unprotectible ideas; “prompts may reflect a user’s mental conception or idea, but they do not control the way that idea is expressed.”  Also key to this conclusion was the fact that, in many AI systems, identical prompts can generate many different outputs; this, the report notes, “further indicates a lack of human control.”  While some “element of randomness does not eliminate authorship,” there must be a greater degree of human control over the expression.  The report also notes that repeatedly revising prompts does not change the copyrightability analysis, even though refining prompts can be time-consuming and difficult.
     
    In contrast, the Copyright Office opines that copyright protection is more likely to extend to AI-generated works where the prompt is itself an expressive human-authored input—for example, where an artist uploads an original illustration and instructs the system to modify it in specific ways, or uploads an original story and instructs the system to edit it in a particular manner.  “These types of expressive inputs, while they may be seen as a form of prompts, are different from those that merely communicate desired outcomes”—the user is contributing more than just an idea, and that creative starting point constrains the universe of results that the AI system will generate, so that the user’s expressive elements are often clearly perceptible in the output.
       
And finally, the Copyright Office also indicated that copyright protection is likely to attach in situations where a user takes AI-generated outputs and modifies or arranges them in a creative way (for example, selecting and arranging AI-generated images with human-authored text to create a comic book, which would then be protectable as a compilation).  This “selection, coordination, and arrangement” was the basis for the recent copyright registration of an artwork that, although created wholly within an AI tool, involved a significant degree of human editing and selection of dozens of separate regions and elements of the work.  Likewise, the report confirms that “the inclusion of elements of AI-generated content in a larger human-authored work does not affect the copyrightability of the larger human-authored work as a whole” (for example, a film that includes AI-generated special effects).
     
    The Input Problem: Courts Weigh Infringement Claims By Content Creators Whose Works Have Been Used To Train AI Tools
     
    Then there’s the issue of the content that’s been “fed” into AI systems.  A number of cases are already working their way through the courts, involving plaintiffs ranging from news outlets to novelists to visual artists, all of whom claim that their work was used without their permission to build, teach, and strengthen AI technology, enabling it to churn out content similar to the plaintiffs’.  
     
In February, a federal court in Delaware examined a particular example of this type of infringement theory, and sided with the plaintiff.  The defendant was a startup called Ross Intelligence, which was building an AI-powered legal research search engine.  In doing so, however, it made use of content from Westlaw, the legal research platform owned by Thomson Reuters—not just public domain content such as court decisions, but also the editorial summaries and analysis that had been created for Westlaw.  Thomson Reuters sued for infringement.
     
On a motion for partial summary judgment, the court held that there had been actual copying by Ross: the evidence of actual copying was “so obvious that no reasonable jury could find otherwise.”  And the court rejected several of Ross’s defenses, including the “merger” defense (which posits that an idea can be expressed in so few ways that the idea and its expression merge, leaving the expression uncopyrightable), and the scenes à faire defense (which reasons that stock elements standard to a given type of work are not copyrightable).
     
Most crucially, though, the court rejected the application of the fair use defense (see here for our other blogs on this complex copyright concept).  On the first factor, the court held that Ross’s use was commercial and not transformative, because it didn’t have a “further purpose or different character” than Westlaw’s—indeed, Ross’s aim was to create a product to compete with Westlaw.  Further, the court explained (citing the Supreme Court’s decision in Warhol; see here for our analysis of that case) that Ross’s copying was not “reasonably necessary to achieve the user’s new purpose”; there was, the court held, nothing Westlaw created “that Ross could not have created for itself or hired” a third party to create for it without infringing.  On the second factor, the court acknowledged that Westlaw’s content was not highly creative—it is more like a factual compilation—and that favored Ross.  The third factor likewise favored Ross, because Ross’s output to an end user does not include Westlaw content.  But as to the most important fourth factor, the court was swayed by the fact that Ross sought to create a “market substitute” for Westlaw’s legal search tool.  Moreover, Ross also undeniably interfered with the potential market for Westlaw to license its content for AI training purposes—indeed, Ross had sought to do a licensing deal with Westlaw.
     
Assuming this decision is upheld on any appeal, a few details may make a big difference in how much it will guide other courts dealing with AI lawsuits.  First, it was undisputed that Ross’s AI was not “generative”: it did not write new content itself, but functioned more like a search tool.  Second, as one scholar has pointed out (see here), Ross’s “use” was very specific to the analysis of judicial opinions, and in that respect is quite different from AI tools that have far broader capabilities and applications and are not designed to replace a specific existing product.  It remains to be seen whether these and other differences will make this case a broadly cited precedent or a decision of relatively limited application.
     
    What’s Next
A number of other important AI-copyright cases are likely to reach decisions in the coming year, and those cases will grapple with many of the same questions as the Delaware decision, including substantial similarity and fair use; but we are a long way from achieving clarity on those issues.  And interestingly, the Copyright Office indicates there will be more guidance forthcoming on what we are calling the “input problem”; the recent report states that a subsequent report “will turn to the training of AI models on copyrighted works, licensing considerations, and allocation of any liability.”  We’ll continue to monitor those decisions as they come.
    ATTORNEY: Kate Lucas
CATEGORIES: Copyright, Fair Use, Legal Developments