“Fair Use” is Not a Great Business Plan

Lately, we’ve seen several headlines and comments from tech giants saying that AI ventures simply cannot succeed if they are forced to contend with the copyrights in the billions of works they have scraped for the purpose of machine learning (ML). When these headlines are paired with the rampant assertions that ML is inherently fair use—a subject addressed in last Wednesday’s Senate Judiciary Committee (SJC) hearing on AI and journalism—one has to wonder about the business decisions that were made before generative AI exploded last year.

In many posts on this blog, including at least a few written during “Fair Use Week,” I have repeated the caveat that “fair use” is not a magic phrase that makes infringement claims disappear. Usually, that advice is directed at small and independent users of works, suggesting they not listen to Big Tech and its network of academics and activists, who will not be on the hook for the small guy’s copyright infringement. I always assumed the big guys knew better, that they were merely chanting the “fair use” mantra as a rhetorical device in the blogosphere to promote the anti-copyright agenda. But maybe they don’t know better.

If I were an AI investor asking about potential liability, and the founders told me, “Don’t worry, what we’re doing is fair use,” my immediate response would be to ask whether there is sufficient funding for major litigation, to say nothing of predicting the outcome of that litigation. Because simply put, the party who conjures the term “fair use” has effectively assumed that a potential liability for copyright infringement exists. And if that assumption is a bad business decision, then that’s the founders’ problem, not a flaw in copyright law.

No matter what the critics say, or how hard certain academics try to alter its meaning, the courts are clear that fair use is an affirmative defense to a claim of copyright infringement, which means that building a business venture on an assumption of fair use is tantamount to assuming that lawsuits are coming. And if it’s a multi-billion-dollar venture that potentially infringes millions of works owned by major corporations, then the lawsuits are going to be big—perhaps even existential.

Do Not Expect Congress to Change Fair Use in Any Direction

Notably, as reported in Wired, Condé Nast CEO Roger Lynch stated at one point during questioning by the SJC last week, “If Congress could clarify that the use of our content, or other publisher content, for the training and output of AI models is not fair use, then the free market will take care of the rest,” to which Sen. Hawley replied that this seemed reasonable. But I wonder about this exchange. While it is encouraging to find the senators more sympathetic with the news organizations than with the AI developers, I doubt (and would not even hope) that Congress is going to amend the law to explicitly state that ML is categorically never fair use.

Fair use comprises a history of judge-made law that was codified into statute as Section 107 of the 1976 revision of the U.S. Copyright Act. But the statute does not draw bright lines stating that X is always fair use and Y is never fair use, and for good reason: justice for all parties is best served by a court weighing the specific facts of a specific use of a specific work, or body of works. Hence, an attorney will tell you that fair use is a “fact-intensive” consideration.

If Congress were to explicitly declare, for instance, that ML can never be fair use, this would be a significant departure from doctrine, and one that is preemptively unjust to the potential AI developer with a fact pattern that would favor a finding of fair use. As much as I find the major generative AI companies to be some combination of arrogant and/or useless, and as much as I scorn their generalizations to date about fair use, it would be wrong to endorse legislative revision of the fair use doctrine as a response.

In fact, if the court were to find fair use for ML in New York Times v. OpenAI (and I doubt it will), and Congress sought to remedy that outcome, it would still not make sense to amend Section 107. If anything, news organizations and other copyright owners would likely seek a new section of the Copyright Act tailored to the nature of the new form of harm, which Big Tech would then blindly oppose with every available resource. For instance, it is possible that the Times would not currently be suing OpenAI if the tech industry had not opposed the Journalism Competition and Preservation Act (JCPA), which would have temporarily exempted news organizations from antitrust barriers to collective bargaining for licensing their content.

Regardless, no party should be asking Congress to “clarify fair use” in response to AI. If the AI founders and investors made a bad bet on an ultimate finding of fair use, that’s tough noogies for them. But neither should content creators want Congress to open that particular can of worms and disturb the fair use case law. Of course, where Congress should intervene is to address harms caused by AI where no law currently applies. On that subject, the next post discusses the recently proposed No AI FRAUD Act.


Photo source by areporter.

David Newhoff
David is an author, communications professional, and copyright advocate. After more than 20 years providing creative services and consulting in corporate communications, he shifted his attention to law and policy, beginning with advocacy of copyright and the value of creative professionals to America’s economy, core principles, and culture.
