AI Year in Review: A Busy 2022 for AI and IP Promises Even More in 2023

“Throughout 2021 and 2022, the world began to experiment with a massive influx of commercially available AI-assisted and AI-powered tools that can be used, whether knowingly or unknowingly, during the process of creating, researching, and innovating. Looking ahead to 2023, we will start witnessing the legal and regulatory impact of these tools.”

In general, the adoption of artificial intelligence (AI) and machine learning technologies has the potential to impact society in many ways. These technologies can automate tasks and make them more efficient, which can lead to job displacement and other economic impacts. They can also be used to make decisions that affect people’s lives, such as in the criminal justice system or in hiring, which raises ethical concerns. Additionally, the development and use of AI and machine learning technologies can raise issues related to privacy and security.

What could be a more fitting way to open a 2022 year-in-review article on AI and machine learning than by asking OpenAI’s newly beta-released ChatGPT tool to contribute? The above paragraph was generated using ChatGPT’s conversational, chat-based dialog input. The initial request of ChatGPT was the prompt: “Explain the social impacts of artificial intelligence and machine learning technologies over the past year.”

This article could not have written itself entirely with the use of ChatGPT because, as the chatbot responded, “It is difficult for me to provide information on the specific social impacts of artificial intelligence and machine learning technologies over the past year because my training only goes up until 2021, and I do not have the ability to browse the internet.” There is a future in which such real-time data crunching capabilities will exist, to the extent they don’t already, in a publicly accessible and cost-effective manner. In the meantime, 2022 saw significant legal, regulatory, and policy developments around the world across the field of intellectual property law that will impact and shape the future uses and development of AI and machine learning (ML) technologies.

A Busy Year for AI in the United States

The U.S. Copyright Office (USCO) will not knowingly register works that are not authored by a human. There is currently no guidance regarding the level of human involvement required for registration; however, entirely generative works are not eligible for copyright protection. The U.S. Patent and Trademark Office (USPTO) will refuse an application, and will not issue a patent, if an AI or ML system is named as the inventor.

In October, the USCO continued its efforts to address registrations of generative works through a cancellation notice sent to Kristina Kashtanova in connection with her recently registered graphic novel, “Zarya Of The Dawn.” Kashtanova, an artist and AI consultant and researcher, was widely publicized in September as the first person to successfully register a generative work. The images underlying the graphic novel were generated using the Midjourney AI text-to-image tool. As of November, Kashtanova, with the assistance of counsel, had sent a response to the USCO and is awaiting the USCO’s review and decision regarding next steps.

In February, the USCO upheld a refusal to register “A Recent Entrance to Paradise,” an AI-generated work created using an AI system developed by Dr. Stephen Thaler. In June, Thaler filed a complaint in the U.S. District Court in Washington, D.C., challenging the refusal. As part of his application, Thaler listed the AI system, Creativity Machine, as the author of the work and indicated himself to be the claimant. The federal complaint attempts to advance the arguments previously made by Thaler during the appeals process for the copyright application, as well as address the human authorship issues flagged by the USCO in its final refusal.

In addition to troubles with the USCO, Thaler faced an uphill battle pursuing a patent application for DABUS, an AI “device for the autonomous bootstrapping of unified sentience.” In an August decision, the U.S. Court of Appeals for the Federal Circuit sided with the USPTO’s refusal on the grounds that an inventor must be human. The denial leaves the U.S. Supreme Court as the next step for Thaler, and his counsel has indicated that such an appeal is planned.

“As AI technology continues to evolve and questions arise about how copyright laws apply to the creation of AI-generated works,” explained the Copyright Alliance in its position paper on artificial intelligence, “it’s critical that the underlying goals and purposes of our copyright system are upheld and that the rights of creators and copyright owners are respected.”

The USPTO hosted multiple events throughout the second half of 2022 on AI and emerging technologies (ET) in an effort “to explore various issues resulting from the intersection of ET, including AI, and IP policy” while also promoting “greater awareness, openness, and inclusivity,” as explained in the Federal Register notice.

Efforts started in 2022 will continue into 2023 as various government groups seek to explore the impact of AI and ML technologies on IP rights and policy. In August, the Department of Commerce’s International Trade Administration (ITA) published a request for comment on international AI policies, regulations, and related measures that could impact U.S. exports of AI technologies.

In November, the USCO and the USPTO published a joint notice of inquiry requesting public response to 13 prompts and questions related to issues of IP law and policy associated with non-fungible tokens, or NFTs. While the study is not directly related to AI, there is an undeniable connection between NFTs and human authorship issues given that many NFT projects involve the use of AI and ML technologies for either assisting with the creation, or supporting the entire generation, of underlying assets for NFTs. The deadline for responses is in January 2023 with a series of three roundtable discussions to be hosted on the topics of copyright, patents, and trademarks.

In October, the White House published the Blueprint for an AI Bill of Rights, which contains a technical companion “that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” The Blueprint is not a set of new laws or regulations. Instead, it is a recommended framework for anyone to adopt and follow when developing AI and ML technologies. It remains to be seen to what extent, if any, legislative efforts in the realm of IP are proposed to address the use and development of AI and ML technologies.

At the end of October, Senators Thom Tillis (R-NC) and Chris Coons (D-DE) asked the USPTO and the USCO to establish a joint commission on AI by October 2023 in an effort to understand “what the law should be,” as the senators highlighted in their letter, with a report to Congress due by the end of 2024. On December 12, however, the USPTO and USCO sent a response letter to the senators describing the work the Offices are already undertaking on AI and its impact on IP rights, and highlighting the need for additional funding in order to avoid impacting already-approved uses of congressionally appropriated funds. The response letter points to the congressional action involved in the 1974 establishment of a national commission to explore copyright protections for computer programs (CONTU), which included funding for hired staff and a stipend for commission members.

In the private sector, Microsoft-owned GitHub is facing one of the first, if not the first, class action lawsuits involving claims of DMCA violations (removal of copyright management information), breach of open source license contracts, and more, in connection with the development of an AI tool. In June, GitHub announced Copilot, an OpenAI-powered tool that can auto-generate code output ranging from a simple autocomplete to an entire function. The lawsuit, filed in November, claims, among other things, that GitHub did not have the necessary licenses or permissions to develop, or train, the AI model using the open source software code users stored on its services.
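To make the “autocomplete to an entire function” idea concrete, below is a minimal, hypothetical sketch of a Copilot-style completion: a developer writes only a comment and a function signature, and the assistant suggests the body. The function and its logic are invented for illustration and are not taken from Copilot’s actual output or from any code at issue in the lawsuit.

    # Hypothetical illustration of an AI code-completion suggestion.
    # The developer types the comment and signature; the tool proposes the body.

    def is_leap_year(year: int) -> bool:
        """Return True if the given year is a leap year in the Gregorian calendar."""
        # --- suggested completion begins here ---
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    if __name__ == "__main__":
        print(is_leap_year(2024))  # True
        print(is_leap_year(1900))  # False

The legal question raised by the lawsuit is whether training a model to produce suggestions like this, using publicly hosted open source code, required licenses or permissions that GitHub did not obtain.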

It remains to be seen what impact, if any, the license grant within the GitHub Terms of Service may have on portions of this lawsuit, as well as the language in which GitHub restricts itself from selling user content put on the platform (GitHub may argue that training and building an AI model does not fall within the scope of selling that content). Interestingly, LinkedIn, another Microsoft subsidiary, spent the year fighting HiQ Labs over the scraping of data from LinkedIn’s websites.

“For attorneys who represent artists and creatives, future rulings from the USCO and the U.S. Supreme Court in the generative art context will provide critical guidance, but may prove disappointing,” noted Michelle Butler of Jayaram Law, Inc. “We are waiting to see whether the application of U.S. copyright law will paradoxically limit the ability of creatives to protect the outputs of new technologies and innovative methods of creation, rather than ‘promote the progress of science and the useful arts,’ as the framers of the Constitution intended when they granted Congress the authority to establish a copyright regime.”

Shutterstock announced a partnership with OpenAI to offer text-to-image generation services, while simultaneously prohibiting the sale of generative works from third-party AI tools, given the inability to validate the models, and underlying training data sets, that were used. Getty Images also announced a ban on generative works being sold on its platforms. Meanwhile, Adobe Stock announced a set of standards that, if met, allow stock content submissions of generative works. Generative works will be clearly labeled as such on the Adobe Stock platform, leaving it up to the customer licensee to determine whether or not to accept the risk of using AI-generated assets or incorporating them into projects that contain human-authored works. Interestingly, the screenshots included in Adobe’s announcement feature Kashtanova’s work generated with Midjourney.

The European Commission’s Copyright Study

In February, the European Commission published the “Study on copyright and new technologies” in an effort to guide policymakers, academics, and other stakeholders on the issues pertaining to copyright and AI, the focus of the second part of the study. The second part was divided into two sections: (1) the input of AI systems; and (2) the output of AI systems.

As it pertains to the input, the Study noted that “the scope of the reproduction right is still in the process of being defined by the European courts, especially when purely technical or intermediate copies are made such as within the process for training an AI algorithm through the analysis of protected elements.” The Study also noted the potential for text and data mining (TDM) exceptions to apply to claims of copyright infringement over web scraping activities, whether conducted for commercial purposes or for purposes outside of scientific research and academic activities. Lastly, the Study appeared to favor ignoring potential objections by creators under their moral rights attached to copyright, rather than the “more ambitious approach [which] would be to (partly) harmonise the moral rights.”

Concerning the output, the Study notes that “the AI-generated output is not protected under copyright in the absence of human creative choices.” [Emphasis removed]. The Study concludes that there is no need for new related rights for the output of AI systems, or for additional recognition of protections for an artist’s particular style, “unless some significant and recognizable features of a protected work or performance are reproduced in the AI output.” The Study notes the existence of other claims available to creators under unfair competition or image and personality rights. The Study also recommends avoiding any undue burden on creators in establishing proof that a work was created by a human and not generated by an AI system. It extends this point into a recommendation that creators not be required to disclose the use of an AI system in the creation of their work, given concern over “its impact on the creators’ artistic freedom and their personality rights.”

Expanding the UK’s Text and Data Mining Exception

In June, the UK’s Intellectual Property Office released its response to the UK government’s recent request for evidence and views on a range of options for how AI should be dealt with in the patent and copyright systems. The Office focused on three areas: (1) computer-generated works made without a human author; (2) licensing or exceptions for TDM; and (3) patent protection for “AI-devised” inventions.

For computer-generated works, also known as generative works, the Office announced that there are no planned changes to the current law, which allows for the protection of works without a human author. Currently, the UK recognizes works without a human author and grants them 50 years of protection.

For TDM, the Office announced that there is a plan to “introduce a new copyright and database exception which allows TDM for any purpose.” The Office added that “[r]ights holders will still have safeguards to protect their content, including a requirement for lawful access.” Currently, there’s a research exception to copyright law for data mining in the UK. However, the proposed path forward is to adopt an exception for “any purpose” and without the right to opt-out.

“The use of the creative community’s copyrighted works for AI training purposes should be subject to marketplace licenses,” the Copyright Alliance noted in its submission to the UK Parliament’s House of Commons Science and Technology Committee’s recent Call for Evidence on Governance of AI. “Indeed, there are already organically developed and robust licensing markets for use of copyrighted works as training materials for AI, and it is essential that these markets are preserved and protected.”

For “AI-devised inventions,” or whether AI can be an inventor, the Office announced that there’s no change. The focus will be on working towards an international solution concerning inventions made using AI. Under current UK law, a patent may be “granted for an AI-assisted invention provided the application satisfies the legal requirements set out in the UK Patents Act 1977.”

Thaler’s Globe-Trotting AI Fight

In Germany, AI cannot be an inventor, but a recent court decision clarifies nuances in how patent applications involving AI should be drafted.

Thaler obtained a unique outcome when Germany’s Federal Patent Court set aside a decision of the German Patent and Trademark Office (DPMA) in which the DPMA refused a patent application that included DABUS, instead of Thaler, as the inventor. The German Court decision was made in November 2021 following oral arguments but was not published on the court’s website until April 2022.

The court explained that an acceptable statement on the application would have included “Stephen L. Thaler, PhD who prompted the artificial intelligence DABUS to create the invention,” as well as a few other alternatives. As previously reported by IPWatchdog, an unofficial translation of the decision indicates that the court said: “in the absence of an explicit prohibition of unnecessary information in the Patent Regulations, an inventor (who may also be supported in this respect by his personal right as an inventor) should not necessarily be prevented from including additions of the kind at issue here in the official form P 2792.”

The decision may be appealed to, and potentially reversed by, the Federal Court of Justice, which is the highest court available on the subject in Germany.

In April, the Federal Court of Australia changed its position regarding whether it is possible to name a non-human inventor on a patent application. Thaler had attempted to list DABUS as the inventor, and in 2021 a primary judge held that Australia’s law contained no requirement for human, or natural person, inventorship. The primary judge distinguished between inventorship, which could potentially extend to a human or a non-human machine, and patent ownership, which would require a human.

The Commissioner of the Australian Patent Office appealed, and a panel of five judges concluded that a human inventor is a requirement. The decision also notes the many remaining open questions as to policy considerations, but that “the Court must be cautious about approaching the task of statutory construction by reference to what it might regard as desirable policy, imputing that policy to the legislation, and then characterising that as the purpose of the legislation[.]”

Professor Ryan Abbott, an attorney who is working with Thaler on these challenges through their Artificial Inventor Project, argued in March on the Clause 8 Podcast in support of allowing machines to be named as inventors on patents, stating:

“The U.S. never says an inventor needs to be an actual person. It uses words like individual. But individual sometimes means an actual person and sometimes it doesn’t. It is entirely consistent with the purpose of the Patent Act to interpret that an individual in the context of an invention could be a machine… There is, in our view, no reason to take an overly narrow, textualist interpretation of it. For example, whomever in 35 U.S.C. 271 — which refers to infringement — can refer to anything, including a state or a corporation, as something that could commit infringement. If whomever can refer to a company or a state, I see no reason why individual couldn’t conceivably apply to a machine.”

Looking Ahead to 2023

Where does all of this take us as we move into the future? ChatGPT puts it best when asked about its thoughts on “the future legal implications of using AI and ML technologies to generate creative works and inventions,” to which it responded: “Overall, the use of AI and machine learning technologies to generate creative works and inventions raises a number of complex legal and ethical issues that will need to be addressed in the future.”

Throughout 2021 and 2022, the world began to experiment with a massive influx of commercially available AI-assisted and AI-powered tools that can be used, whether knowingly or unknowingly, during the process of creating, researching, and innovating. Looking ahead to 2023, we will start witnessing the legal and regulatory impact of these tools as courts, regulators, and policymakers begin to make decisions and take action on the practical implications of AI and ML technologies for existing IP laws and regulations. The trend in court decisions has been to defer to lawmakers and policymakers on what consideration, if any, should be given to the concept of non-human authorship and inventorship. The input governments collect from individuals, groups, and corporations across industries will be ever-important as drafts of law and policy begin to take shape.



Join the Discussion

4 comments so far.

  • Michal
    December 22, 2022 03:04 pm

    The issue is that the artwork generated is at the behest of the author, in my case a book. Every word was never intended to be the property of an algorithm, a mathematical sequence. The artwork, once created, seems appropriate as a starting point but is not the end result if copyright is denied. The author uses the AI to start a dialog but then recreates the story, rearranges the artwork, then publishes the results. My simple question is: does the use of numbers decode the basis of creativity, or does the use of math become the final frontier for creativity? My answer is that math might seem a simple way to skirt the issue of who is the intellect.

  • Anon
    December 21, 2022 03:33 pm

    A nicely composed review.

  • Franklin Graves
    December 21, 2022 06:08 am

    Thanks, Louis!

  • Louis Van-Heurck
    December 20, 2022 02:16 pm

    Extremely interesting article!