Edition 4 of the AI Legal News Summer Roundup

White & Case LLP

In this edition, key themes include creators and consumers seeking more control and protection over how their content is used to train AI models (whether under copyright or privacy law), and governments grappling with the delicate balance between overseeing and regulating AI development on the one hand and fostering innovation on the other.

We report on a US federal court decision confirming that copyright doesn't extend to AI-generated outputs "operating absent any guiding human hand" (see Update 1). We also report on major international news organizations releasing an open letter advocating for stronger intellectual property safeguards with respect to AI, including potential licensing arrangements (see Update 6), and on Zoom revising its terms to clarify expressly that customer content will not be used to train AI models after the public raised privacy concerns (see Update 7).

Finally, we observe how governments worldwide are attempting to address AI's dual potential: China's new AI regulations came into force (see Update 9); South Korea launched an AI working group (see Update 10); and, in the US, concerns were raised over bias in algorithms (see Update 2), the White House confirmed an AI executive order is forthcoming (though details are limited), and the Federal Election Commission sought public comment on a petition concerning the use of AI in campaign advertisements (see Update 5).

In this fourth issue, we highlight 10 key developments we've identified from the United States, Europe, and APAC between August 4 and August 18, 2023:

1. United States: Washington, DC federal court denies copyright protection for AI-generated art

On August 18, in Stephen Thaler v. Shira Perlmutter, No. 1:22-cv-01564 (D.D.C. August 18, 2023), the DC district court agreed with the US Copyright Office's (USCO) decision to deny computer scientist Stephen Thaler's copyright registration for artwork autonomously generated by AI in the absence of any human involvement. The court held that the "Register did not err in denying the copyright registration application," and that "United States copyright law protects only works of human creation." While the court agreed that "[c]opyright is designed to adapt with the times," it ultimately held that "[c]opyright has never stretched so far, however, as to protect works generated by new forms of technology operating absent any guiding human hand" and that "[h]uman authorship is the bedrock requirement of copyright." Looking forward, however, the court acknowledged that the "increased attenuation of human creativity from the actual generation of the final work will prompt challenging questions regarding how much human input is necessary to qualify the user of an AI system as an 'author' of a generated work," as well as how to assess the originality of AI-generated art produced by systems trained on unknown existing copyrighted works.

2. United States: New York state proposes legislation concerning use of AI in employment decisions, while the US Equal Employment Opportunity Commission (EEOC) settles AI hiring discrimination lawsuit

On August 4, New York state senator Brad Hoylman-Sigal (D-NY) introduced legislation (S7623) that would amend current labor law to make it unlawful to surveil employees residing in New York State via an electronic monitoring tool unless certain requirements are met and notice is provided to the employee. The bill would also completely ban certain uses of electronic monitoring tools, including "… tool[s] that incorporate facial recognition, gait, or emotion recognition technology," or employers "rely[ing] solely on employee data collected through electronic monitoring when making hiring, promotion, termination, disciplinary, or compensation decisions." This is another example of state legislators aiming to curb AI usage to limit bias or discrimination and to require data monitoring and auditing by those using AI tools.

Meanwhile, at the federal level, the EEOC signaled how it intends to enforce federal law against companies that use AI to effect unlawful discrimination. In a joint notice of settlement issued on August 9, the EEOC announced that it had settled an AI discrimination lawsuit against a tutoring company that allegedly discriminated against applicants based on their birth dates. The EEOC alleged that the tutoring company "violated federal law by programming [their] online recruitment software to automatically reject older applicants because of their age," and sought back pay for the denied job applicants. EEOC Chair Charlotte A. Burrows stated that discrimination based on age is unlawful "even when technology automates the discrimination." Burrows further highlighted that the case was filed as part of the EEOC's recently launched "Artificial Intelligence and Algorithmic Fairness Initiative," which seeks to ensure that the use of AI in employment decisions and the workplace complies with federal civil rights laws.

3. United States: Prisma Labs avoids trial in privacy class action with arbitration clause

In an order issued on August 8 in Flora et al. v. Prisma Labs, Inc. (N.D. Cal. Feb 15, 2023), Judge Charles Breyer granted Prisma Labs' motion to compel arbitration, allowing the AI company to avoid trial because the plaintiffs had agreed to terms of use containing a binding arbitration clause (including a waiver of class arbitrations and class actions). The plaintiffs filed a class action alleging that the defendant's AI art application "Lensa" "collect[ed] their facial geometry" in violation of the Illinois Biometric Information Privacy Act, and arguing that the arbitration clause was unconscionable because, among other reasons, the plaintiffs had "no ability to negotiate" the terms of use, making them a "contract of adhesion." The court rejected the plaintiffs' arguments, finding that the arbitration provision gave users a meaningful opportunity to opt out, and distinguished the case from employment agreements, where employees (whose livelihood depends on their employment) would feel some pressure not to opt out, reasoning that a consumer is not likely to have the same hesitation when deciding whether to opt out of a "non-essential" service (i.e., a photo-editing app). With several class action lawsuits having been filed against AI companies over the past few months (as reported in our first and third editions), this case may encourage AI companies offering "non-essential" services to include such arbitration provisions (with class action waivers and opt-out mechanisms) in their terms of service.

4. United States: Consumer Financial Protection Bureau (CFPB) proposes new regulation that would restrict data brokers' ability to track and sell personal information to fuel AI and targeted advertisements

At a roundtable held at the White House on August 15, the CFPB announced its intention to crack down on the tracking and selling of personal data by extending the Fair Credit Reporting Act to encompass data brokers and similar businesses that track, sell and otherwise profit from Americans' personal data, usually without their consent or knowledge. The proposal seeks to ban the sale of consumer data, specifically focusing on credit-header data. This data includes the segment of a credit report containing personal identification details, such as a person's name, address and Social Security number. The agency is particularly focused on curtailing the use of this data for targeted digital advertising and for training AI (which includes technologies that make automated decisions, such as AI chatbots that can spontaneously address consumer inquiries).

5. United States: Federal Election Commission (FEC) seeks public comment, due by October 16, 2023, regarding use of AI in campaign advertisements

On August 16, the FEC published a notification seeking public comment on a petition for rulemaking regarding the use of AI in campaign advertisements. The petitioner, Public Citizen, asked the FEC to amend its regulation at 11 CFR 110.16, which prohibits a candidate or their agent from fraudulently misrepresenting other candidates or political parties, to make clear that the prohibition applies to the use of AI in campaign advertisements. Public comments, which may be submitted electronically via the FEC's website, are due by October 16, 2023.

6. Global: Major international news organizations release open letter calling for regulatory and industry action to enhance transparency and intellectual property protection in AI

In an open letter published on August 9, a group of major international news organizations, including Agence France-Presse, the European Pressphoto Agency, Getty Images, News Media Alliance, The Associated Press, and The Authors Guild, called for enhanced transparency of AI training sets and better protection of copyrighted material being used to train AI models. The news organizations state: "Generative AI and large language models […] disseminate [proprietary media] content and information to their users, often without any consideration of, remuneration to, or attribution to the original creators. Such practices undermine the media industry's core business models." Specifically, the signatories advocate for the following regulatory and industry action in the open letter: (1) "Transparency as to the makeup of all training sets used to create AI models;" (2) "Consent of intellectual property rights holders to the use and copying of their content in training data and outputs;" (3) "Enabling media companies to collectively negotiate with AI model operators and developers regarding the terms of the operators' access to and use of their intellectual property;" (4) "Requiring generative AI models and users to clearly, specifically, and consistently identify their outputs and interactions as including AI-generated content;" and (5) "Requiring generative AI model providers to take steps to eliminate bias in and misinformation from their services." This letter is consistent with recent efforts by news media organizations (such as The Associated Press and The New York Times) to require online platforms to license their content.

7. Global: Zoom releases statement and amends Terms of Service (Terms) following public discussion of possible EU privacy law implications of the potential use of customer data to train AI models

Zoom issued a statement and updated its Terms to clarify that customer content is not used to train Zoom's or third-party AI models. Zoom originally published a statement on August 7 and then updated it on August 11 to reflect the latest version of the Terms (which were revised twice). Both the Terms and statement now state: "Zoom does not use any of your audio, video, chat, screen sharing, attachments or other communications-like customer content (such as poll results, whiteboard and reactions) to train Zoom or third-party artificial intelligence models." On August 6, a Stack Diary article[1] first pointed out changes to Zoom's Terms from March that would have potentially given Zoom broad control over customer data to train AI models.

8. Germany: Federal Commissioner for Data Protection and Freedom of Information calls for rules on using publicly available data to train AI models

In an August 20 interview with the German public radio station Deutschlandfunk,[2] Germany's Federal Commissioner for Data Protection and Freedom of Information, Ulrich Kelber, called for rules on the use of publicly available data to train AI models and emphasized the need to protect personal data and the rights under the EU's General Data Protection Regulation (GDPR). Kelber, who oversees data protection at federal public agencies and certain companies in Germany, commented on the need for: (a) rules governing the purposes for which publicly available data may be processed; (b) technical regulations to block systems from using data; (c) requirements to clearly pseudonymize or anonymize such data before it is used as training data; and (d) rules mandating that AI developers give AI chatbot users the ability to exclude personal data from being used by the AI software.

9. China: Cyberspace Administration of China implements measures for the management of generative AI services

In our first edition, we reported on the Cyberspace Administration of China's interim measures for the management of generative AI services, which were released on July 13 (Interim Measures). The Interim Measures require generative AI services to respect social morality and ethics, comply with existing laws and regulations, adhere to the core values of socialism, and prevent discrimination. The Interim Measures entered into force on August 15.

10. South Korea: South Korea launches a working group to facilitate discussions regarding the regulation of AI

The South Korean Ministry of Science and Information and Communication Technology (ICT) has formed an expert group composed of four subcommittees and 41 members to discuss the regulation of AI. The group will examine how regulations can promote the development and utilization of AI, particularly with respect to personal information, copyright and information security. Further discussions will focus on how to establish trust in the use of AI and, in doing so, how to promote the uptake of AI in industries in which it is currently underutilized. Finally, the group will consider whether reforms are needed in relation to copyright in, and compensation for, works created by AI. These discussions will inform South Korea's AI legislative roadmap, due to be released in late 2023.

[1] Alex Ivanovs, Zoom's Updated Terms of Service Permit Training AI on User Content Without Opt-Out, Stack Diary, August 6, 2023.
[2] Johannes Kuhn, Interview der Woche: Datenschutz-Beauftragter Kelber sieht Widerspruchslösung kritisch [Interview of the Week: Data Protection Commissioner Kelber is critical of the opt-out approach], Deutschlandfunk, August 20, 2023.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© White & Case LLP | Attorney Advertising
