Under the GDPR, Are Companies that Utilize Personal Information to Train Artificial Intelligence (AI) Controllers or Processors?

The EU’s General Data Protection Regulation (GDPR) applies to two types of entities – “controllers” and “processors.”

A “controller” refers to an entity that “determines the purposes and means” of how personal information will be processed.[1] Determining the “means” of processing refers to deciding “how” information will be processed.[2] That does not mean, however, that a controller must make every decision with respect to information processing. The European Data Protection Board (EDPB) distinguishes between “essential means” and “non-essential means.”[3] “Essential means” refers to those processing decisions that are closely linked to the purpose and the scope of processing and, therefore, are considered “traditionally and inherently reserved to the controller.”[4] “Non-essential means” refers to more practical aspects of implementing a processing activity that may be left to third parties – such as processors.[5]

A “processor” refers to a company (or a person such as an independent contractor) that “processes personal data on behalf of [a] controller.”[6]

Data typically is needed to train and fine-tune modern artificial intelligence models. These models use data – including personal information – to recognize patterns and predict results.
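To make this concrete, the following toy sketch (in Python with scikit-learn; the records, fields, and task are all invented for illustration, not drawn from any real system) shows how a model fit on individuals’ records learns patterns that it then applies to new records. This is why decisions about which data fields are used, how long they are retained, and who can access them matter in the analysis below.

```python
# A toy illustration, not any specific production system: a model
# "trained" on records containing personal information learns
# statistical patterns that it later applies to new records.
from sklearn.linear_model import LogisticRegression

# Hypothetical training records: (age, annual income in thousands).
# These are the "data types" a controller or processor selects.
X_train = [[25, 40], [47, 95], [35, 60], [52, 120], [23, 35], [41, 88]]
y_train = [0, 1, 0, 1, 0, 1]  # invented labels, e.g., bought a product

model = LogisticRegression().fit(X_train, y_train)

# The trained model now embodies patterns derived from individuals'
# data and can predict a result for a new, unseen record.
print(model.predict([[30, 50]]))  # -> [0] or [1]
```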

Whether an organization that utilizes personal information to train an artificial intelligence engine is a controller or a processor depends on the degree to which the organization determines the purpose for which the data will be used and the essential means of processing. The following chart discusses these variables in the context of training AI:

Function: Purpose of processing (why the AI is being trained)

  • Indicative of a controller: If an organization makes its own decision to utilize personal information to train an AI, then the organization will likely be considered a “controller.”

  • Indicative of a processor: If an organization is using personal information provided by a third party to train an AI, and is doing so at the direction of the third party, then the organization may be considered a processor.

Function: Essential means (data types used in training)

  • Indicative of a controller: If an organization selects which data fields will be used to train an AI, the organization will likely be considered a “controller.”

  • Indicative of a processor: If an organization is instructed by a third party to utilize particular data types to train an AI, the organization may be a processor.

Function: Essential means (duration personal information is held within the training engine)

  • Indicative of a controller: If an organization determines how long the AI can retain training data, it will likely be considered a “controller.”

  • Indicative of a processor: If an organization is instructed by a third party to use data to train an AI, and does not control how long the AI may access the training data, the organization may be a processor.

Function: Essential means (recipients of the personal information)

  • Indicative of a controller: If an organization determines which third parties may access the training data that is provided to the AI, that organization will likely be considered a “controller.”

  • Indicative of a processor: If an organization is instructed by a third party to use data to train an AI, but does not control who will be able to access the AI (and the training data to which the AI has access), the organization may be a processor.

Function: Essential means (individuals whose information is included)

  • Indicative of a controller: If an organization is selecting whose personal information will be used as part of training an AI, the organization will likely be considered a “controller.”

  • Indicative of a processor: If an organization is being instructed by a third party to utilize particular individuals’ data to train an AI, the organization may be a processor.

[1] GDPR, Article 4(7).

[2] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 33.

[3] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[4] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[5] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[6] GDPR, Article 4(8).

©2023 Greenberg Traurig, LLP. All rights reserved.

Automating Entertainment: Writers Demand that Studios Not Use AI

When the Writers Guild of America (WGA) presented its list of demands in the strike that has already ground production on many shows to a halt, chief among them was that the studios agree not to use artificial intelligence to write scripts. Specifically, the Guild had two asks: first, that “literary material,” including screenplays and outlines, be generated by a person and not an AI; second, that “source material” not be AI-generated.

The Alliance of Motion Picture and Television Producers (AMPTP), which represents the studios, rejected this proposal, countering only that it would be open to holding annual meetings to discuss advancements in technology. Alarm bells sounded: the WGA saw an existential threat to its members’ livelihoods, and a sign that Hollywood was already planning for that future.

Writers are often paid at a far lower rate to adapt “source material” such as a comic book or a novel into a screenplay than they are to generate original literary material. By using AI tools to generate an outline or first draft of an original story and then enlisting a human to “adapt” it into a screenplay, production studios potentially stand to save significantly.

Many industries have embraced the workflow of an AI-generated “first draft” that a human then punches up. And the WGA has said that writers’ use of AI as a tool is acceptable: there would essentially be a robot in the writers’ room, with writers supplementing their craft with AI-generated copy, but without AI wholly usurping their jobs.

Everyone appears to agree that AI could never write the next season of White Lotus or Succession, but lower-brow shows could easily be aped by AI. Law and Order is an often-cited example, not just because it is formulaic but because AIs are trained on massive data sets of copyrighted content, and there are 20 seasons of Law and Order for an AI to ingest. And as AI technology gets more advanced, who knows what it could do? ChatGPT was initially released last November, and as of this writing we are on GPT-4, a far more powerful version of a platform that is advancing exponentially.

The studios’ push for the expanded use of AI is not without its own risks. The Copyright Office has equivocated somewhat in its determination that AI-generated art is not protectable. In a recent Statement of Policy, the Office said that copyright will protect only those aspects of a work judged to have been made by the authoring human, resulting in partial protection of AI-generated works. So the better the AI gets, and the more it contributes to cutting out the human writer, the weaker the copyright protection for the studios and networks.

Whether AI works infringe the copyrights in the original works is an issue currently being litigated in a pair of lawsuits against Stability AI, the startup that created Stable Diffusion (an AI tool with the impressive ability to turn text into images, in what some have dubbed the most massive art heist in history). Some have questioned whether the humans who wrote the original episodes would be compensated, and the answer is maybe not: in most cases the scripts were likely works made for hire, owned by the studios.

If the studios own the underlying scripts, what happens when a studio runs copyrighted content through a machine that turns out uncopyrightable content? Can you send a DMCA takedown notice to, or sue, someone who copies that output? As of this writing, there are no clear answers to these questions.

There are legal questions, and deeper philosophical questions, about making art. As the AI improves and humans become more cyborgian, does the art become indistinguishable? Prolific users of Twitter say they think their thoughts in 280 characters. Perhaps our readers can relate to thinking of their time in six-minute increments, or tenths of an hour. Further, perhaps our readers can relate to their industry being threatened by automation: according to a recent report from Goldman Sachs, generative artificial intelligence puts 44% of legal work tasks at risk of automation.

© Copyright 2023 Squire Patton Boggs (US) LLP

To AI or Not to AI: U.S. Copyright Office Clarifies Options

The U.S. Copyright Office has weighed in with formal guidance on the copyrightability of works whose generation included the use of artificial intelligence (AI) tools. The good news for technology-oriented human creative types: using AI doesn’t automatically disqualify your work from copyright protection. The bad news for independent-minded AIs: you still don’t qualify for copyright protection in the United States.

On March 16, 2023, the Copyright Office issued a statement of policy (“Policy”) to clarify its practices for examining and registering works that contain material generated by the use of AI and how copyright law’s human authorship requirements will be applied when AI was used. This Policy is not itself legally binding or a guarantee of a particular outcome, but many copyright applicants may breathe a sigh of relief that the Copyright Office has formally embraced AI-assisted human creativity.

The Policy is just the latest step in an ongoing debate over the copyrightability of machine-assisted products of human creativity. Nearly 140 years ago, the Supreme Court ruled that photographs are copyrightable. See Burrow-Giles Lithographic Company v. Sarony, 111 U.S. 53 (1884). The case involved a photographer’s claim against a lithographer for 85,000 unauthorized copies of a photograph of Oscar Wilde. The photo, Sarony’s “Oscar Wilde No. 18,” is shown below:

Sarony’s “Oscar Wilde No. 18"

The argument against copyright protection was that a photograph is “a reproduction, on paper, of the exact features of some natural object or of some person” and is therefore not a product of human creativity. Id. at 56. The Supreme Court disagreed, ruling that there was sufficient human creativity involved in making the photo, including posing the subject, evoking the desired expression, arranging the clothing and setting, and managing the lighting.

In the mid-1960s, the Copyright Office rejected a musical composition, Push Button Bertha, that was created by a computer, reasoning that it lacked the “traditional elements of authorship” because those elements were not created by a human.

In 2018, the U.S. Court of Appeals for the Ninth Circuit ruled that Naruto, a crested macaque (represented by a group of friendly humans), lacked standing under the Copyright Act to hold a copyright in the “monkey selfie” case. See Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018). The “monkey selfie” is below:

Monkey Selfie

In February 2022, the Copyright Office rejected a registration (filed by interested humans) for a visual image titled “A Recent Entrance to Paradise,” generated by DABUS, the AI whose claimed fractal-based inventions are the subject of patent applications around the world. DABUS’ image is below:

“A Recent Entrance to Paradise”

Litigation over this rejected application remains pending.

And last month, the Copyright Office ruled that a graphic novel consisting of human-authored text and images generated using the AI tool Midjourney could, as a whole, be copyrighted, but that the images, standing alone, could not. See U.S. Copyright Office, Cancellation Decision re: Zarya of the Dawn (VAu001480196) at 2 (Feb. 21, 2023).

The Copyright Office’s issuance of the Policy was necessitated by the rapid and remarkable improvements in generative AI tools over even the past several months. In December 2022, generative AI tool Dall-E generated the following images in response to nothing more than the prompt, “portrait of a musician with a hat in the style of Rembrandt”:

Four portraits generated by AI tool Dall-E from the prompt, "portrait of a musician with a hat in the style of Rembrandt."

If these were human-generated paintings, or even photographs, there is no doubt that they would be copyrightable. But given that all four images were generated in mere seconds, with a single, general prompt from a human user, do they meet the Copyright Office’s criteria for copyrightability? The answer, now, is a clear “no” under the Policy.
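For a sense of how little human input such a request involves, here is a minimal sketch of how those four images might be requested programmatically. It assumes the pre-1.0 openai Python SDK’s image-generation endpoint; method names and parameters vary across SDK versions, so treat this as illustrative rather than authoritative.

```python
# Hedged sketch: generating four images from one short prompt using
# the pre-1.0 "openai" Python SDK's image-generation endpoint.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

response = openai.Image.create(
    prompt="portrait of a musician with a hat in the style of Rembrandt",
    n=4,                 # four variations, as in the example above
    size="1024x1024",
)

# Each entry holds a temporary URL to one generated image.
for item in response["data"]:
    print(item["url"])
```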

However, the Policy opens the door to registering AI-assisted human creativity. The toggle points will be:

“…whether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.” 

In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of “mechanical reproduction” or instead of an author’s “own original mental conception, to which [the author] gave visible form.” 

The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This will necessarily be a case-by-case inquiry.

See Policy (citations omitted).

Machine-produced authorship alone will continue to be unregistrable in the United States, but under the Policy, human selection and arrangement of AI-produced content could lead to a different result. The Policy provides select examples to help guide registrants, who are encouraged to study them carefully. The Policy, combined with the Copyright Office’s determinations in the near future, will be critical to watch for anyone gauging the likelihood that a registration application will be granted as the Office continues to assess the impact of new technology on the creative process. AI tools should not all be viewed as the “same” or fungible; the type of AI and how it is used will be specifically considered by the Copyright Office.

In the short term, the Policy provides some practical guidance to applicants on how to describe the role of AI in a new copyright application, as well as how to amend a prior application in that regard if needed. While some may view the Policy as “new” ground for the Copyright Office, it is consistent with the Copyright Office’s long-standing efforts to protect the fruits of human creativity even if the backdrop (AI technologies) may be “new.”

As a closing note, it bears observing that copyright law in the United Kingdom does permit limited copyright protection for computer-generated works – and has done so since 1988. Even under the U.K. law, substantial questions remain; the author of a computer-generated work is considered to be “the person by whom the arrangements necessary for the creation of the work are undertaken.” See Copyright, Designs and Patents Act (1988) §§ 9(3), 12(7) and 178. In the case of images generated by a consumer’s interaction with a generative AI tool, would that be the consumer or the generative AI provider?

Copyright © 2023 Womble Bond Dickinson (US) LLP All Rights Reserved.

With the US Copyright Office (USCO) continuing its stance that protection only extends to human authorship, what will this mean for artificial intelligence (AI)-generated works — and artists — in the future?

Almost overnight, the once-limited field of machine learning and AI has become nearly as accessible as a search engine. Apps like Midjourney and OpenAI’s ChatGPT and DALL-E 2 allow users to input a prompt, and a bot will generate virtually whatever the user asks for. Microsoft recently announced its decision to make a multibillion-dollar investment in OpenAI, betting on the hottest technology in the industry to transform the internet as we know it.[1]

However, as the accessibility of this technology grows, questions of authorship and copyright ownership are rising as well. Multiple questions remain open, such as: who is the author of the work — the user, the bot, or the software that produces it? And where is this new generative technology pulling its information from?

AI and Contested Copyrights

As groundbreaking as these products are, there has been ample backlash regarding copyright infringement and artistic expression. The stock image company Getty Images is suing Stability AI, the artificial intelligence company behind the art tool Stable Diffusion, alleging that Stability AI did not seek a license from Getty Images to train its system. The founder of Stability AI counters that art makes up only 0.1% of the dataset and is created only when called for by the user’s prompt. In contrast, Shutterstock, one of Getty Images’ largest competitors, has taken an alternative approach and instead partnered with OpenAI, with plans to compensate artists for their contributions.

Artists and image suppliers are not the only ones unhappy about the popularity of machine learning. Creators of open-source code have targeted Microsoft, its subsidiary GitHub, and OpenAI in a proposed class-action lawsuit. The lawsuit alleges that GitHub Copilot, an AI-powered coding assistant, relies on software piracy on an enormous scale, using copyrighted code with no attribution and no licenses. This could be the first class-action lawsuit challenging the training and output of AI systems. Whether artists, image companies, and open-source coders choose to embrace or fight the wave of machine learning, the question of authorship and ownership is still up for debate.

The USCO made clear last year that the Copyright Act applies only to human authorship; however, it has recently signaled that in 2023 the office will focus on the legal grey areas surrounding the copyrightability of works generated in conjunction with AI. The USCO has previously denied multiple applications to protect AI-authored works, stating that the “human authorship” element was lacking. Pointing to previous decisions, such as the 2018 ruling that a monkey taking a selfie could not sue for copyright infringement, the USCO reiterated that “non-human expression is ineligible for copyright protection.” While the agency stands by its conclusion that works created exclusively by an AI cannot be registered, it is considering the issue of copyright registration for works co-created by humans and AI.

Patent Complexities  

The US Patent and Trademark Office (USPTO) will also have to rethink fundamental patent policies with the rise of sophisticated AI systems. As the USPTO has yet to speak on the issue, experts are speculating about the routes the office could take: declaring AI inventions unpatentable, which could lead to disputes and hinder the incentive to promote innovation, or concluding that the use of AI should not render otherwise patentable inventions unpatentable, which would raise complex questions of inventorship. The latter route would require the USPTO to rethink its existing framework of determining inventorship by who conceived the invention.

Takeaway

The degree of human involvement will likely determine whether an AI work can be protected by copyright, and potentially by patent. Before incorporating this type of machine learning into business practices, companies should carefully consider the extent of human input in the AI creation and whether the final work product will be protectable. For example:

  • An apparel company that uses generative AI to create a design for new fabric may not have a protectable copyright in the resulting fabric design.

  • An advertising agency that uses generative AI to develop advertising slogans and a pitch deck for a client may not be able to protect the client from freely utilizing the AI-created work product.

  • A game studio that uses generative AI to create scenes in a video game may not be able to prevent its unlicensed distribution.

  • A logo created for a business endeavor may not be protected unless there are substantial human alterations and input.

  • Code that is edited or created by AI may be able to be freely copied and replicated.

Although the philosophical debate over what “makes” an artist is only beginning, 2023 may be a uniquely litigious year in defining the extent to which AI artwork is protectable under existing intellectual property laws.


FOOTNOTES

[1] https://www.cnn.com/2023/01/23/tech/microsoft-invests-chatgpt-openai/index.html; https://www.nytimes.com/2023/01/12/technology/microsoft-openai-chatgpt.html

NIST Releases New Framework for Managing AI and Promoting Trustworthy and Responsible Use and Development

On January 26, 2023, the National Institute of Standards and Technology (“NIST”) released the Artificial Intelligence Risk Management Framework (“AI RMF 1.0”), which provides a set of guidelines for organizations that design, develop, deploy or use AI to manage its many risks and promote trustworthy and responsible use and development of AI systems.

The AI RMF 1.0 provides guidance as to how organizations may evaluate AI risks (e.g., intellectual property, bias, privacy and cybersecurity) and trustworthiness. The AI RMF 1.0 outlines the characteristics of trustworthy AI systems, which are valid, reliable, safe, secure, resilient, accountable, transparent, explainable, interpretable, privacy-enhanced and fair with their harmful biases managed. It also describes four high-level functions, with associated actions and outcomes, to help organizations better understand and manage AI (a toy sketch of how these functions might structure a risk register follows the list):

  • The Govern function addresses evaluation of AI technologies’ policies, processes and procedures, including their compliance with legal and regulatory requirements and transparent and trustworthy implementation.
  • The Map function provides context for organizations to frame risks relating to AI systems, including AI system impacts and interdependencies.
  • The Measure function uses quantitative, qualitative or mixed-method tools, techniques and methodologies to analyze, benchmark and monitor AI risk and related impacts, including tracking metrics to determine trustworthy characteristics, social impact and human-AI configurations.
  • The Manage function entails allocating risk resources to mapped and measured risks consistent with the Govern function. The Manage function includes determining how to treat risks and develop plans to respond to, recover from and communicate about incidents and events.
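As a purely hypothetical illustration of how an organization might internally organize risks around these four functions, the sketch below models a minimal AI risk register in Python. NIST does not prescribe any data structure; every name and field here is invented.

```python
# Hypothetical sketch only: a minimal risk register keyed to the four
# AI RMF functions. NIST does not prescribe any data structure; all
# names and fields below are invented for illustration.
from dataclasses import dataclass, field
from typing import List

FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class Risk:
    description: str     # e.g., "training data may embed demographic bias"
    function: str        # which AI RMF function the entry relates to
    metric: str = ""     # Measure: how the risk is tracked
    treatment: str = ""  # Manage: the planned response

@dataclass
class RiskRegister:
    risks: List[Risk] = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        if risk.function not in FUNCTIONS:
            raise ValueError(f"unknown AI RMF function: {risk.function}")
        self.risks.append(risk)

    def by_function(self, function: str) -> List[Risk]:
        return [r for r in self.risks if r.function == function]

# Example usage with invented entries.
register = RiskRegister()
register.add(Risk("Model may memorize and leak personal data", "Map"))
register.add(Risk("Screening model shows demographic bias", "Measure",
                  metric="disparate impact ratio"))
print(len(register.by_function("Measure")))  # -> 1
```

A real program would tie entries to the AI RMF’s own categories and subcategories rather than this toy schema.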

NIST released a draft AI Risk Management Framework Playbook to accompany the AI RMF 1.0. NIST plans to release an updated version of the Playbook in the Spring of 2023 and launch a new Trustworthy and Responsible AI Resource Center to help organizations put AI RMF 1.0 into practice. NIST has also provided a Roadmap of its priorities to advance the AI RMF.

Copyright © 2023, Hunton Andrews Kurth LLP. All Rights Reserved.

Artists Are Selling AI-Generated Images of Mickey Mouse to Provoke a Test Case

Several artists, frustrated with artificially intelligent (AI) image generators skirting copyright laws, are using similar image generators to produce images of Mickey Mouse and other copyrighted characters to challenge the current legal status of AI art. While an artist’s copyright in a work typically vests at the moment of fixation, including the right to sue for infringement, AI-generated work complicates the issue by removing humans from the creative process. Courts have ruled that AI cannot hold copyright, which by corollary means that AI-generated art sits in the public domain. This legal loophole has angered many professional artists whose art is used to train the AI. Many AI generators, such as Dall-E 2 and Midjourney, can render pieces in the style of a human artist, effectively automating the artist’s job.

Given Disney’s reputation for vigorously defending its intellectual property, these artists hope that monetizing these public-domain AI Mickeys on mugs and T-shirts will prompt a lawsuit. Ironically, provoking and losing a case in this vein may set a favorable precedent for the independent artist community. As AI becomes more advanced, society will likely need to address how increasingly intelligent and powerful AI can complicate and undermine existing law.

Blair Robinson also contributed to this article.

Following the Recent Regulatory Trends, NLRB General Counsel Seeks to Limit Employers’ Use of Artificial Intelligence in the Workplace

On October 31, 2022, the General Counsel of the National Labor Relations Board (“NLRB” or “Board”) released Memorandum GC 23-02 urging the Board to interpret existing Board law to adopt a new legal framework to find electronic monitoring and automated or algorithmic management practices illegal if such monitoring or management practices interfere with protected activities under Section 7 of the National Labor Relations Act (“Act”).  The Board’s General Counsel stated in the Memorandum that “[c]lose, constant surveillance and management through electronic means threaten employees’ basic ability to exercise their rights,” and urged the Board to find that an employer violates the Act where the employer’s electronic monitoring and management practices, when viewed as a whole, would tend to “interfere with or prevent a reasonable employee from engaging in activity protected by the Act.”  Given that position, it appears that the General Counsel believes that nearly all electronic monitoring and automated or algorithmic management practices violate the Act.

Under the General Counsel’s proposed framework, an employer can avoid a violation of the Act if it can demonstrate that its business needs require the electronic monitoring and management practices and the practices “outweigh” employees’ Section 7 rights.  Not only must the employer be able to make this showing, it must also demonstrate that it provided the employees advance notice of the technology used, the reason for its use, and how it uses the information obtained.  An employer is relieved of this obligation, according to the General Counsel, only if it can show “special circumstances” justifying “covert use” of the technology.

In GC 23-02, the General Counsel signaled to NLRB Regions that they should scrutinize a broad range of “automated management” and “algorithmic management” technologies, defined as “a diverse set of technological tools and techniques to remotely manage workforces, relying on data collection and surveillance of workers to enable automated or semi-automated decision-making.” Technologies subject to this scrutiny include those used during working time, such as wearable devices, security cameras, and radio-frequency identification badges that record workers’ conversations and track the movements of employees; GPS tracking devices and cameras that keep track of the productivity and location of employees who are out on the road; and computer software that takes screenshots, webcam photos, or audio recordings. Also subject to scrutiny are technologies employers may use to track employees while they are off duty, such as employer-issued phones and wearable devices, and applications installed on employees’ personal devices. Finally, the General Counsel noted that hiring technologies such as online cognitive assessments and reviews of social media “pry into job applicants’ private lives.” Thus, these pre-hire practices may also violate the Act. Technologies such as resume readers and other automated selection tools used during hiring and promotion may also be subject to GC 23-02.

GC 23-02 follows the wave of recent federal guidance from the White House, the Equal Employment Opportunity Commission, and local laws that attempt to define, regulate, and monitor the use of artificial intelligence in decision-making capacities.  Like these regulations and guidance, GC 23-02 raises more questions than it answers.  For example, GC 23-02 does not identify the standards for determining whether business needs “outweigh” employees’ Section 7 rights, or what constitutes “special circumstances” that an employer must show to avoid scrutiny under the Act.

While GC 23-02 sets forth the General Counsel’s proposal and thus is not legally binding, it does signal that there will likely be disputes in the future over artificial intelligence in the employment context.

©2022 Epstein Becker & Green, P.C. All rights reserved.

Chamber of Commerce Challenges CFPB Anti-Bias Focus Concerning AI

At the end of last month, the U.S. Chamber of Commerce, the American Bankers Association and other industry groups (collectively, “Plaintiffs”) filed suit in Texas federal court challenging the Consumer Financial Protection Bureau’s (“CFPB”) update this year to the Unfair, Deceptive, or Abusive Acts or Practices section of its examination manual to include discrimination. Chamber of Commerce of the United States of America, et al. v. Consumer Financial Protection Bureau, et al., Case No. 6:22-cv-00381 (E.D. Tex.).

By way of background, the Consumer Financial Protection Act, which is Title X of the 2010 Dodd-Frank Act (the “Act”), prohibits providers of consumer financial products or services, and their service providers, from engaging in any unfair, deceptive or abusive act or practice (“UDAAP”). The Act also provides the CFPB with rulemaking and enforcement authority to “prevent unfair, deceptive, or abusive acts or practices in connection with any transaction with a consumer for a consumer financial product or service, or the offering of a consumer financial product or service.” See, e.g., https://files.consumerfinance.gov/f/documents/cfpb_unfair-deceptive-abusive-acts-practices-udaaps_procedures.pdf. In general, the Act provides that an act or practice is unfair when it causes or is likely to cause substantial injury to consumers, the injury is not reasonably avoidable by consumers, and the injury is not outweighed by countervailing benefits to consumers or to competition.

The CFPB earlier this spring published revised examination guidelines on unfair, deceptive, or abusive acts and practices, or UDAAPs. Importantly, these set forth a new position from the CFPB: that discrimination in the provision of consumer financial products and services can itself be a UDAAP. This development surprised many providers of financial products and services. The CFPB also released an updated exam manual that outlined its position regarding how discriminatory conduct may qualify as a UDAAP in consumer finance. In May 2022, the CFPB additionally published a Consumer Financial Protection Circular to remind the public of creditors’ adverse action notice requirements under the Equal Credit Opportunity Act (“ECOA”). In the view of the CFPB, creditors cannot use technologies (including algorithmic decision-making) if doing so means they are unable to provide the explanations required under the ECOA.

In July 2022, the Chamber and others called on the CFPB to rescind the update to the manual. Among other arguments raised in a white paper supporting their position, they contended that in conflating the concepts of “unfairness” and “discrimination,” the CFPB ignores the Act’s text, structure, and legislative history, which discuss “unfairness” and “discrimination” as two separate concepts and define “unfairness” without mentioning discrimination.

The Complaint filed this fall raises three claims under the Administrative Procedure Act (“APA”) in relation to the updated manual, as well as other claims. The Complaint contends that it is ultimately consumers who will suffer as a result of the CFPB’s new position, as “[t]hese amendments to the manual harm Plaintiffs’ members by imposing heavy compliance costs that are ultimately passed down to consumers in the form of higher prices and reduced access to products.”

The litigation process started by Plaintiffs in this case will be time-consuming (a response to the Complaint is not expected from Defendants until December). In the meantime, entities in the financial sector should be cognizant of the CFPB’s new approach and ensure that their compliance practices appropriately mitigate risk, including in relation to algorithmic decision-making and AI. As always, we will keep you up to date with the latest news on this litigation.

© Copyright 2022 Squire Patton Boggs (US) LLP

White House Office of Science and Technology Policy Releases “Blueprint for an AI Bill of Rights”

On October 4, 2022, the White House Office of Science and Technology Policy (“OSTP”) unveiled its Blueprint for an AI Bill of Rights, a non-binding set of guidelines for the design, development, and deployment of artificial intelligence (AI) systems.

The Blueprint comprises five key principles:

  1. The first Principle is to protect individuals from unsafe or ineffective AI systems, and encourages consultation with diverse communities, stakeholders and experts in developing and deploying AI systems, as well as rigorous pre-deployment testing, risk identification and mitigation, and ongoing monitoring of AI systems.

  2. The second Principle seeks to establish safeguards against discriminatory outcomes stemming from the use of algorithmic decision-making, and encourages developers of AI systems to take proactive measures to protect individuals and communities from discrimination, including through equity assessments and algorithmic impact assessments in the design and deployment stages.

  3.  The third Principle advocates for building privacy protections into AI systems by default, and encourages AI systems to respect individuals’ decisions regarding the collection, use, access, transfer and deletion of personal information where possible (and where not possible, use default privacy by design safeguards).

  4. The fourth Principle emphasizes the importance of notice and transparency, and encourages developers of AI systems to provide a plain language description of how the system functions and the role of automation in the system, as well as when an algorithmic system is used to make a decision impacting an individual (including when the automated system is not the sole input determining the decision).

  5. The fifth Principle encourages the development of opt-out mechanisms that provide individuals with the option to access a human decisionmaker as an alternative to the use of an AI system.

In 2019, the European Commission published a similar set of automated systems governance principles, called the Ethics Guidelines for Trustworthy AI. The European Parliament currently is in the process of drafting the EU Artificial Intelligence Act, a legally enforceable adaptation of the Commission’s Ethics Guidelines. The current draft of the EU Artificial Intelligence Act requires developers of open-source AI systems to adhere to detailed guidelines on cybersecurity, accuracy, transparency, and data governance, and provides for a private right of action.

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

Protection for Voice Actors is Artificial in Today’s Artificial Intelligence World

As we all know, social media has taken the world by storm. Unsurprisingly, it has had an impact on trademark and copyright law, as well as on the related right of publicity. A recent case involving an actor’s voice being used on the popular app TikTok is emblematic of the times. The actor, Bev Standing, sued TikTok for using her voice, simulated via artificial intelligence (AI) without her permission, to serve as “the female computer-generated voice of TikTok.” The case, which was settled last year, illustrates how the law is being adapted to protect artists’ rights in the face of exploitation through AI, as well as the limits of current law in protecting AI-created works.

Standing explained that she thinks of her voice “as a business,” and she is looking to protect her “product.” Apps like TikTok are taking these “products” and feeding them into an algorithm without the original speaker’s permission, thus impairing creative professionals’ ability to profit in an age of widespread use of the Internet and social media platforms.

Someone’s voice (and aspects of their persona such as their photo, image, or other likeness) can be protected by what’s called the “right of publicity.” That right prevents others from appropriating one’s persona, but only when the appropriation is for commercial purposes. In the TikTok case, there was commercial use, as TikTok was benefiting from the use of Standing’s voice to “narrate” its users’ videos (with some user videos apparently involving “foul and offensive language”). In her Complaint, Standing alleged TikTok had violated her right of publicity in using her voice to create the AI voice used by TikTok, and relied upon two other claims, false designation of origin under the Lanham Act and copyright infringement, as well as related state law claims. The false designation of origin claim turned on whether Standing’s voice was so recognizable that another party’s misappropriation of it could confuse consumers as to whether Standing authorized the TikTok use. The copyright infringement claim was possible because Standing created the original voice files for a company that hired her to record Chinese language translations. TikTok subsequently acquired the files but failed to get a license from Standing to use them, as TikTok was legally obligated to do because Standing was the original creator (and therefore copyright owner) of the voice files.

As with other historical technological innovations (one of the earliest being the printing press), the law often plays catch-up, but has proven surprisingly adaptable to new technology. Here, Standing was able to plead three legal theories (six if you count the state statutory and common law unfair competition claims), so it seems artists are well-protected by existing law, at least if they are alleging AI was used to copy their work or persona.

On the other hand, the case for protecting creative expression produced in whole or in part by AI is much more difficult. Some believe AI deserves its own form of copyright, since innovative technology has increasingly made its own music and sounds. Currently, protection for these sounds is limited, since only humans can be identified as authors for the purposes of copyright. Ryan Abbott, a professor of law and health science at the University of Surrey in Britain, is attempting to bring a legal case against the U.S. Copyright Office to register a digital artwork made by a computer, with AI as its author. The fear, says Abbott, is that without rights over these works, innovation will be stifled: individuals will not have an incentive to create AI works if they cannot protect them from unauthorized exploitation.