Exploring the Future of Information Governance: Key Predictions for 2024

Information governance has evolved rapidly, with technology driving the pace of change. Looking ahead to 2024, we anticipate technology playing an even larger role in data management and protection. In this blog post, we’ll delve into the key predictions for information governance in 2024 and how they’ll impact businesses of all sizes.

  1. Embracing AI and Automation: Artificial intelligence and automation are revolutionizing industries, bringing about significant changes in information governance practices. Over the next few years, it is anticipated that an increasing number of companies will harness the power of AI and automation to drive efficient data analysis, classification, and management. This transformative approach will not only enhance risk identification and compliance but also streamline workflows and alleviate administrative burdens, leading to improved overall operational efficiency and effectiveness. As organizations adapt and embrace these technological advancements, they will be better equipped to navigate the evolving landscape of data governance and stay ahead in an increasingly competitive business environment.
  2. Prioritizing Data Privacy and Security: In recent years, data breaches and cyber-attacks have significantly increased concerns regarding the usage and protection of personal data. As we look ahead to 2024, the importance of data privacy and security will be paramount. This heightened emphasis is driven by regulatory measures such as the California Consumer Privacy Act (CCPA) and the European Union’s General Data Protection Regulation (GDPR). These regulations necessitate that businesses take proactive measures to protect sensitive data and provide transparency in their data practices. By doing so, businesses can instill trust in their customers and ensure the responsible handling of personal information.
  3. Fostering Collaboration Across Departments: In today’s rapidly evolving digital landscape, information governance has become a collective responsibility. Looking ahead to 2024, we can anticipate a significant shift towards closer collaboration between the legal, compliance, risk management, and IT departments. This collaborative effort aims to ensure comprehensive data management and robust protection practices across the entire organization. By adopting a holistic approach and providing cross-functional training, companies can empower their workforce to navigate the complexities of information governance with confidence, enabling them to make informed decisions and mitigate potential risks effectively. Embracing this collaborative mindset will be crucial for organizations to adapt and thrive in an increasingly data-driven world.
  4. Exploring Blockchain Technology: Blockchain technology, with its decentralized and immutable nature, has the tremendous potential to revolutionize information governance across industries. By 2024, as businesses continue to recognize the benefits, we can expect a significant increase in the adoption of blockchain for secure and transparent transaction ledgers. This transformative technology not only enhances data integrity but also mitigates the risks of tampering, ensuring trust and accountability in the digital age. With its ability to provide a robust and reliable framework for data management, blockchain is poised to reshape the way we handle and secure information, paving the way for a more efficient and trustworthy future.
  5. Prioritizing Data Ethics: As data-driven decision-making becomes increasingly crucial in the business landscape, the importance of ethical data usage cannot be overstated. In the year 2024, businesses will place even greater emphasis on data ethics, recognizing the need to establish clear guidelines and protocols to navigate potential ethical dilemmas that may arise. To ensure responsible and ethical data practices, organizations will invest in enhancing data literacy among their workforce, prioritizing education and training initiatives. Additionally, there will be a growing focus on transparency in data collection and usage, with businesses striving to build trust and maintain the privacy of individuals while harnessing the power of data for informed decision-making.

The future of information governance will be shaped by technology, regulations, and ethical considerations. Businesses that adapt to these changes will thrive in a data-driven world. By investing in AI and automation, prioritizing data privacy and security, fostering collaboration, exploring blockchain technology, and upholding data ethics, companies can prepare for the challenges and opportunities of 2024 and beyond.

Jim Merrifield, Robinson+Cole’s Director of Information Governance & Business Intake, contributed to this report.

5 Trends to Watch: 2024 Emerging Technology

  1. Increased Adoption of Generative AI and Push to Minimize Algorithmic Biases – Generative AI took center stage in 2023, and the popularity of this technology will continue to grow. The art of crafting nuanced and effective prompts will grow in importance, and there will be greater adoption across a wider variety of industries. Advancements in algorithms should increase accessibility through more user-friendly platforms, which can lead to an increased focus on minimizing algorithmic biases and the establishment of guardrails governing AI policies. Of course, a keen awareness of the ethical considerations and policy frameworks will help guide generative AI’s responsible use.
  2. Convergence of AR/VR and AI May Result in “AR/VR on steroids” – The fusion of Augmented Reality (AR) and Virtual Reality (VR) technologies with AI unlocks a new era of customization and promises enhanced immersive experiences, blurring the lines between the digital and physical worlds. We expect to see further refining and personalizing of AR/VR to redefine gaming, education, and healthcare, along with various industrial applications.
  3. EV/Battery Companies Charge into Greener Future. With new technologies and chemistries, advancements in battery efficiency, energy density, and sustainability can move the adoption of electric vehicles (EVs) to new heights. Decreasing prices for battery metals can better help make EVs more competitive with traditional vehicles. AI may provide new opportunities in optimizing EV performance and help solve challenges in battery development, reliability, and safety.
  4. “Rosie the Robot” is Closer than You Think. With advancements in machine learning algorithms, sensor technologies, and integration of AI, the intelligence and adaptability of robotics should continue to grow. Large language models (LLMs) will likely encourage effective human-robot collaboration, and even non-technical users will find it easy to employ robotics to accomplish a task. Robotics is developing into a field where machines can learn, make decisions, and work in unison with people. It is no longer limited to monotonous activities and repetitive tasks.
  5. Unified Defense in Battle Against Cyber-Attacks. Digital threats are expected to only increase in 2024, including more sophisticated AI-powered attacks. As the international battle against hackers rages on, threat detection, response, and mitigation will play a crucial role in staying ahead of rapidly evolving cyber-attacks. Because these attacks pose risks to national security and economic growth, there should be increased collaboration between industries and governments to establish standardized cybersecurity frameworks to protect data and privacy.

Algorithmic Pricing Agents and Price-Fixing Facilitators: Antitrust Law’s Latest Conundrum

Are machines doing the collaborating that competitors may not?

It is an application of artificial intelligence (“AI”) that many businesses, agencies, legislators, lawyers, and antitrust law enforcers around the world are only beginning to confront. It is also among the top concerns of in-house counsel across industries. Competitors are increasingly setting prices through the use of communal, AI-enhanced algorithms that analyze data that are private, public, or a mix of both.

Allegations in private and public litigation describe “algorithmic price fixing” in which the antitrust violation occurs when competitors feed and access the same database platform and use the same analytical tools. Then, as some allege, the violations continue when competitors agree to the prices produced by the algorithms. Right now, renters and prosecutors are teeing off on the poster child for algorithmic pricing, RealPage Inc., and the many landlords and property managers who use it.

PRIVATE AND PUBLIC LITIGATION

A Nov. 1, 2023 complaint filed by the Washington, DC, Attorney General’s office described RealPage’s offerings this way: “[A] variety of technology-based services to real estate owners and property managers including revenue management products that employ statistical models that use data—including non-public, competitively sensitive data—to estimate supply and demand for multifamily housing that is specific to particular geographic areas and unit types, and then generate a ‘price’ to charge for renting those units that maximizes the landlord’s revenue.”

The complaint alleges that more than 30% of apartments in multifamily buildings and 60% of units in large multifamily buildings nationwide are priced using the RealPage software. In the Washington-Arlington-Alexandria Metropolitan Area that number leaps to more than 90% of units in large buildings. The complaint alleges that landlords have agreed to set their rates using RealPage.

Private actions against RealPage have also been filed in federal courts across the country and have been centralized in multi-district litigation in the Middle District of Tennessee (In re: RealPage, Inc., Rental Software Antitrust Litigation [NO. II], Case No. 3:23-md-3071, MDL No. 3071). The Antitrust Division of the Department of Justice filed a Statement of Interest and a Memorandum in Support in the case urging the court to deny the defendants’ motion to dismiss.

Even before the MDL, RealPage had attracted the Antitrust Division’s attention when the company acquired its largest competitor, Lease Rent Options, for $300 million, along with Axiometrics for $75 million and On-Site Manager, Inc. for $250 million.

The Antitrust Division has been pursuing the use of algorithms in other industries, including airlines and online retailers. The DOJ and FTC are both studying the issue and reaching out to experts to learn more.

JOURNALISTS AND SENATORS

Additionally, three senators urged DOJ to investigate RealPage after reporters at ProPublica wrote an investigative report in October 2022. The journalists claim that RealPage’s price-setting software “uses nearby competitors’ nonpublic rent data to feed an algorithm that suggests what landlords should charge for available apartments each day.” ProPublica speculated that the algorithm is enabling landlords to coordinate prices and in the process push rents above competitive levels in violation of the antitrust laws.

Senators Amy Klobuchar (D-MN), Dick Durbin (D-IL) and Cory Booker (D-NJ) wrote to the DOJ concerned that RealPage enables “a cartel to artificially inflate rental rates in multifamily residential buildings.”

Sen. Sherrod Brown (D-OH) also wrote to the Federal Trade Commission with concerns “about collusion in the rental market,” urging the FTC to “review whether rent setting algorithms that analyze rent prices through the use of competitors’ private data … violate antitrust laws.” The Ohio senator specifically mentioned RealPage’s YieldStar and AI Revenue Management programs.

THE EUROPEANS

The European Commission has proposed the Artificial Intelligence Act, which includes provisions on algorithmic pricing, requiring that algorithmic pricing systems be transparent, explainable, and non-discriminatory with regard to consumers. Companies that use algorithmic pricing systems will be required to implement compliance procedures, including audits, data governance, and human oversight.

THE LEGAL CONUNDRUM

An essential element of any claimed case of price-fixing under the U.S. antitrust laws is the element of agreement: a plaintiff alleging price-fixing must prove the existence of an agreement between two or more competitors who should be setting their prices independently but aren’t. Consumer harm from collusion occurs when competitors set prices to achieve their maximum joint profit instead of setting prices to maximize individual profits. To condemn algorithmic pricing as collusion, therefore, requires proof of agreement.
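
To see the economics in the paragraph above concretely, consider a toy pricing model. Everything in this sketch is invented for illustration (the linear demand curve, the parameter values, the symmetric setup) and has no connection to any real market or to the RealPage record; it simply shows that the jointly optimal price exceeds the individually optimal, or Nash, price.

```python
# Symmetric differentiated-products duopoly with linear demand:
#   q_i = A - B * p_i + D * p_j   (all parameter values are hypothetical)
A, B, D, C = 100.0, 2.0, 1.0, 20.0   # demand intercept/slopes and unit cost

# Individual (Nash) pricing: each seller's best response satisfies
#   p_i = (A + B*C + D*p_j) / (2B); imposing symmetry gives:
p_nash = (A + B * C) / (2 * B - D)

# Joint profit maximization at a common price p solves
#   max 2 * (p - C) * (A - (B - D) * p), giving:
p_joint = (A + (B - D) * C) / (2 * (B - D))

def profit_each(p):
    """Per-seller profit when both sellers charge the same price p."""
    return (p - C) * (A - B * p + D * p)

print(f"Nash price:  {p_nash:6.2f}   profit each: {profit_each(p_nash):8.2f}")
print(f"Joint price: {p_joint:6.2f}   profit each: {profit_each(p_joint):8.2f}")
# The joint price is higher and more profitable for the sellers;
# consumers pay the difference.
```
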

It may be difficult for the RealPage plaintiffs to prove that RealPage’s users agreed among themselves to adhere to any particular price or pricing formula, but not impossible. End users are likely to argue that RealPage’s pricing recommendations are merely aggregate market signals that RealPage is collecting and disseminating. The use of the same information service, their argument will go, does not prove the existence of an agreement for purposes of Section 1 of the Sherman Act.

The parties and courts embroiled in the RealPage litigation are constrained to live under the law as it presently exists, so the solution proposed by Michal Gal, Professor and Director of the Forum on Law and Markets at the University of Haifa, is out of reach. In her 2018 paper, “Algorithms as Illegal Agreements,” Professor Gal confronts the agreement problem when algorithms set prices and concludes that it is time to “rethink our laws and focus on reducing harms to social welfare rather than on what constitutes an agreement.” Academics have been critical of the agreement element of Section 1 for years, but it is unlikely to change anytime soon, even with the added inconvenience it poses where competitors rely on a common vendor of machine-generated pricing recommendations.

Nonetheless, there is some evidence that autonomous machines, just like humans, can learn that collusion allows sellers to charge monopoly prices. In their December 2019 paper, “Artificial Intelligence, Algorithmic Pricing and Collusion,” Emilio Calvano, Giacomo Calzolari, Vincenzo Denicolo, and Sergio Pastorello at the Department of Economics at the University of Bologna showed with computer simulations that machines autonomously analyzing prices can develop collusive strategies “from scratch, engaging in active experimentation and adapting to changing environments.” The authors say indications from their models “suggest that algorithmic collusion is more than a remote theoretical possibility.” They find that “relatively simple [machine learning] pricing algorithms systematically learn to play collusive strategies.” The authors claim to be the first to “clearly document the emergence of collusive strategies among autonomous pricing agents.”
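
For intuition about how such simulations work, here is a heavily simplified sketch in the spirit of the Calvano et al. experiments. It is not their model: their agents use logit demand, richer state spaces, and far longer training runs. The price grid, demand parameters, and learning hyperparameters below are all invented, and whether the learned play settles above the competitive benchmark depends on those choices and on chance.

```python
import random

PRICES = [30, 40, 50, 60]      # hypothetical discrete price grid
A, B, D, C = 100, 2, 1, 20     # linear demand and unit-cost parameters

def profit(p_own, p_rival):
    return (p_own - C) * max(A - B * p_own + D * p_rival, 0)

def train(steps=100_000, alpha=0.1, gamma=0.95, eps=0.05):
    # Tabular Q-learning, one table per agent; the state is last
    # period's pair of prices, so agents can condition on history.
    Q = [{}, {}]
    state = (random.choice(PRICES), random.choice(PRICES))
    for _ in range(steps):
        acts = tuple(
            random.choice(PRICES) if random.random() < eps
            else max(PRICES, key=lambda p: Q[i].get((state, p), 0.0))
            for i in (0, 1)
        )
        rewards = (profit(acts[0], acts[1]), profit(acts[1], acts[0]))
        for i in (0, 1):
            best_next = max(Q[i].get((acts, p), 0.0) for p in PRICES)
            old = Q[i].get((state, acts[i]), 0.0)
            Q[i][(state, acts[i])] = old + alpha * (rewards[i] + gamma * best_next - old)
        state = acts
    return Q

Q = train()
# Greedy play from the learned tables; compare the played prices with the
# static Nash benchmark (about 46.7 in the continuous version of this model).
state = (random.choice(PRICES), random.choice(PRICES))
for _ in range(5):
    state = tuple(max(PRICES, key=lambda p: Q[i].get((state, p), 0.0)) for i in (0, 1))
    print(state)
```
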

THE AGREEMENT ELEMENT IN THE MACHINE PRICING CASE

For three main reasons, the element of agreement need not be an obstacle to successfully prosecuting a price-fixing claim against competitors that use a common or similar vendor of algorithmic pricing data and software.

First, there is significant precedent for inferring the existence of an agreement among parties that knowingly participate in a collusive arrangement even if they do not directly interact, sometimes imprecisely referred to as a “rimless wheel hub-and-spoke” conspiracy. For example, in Toys “R” Us, Inc. v. F.T.C., 221 F.3d 928 (7th Cir. 2000), the court inferred the necessary concerted action from a series of individual agreements between toy manufacturers and Toys “R” Us in which the manufacturers promised that the toys they sold to Toys “R” Us and other toy stores would not be sold to big box stores in the same packaging. The FTC found that each of the manufacturers entered into the restraint on the condition that the others also did so. The court found that Toys “R” Us had engineered a horizontal boycott against a competitor in violation of Section 1, despite the absence of evidence of any “privity” between the boycotting manufacturers.

The Toys “R” Us case relied on the Supreme Court’s decision in Interstate Circuit v. United States, 306 U.S. 208 (1939), in which movie theater chains sent an identical letter to eight movie studios asking them to restrict secondary runs of certain films. The letter disclosed that each of the eight were receiving the same letter. The Court held that a direct agreement was not a prerequisite for an unlawful conspiracy. “It was enough that, knowing that concerted action was contemplated and invited, the distributors gave their adherence to the scheme and participated in it.”

The analogous issue in the algorithmic pricing scenario is whether the vendor’s end users know that their competitors are also end users. If so, the inquiry can consider the agreement element satisfied if the algorithm does, in fact, jointly maximize the end users’ profits.

The second factor overcoming the agreement element is related to the first. Whether software that recommends prices has interacted with the prices set by competitors to achieve joint profit maximization—that is, whether the machines have learned to collude without human intervention—is an empirical question. The same techniques used to uncover machine-learned collusion by simulation can be used to determine the extent of interdependence in historical price setting. If statistical evidence of collusive pricing is available, it is enough that the end users knowingly accepted the offer to set their prices guided by the algorithm. The economics underlying the agreement element lies in the prohibition of joint rather than individual profit maximization, so direct evidence that market participants are jointly profit maximizing should obviate the need for further evidence of agreement.
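
As a gesture at what such an empirical exercise might look like, the sketch below fabricates price data with interdependence built in and then tests whether one seller’s price responds to its rival’s lagged price after controlling for the seller’s own cost and demand conditions. The data, model, and coefficients are all invented; actual expert analysis would rest on real transaction data and far more careful identification.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
cost = rng.normal(50, 5, T)     # hypothetical own-cost series
demand = rng.normal(0, 1, T)    # hypothetical demand shifter
own = np.empty(T)
rival = np.empty(T)
own[0] = rival[0] = 60.0
for t in range(1, T):
    # Synthetic prices generated WITH interdependence (the 0.6 terms),
    # so the regression below should recover it.
    own[t] = 0.5 * cost[t] + 2.0 * demand[t] + 0.6 * rival[t - 1] + rng.normal(0, 1)
    rival[t] = 0.5 * cost[t] + 2.0 * demand[t] + 0.6 * own[t - 1] + rng.normal(0, 1)

# OLS of own price on controls plus the rival's lagged price.
X = np.column_stack([np.ones(T - 1), cost[1:], demand[1:], rival[:-1]])
y = own[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (len(y) - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
print(f"rival-lag coefficient: {beta[3]:.2f} (s.e. {se[3]:.2f})")
# A coefficient near zero would be consistent with independent pricing; a
# large, precise one is evidence of interdependence (not, by itself, agreement).
```
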

A third reason the agreement element need not stymie a Section 1 action against defendants engaged in algorithmic pricing is based on the Supreme Court’s decision in American Needle v. NFL, 560 U.S. 183 (2010). In that case the Court made clear that arrangements that remove independent centers of decision-making from the market run afoul of Section 1. If the net effect of the algorithm is to displace individual decision-making with decisions outsourced to a centralized pricing agent, the mechanism should be immaterial.

The rimless wheel of the so-called hub-and-spoke conspiracy is an inadequate analogy because the wheel in these cases does have a rim, i.e., a connection between the conspirators. In the scenarios above in which the courts have found Section 1 liability i) each of the participants knew that its rivals were also entering into the same or similar arrangements, ii) the participants devolved pricing authority away from themselves down to an algorithmic pricing agent, and iii) historical prices could be shown statistically to have exceeded the competitive level in a way consistent with collusive pricing. These elements connect the participants in the scheme, supplying the “rim” to the spokes of the wheel. If the plaintiffs in the RealPage litigation can establish these elements, they will have met their burden of establishing the requisite element of agreement in their Section 1 claim.

What Employers Need to Know about the White House’s Executive Order on AI

President Joe Biden recently issued an executive order designed to establish minimum risk practices for the use of generative artificial intelligence (“AI”), with a focus on people’s rights and safety, and with many consequences for employers. Businesses should be aware of these directives to agencies, especially as they may result in new regulations, agency guidance, and enforcement actions that apply to their workers.

Executive Order Requirements Impacting Employers

Specifically, the executive order requires the Department of Justice and federal civil rights offices to coordinate on ‘best practices’ for investigating and prosecuting civil rights violations related to AI. The ‘best practices’ will address: job displacement; labor standards; workplace equity, health, and safety; and data collection. These principles and ‘best practices’ are focused on benefitting workers and “preventing employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize.”

The executive order also calls for a report on AI’s potential labor-market impacts and for a study identifying options for strengthening federal support for workers facing labor disruptions, including from AI. Specifically, the president has directed the Chairman of the Council of Economic Advisers to “prepare and submit a report to the President on the labor-market effects of AI”. In addition, there is a requirement for the Secretary of Labor to submit “a report analyzing the abilities of agencies to support workers displaced by the adoption of AI and other technological advancements.” This report will include principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits. Employers should expect more direction once this report is completed in April 2024.

Increasing International Employment?

Developing and using generative AI inherently requires skilled workers, which President Biden recognizes. One of the goals of his executive order is to “[u]se existing authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews.” While work visas have been historically difficult for employers to navigate, this executive order may make it easier for US employers to access skilled workers from overseas.

Looking Ahead

In light of the focus of this executive order, employers using AI for recruiting or decisions about applicants (and even current employees) must be aware of the consequences of not putting a human check on the potential bias. Working closely with employment lawyers at Sheppard Mullin and having multiple checks and balances on recruiting practices are essential when using generative AI.

While this executive order is quite limited in scope, it is only a first step. As these actions are implemented in the coming months, be sure to check back for updates.

For more news on the Impact of the Executive Order on AI for Employers, visit the NLR Communications, Media & Internet section.

The FCC Approves an NOI to Dive Deeper into AI and its Effects on Robocalls and Robotexts

AI is on the tip of everyone’s tongue these days, it seems. The Dame brought you a recap of President Biden’s orders addressing AI at the beginning of the month. This morning at the FCC’s open meeting, the commissioners were presented with a request for a Notice of Inquiry (NOI) to gather additional information about the benefits and harms of artificial intelligence and its use alongside robocalls and robotexts. The five areas of interest are as follows:

  • First, the NOI seeks comment on whether, and if so how, the commission should define AI technologies for purposes of the inquiry. This includes particular uses of AI technologies that are relevant to the commission’s statutory responsibilities under the TCPA, which protects consumers from nonemergency calls and texts using an autodialer or containing an artificial or prerecorded voice.
  • Second, the NOI seeks comment on how these technologies may impact consumers who receive robocalls and robotexts, including any potential benefits and risks that the emerging technologies may create. Specifically, the NOI seeks information on how these technologies may alter the functioning of the existing regulatory framework so that the commission may formulate policies that benefit consumers by ensuring they continue to receive privacy protections under the TCPA.
  • Third, the NOI seeks comment on whether it is necessary or possible to determine at this point whether future types of AI technologies may fall within the TCPA’s existing prohibitions on autodial calls or texts and artificial or prerecorded voice messages.
  • Fourth, the NOI seeks comment on whether the commission should consider ways to verify the authenticity of legitimately generated AI voice or text content from trusted sources, such as through the use of watermarks, certificates, labels, signatures, or other forms of labels when callers rely on AI technology to generate content. This may include, for example, emulating a human voice on a robocall or creating content in a text message. (A toy signing sketch follows this list.)
  • Lastly, the NOI seeks comment on what next steps the commission should consider to further the inquiry.
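
For the technically curious, the fourth item’s mention of watermarks, certificates, and signatures can be pictured with a toy content-signing scheme. The sketch below uses a shared-secret HMAC purely for illustration; the key, caller ID, and message are invented, and a real framework would more plausibly rely on certificate-based attestation (in the spirit of STIR/SHAKEN for caller ID) than on a shared key.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"trusted-caller-demo-key"   # hypothetical shared secret

def sign_content(text: str, caller_id: str) -> dict:
    """Attach an 'AI-generated' label plus a verifiable signature."""
    payload = {"caller_id": caller_id, "content": text, "ai_generated": True}
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_content(payload: dict) -> bool:
    """Recompute the signature over everything except the signature itself."""
    unsigned = {k: v for k, v in payload.items() if k != "signature"}
    body = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(payload["signature"], expected)

msg = sign_content("Your appointment is tomorrow at 9 AM.", "clinic-4521")
print(verify_content(msg))    # True: content is labeled and intact
msg["content"] = "Wire us money now."
print(verify_content(msg))    # False: tampering breaks the signature
```
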

While all the commissioners voted to approve the NOI, they did share a few insightful comments. Commissioner Carr stated, “If AI can combat illegal robocalls, I’m all for it,” but he also expressed that he does “…worry that the path we are heading down is going to be overly prescriptive” and suggests “…Let’s put some common-sense guardrails in place, but let’s not be so prescriptive and so heavy-handed on the front end that we end up benefiting large incumbents in the space because they can deal with the regulatory frameworks and stifling the smaller innovation to come.”

Commissioner Starks shared, “I, for one, believe this intersectionality is critical. While the future of AI remains uncertain, one thing is clear — it has the potential to impact if not transform every aspect of American life, and because of that potential, each part of our government bears responsibility to better understand the risks, opportunities within its mandate, while being mindful of the limits of its expertise, experience, and authority. In this era of rapid technological change, we must collaborate, lean into our expertise across agencies to best serve our citizens and consumers.” Commissioner Starks seemed to be particularly focused on AI’s ability to facilitate bad actors in schemes like voice cloning and how the FCC can implement safeguards against this type of behavior.

“AI technologies can bring new challenges and opportunities. Responsible and ethical implementation of AI technologies is crucial to strike a balance, ensuring that the benefits of AI are harnessed to protect consumers from harm rather than amplifying the risks in an increasingly digital landscape,” Commissioner Gomez shared.

Finally, the topic around the AI NOI wrapped up with Chairwoman Rosenworcel commenting, “… I think we make a mistake if we only focus on the potential for harm. We need to equally focus on how artificial intelligence can radically improve the tools we have today to block unwanted robocalls and robotexts. We are talking about technology that can see patterns in our network traffic, unlike anything we have today. It can lead to the development of analytic tools that are exponentially better at finding fraud before it reaches us at home. Used at scale, we can not only stop this junk, we can use it to increase trust in our networks. We are asking how artificial intelligence is being used right now to recognize patterns in network traffic and how it can be used in the future. We know the risks this technology involves but we also want to harness the benefits.”

Under the GDPR, Are Companies that Utilize Personal Information to Train Artificial Intelligence (AI) Controllers or Processors?

The EU’s General Data Protection Regulation (GDPR) applies to two types of entities – “controllers” and “processors.”

A “controller” refers to an entity that “determines the purposes and means” of how personal information will be processed.[1] Determining the “means” of processing refers to deciding “how” information will be processed.[2] That does not necessitate, however, that a controller makes every decision with respect to information processing. The European Data Protection Board (EDPB) distinguishes between “essential means” and “non-essential means.”[3] “Essential means” refers to those processing decisions that are closely linked to the purpose and the scope of processing and, therefore, are considered “traditionally and inherently reserved to the controller.”[4] “Non-essential means” refers to more practical aspects of implementing a processing activity that may be left to third parties – such as processors.[5]

A “processor” refers to a company (or a person such as an independent contractor) that “processes personal data on behalf of [a] controller.”[6]

Data typically is needed to train and fine-tune modern artificial intelligence models. Such models use data – including personal information – to recognize patterns and predict results.
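
As a tiny illustration of that point, the sketch below fits a model on a handful of made-up personal attributes and then predicts an outcome for a new individual. The data, labels, and library choice (scikit-learn) are all assumptions made for this example.

```python
# Minimal pattern-recognition example on invented personal data.
from sklearn.linear_model import LogisticRegression

X = [[25, 40], [47, 90], [35, 60], [52, 120]]   # [age, income in $k], fabricated
y = [0, 1, 0, 1]                                # invented labels, e.g. "purchased"

model = LogisticRegression().fit(X, y)          # the model learns a pattern
print(model.predict([[30, 55]]))                # and predicts for a new person
```
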

Whether an organization that utilizes personal information to train an artificial intelligence engine is a controller or a processor depends on the degree to which the organization determines the purpose for which the data will be used and the essential means of processing. The following chart discusses these variables in the context of training AI:

Function: Purpose of processing – why the AI is being trained.

  • Indicative of a controller: If an organization makes its own decision to utilize personal information to train an AI, then the organization will likely be considered a “controller.”
  • Indicative of a processor: If an organization is using personal information provided by a third party to train an AI, and is doing so at the direction of the third party, then the organization may be considered a processor.

Function: Essential means – data types used in training.

  • Indicative of a controller: If an organization selects which data fields will be used to train an AI, the organization will likely be considered a “controller.”
  • Indicative of a processor: If an organization is instructed by a third party to utilize particular data types to train an AI, the organization may be a processor.

Function: Essential means – duration personal information is held within the training engine.

  • Indicative of a controller: If an organization determines how long the AI can retain training data, it will likely be considered a “controller.”
  • Indicative of a processor: If an organization is instructed by a third party to use data to train an AI, and does not control how long the AI may access the training data, the organization may be a processor.

Function: Essential means – recipients of the personal information.

  • Indicative of a controller: If an organization determines which third parties may access the training data that is provided to the AI, that organization will likely be considered a “controller.”
  • Indicative of a processor: If an organization is instructed by a third party to use data to train an AI, but does not control who will be able to access the AI (and the training data to which the AI has access), the organization may be a processor.

Function: Essential means – individuals whose information is included.

  • Indicative of a controller: If an organization is selecting whose personal information will be used as part of training an AI, the organization will likely be considered a “controller.”
  • Indicative of a processor: If an organization is being instructed by a third party to utilize particular individuals’ data to train an AI, the organization may be a processor.
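
To make the chart concrete, here is a toy encoding of its factors as a decision rubric. The class, field names, and decision rule are invented for this sketch, and it is an illustration rather than legal advice; a real controller/processor analysis under the GDPR and the EDPB guidelines is holistic and fact-specific.

```python
from dataclasses import dataclass

@dataclass
class TrainingArrangement:
    decides_purpose: bool        # chose on its own to train the AI (the "why")
    selects_data_fields: bool    # picks which data types are used in training
    sets_retention: bool         # controls how long training data is held
    chooses_recipients: bool     # decides which third parties may access the data
    selects_data_subjects: bool  # decides whose personal information is used

def likely_role(a: TrainingArrangement) -> str:
    """Rough mapping of the chart: deciding the purpose or any essential
    means points toward controller status; acting only on a third party's
    instructions points toward processor status."""
    essential_means = [a.selects_data_fields, a.sets_retention,
                       a.chooses_recipients, a.selects_data_subjects]
    if a.decides_purpose or any(essential_means):
        return "likely a controller"
    return "may be a processor"

# A vendor that trains a model strictly on its customer's instructions:
vendor = TrainingArrangement(False, False, False, False, False)
print(likely_role(vendor))   # may be a processor
```
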

[1] GDPR, Article 4(7).

[2] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 33.

[3] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[4] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[5] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[6] GDPR, Article 4(8).

©2023 Greenberg Traurig, LLP. All rights reserved.

For more Privacy Legal News, click here to visit the National Law Review.

Automating Entertainment: Writers Demand that Studios Not Use AI

When the Writers Guild of America (WGA) presented its list of demands in the strike that has already ground production on many shows to a halt, chief among them was that the studios agree not to use artificial intelligence to write scripts. Specifically, the Guild had two asks: First, it said that “literary material,” including screenplays and outlines, must be generated by a person and not an AI; second, it insisted that “source material” not be AI-generated.

The Alliance of Motion Picture and Television Producers (AMPTP), which represents the studios, rejected this proposal. They countered that they would be open to holding annual meetings to discuss advancements in technology. Alarm bells sounded as the WGA saw both an existential threat to its survival and a sign that Hollywood was already planning for it.

Writers are often paid at a far lower rate to adapt “source material” such as a comic book or a novel into a screenplay than they are paid to generate original literary material. By using AI tools to generate an outline or first draft of an original story and then enlisting a human to “adapt” it into a screenplay, production studios potentially stand to save significantly.

Many industries have embraced the workflow of an AI-generated “first draft” that the human then punches up. And the WGA has said that its writers’ using AI as a tool is acceptable: There would essentially be a robot in the writers’ room with writers supplementing their craft with AI-generated copy, but without AI wholly usurping their jobs.

Everyone appears to be in agreement that AI could never write the next season of White Lotus or Succession, but lower-brow shows could easily be aped by AI. Law and Order, for instance, is an often-cited example, not just because it’s formulaic but because AIs are trained on massive data sets of copyrighted content and there are 20 seasons of Law and Order for the AI to ingest. And as AI technology gets more advanced, who knows what it could do? ChatGPT was initially released last November, and as of this writing we’re on GPT-4, a far more powerful version of a platform that is advancing exponentially.

The studios’ push for the expanded use of AI is not without its own risks. The Copyright Office has equivocated somewhat in its determination that AI-generated art is not protectable. In a recent Statement of Policy, the Office said that copyright will only protect aspects of the work that were judged to have been made by the authoring human, resulting in partial protections of AI-generated works. So, the better the AI gets—the more it contributes to cutting out the human writer—the weaker the copyright protection for the studios/networks.

Whether or not AI works infringe the copyrights on the original works is an issue that is currently being litigated in a pair of lawsuits against Stability AI, the startup that created Stable Diffusion (an AI tool with the impressive ability to turn text into images in what some have dubbed the most massive art heist in history). Some have questioned whether the humans who wrote the original episodes would get compensated, and the answer is maybe not. In most cases the scripts were likely works for hire, owned by the studios.

If the studios own the underlying scripts, what happens to the original content if the studios take copyrighted content and put it through a machine that turns out uncopyrightable content? Can you DMCA or sue someone who copies that? As of this writing, there are no clear answers to these questions.

There are legal questions and deeper philosophical questions about making art. As the AI improves and humans become more cyborgian, does the art become indistinguishable? Prolific users of Twitter say they think their thoughts in 280 characters. Perhaps our readers can relate to thinking of their time in 6-minute increments, or .1’s of an hour. Further, perhaps our readers can relate to their industry being threatened by automation. According to a recent report from Goldman Sachs, generative artificial intelligence is putting 44% of legal work tasks at risk.

© Copyright 2023 Squire Patton Boggs (US) LLP

For more Employment Legal News, click here to visit the National Law Review.

To AI or Not to AI: U.S. Copyright Office Clarifies Options

The U.S. Copyright Office has weighed in with formal guidance on the copyrightability of works whose generation included the use of artificial intelligence (AI) tools. The good news for technology-oriented human creative types: using AI doesn’t automatically disqualify your work from copyright protection. The bad news for independent-minded AIs: you still don’t qualify for copyright protection in the United States.

On March 16, 2023, the Copyright Office issued a statement of policy (“Policy”) to clarify its practices for examining and registering works that contain material generated by the use of AI and how copyright law’s human authorship requirements will be applied when AI was used. This Policy is not itself legally binding or a guarantee of a particular outcome, but many copyright applicants may breathe a sigh of relief that the Copyright Office has formally embraced AI-assisted human creativity.

The Policy is just the latest step in an ongoing debate over the copyrightability of machine-assisted products of human creativity. Nearly 150 years ago, the Supreme Court ruled that photographs are copyrightable. See Burrow-Giles Lithographic Company v. Sarony, 111 U.S. 53 (1884). The case involved a photographer’s claim against a lithographer for 85,000 unauthorized copies of a photograph of Oscar Wilde. The photo, Sarony’s “Oscar Wilde No. 18,” is shown below:

Sarony’s “Oscar Wilde No. 18”

The argument against copyright protection was that a photograph is “a reproduction, on paper, of the exact features of some natural object or of some person” and is therefore not a product of human creativity. Id. at 56. The Supreme Court disagreed, ruling that there was sufficient human creativity involved in making the photo, including posing the subject, evoking the desired expression, arranging the clothing and setting, and managing the lighting.

In the mid-1960s, the Copyright Office rejected a musical composition, Push Button Bertha, that was created by a computer, reasoning that it lacked the “traditional elements of authorship” because those elements were not created by a human.

In 2018, the U.S. Court of Appeals for the Ninth Circuit ruled that Naruto, a crested macaque (represented by a group of friendly humans), lacked standing under the Copyright Act to hold a copyright in the “monkey selfie” case. See Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018). The “monkey selfie” is below:

Monkey Selfie

In February 2022, the Copyright Office rejected a registration (filed by interested humans) for a visual image titled “A Recent Entrance to Paradise,” generated by DABUS, the AI whose claimed fractal-based inventions are the subject of patent applications around the world. DABUS’ image is below:

“A Recent Entrance to Paradise”

Litigation over this rejected application remains pending.

And last month, the Copyright Office ruled that a graphic novel consisting of human-authored text and images generated using the AI tool Midjourney could, as a whole, be copyrighted, but that the images, standing alone, could not. See U.S. Copyright Office, Cancellation Decision re: Zarya of the Dawn (VAu001480196) at 2 (Feb. 21, 2023).

The Copyright Office’s issuing the Policy was necessitated by the rapid and remarkable improvements in generative AI tools over even the past several months. In December 2022, generative AI tool Dall-E generated the following images in response to nothing more than the prompt, “portrait of a musician with a hat in the style of Rembrandt”:

Four portraits generated by AI tool Dall-E from the prompt, "portrait of a musician with a hat in the style of Rembrandt."

If these were human-generated paintings, or even photographs, there is no doubt that they would be copyrightable. But given that all four images were generated in mere seconds, with a single, general prompt from a human user, do they meet the Copyright Office’s criteria for copyrightability? The answer, now, is a clear “no” under the Policy.

However, the Policy opens the door to registering AI-assisted human creativity. The toggle points will be:

“…whether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.” 

In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of “mechanical reproduction” or instead of an author’s “own original mental conception, to which [the author] gave visible form.” 

“The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This will necessarily be a case-by-case inquiry.” 

See Policy (citations omitted).

Machine-produced authorship alone will continue not to be registerable in the United States, but human selection and arrangement of AI-produced content could lead to a different result, according to the Policy. The Policy provides select examples to help guide registrants, who are encouraged to study them carefully. The Policy, together with the Copyright Office’s determinations in the near future, will be critical to watch, as it bears on the likelihood that a registration application will be granted while the Copyright Office continues to assess the impacts of new technology on the creative process. AI tools should not all be viewed as the “same” or fungible. The type of AI and how it is used will be specifically considered by the Copyright Office.

In the short term, the Policy provides some practical guidance to applicants on how to describe the role of AI in a new copyright application, as well as how to amend a prior application in that regard if needed. While some may view the Policy as “new” ground for the Copyright Office, it is consistent with the Copyright Office’s long-standing efforts to protect the fruits of human creativity even if the backdrop (AI technologies) may be “new.”

As a closing note, it bears observing that copyright law in the United Kingdom does permit limited copyright protection for computer-generated works – and has done so since 1988. Even under the U.K. law, substantial questions remain; the author of a computer-generated work is considered to be “the person by whom the arrangements necessary for the creation of the work are undertaken.” See Copyright, Designs and Patents Act (1988) §§ 9(3), 12(7) and 178. In the case of images generated by a consumer’s interaction with a generative AI tool, would that be the consumer or the generative AI provider?

Copyright © 2023 Womble Bond Dickinson (US) LLP All Rights Reserved.

With the US Copyright Office (USCO) continuing its stance that protection only extends to human authorship, what will this mean for artificial intelligence (AI)-generated works — and artists — in the future?

Almost overnight, the limited field of machine learning and AI has become nearly as accessible to use as a search engine. Apps like Midjourney, ChatGPT, and DALL-E 2 allow users to input a prompt into these systems, and a bot will generate virtually whatever the user asks for. Microsoft recently announced its decision to make a multibillion-dollar investment in OpenAI, betting on the hottest technology in the industry to transform the internet as we know it.[1]

However, with accessibility of this technology growing, questions of authorship and copyright ownership are rising as well. There remain multiple open questions, such as: who is the author of the work — the user, the bot, or the software that produces it? And where is this new generative technology pulling information from?

AI and Contested Copyrights

As groundbreaking as these products are, there has been ample backlash regarding copyright infringement and artistic expression. The stock image company Getty Images is suing Stability AI, the artificial intelligence company behind the art tool Stable Diffusion. Getty Images alleges that Stability AI did not seek out a license from Getty Images to train its system. The founder of Stability AI, however, argues that such art makes up only 0.1% of the dataset and is only created when called by the user’s prompt. In contrast, Shutterstock, one of Getty Images’ largest competitors, has taken an alternative approach and instead partnered with OpenAI with plans to compensate artists for their contributions.

Artists and image suppliers are not the only ones unhappy about the popularity of machine learning. Creators of open-source code have targeted Microsoft and its subsidiary GitHub, along with OpenAI, in a proposed class-action lawsuit. The lawsuit alleges that the creation of AI-powered coding assistant GitHub Copilot relies on software piracy on an enormous scale. Further, the complaint claims that GitHub relies on copyrighted code with no attribution and no licenses. This could be the first class-action lawsuit challenging the training and output of AI systems. Whether artists, image companies, and open-source coders choose to embrace or fight the wave of machine learning, the question of authorship and ownership is still up for debate.

The USCO made clear last year that the Copyright Act only applies to human authorship; however, it has recently signaled that in 2023 the office will focus on the legal grey areas surrounding the copyrightability of works generated in conjunction with AI. The USCO has previously denied multiple applications to protect AI-authored works, stating that the “human authorship” element was lacking. In pointing to previous decisions, such as the 2018 decision that a monkey taking a selfie could not sue for copyright infringement, the USCO reiterated that “non-human expression is ineligible for copyright protection.” While the agency is standing by its conclusion that works cannot be registered if they are created exclusively by an AI, the office is considering the issue of copyright registration for works co-created by humans and AI.

Patent Complexities  

The US Patent and Trademark Office (USPTO) will have to rethink fundamental patent policies with the rise of sophisticated AI systems as well. As the USPTO has yet to speak on the issue, experts are speculating about alternative routes the office could choose to take: declaring AI inventions unpatentable, which could lead to disputes and hinder the incentive to promote innovation, or concluding that the use of AI should not render otherwise patentable inventions unpatentable, which would lead to complex questions of inventorship. The latter route would require the USPTO to rethink its existing framework for determining inventorship by who conceived the invention.

Takeaway

The degree of human involvement will likely determine whether an AI work can be protected by copyright, and potentially patents. Before incorporating this type of machine learning into your business practices, companies should carefully consider the extent of human input in the AI creation and whether the final work product will be protectable. For example:

  • An apparel company that uses generative AI to create a design for new fabric may not have a protectable copyright in the resulting fabric design.

  • An advertising agency that uses generative AI to develop advertising slogans and a pitch deck for a client may not be able to protect the client from freely utilizing the AI-created work product.

  • A game studio that uses generative AI to create scenes in a video game may not be able to prevent its unlicensed distribution.

  • A logo created for a business endeavor may not be protected unless there are substantial human alterations and input.

  • Code that is edited or created by AI may be able to be freely copied and replicated.

Although the philosophical debate is only beginning regarding what “makes” an artist, 2023 may be a uniquely litigious year defining the extent to which AI artwork is protectable under existing intellectual property laws.


FOOTNOTES

[1] https://www.cnn.com/2023/01/23/tech/microsoft-invests-chatgpt-openai/index.html; https://www.nytimes.com/2023/01/12/technology/microsoft-openai-chatgpt.html

NIST Releases New Framework for Managing AI and Promoting Trustworthy and Responsible Use and Development

On January 26, 2023, the National Institute of Standards and Technology (“NIST”) released the Artificial Intelligence Risk Management Framework (“AI RMF 1.0”), which provides a set of guidelines for organizations that design, develop, deploy or use AI to manage its many risks and promote trustworthy and responsible use and development of AI systems.

The AI RMF 1.0 provides guidance as to how organizations may evaluate AI risks (e.g., intellectual property, bias, privacy and cybersecurity) and trustworthiness. The AI RMF 1.0 outlines the characteristics of trustworthy AI systems, which are valid, reliable, safe, secure, resilient, accountable, transparent, explainable, interpretable, privacy-enhanced and fair with their harmful biases managed. It also describes four high-level functions, with associated actions and outcomes, to help organizations better understand and manage AI (a toy encoding of these functions in code follows the list below):

  • The Govern function addresses evaluation of AI technologies’ policies, processes and procedures, including their compliance with legal and regulatory requirements and transparent and trustworthy implementation.
  • The Map function provides context for organizations to frame risks relating to AI systems, including AI system impacts and interdependencies.
  • The Measure function uses quantitative, qualitative or mixed-method tools, techniques and methodologies to analyze, benchmark and monitor AI risk and related impacts, including tracking metrics to determine trustworthy characteristics, social impact and human-AI configurations.
  • The Manage function entails allocating risk resources to mapped and measured risks consistent with the Govern function. The Manage function includes determining how to treat risks and develop plans to respond to, recover from and communicate about incidents and events.
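
One way to picture how the four functions fit together is as a review checklist. The encoding below is entirely invented for illustration; the function names come from the AI RMF 1.0, but the action items are loose paraphrases of the descriptions above, not NIST’s own language.

```python
# The four AI RMF 1.0 functions as a toy review checklist. Function names
# come from the framework; the action items are paraphrased, not quoted.
AI_RMF_FUNCTIONS = {
    "Govern": ["policies, processes, and procedures evaluated",
               "legal and regulatory compliance checked"],
    "Map": ["system context, impacts, and interdependencies framed"],
    "Measure": ["risks benchmarked and monitored with metrics"],
    "Manage": ["resources allocated to mapped and measured risks",
               "incident response, recovery, and communication planned"],
}

def outstanding_items(evidence: dict) -> list:
    """List the checklist items an organization has not yet evidenced."""
    return [f"{fn}: {item}"
            for fn, items in AI_RMF_FUNCTIONS.items()
            for item in items
            if item not in evidence.get(fn, [])]

done = {"Govern": ["legal and regulatory compliance checked"]}
for gap in outstanding_items(done):
    print(gap)
```
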

NIST released a draft AI Risk Management Framework Playbook to accompany the AI RMF 1.0. NIST plans to release an updated version of the Playbook in the Spring of 2023 and launch a new Trustworthy and Responsible AI Resource Center to help organizations put AI RMF 1.0 into practice. NIST has also provided a Roadmap of its priorities to advance the AI RMF.

Copyright © 2023, Hunton Andrews Kurth LLP. All Rights Reserved.
For more Technology Legal News, click here to visit the National Law Review.