Privacy Tip #382 – Beware of Fake Package Delivery Scams During Holiday Season

There are lots of package deliveries this time of year. When you shop online, companies are great about telling you when to expect delivery of your purchase. Fraudsters know this and prey on unsuspecting victims, especially at this time of year.

Scammers send smishing texts (smishing is just like phishing, but through a text message) that embed malicious links designed to infect your phone or trick victims into providing personal or financial information.

It is such a problem that the Federal Trade Commission (FTC) recently issued an Alert to provide tips to avoid these scams.

The tips include:

What to do

  • If you get a message about an unexpected package delivery that tells you to click on a link for some reason, don’t click.
  • If you think the message might be legitimate, contact the shipping company using a phone number or website you know is real. Don’t use the information in the message.
  • If you think it could be about something you recently ordered, go to the site where you bought the item and look up the shipping and delivery status there.
  • No matter the time of year, it always pays to protect your personal information. Check out these resources to help you weed out spam text messages, phishing emails, and unwanted calls.

These are helpful tips any time of year, but particularly right now.

Chat with Caution: The Growing Data Privacy Compliance and Litigation Risk of Chatbots

In a new wave of privacy litigation, plaintiffs have recently filed dozens of class action lawsuits in state and federal courts, primarily in California, seeking damages for alleged “wiretapping” by companies with public-facing websites. The complaints assert a common theory: that website owners using chatbot functions to engage with customers are violating state wiretapping laws by recording chats and giving service providers access to them, which plaintiffs label “illegal eavesdropping.”

Chatbot wiretapping complaints seek substantial damages from defendants and assert new theories that would dramatically expand the application of state wiretapping laws to customer support functions on business websites.

Although there are compelling reasons why courts should decline to extend wiretapping liability to these contexts, early motions to dismiss have met mixed outcomes. As a result, businesses that use chatbot functions to support customers now face a high-risk litigation environment, with inconsistent court rulings to date, uncertain legal holdings ahead, significant statutory damages exposure, and a rapid uptick in plaintiff activity.

Strict State Wiretapping Laws

Massachusetts and California have some of the most restrictive wiretapping laws in the nation, requiring all parties to consent to a recording, in contrast to the one-party consent required under federal and many state laws. Those two states have been key battlegrounds for plaintiffs attempting to extend state privacy laws to website functions, partly because they provide for significant statutory damages per violation and an award of attorney’s fees.

Other states with wiretapping statutes requiring the consent of all parties include Delaware, Florida, Illinois, Maryland, Montana, Nevada, New Hampshire, Pennsylvania, and Washington. As in Massachusetts and California, litigants in Florida and Pennsylvania have started asserting wiretapping claims based on website functions.

Plaintiffs’ Efforts to Extend State Wiretapping Laws to Chatbot Functions

Chatbot litigation is a product of early favorable rulings in cases targeting other website technologies, refashioned to focus on chat functions. Chatbots allow users to direct inquiries to AI virtual assistants or human customer service representatives. Chatbot functions are often deployed using third-party vendor software, and when chat conversations are recorded, those vendors may be provided access to live recordings or transcripts.

Plaintiffs in this most recent wave now claim that recording chat conversations and making them accessible to vendors violates state wiretapping laws, with liability for both the website operator and the vendor. However, there are several reasons why applying wiretapping laws in this context is inappropriate, and defendants are asserting these legal arguments in early dispositive motion practice, with mixed results.

What Businesses Can Do to Address Growing Chatbot Litigation Risk

Despite compelling legal arguments for why these suits should be stopped, businesses with website chat functions should exercise caution to avoid being targeted, as we expect chatbot wiretap claims to skyrocket. This litigation risk is present in all two-party consent states, but especially in Massachusetts and California. Companies should be aware that they can be targeted in multiple states, even if they do not offer products or services directly to consumers.

In this environment, a review and update of your company’s website for data privacy compliance, including chatbot activities, is advisable to avoid expensive litigation. These measures include:

  • Incorporating clear disclosure language and robust affirmative consent procedures into the website’s chat functions, including specific notification in the function itself that the chatbot is recording and storing communications
  • Expanding website dispute resolution terms, including terms that could reduce the risk of class action litigation and mass arbitration
  • Updating the website’s privacy policy to accurately and clearly explain what data, if any, is recorded, stored, and transmitted to service providers through its chat functions, ideally in a dedicated “chat” section
  • Considering data minimization measures in connection with website chat functions
  • Evaluating third-party software vendors’ compliance history, including due diligence to ensure a complete understanding of how chatbot data is collected, transmitted, stored, and used, and whether the third party’s privacy policies are acceptable
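As an illustration of the first measure, an affirmative-consent gate for chat recording can be sketched in a few lines. This is a hypothetical example; the class and method names are invented for illustration and do not correspond to any vendor's chat API, and the disclosure wording should come from counsel, not from this sketch:

```python
from dataclasses import dataclass, field

# Illustrative disclosure text only; actual language should be drafted by counsel.
DISCLOSURE = (
    "This chat is recorded and stored, and transcripts may be shared with "
    "our service providers. Select 'I agree' to continue."
)

@dataclass
class ChatSession:
    """Hypothetical chat session that refuses to record before consent."""
    consented: bool = False
    transcript: list = field(default_factory=list)

    def grant_consent(self) -> None:
        # Called only after the user affirmatively accepts the disclosure.
        self.consented = True

    def record_message(self, text: str) -> bool:
        """Store a message only if the user has consented; return whether stored."""
        if not self.consented:
            return False  # drop (or handle without persisting) pre-consent messages
        self.transcript.append(text)
        return True
```

The key design point is that recording is off by default and turned on only by an explicit user action, mirroring the "robust affirmative consent" the bullet describes.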

Companies may also want to consider minimizing aspects of their chatbots that have a high annoyance factor – such as blinking “notifications” – to reduce the likelihood of attracting a suit. This list is not comprehensive, and businesses should ensure their legal teams are aware of their website functions and data collection practices.

For more articles on privacy, visit the NLR Communications, Media and Internet section.

Montana Passes 9th Comprehensive Consumer Privacy Law in the U.S.

On May 19, 2023, Montana’s Governor signed Senate Bill 384, the Consumer Data Privacy Act. Montana joins California, Colorado, Connecticut, Indiana, Iowa, Tennessee, Utah, and Virginia in enacting a comprehensive consumer privacy law. The law is scheduled to take effect on October 1, 2024.

When does the law apply?

The law applies to a person who conducts business in the state of Montana and:

  • Controls or processes the personal data of not less than 50,000 consumers (defined as Montana residents), excluding data controlled or processed solely to complete a payment transaction; or
  • Controls and processes the personal data of not less than 25,000 consumers and derives more than 25% of gross revenue from the sale of personal data.

Hereafter these covered persons are referred to as controllers.

The following entities are exempt from coverage under the law:

  • Body, authority, board, bureau, commission, district, or agency of this state or any political subdivision of this state;
  • Nonprofit organization;
  • Institution of higher education;
  • National securities association that is registered under 15 U.S.C. 78o-3 of the federal Securities Exchange Act of 1934;
  • A financial institution or an affiliate of a financial institution governed by Title V of the Gramm- Leach-Bliley Act;
  • Covered entity or business associate as defined in the privacy regulations of the federal Health Insurance Portability and Accountability Act (HIPAA).

Who is protected by the law?

Under the law, a protected consumer is defined as an individual who resides in the state of Montana.

However, the term consumer does not include an individual acting in a commercial or employment context, or as an employee, owner, director, officer, or contractor of a company, partnership, sole proprietorship, nonprofit, or government agency whose communications or transactions with the controller occur solely within the context of that individual’s role with the company, partnership, sole proprietorship, nonprofit, or government agency.

What data is protected by the law?

The statute protects personal data defined as information that is linked or reasonably linkable to an identified or identifiable individual.

There are several exemptions to protected personal data, including for data protected under HIPAA and other federal statutes.

What are the rights of consumers?

Under the new law, consumers have the right to:

  • Confirm whether a controller is processing the consumer’s personal data
  • Access personal data processed by a controller
  • Delete personal data
  • Obtain a copy of personal data previously provided to a controller
  • Opt out of the processing of the consumer’s personal data for the purposes of targeted advertising, the sale of personal data, and profiling in furtherance of solely automated decisions that produce legal or similarly significant effects

What obligations do businesses have?

The controller shall comply with requests by a consumer set forth in the statute without undue delay but no later than 45 days after receipt of the request.

If a controller declines to act regarding a consumer’s request, the business shall inform the consumer without undue delay, but no later than 45 days after receipt of the request, of the reason for declining.

The controller shall also conduct and document a data protection assessment for each of its processing activities that present a heightened risk of harm to a consumer.

How is the law enforced?

Under the statute, the state attorney general has exclusive authority to enforce violations of the statute. There is no private right of action under Montana’s statute.

Jackson Lewis P.C. © 2023

For more Privacy Legal News, click here to visit the National Law Review.

Clop Claims Zero-Day Attacks Against 130 Organizations

Russia-linked ransomware gang Clop has claimed that it has attacked over 130 organizations since late January, using a zero-day vulnerability in the GoAnywhere MFT secure file transfer tool, and was successful in stealing data from those organizations. The vulnerability is CVE-2023-0669, which allows attackers to achieve remote code execution.

The manufacturer of GoAnywhere MFT notified customers of the vulnerability on February 1, 2023, and issued a patch for the vulnerability on February 7, 2023.

The Health Sector Cybersecurity Coordination Center (HC3) issued an alert on February 22, 2023, warning the health care sector about Clop’s targeting of healthcare organizations and recommending that organizations:

  • Educate and train staff to reduce the risk of social engineering attacks via email and network access.
  • Assess enterprise risk against all potential vulnerabilities and prioritize implementing the security plan with the necessary budget, staff, and tools.
  • Develop a cybersecurity roadmap that everyone in the healthcare organization understands.

Security professionals are recommending that information technology professionals update machines to the latest GoAnywhere version and “stop exposing port 8000 (the internet location of the GoAnywhere MFT admin panel).”
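Administrators who want a quick first-pass check on that recommendation can probe whether the admin-panel port is reachable at all. This is a minimal sketch, with a hypothetical host name; it tests only TCP reachability and is no substitute for patching or a proper firewall audit:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

# Hypothetical host name for illustration. Run from an *external* vantage point;
# if this returns True, the GoAnywhere admin panel may be internet-exposed.
# exposed = is_port_open("files.example.com", 8000)
```

A True result from outside the network would suggest the admin panel is exposed and that the port should be firewalled while the patch is applied.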

Copyright © 2023 Robinson & Cole LLP. All rights reserved.

Privacy Tip #358 – Bank Failures Give Hackers New Strategy for Attacks

Hackers are always looking for the next opportunity to launch attacks against unsuspecting victims. According to Cybersecurity Dive, researchers at Proofpoint recently observed “a phishing campaign designed to exploit the banking crisis with messages impersonating several cryptocurrencies.”

According to Cybersecurity Dive, cybersecurity firm Arctic Wolf has observed “an uptick in newly registered domains related to SVB since federal regulators took over the bank’s deposits…” and “expects some of those domains to serve as a hub for phishing attacks.”

This is the modus operandi of hackers. They use times of crises, when victims are vulnerable, to launch attacks. Phishing campaigns continue to be one of the top risks to organizations, and following the recent bank failures, everyone should be extra vigilant of urgent financial requests and emails spoofing financial institutions, and take additional measures, through multiple levels of authorization, when conducting financial transactions.

We anticipate increased activity following these recent financial failures attacking individuals and organizations. Communicating the increased risk to employees may be worth consideration.

Copyright © 2023 Robinson & Cole LLP. All rights reserved.

To AI or Not to AI: U.S. Copyright Office Clarifies Options

The U.S. Copyright Office has weighed in with formal guidance on the copyrightability of works whose generation included the use of artificial intelligence (AI) tools. The good news for technology-oriented human creative types: using AI doesn’t automatically disqualify your work from copyright protection. The bad news for independent-minded AI’s: you still don’t qualify for copyright protection in the United States.

On March 16, 2023, the Copyright Office issued a statement of policy (“Policy”) to clarify its practices for examining and registering works that contain material generated by the use of AI and how copyright law’s human authorship requirements will be applied when AI was used. This Policy is not itself legally binding or a guarantee of a particular outcome, but many copyright applicants may breathe a sigh of relief that the Copyright Office has formally embraced AI-assisted human creativity.

The Policy is just the latest step in an ongoing debate over the copyrightability of machine-assisted products of human creativity. Nearly 150 years ago, the Supreme Court ruled that photographs are copyrightable. See Burrow-Giles Lithographic Company v. Sarony, 111 U.S. 53 (1884). The case involved a photographer’s claim against a lithographer for 85,000 unauthorized copies of a photograph of Oscar Wilde. The photo, Sarony’s “Oscar Wilde No. 18,” is shown below:

Sarony’s “Oscar Wilde No. 18”

The argument against copyright protection was that a photograph is “a reproduction, on paper, of the exact features of some natural object or of some person” and is therefore not a product of human creativity. Id. at 56. The Supreme Court disagreed, ruling that there was sufficient human creativity involved in making the photo, including posing the subject, evoking the desired expression, arranging the clothing and setting, and managing the lighting.

In the mid-1960s, the Copyright Office rejected a musical composition, Push Button Bertha, that was created by a computer, reasoning that it lacked the “traditional elements of authorship,” as those elements were not created by a human.

In 2018, the U.S. Court of Appeals for the Ninth Circuit ruled that Naruto, a crested macaque (represented by a group of friendly humans), lacked standing under the Copyright Act to hold a copyright in the “monkey selfie” case. See Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018). The “monkey selfie” is below:

Monkey Selfie

In February 2022, the Copyright Office rejected a registration (filed by interested humans) for a visual image titled “A Recent Entrance to Paradise,” generated by DABUS, the AI whose claimed fractal-based inventions are the subject of patent applications around the world. DABUS’ image is below:

“A Recent Entrance to Paradise”

Litigation over this rejected application remains pending.

And last month, the Copyright Office ruled that a graphic novel consisting of human-authored text and images generated using the AI tool Midjourney could, as a whole, be copyrighted, but that the images, standing alone, could not. See U.S. Copyright Office, Cancellation Decision re: Zarya of the Dawn (VAu001480196) at 2 (Feb. 21, 2023).

The Copyright Office issued the Policy in response to the rapid and remarkable improvements in generative AI tools over even the past several months. In December 2022, generative AI tool Dall-E generated the following images in response to nothing more than the prompt, “portrait of a musician with a hat in the style of Rembrandt”:

Four portraits generated by AI tool Dall-E from the prompt, "portrait of a musician with a hat in the style of Rembrandt."

If these were human-generated paintings, or even photographs, there is no doubt that they would be copyrightable. But given that all four images were generated in mere seconds, with a single, general prompt from a human user, do they meet the Copyright Office’s criteria for copyrightability? The answer, now, is a clear “no” under the Policy.

However, the Policy opens the door to registering AI-assisted human creativity. The toggle points will be:

“…whether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.” 

In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of “mechanical reproduction” or instead of an author’s “own original mental conception, to which [the author] gave visible form.” 

“The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This will necessarily be a case-by-case inquiry.”

See Policy (citations omitted).

Machine-produced authorship alone will continue to be unregistrable in the United States, but under the Policy, human selection and arrangement of AI-produced content could lead to a different result. The Policy provides select examples to help guide registrants, who are encouraged to study them carefully. As the Copyright Office continues to assess the impact of new technology on the creative process, the Policy, combined with the Office’s forthcoming determinations, will be critical to watch for anyone seeking to improve the likelihood that a registration application will be granted. AI tools should not all be viewed as the same or fungible; the type of AI and how it is used will be specifically considered by the Copyright Office.

In the short term, the Policy provides some practical guidance to applicants on how to describe the role of AI in a new copyright application, as well as how to amend a prior application in that regard if needed. While some may view the Policy as “new” ground for the Copyright Office, it is consistent with the Copyright Office’s long-standing efforts to protect the fruits of human creativity even if the backdrop (AI technologies) may be “new.”

As a closing note, it bears observing that copyright law in the United Kingdom does permit limited copyright protection for computer-generated works – and has done so since 1988. Even under the U.K. law, substantial questions remain; the author of a computer-generated work is considered to be “the person by whom the arrangements necessary for the creation of the work are undertaken.” See Copyright, Designs and Patents Act (1988) §§ 9(3), 12(7) and 178. In the case of images generated by a consumer’s interaction with a generative AI tool, would that be the consumer or the generative AI provider?

Copyright © 2023 Womble Bond Dickinson (US) LLP All Rights Reserved.

Lawyer Bot Short-Circuited by Class Action Alleging Unauthorized Practice of Law

Many of us are wondering how long it will take for ChatGPT, the revolutionary chatbot by OpenAI, to take our jobs. The answer: perhaps, not as soon as we fear!

On March 3, 2023, Chicago law firm Edelson P.C. filed a complaint against DoNotPay, self-described as “the world’s first robot lawyer.” Edelson may have short-circuited the automated barrister by filing a lawsuit alleging the unauthorized practice of law.

DoNotPay is marketed as an AI program intended to assist users in need of legal services, but who do not wish to hire a lawyer. The organization was founded in 2015 to assist users in disputing parking tickets. Since then, DoNotPay’s services have expanded significantly. The company’s website offers to help users fight corporations, overcome bureaucratic obstacles, locate cash and “sue anyone.”

In spite of those lofty promises, Edelson’s complaint counters by pointing out certain deficiencies, stating, “[u]nfortunately for its customers, DoNotPay is not actually a robot, a lawyer, or a law firm. DoNotPay does not have a law degree, is not barred in any jurisdiction and is not supervised by any lawyer.”

The suit was brought by plaintiff Jonathan Faridian, who claims to have used DoNotPay for legal drafting projects, demand letters, one small claims court filing and drafting an employment discrimination complaint. Faridian’s complaint explains he was under the impression that he was purchasing legal documents from an attorney, only to later discover that the “substandard” outcomes generated did not comport with his expectations.

When asked for comment, DoNotPay’s representative denied Faridian’s allegations, explaining the organization intends to defend itself “vigorously.”

© 2023 Wilson Elser

Locking Tik Tok? White House Requires Removal of TikTok App from Federal IT

On February 28, the White House issued a memorandum giving federal employees 30 days to remove the TikTok application from any government devices. This memo is the result of an act passed by Congress that requires the removal of TikTok from any federal information technology. The act responded to concerns that the Chinese government may use data from TikTok for intelligence gathering on Americans.

I’m Not a Federal Employee — Why Does It Matter?

The White House Memo clearly covers all employees of federal agencies. However, it also covers information technology used by contractors working with federal information technology. As such, if you are a federal contractor using computer software or technology required by the U.S. government, you must remove TikTok within the next 30 days.

The limited exceptions to the removal mandate require federal government approval. The memo mentions national security interests and activities, law enforcement work, and security research as possible exceptions. However, there is a process to apply for an exception – it is not automatic.

Takeaways

Even if you are not a federal employee or a government contractor, this memo would be a good starting place to look back at your company’s social media policies and cell phone use procedures. Do you want TikTok (or any other social media app) on your devices? Many companies have found themselves in PR trouble due to lapses in enforcement of these types of rules. In addition, excessive use of social media in the workplace has been shown to be a drag on productivity.

© 2023 Bradley Arant Boult Cummings LLP

The FTC Announces First Health Breach Notification Rule Enforcement Action

On February 1, the Federal Trade Commission (“FTC”) announced its first enforcement action under its Health Breach Notification Rule[1]. The complaint, against telehealth and prescription drug discount provider GoodRx Holdings Inc. (“GoodRx”), alleges that GoodRx failed to notify consumers and others of its unauthorized disclosures of consumers’ personal health information to Facebook, Google, and other companies.

In a first-of-its-kind proposed order, filed by the Department of Justice on behalf of the FTC, GoodRx will be prohibited from sharing user health data with applicable third parties for advertising purposes, and has agreed to pay a $1.5 million civil penalty for violating the rule. The proposed order must be approved by the federal court to go into effect. The Health Breach Notification Rule requires vendors of personal health records and related entities, which are not covered by the Health Insurance Portability and Accountability Act (HIPAA), to notify consumers and the FTC of unauthorized disclosures. In a September 2021 policy statement, the FTC warned health apps and connected devices that they must comply with the rule.

According to the FTC’s complaint, for years GoodRx violated the FTC Act by sharing sensitive personal health information with advertising companies and platforms—contrary to its privacy promises—and failed to report these unauthorized disclosures as required by the Health Breach Notification Rule. Specifically, the FTC claims GoodRx shared personal health information with Facebook, Google, Criteo, and others. According to the FTC, since at least 2017, GoodRx deceptively promised its users that it would never share personal health information with advertisers or other third parties. GoodRx repeatedly violated this promise by sharing sensitive personal health information—including its users’ prescription medications and personal health conditions.

The FTC also alleges GoodRx monetized its users’ personal health information, and used data it shared with Facebook to target GoodRx’s own users with personalized health and medication-specific advertisements on Facebook and Instagram.

The FTC further alleges that GoodRx:

  • Failed to Limit Third-Party Use of Personal Health Information: GoodRx allowed third parties it shared data with to use that information for their own internal purposes, including for research and development or to improve advertising.
  • Misrepresented its HIPAA Compliance: GoodRx displayed a seal at the bottom of its telehealth services homepage falsely suggesting to consumers that it complied with the Health Insurance Portability and Accountability Act of 1996 (HIPAA), a law that sets forth privacy and information security protections for health data.
  • Failed to Implement Policies to Protect Personal Health Information: GoodRx failed to maintain sufficient policies or procedures to protect its users’ personal health information. Until a consumer watchdog publicly revealed GoodRx’s actions in February 2020, GoodRx had no sufficient formal, written, or standard privacy or data sharing policies or compliance programs in place.

In addition to the $1.5 million penalty for violating the rule, the proposed federal court order also prohibits GoodRx from engaging in the deceptive practices outlined in the complaint and requires the company to comply with the Health Breach Notification Rule. To remedy the FTC’s numerous allegations, other provisions of the proposed order against GoodRx also:

  • Prohibit the sharing of health data for advertising: GoodRx will be permanently prohibited from disclosing user health information to applicable third parties for advertising purposes.
  • Require user consent for any other sharing: GoodRx must obtain users’ affirmative express consent before disclosing user health information to applicable third parties for other purposes. The order requires the company to clearly and conspicuously detail the categories of health information that it will disclose to third parties. It also prohibits the company from using manipulative designs, known as dark patterns, to obtain users’ consent to share the information.
  • Require the company to seek deletion of data: GoodRx must direct third parties to delete the consumer health data that was shared with them and inform consumers about the breaches and the FTC’s enforcement action against the company.
  • Limit Retention of Data: GoodRx will be required to limit how long it can retain personal and health information according to a data retention schedule. It also must publicly post a retention schedule and detail the information it collects and why such data collection is necessary.
  • Implement a Mandated Privacy Program: GoodRx must put in place a comprehensive privacy program that includes strong safeguards to protect consumer data.
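The retention-limit requirement in the proposed order can be illustrated with a small sketch of applying a published retention schedule. The categories and retention periods below are invented for illustration; an actual schedule must come from a company's legal and compliance review:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention schedule: maximum age per data category (illustrative only).
RETENTION_SCHEDULE = {
    "chat_transcript": timedelta(days=90),
    "prescription_history": timedelta(days=365),
    "web_analytics": timedelta(days=30),
}

def prune_expired(records, now=None):
    """Keep only records younger than their category's retention limit.

    Each record is a dict with 'category' and 'created_at' (a timezone-aware
    datetime). Categories missing from the schedule are retained here; in
    practice they should be flagged for legal review rather than kept silently.
    """
    now = now or datetime.now(timezone.utc)
    kept = []
    for record in records:
        limit = RETENTION_SCHEDULE.get(record["category"])
        if limit is None or now - record["created_at"] <= limit:
            kept.append(record)
    return kept
```

A job like this, run on a schedule and logged, is one way to demonstrate that retention limits are actually enforced rather than merely documented.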

© 2023 Dinsmore & Shohl LLP. All rights reserved.

For more Cybersecurity and Privacy Legal News, click here to visit the National Law Review


FOOTNOTES

[1] 16 CFR Part 318

With the US Copyright Office (USCO) continuing its stance that protection extends only to human authorship, what will this mean for artificial intelligence (AI)-generated works — and artists — in the future?

Almost overnight, the once-limited field of machine learning and AI has become nearly as accessible as a search engine. Apps like Midjourney, OpenAI’s ChatGPT, and DALL-E 2 allow users to input a prompt, and a bot will generate virtually whatever the user asks for. Microsoft recently announced its decision to make a multibillion-dollar investment in OpenAI, betting on the hottest technology in the industry to transform the internet as we know it.[1]

However, with accessibility of this technology growing, questions of authorship and copyright ownership are rising as well. There remain multiple open questions, such as: who is the author of the work — the user, the bot, or the software that produces it? And where is this new generative technology pulling information from?

AI and Contested Copyrights

As groundbreaking as these products are, there has been ample backlash regarding copyright infringement and artistic expression. The stock image company Getty Images is suing Stability AI, the artificial intelligence company behind the art tool Stable Diffusion. Getty Images alleges that Stability AI did not seek a license from Getty Images to train its system. The founder of Stability AI, however, argues that art makes up only 0.1% of the dataset and is drawn upon only when called by a user’s prompt. In contrast, Shutterstock, one of Getty Images’ largest competitors, has taken an alternative approach and instead partnered with OpenAI, with plans to compensate artists for their contributions.

Artists and image suppliers are not the only ones unhappy about the popularity of machine learning. Creators of open-source code have targeted Microsoft, its subsidiary GitHub, and OpenAI in a proposed class-action lawsuit. The lawsuit alleges that the AI-powered coding assistant GitHub Copilot relies on software piracy on an enormous scale. Further, the complaint claims that Copilot reproduces copyrighted code with no attribution and no license. This could be the first class-action lawsuit challenging the training and output of AI systems. Whether artists, image companies, and open-source coders choose to embrace or fight the wave of machine learning, the question of authorship and ownership is still up for debate.

The USCO made clear last year that the Copyright Act applies only to human authorship; however, it has recently signaled that in 2023 the office will focus on the legal grey areas surrounding the copyrightability of works generated in conjunction with AI. The USCO has previously denied multiple applications to protect AI-authored works, stating that the “human authorship” element was lacking. Pointing to previous decisions, such as the 2018 decision that a monkey taking a selfie could not sue for copyright infringement, the USCO reiterated that “non-human expression is ineligible for copyright protection.” While the agency stands by its conclusion that works created exclusively by an AI cannot be registered, the office is considering the issue of copyright registration for works co-created by humans and AI.

Patent Complexities  

The US Patent and Trademark Office (USPTO) will have to rethink fundamental patent policies with the rise of sophisticated AI systems as well. As the USPTO has yet to speak on the issue, experts are speculating about alternative routes the office could take: declaring AI inventions unpatentable, which could lead to disputes and hinder the incentive to innovate, or concluding that the use of AI should not render otherwise patentable inventions unpatentable, which would lead to complex questions of inventorship. The latter route would require the USPTO to rethink its existing framework of determining inventorship based on who conceived the invention.

Takeaway

The degree of human involvement will likely determine whether an AI work can be protected by copyright, and potentially patents. Before incorporating this type of machine learning into your business practices, companies should carefully consider the extent of human input in the AI creation and whether the final work product will be protectable. For example:

  • An apparel company that uses generative AI to create a design for new fabric may not have a protectable copyright in the resulting fabric design.

  • An advertising agency that uses generative AI to develop advertising slogans and a pitch deck for a client may not be able to protect the client from freely utilizing the AI-created work product.

  • A game studio that uses generative AI to create scenes in a video game may not be able to prevent its unlicensed distribution.

  • A logo created for a business endeavor may not be protected unless there are substantial human alterations and input.

  • Code that is edited or created by AI may be able to be freely copied and replicated.

Although the philosophical debate over what “makes” an artist is only beginning, 2023 may be a uniquely litigious year defining the extent to which AI artwork is protectable under existing intellectual property laws.


FOOTNOTES

[1] https://www.cnn.com/2023/01/23/tech/microsoft-invests-chatgpt-openai/index.html; https://www.nytimes.com/2023/01/12/technology/microsoft-openai-chatgpt.html