A Lawyer’s Guide to Understanding AI Hallucinations in a Closed System

Understanding Artificial Intelligence (AI) and the possibility of hallucinations in a closed system is necessary for the use of any such technology by a lawyer. AI has made significant strides in recent years, demonstrating remarkable capabilities in various fields, from natural language processing to large language models to generative AI. Despite these advancements, AI systems can sometimes produce outputs that are unexpectedly inaccurate or even nonsensical – a phenomenon often referred to as “hallucinations.” Understanding why these hallucinations occur, especially in a closed systems, is crucial for improving AI reliability in the practice of law.

What are AI Hallucinations
AI hallucinations are instances where AI systems generate information that seems plausible but is incorrect or entirely fabricated. These hallucinations can manifest in various forms, such as incorrect responses to prompt, fabricated case details, false medical analysis or even imagined elements in an image.

The Nature of Closed Systems
A closed system in AI refers to a context where the AI operates with a fixed dataset and pre-defined parameters, without real-time interaction or external updates. In the area of legal practice this can include environments or legal AI tools which rely upon a selected universe of information from which to access such information as a case file database, saved case specific medical records, discovery responses, deposition transcripts and pleadings.

Causes of AI Hallucinations in Closed Systems
Closed systems, as opposed to open facing AI which can access the internet, rely entirely on the data they were trained on. If the data is incomplete, biased, or not representative of the real world the AI may fill gaps in its knowledge with incorrect information. This is particularly problematic when the AI encounters scenarios not-well presented in its training data. Similarly, if an AI tool is used incorrectly by way of misused data prompts, a closed system could result in incorrect or nonsensical outputs.

Overfitting
Overfitting occurs when the AI model learns the noise and peculiarities in the training data rather than the underlying patterns. In a closed system, where the training data can be limited and static, the model might generate outputs based on these peculiarities, leading to hallucinations when faced with new or slightly different inputs.

Extrapolation Error
AI models can generalize from their training data to handle new inputs. In a closed system, the lack of continuous learning and updated data may cause the model to make inaccurate extrapolations. For example, a language model might generate plausible sounding but factually incorrect information based upon incomplete context.

Implication of Hallucination for lawyers
For lawyers, AI hallucinations can have serious implications. Relying on AI- generated content without verification could possibly lead to the dissemination or reliance upon false information, which can grievously effect both a client and the lawyer. Lawyers have a duty to provide accurate and reliable advise, information and court filings. Using AI tools that can possibly produce hallucinations without proper checks could very well breach a lawyer’s ethical duty to her client and such errors could damage a lawyer’s reputation or standing. A lawyer must stay vigilant in her practice to safe guard against hallucinations. A lawyer should always verify any AI generated information against reliable sources and treat AI as an assistant, not a replacement. Attorney oversight of outputs especially in critical areas such as legal research, document drafting and case analysis is an ethical requirement.

Notably, the lawyer’s chose of AI tool is critical. A well vetted closed system allows for the tracing of the origin of output and a lawyer to maintain control over the source materials. In the instance of prompt-based data searches, with multiple task prompts, a comprehensive understanding of how the prompts were designed to be used and the proper use of same is also essential to avoid hallucinations in a closed system. Improper use of the AI tool, even in a closed system designed for legal use, can lead to illogical outputs or hallucinations. A lawyer who wishes to utilize AI tools should stay informed about AI developments and understand the limitations and capabilities of the tools used. Regular training and updates can provide a more effective use of AI tools and help to safeguard against hallucinations.

Take Away
AI hallucinations present a unique challenge for the legal profession, but with careful tool vetting, management and training a lawyer can safeguard against false outputs. By understanding the nature of hallucinations and their origins, implementing robust verification processes and maintaining human oversight, lawyers can harness the power of AI while upholding their commitment to accuracy and ethical practice.

Confused About the FCC’s New One-to-One Consent Rules– You’re Not Alone. Here Are Some FAQs Answered For YOU!

Heard a lot about what folks are concerned about in the industry. Still seems to be a lot of confusion about it. So let me help with some answers to critical questions.

None of this is legal advice. Absolutely critical you hire a lawyer–AND A GOOD ONE–to assist you here. But this should help orient.

What is the new FCC One-to-One Ruling?

The FCC’s one-to-one ruling is a new federal regulation that alters the TCPA’s express written consent definition in a manner that requires consumers to select each “seller”–that is the ultimate good or service provider–the consumer chooses to receive calls from individually.

The ruling also limits the scope of consent to matters “logically and topically” related to the transaction that lead to consent.

Under the TCPA express written consent is required for any call that is made using regulated technology, which includes autodialers (ATDS), prerecorded or artificial voice calls, AI voice calls, and any form of outbound IVR or voicemail technology (including ringless) using prerecorded or artifical voice messages.

Why Does the FCC’s New One-to-One Ruling Matter?

Currently online webforms and comparison shopping websites are used to generate “leads” for direct to consumer marketers, insurance agents, real estate agents, and product sellers in numerous verticals.

Millions of leads a month are sold by tens of thousands of lead generation websites, leading to hundreds of millions of regulated marketing calls by businesses that rely on these websites to provide “leads”–consumers interested in hearing about their goods or services.

Prior to the new one-to-one ruling website operators were free to include partner pages that linked thousands of companies the consumer might be providing consent to receive calls from. And fine-print disclosures might allow a consumer to receive calls from business selling products unrelated to the consumer’s request. (For instance a website offering information about a home for sale might include fine print allowing the consumer’s data to be sold to a mortgage lender or insurance broker to receive calls.)

The new one-to-one rule stop these practices and requires website operators to specifically identify each good or service provider that might be contacting the consumer and requires the consumer to select each such provider on a one by one basis in order for consent to be valid.

Will the FCC’s One-to-One Ruling Impact Me?

If you are buying or selling leads, YES this ruling will effect you.

If you are a BPO or call center that relies on leads– YES this ruling will effect you.

If you are a CPaaS or communication platform–YES this ruling will effect you.

If you are a telecom carrier–YES this ruling will effect you.

If you are lead gen platform or service provider–YES this ruling will effect you.

If you generate first-party leads–Yes this ruling will effect you.

When Does the Rule Go Into Effect?

The ruling applies to all calls made in reliance on leads beginning January 27, 2025.

However, the ruling applies regardless of the date the lead was generated. So compliance efforts need to begin early so as to assure a pipeline of available leads to contact on that date.

In other words, all leads NOT in compliance with the FCC’s one-to-one rule CANNOT be called beginning January 27, 2025.

What Do I have to Do to Comply?

Three things:

i) Comply with the rather complex, but navigable new one-to-one rule paradigm. (The Troutman Amin Fifteen is a handy checklist to assist you);

ii) Assure the lead is being captured in a manner that is “logically and topically” related to the calls that will be placed; and

iii) Assure the caller has possession of the consent record before the call is made.

The Privacy Patchwork: Beyond US State “Comprehensive” Laws

We’ve cautioned before about the danger of thinking only about US state “comprehensive” laws when looking to legal privacy and data security obligations in the United States. We’ve also mentioned that the US has a patchwork of privacy laws. That patchwork is found to a certain extent outside of the US as well. What laws exist in the patchwork that relate to a company’s activities?

There are laws that apply when companies host websites, including the most well-known, the California Privacy Protection Act (CalOPPA). It has been in effect since July 2004, thus predating COPPA by 14 years. Then there are laws the apply if a company is collecting and using biometric identifiers, like Illinois’ Biometric Information Privacy Act.

Companies are subject to specific laws both in the US and elsewhere when engaging in digital communications. These laws include the US federal laws TCPA and TCFAPA, as well as CAN-SPAM. Digital communication laws exist in countries as wide ranging as Australia, Canada, Morocco, and many others. Then we have laws that apply when collecting information during a credit card transaction, like the Song Beverly Credit Card Act (California).

Putting It Into Practice: When assessing your company’s obligations under privacy and data security laws, keep activity specific privacy laws in mind. Depending on what you are doing, and in what jurisdictions, you may have more obligations to address than simply those found in comprehensive privacy laws.

Cybersecurity Crunch: Building Strong Data Security Programs with Limited Resources – Insights from Tech and Financial Services Sectors

In today’s digital age, cybersecurity has become a paramount concern for executives navigating the complexities of their corporate ecosystems. With resources often limited and the ever-present threat of cyberattacks, establishing clear priorities is essential to safeguarding company assets.

Building the right team of security experts is a critical step in this process, ensuring that the organization is well-equipped to fend off potential threats. Equally important is securing buy-in from all stakeholders, as a unified approach to cybersecurity fosters a robust defense mechanism across all levels of the company.Digit

This insider’s look at cybersecurity will delve into the strategic imperatives for companies aiming to protect their digital frontiers effectively.

Where Do You Start on Cybersecurity?
Resources are limited, and pressures on corporate security teams are growing, both from internal stakeholders and outside threats. But resources to do the job aren’t. So how can companies protect themselves in real world environment, where finances, employee time, and other resources are finite?

“You really have to understand what your company is in the business of doing,” Wilson said. “Every business will have different needs. Their risk tolerances will be different.”

“You really have to understand what your company is in the business of doing. Every business will have different needs. Their risk tolerances will be different.”

BRIAN WILSON, CHIEF INFORMATION SECURITY OFFICER, SAS
For example, Tuttle said in the manufacturing sector, digital assets and data have become increasingly important in recent years. The physical product no longer is the end-all, be-all of the company’s success.

For cybersecurity professionals, this new reality leads to challenges and tough choices. Having a perfect cybersecurity system isn’t possible—not for a company doing business in a modern, digital world. Tuttle said, “If we’re going to enable this business to grow, we’re going to have to be forward-thinking.”

That means setting priorities for cybersecurity. Inskeep, who previously worked in cybersecurity for one of the world’s largest financial services institutions, said multi-factor authentication and controlling access is a good starting point, particularly against phishing and ransomware attacks. Also, he said companies need good back-up systems that enable them to recover lost data as well as robust incident response plans.

“Bad things are going to happen,” Wilson said. “You need to have logs and SIEMs to tell a story.”

Tuttle said one challenge in implementing an incident response plan is engaging team members who aren’t on the front lines of cybersecurity. “They need to know how to escalate quickly, because they are likely to be the first ones to see something that isn’t right,” she said. “They need to be thinking, ‘What should I be looking for and what’s my response?’”

“They need to know how to escalate quickly, because they are likely to be the first ones to see something that isn’t right. They need to be thinking, ‘What should I be looking for and what’s my response?’”

LISA TUTTLE, CHIEF INFORMATION SECURITY OFFICER, SPX TECHNOLOGIES
Wilson said tabletop exercises and security awareness training “are a good feedback loop to have to make sure you’re including the right people. They have to know what to do when something bad happens.”

Building a Security Team
Hiring and maintaining good people in a harrowing field can be a challenge. Companies should leverage their external and internal networks to find data privacy and cybersecurity team members.

Wilson said SAS uses an intern program to help ensure they have trained professionals already in-house. He also said a company’s Help Desk can be a good source of talent.

Remote work also allows companies to cast a wider net for hiring employees. The challenge becomes keeping remote workers engaged, and companies should consider how they can make these far-flung team members feel part of the team.

Inskeep said burnout is a problem in the cybersecurity field. “It’s a job that can feel overwhelming sometimes,” he said. “Interacting with people and protecting them from that burnout has become more critical than ever.”

“It’s a job that can feel overwhelming sometimes. Interacting with people and protecting them from that burnout has become more critical than ever.”

TODD INSKEEP, FOUNDER AND CYBERSECURITY ADVISOR, INCOVATE SOLUTIONS
Weighing Levels of Compliance
The first step, Claypoole said, is understanding the compliance obligations the company faces. These obligations include both regulatory requirements (which are tightening) as well as contract terms from customers.

“For a business, that can be scary, because your business may be agreeing to contract terms with customers and they aren’t asking you about the security requirements in those contracts,” Wilson said.

The panel also noted that “compliance” and “security” aren’t the same thing. Compliance is a minimum set of standards that must be met, while security is a more wide-reaching goal.

But company leaders must realize they can’t have a perfect cybersecurity system, even if they could afford it. It’s important to identify priorities—including which operations are the most important to the company and which would be most disruptive if they went offline.

Wilson noted that global privacy regulations are increasing and becoming stricter every year. In addition, federal officials have taken criminal action against CSOs in recent years.

“Everybody’s radar is kind of up,” Tuttle said. The increasingly compliance pressure also means it’s important for cybersecurity teams to work collaboratively with other departments, rather than making key decisions in a vacuum. Inskeep said such decisions need to be carefully documented as well.

“If you get to a place where you are being investigated, you need your own lawyer,” Claypoole said.

“If you get to a place where you are being investigated, you need your own lawyer.”

TED CLAYPOOLE, PARTNER, WOMBLE BOND DICKINSON
Cyberinsurance is another consideration for data privacy teams, but it can help Chief Security Officers make the case for more resources (both financial and work hours). Inskeep said cyberinsurance questions also can help companies identify areas of risks and where they need to prioritize their efforts. Such priorities can change, and he said companies need to have a committee or some other mechanism to regularly review and update cybersecurity priorities.

Wilson said one positive change he’s seen is that top executives now understand the importance of cybersecurity and are more willing to include cybersecurity team members in the up-front decision-making process.

Bringing in Outside Expertise
Consultants and vendors can be helpful to a cybersecurity team, particularly for smaller teams. Companies can move certain functions to third-party consultants, allowing their own teams to focus on core priorities.

“If we don’t have that internal expertise, that’s a situation where we’d call in third-party resources,” Wilson said.

Bringing in outside professionals also can help a company keep up with new trends and new technologies.

Ultimately, a proactive and well-coordinated cybersecurity strategy is indispensable for safeguarding the digital landscape of modern enterprises. With an ever-evolving threat landscape, companies must be agile in their approach and continuously review and update their security measures. At the core of any effective cybersecurity plan is a comprehensive risk management framework that identifies potential vulnerabilities and outlines steps to mitigate their impact. This framework should also include incident response protocols to minimize the damage in case of a cyberattack.

In addition to technology and processes, the human element is crucial in cybersecurity. Employees must be educated on how to spot potential threats, such as phishing emails or suspicious links, and know what steps to take if they encounter them.

Key Takeaways:
What are the biggest risk areas and how do you minimize those risks?
Know your external cyber footprint. This is what attackers see and will target.
Align with your team, your peers, and your executive staff.
Prioritize implementing multi-factor authentication and controlling access to protect against common threats like phishing and ransomware.
Develop reliable backup systems and robust incident response plans to recover lost data and respond quickly to cyber incidents.
Engage team members who are not on the front lines of cybersecurity to ensure quick identification and escalation of potential threats.
Conduct tabletop exercises and security awareness training regularly.
Leverage intern programs and help desk personnel to build a strong cybersecurity team internally.
Explore remote work options to widen the talent pool for hiring cybersecurity professionals, while keeping remote workers engaged and integrated.
Balance regulatory compliance with overall security goals, understanding that compliance is just a minimum standard.

Copyright © 2024 Womble Bond Dickinson (US) LLP All Rights Reserved.

by: Theodore F. Claypoole of Womble Bond Dickinson (US) LLP

For more on Cybersecurity, visit the Communications Media Internet section.

American Privacy Rights Act Advances with Significant Revisions

On May 23, 2024, the U.S. House Committee on Energy and Commerce Subcommittee on Data, Innovation, and Commerce approved a revised draft of the American Privacy Rights Act (“APRA”), which was released just 36 hours before the markup session. With the subcommittee’s approval, the APRA will now advance to full committee consideration. The revised draft includes several notable changes from the initial discussion draft, including:

  • New Section on COPPA 2.0 – the revised APRA draft includes the Children’s Online Privacy Protection Act (COPPA 2.0) under Title II, which differs to a certain degree from the COPPA 2.0 proposal currently before the Senate (e.g., removal of the revised “actual knowledge” standard; removal of applicability to teens over age 12 and under age 17).
  • New Section on Privacy By Design – the revised APRA draft includes a new dedicated section on privacy by design. This section requires covered entities, service providers and third parties to establish, implement, and maintain reasonable policies, practices and procedures that identify, assess and mitigate privacy risks related to their products and services during the design, development and implementation stages, including risks to covered minors.
  • Expansion of Public Research Permitted Purpose – as an exception to the general data minimization obligation, the revised APRA draft adds another permissible purpose for processing data for public or peer-reviewed scientific, historical, or statistical research projects. These research projects must be in the public interest and comply with all relevant laws and regulations. If the research involves transferring sensitive covered data, the revised APRA draft requires the affirmative express consent of the affected individuals.
  • Expanded Obligations for Data Brokers – the revised APRA draft expands obligations for data brokers by requiring them to include a mechanism for individuals to submit a “Delete My Data” request. This mechanism, similar to the California Delete Act, requires data brokers to delete all covered data related to an individual that they did not collect directly from that individual, if the individual so requests.
  • Changes to Algorithmic Impact Assessments – while the initial APRA draft required large data holders to conduct and report a covered algorithmic impact assessment to the FTC if they used a covered algorithm posing a consequential risk of harm to individuals, the revised APRA requires such impact assessments for covered algorithms to make a “consequential decision.” The revised draft also allows large data holders to use certified independent auditors to conduct the impact assessments, directs the reporting mechanism to NIST instead of the FTC, and expands requirements related to algorithm design evaluations.
  • Consequential Decision Opt-Out – while the initial APRA draft allowed individuals to invoke an opt-out right against covered entities’ use of a covered algorithm making or facilitating a consequential decision, the revised draft now also allows individuals to request that consequential decisions be made by a human.
  • New and/or Revised Definitions – the revised APRA draft’s definition section includes new terms, such as “contextual advertising” and “first party advertising.”. The revised APRA draft also redefines certain terms, including “covered algorithm,” “sensitive covered data,” “small business” and “targeted advertising.”

Mandatory Cybersecurity Incident Reporting: The Dawn of a New Era for Businesses

A significant shift in cybersecurity compliance is on the horizon, and businesses need to prepare. Starting in 2024, organizations will face new requirements to report cybersecurity incidents and ransomware payments to the federal government. This change stems from the U.S. Department of Homeland Security’s (DHS) Cybersecurity Infrastructure and Security Agency (CISA) issuing a Notice of Proposed Rulemaking (NPRM) on April 4, 2024. This notice aims to enforce the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA). Essentially, this means that “covered entities” must report specific cyber incidents and ransom payments to CISA within defined timeframes.

Background

Back in March 2022, President Joe Biden signed CIRCIA into law. This was a big step towards improving America’s cybersecurity. The law requires CISA to create and enforce regulations mandating that covered entities report cyber incidents and ransom payments. The goal is to help CISA quickly assist victims, analyze trends across different sectors, and share crucial information with network defenders to prevent other potential attacks.

The proposed rule is open for public comments until July 3, 2024. After this period, CISA has 18 months to finalize the rule, with an expected implementation date around October 4, 2025. The rule should be effective in early 2026. This document provides an overview of the NPRM, highlighting its key points from the detailed Federal Register notice.

Cyber Incident Reporting Initiatives

CIRCIA includes several key requirements for mandatory cyber incident reporting:

  • Cyber Incident Reporting Requirements – CIRCIA mandates that CISA develop regulations requiring covered entities to report any covered cyber incidents within 72 hours from the time the entity reasonably believes the incident occurred.
  • Federal Incident Report Sharing – Any federal entity receiving a report on a cyber incident after the final rule’s effective date must share that report with CISA within 24 hours. CISA will also need to make information received under CIRCIA available to certain federal agencies within the same timeframe.
  • Cyber Incident Reporting Council – The Department of Homeland Security (DHS) must establish and chair an intergovernmental Cyber Incident Reporting Council to coordinate, deconflict, and harmonize federal incident reporting requirements.

Ransomware Initiatives

CIRCIA also authorizes or mandates several initiatives to combat ransomware:

  • Ransom Payment Reporting Requirements – CISA must develop regulations requiring covered entities to report to CISA within 24 hours of making any ransom payments due to a ransomware attack. These reports must be shared with federal agencies similarly to cyber incident reports.
  • Ransomware Vulnerability Warning Pilot Program – CISA must establish a pilot program to identify systems vulnerable to ransomware attacks and may notify the owners of these systems.
  • Joint Ransomware Task Force – CISA has announced the launch of the Joint Ransomware Task Force to build on existing efforts to coordinate a nationwide campaign against ransomware attacks. This task force will work closely with the Federal Bureau of Investigation and the Office of the National Cyber Director.

Scope of Applicability

The regulation targets many “covered entities” within critical infrastructure sectors. CISA clarifies that “covered entities” encompass more than just owners and operators of critical infrastructure systems and assets. Entities actively participating in these sectors might be considered “in the sector,” even if they are not critical infrastructure themselves. Entities uncertain about their status are encouraged to contact CISA.

Critical Infrastructure Sectors

CISA’s interpretation includes entities within one of the 16 sectors defined by Presidential Policy Directive 21 (PPD 21). These sectors include Chemical, Commercial Facilities, Communications, Critical Manufacturing, Dams, Defense Industrial Base, Emergency Services, Energy, Financial Services, Food and Agriculture, Government Facilities, Healthcare and Public Health, Information Technology, Nuclear Reactors, Materials, and Waste, Transportation Systems, Water and Wastewater Systems.

Covered Entities

CISA aims to include small businesses that own and operate critical infrastructure by setting additional sector-based criteria. The proposed rule applies to organizations falling into one of two categories:

  1. Entities operating within critical infrastructure sectors, except small businesses
  2. Entities in critical infrastructure sectors that meet sector-based criteria, even if they are small businesses

Size-Based Criteria

The size-based criteria use Small Business Administration (SBA) standards, which vary by industry and are based on annual revenue and number of employees. Entities in critical infrastructure sectors exceeding these thresholds are “covered entities.” The SBA standards are updated periodically, so organizations must stay informed about the current thresholds applicable to their industry.

Sector-Based Criteria

The sector-based criteria target essential entities within a sector, regardless of size, based on the potential consequences of disruption. The proposed rule outlines specific criteria for nearly all 16 critical infrastructure sectors. For instance, in the information technology sector, the criteria include:

  • Entities providing IT services for the federal government
  • Entities developing, licensing, or maintaining critical software
  • Manufacturers, vendors, or integrators of operational technology hardware or software
  • Entities involved in election-related information and communications technology

In the healthcare and public health sector, the criteria include:

  • Hospitals with 100 or more beds
  • Critical access hospitals
  • Manufacturers of certain drugs or medical devices

Covered Cyber Incidents

Covered entities must report “covered cyber incidents,” which include significant loss of confidentiality, integrity, or availability of an information system, serious impacts on operational system safety and resiliency, disruption of business or industrial operations, and unauthorized access due to third-party service provider compromises or supply chain breaches.

Significant Incidents

This definition covers substantial cyber incidents regardless of their cause, such as third-party compromises, denial-of-service attacks, and vulnerabilities in open-source code. However, threats or activities responding to owner/operator requests are not included. Substantial incidents include encryption of core systems, exploitation causing extended downtime, and ransomware attacks on industrial control systems.

Reporting Requirements

Covered entities must report cyber incidents to CISA within 72 hours of reasonably believing an incident has occurred. Reports must be submitted via a web-based “CIRCIA Incident Reporting Form” on CISA’s website and include extensive details about the incident and ransom payments.

Report Types and Timelines

  • Covered Cyber Incident Reports within 72 hours of identifying an incident
  • Ransom Payment Reports due to a ransomware attack within 24 hours of payment
  • Joint Covered Cyber Incident and Ransom Payment Reports within 72 hours for ransom payment incidents
  • Supplemental Reports within 24 hours if new information or additional payments arise

Entities must retain data used for reports for at least two years. They can authorize a third party to submit reports on their behalf but remain responsible for compliance.

Exemptions for Similar Reporting

Covered entities may be exempt from CIRCIA reporting if they have already reported to another federal agency, provided an agreement exists between CISA and that agency. This agreement must ensure the reporting requirements are substantially similar, and the agency must share information with CISA. Federal agencies that report to CISA under the Federal Information Security Modernization Act (FISMA) are exempt from CIRCIA reporting.

These agreements are still being developed. Entities reporting to other federal agencies should stay informed about their progress to understand how they will impact their reporting obligations under CIRCIA.

Enforcement and Penalties

The CISA director can make a request for information (RFI) if an entity fails to submit a required report. Non-compliance can lead to civil action or court orders, including penalties such as disbarment and restrictions on future government contracts. False statements in reports may result in criminal penalties.

Information Protection

CIRCIA protects reports and RFI responses, including immunity from enforcement actions based solely on report submissions and protections against legal discovery and use in proceedings. Reports are exempt from Freedom of Information Act (FOIA) disclosures, and entities can designate reports as “commercial, financial, and proprietary information.” Information can be shared with federal agencies for cybersecurity purposes or specific threats.

Business Takeaways

Although the rule will not be effective until late 2025, companies should begin preparing now. Entities should review the proposed rule to determine if they qualify as covered entities and understand the reporting requirements, then adjust their security programs and incident response plans accordingly. Creating a regulatory notification chart can help track various incident reporting obligations. Proactive measures and potential formal comments on the proposed rule can aid in compliance once the rules are finalized.

These steps are designed to guide companies in preparing for CIRCIA, though each company must assess its own needs and procedures within its specific operational, business, and regulatory context.

Listen to this post

Mid-Year Recap: Think Beyond US State Laws!

Much of the focus on US privacy has been US state laws, and the potential of a federal privacy law. This focus can lead one to forget, however, that US privacy and data security law follows a patchwork approach both at a state level and a federal level. “Comprehensive” privacy laws are thus only one piece of the puzzle. There are federal and state privacy and security laws that apply based on a company’s (1) industry (financial services, health care, telecommunications, gaming, etc.), (2) activity (making calls, sending emails, collecting information at point of purchase, etc.), and (3) the type of individual from whom information is being collected (children, students, employees, etc.). There have been developments this year in each of these areas.

On the industry law, there has been activity focused on data brokers, those in the health space, and for those that sell motor vehicles. The FTC has focused on the activities of data brokers this year, beginning the year with a settlement with lead-generation company Response Tree. It also settled with X-Mode Social over the company’s collection and use of sensitive information. There have also been ongoing regulation and scrutiny of companies in the health space, including HHS’s new AI transparency rule. Finally, in this area is a new law in Utah, with a Motor Vehicle Data Protection Act applicable to data systems used by car dealers to house consumer information.

On the activity side, there has been less news, although in this area the “activity” of protecting information (or failing to do so) has continued to receive regulatory focus. This includes the SEC’s new cybersecurity reporting obligations for public companies, as well as minor modifications to Utah’s data breach notification law.

Finally, there have been new laws directed to particular individuals. In particular, laws intended to protect children. These include social media laws in Florida and Utah, effective January 1, 2025 and October 1, 2024 respectively. These are similar to attempts to regulate social media’s collection of information from children in Arkansas, California, Ohio and Texas, but the drafters hope sufficiently different to survive challenges currently being faced by those laws. The FTC is also exploring updates to its decades’ old Children’s Online Privacy Protection Act.

Putting It Into Practice: As we approach the mid-point of the year, now is a good time to look back at privacy developments over the past six months. There have been many developments in the privacy patchwork, and companies may want to take the time now to ensure that their privacy programs have incorporated and addressed those laws’ obligations.

Listen to this post

White House Publishes Steps to Protect Workers from the Risks of AI

Last year the White House weighed in on the use of artificial intelligence (AI) in businesses.

Since the executive order, several government entities including the Department of Labor have released guidance on the use of AI.

And now the White House published principles to protect workers when AI is used in the workplace.

The principles apply to both the development and deployment of AI systems. These principles include:

  • Awareness – Workers should be informed of and have input in the design, development, testing, training, and use of AI systems in the workplace.
  • Ethical development – AI systems should be designed, developed, and trained in a way to protect workers.
  • Governance and Oversight – Organizations should have clear governance systems and oversight for AI systems.
  • Transparency – Employers should be transparent with workers and job seekers about AI systems being used.
  • Compliance with existing workplace laws – AI systems should not violate or undermine worker’s rights including the right to organize, health and safety rights, and other worker protections.
  • Enabling – AI systems should assist and improve worker’s job quality.
  • Supportive during transition – Employers support workers during job transitions related to AI.
  • Privacy and Security of Data – Worker’s data collected, used, or created by AI systems should be limited in scope and used to support legitimate business aims.

NIST Releases Risk ‘Profile’ for Generative AI

A year ago, we highlighted the National Institute of Standards and Technology’s (“NIST”) release of a framework designed to address AI risks (the “AI RMF”). We noted how it is abstract, like its central subject, and is expected to evolve and change substantially over time, and how NIST frameworks have a relatively short but significant history that shapes industry standards.

As support for the AI RMF, last month NIST released in draft form the Generative Artificial Intelligence Profile (the “Profile”).The Profile identifies twelve risks posed by Generative AI (“GAI”) including several that are novel or expected to be exacerbated by GAI. Some of the risks are exotic and new, such as confabulation, toxicity, and homogenization.

The Profile also identifies risks that are familiar, such as those for data privacy and cybersecurity. For the latter, the Profile details two types of cybersecurity risks: (1) those with the potential to discover or enable the lowering of barriers for offensive capabilities, and (2) those that can expand the overall attack surface by exploiting vulnerabilities as novel attacks.

For offensive capabilities and novel attack risks, the Profile includes these examples:

  • Large language models (a subset of GAI) that discover vulnerabilities in data and write code to exploit them.
  • GAI-powered co-pilots that proactively inform threat actors on how to evade detection.
  • Prompt-injections that steal data and run code remotely on a machine.
  • Compromised datasets that have been ‘poisoned’ to undermine the integrity of outputs.

In the past, the Federal Trade Commission (“FTC”) has referred to NIST when investigating companies’ data breaches. In settlement agreements, the FTC has required organizations to implement security measures through the NIST Cybersecurity Framework. It is reasonable to assume then, that NIST guidance on GAI will also be recommended or eventually required.

But it’s not all bad news – despite the risks when in the wrong hands, GAI will also improve cybersecurity defenses. As recently noted by Microsoft’s recent report on the GDPR & GAI, GAI can already: (1) support cybersecurity teams and protect organizations from threats, (2) train models to review applications and code for weaknesses, and (3) review and deploy new code more quickly by automating vulnerability detection.

Before ‘using AI to fight AI’ becomes legally required, just as multi-factor authentication, encryption, and training have become legally required for cybersecurity, the Profile should be considered to mitigate GAI risks. From pages 11-52, the Profile examines four hundred ways to use the Profile for GAI risks. Grouping them together, some of the recommendations include:

  • Refine existing incident response plans and risk assessments if acquiring, embedding, incorporating, or using open-source or proprietary GAI systems.
  • Implement regular adversary testing of the GAI, along with regular tabletop exercises with stakeholders and the incident response team to better inform improvements.
  • Carefully review and revise contracts and service level agreements to identify who is liable for a breach and responsible for handling an incident in case one is identified.
  • Document everything throughout the GAI lifecycle, including changes to any third parties’ GAI systems, and where audited data is stored.

“Cybersecurity is the mother of all problems. If you don’t solve it, all the other technology stuff just doesn’t happen” said Charlie Bell, Microsoft’s Chief of Security, in 2022. To that end, the AM RMF and now the Profile provide useful and early guidance on how to manage GAI Risks. The Profile is open for public comment until June 2, 2024.

No Arbitration for Lead Buyer: Consent Form Naming Buyer Does Not Give Buyer Right to Enforce Arbitration in Tcpa Class Action

A subsidiary of Move, Inc. bought a data lead from Nations Info, Corp. off of its HudHomesUsa.org website. The subsidiary made an outbound prerecorded call resulting in a TCPA lawsuit against Move. (Fun.)

Move, Inc. moved to compel arbitration arguing that since its subsidiary was named in the consent form–it argued the arbitration clause necessarily covered it because the whole purpose of the clause was to permit parties buying leads from the website to compel TCPA cases to arbitration.

Good argument, but the court disagreed.

In Faucett v. Move, Inc. 2024 WL 2106727 (C.D. Cal. April 22, 2024) the Court refused to enforce the arbitration clause finding that Move, Inc. was not a signatory to the agreement and could not enforce it under any theory.

Most interestingly, Move argued that a motivating purpose behind the publisher’s arbitration clause was to benefit Move because Nations Info listed Opcity —Move’s subsidiary—in the Consent Form as a company that could send marketing messages to Hud’s users. (Mot. 1–2.)

But the Court found the Terms and Consent Form were two different documents, and accepting the one did not change the scope of the other.

The Court also found equitable estoppel did not apply because Plaintiff was not moving to enforce the terms of the agreement. Quite the contrary, Plaintiff denied any arbitration (or consent) agreement existed.

So Move is stuck.

Pretty clear lesson here: lead buyers should make sure the arbitration provisions on any website they are buying leads from includes third-parties (like the buyer) as a party to the clause. Failing to do so may leave the lead buyer stuck without the ability to enforce the provision–and that can lead to a massive class action with potential exposure in the hundreds of millions or billions of dollars.

Lead buyers are already forcing sellers to revise their flows in light of the FCC’s new one to one consent rules. So now would be a GREAT time to revisit requirements around arbitration provisions as well.

Something to think about.