Incorporating AI to Address Mental Health Challenges in K-12 Students

The National Institute of Mental Health reported that 16.32% of youth (aged 12-17) in the District of Columbia (DC) experience at least one major depressive episode (MDE).
Although the prevalence of youth with MDE in DC is lower than in some states, such as Oregon (where it reached 21.13%), it is important to address mental health challenges in youth early, as untreated mental health challenges can persist into adulthood. Further, the number of youths with MDE climbs nationally each year, including last year, when it rose by almost 2%, to approximately 300,000 youth.

It is important to note that there are programs specifically designed to help and treat youth who have experienced trauma and are living with mental health challenges. In DC, several mental health services and professional counseling services are available to residents. Most importantly, there is a broad-reaching school-based mental health program that aims to place a behavioral health expert in every school building. Additionally, the DC government’s website maintains a list of available mental health services programs.

In conjunction with these mental health programs, early identification of students at risk for suicide, self-harm, and behavioral issues can help states, including DC, ensure access to mental health care and support for these young individuals. In response to the widespread youth mental health crisis, K-12 schools are employing artificial intelligence (AI)-based tools to identify students at risk for suicide and self-harm. Through AI-based suicide risk monitoring, natural language processing, sentiment analysis, predictive models, early intervention, and surveillance and evaluation, AI is playing a crucial role in addressing the mental health challenges faced by youth.

AI systems, developed by companies like Bark, Gaggle, and GoGuardian, aim to monitor students’ digital footprint through various data inputs, such as online interactions and behavioral patterns, for signs of distress or risk. These programs identify students who may be at risk for self-harm or suicide and alert the school and parents accordingly.

Proposals are being introduced to use AI models to enhance mental health surveillance in school settings by implementing chatbots that interact with students. The chatbot conversation logs serve as the source of raw data for machine learning. According to Using AI for Mental Health Analysis and Prediction in School Surveys, existing survey results evaluated by health experts can be used to create a test dataset to validate the machine learning models. Supervised learning can then be deployed to classify specific behaviors and mental health patterns. However, there are concerns about how these programs work and what safeguards the companies have in place to protect youths’ data from being sold to other platforms. Additionally, there are concerns about whether these companies are complying with relevant laws (e.g., the Family Educational Rights and Privacy Act [FERPA]).
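
For readers curious what that supervised-learning step might look like in practice, here is a minimal sketch using scikit-learn: a text classifier is trained on a tiny, hypothetical expert-labeled dataset and validated on a held-out split, mirroring the validation workflow the paper describes. The snippets, labels, and split are invented for illustration; a real system would require clinically validated data and far more rigorous evaluation.

```python
# Minimal sketch, assuming a hypothetical expert-labeled dataset.
# Illustrative only, not a clinical or production tool.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical chat-log snippets labeled by health experts
# (1 = flag for follow-up, 0 = no concern).
texts = [
    "I can't sleep and nothing feels worth doing anymore",
    "excited about the science fair next week",
    "everyone would be better off without me",
    "had a great time at practice today",
    "I keep thinking about hurting myself",
    "looking forward to the weekend game",
]
labels = [1, 0, 1, 0, 1, 0]

# Hold out a test set, standing in for the expert-evaluated survey data
# the paper suggests using to validate the model.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=2, stratify=labels, random_state=0
)

vectorizer = TfidfVectorizer()      # text -> term-weight vectors
classifier = LogisticRegression()   # simple, interpretable baseline
classifier.fit(vectorizer.fit_transform(X_train), y_train)

predictions = classifier.predict(vectorizer.transform(X_test))
print(classification_report(y_test, predictions, zero_division=0))
```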

The University of Michigan identified AI technologies, such as natural language processing (NLP) and sentiment analysis, that can analyze user interactions, such as posts and comments, to identify signs of distress, anxiety, or depression. For example, Breathhh is an AI-powered Chrome extension designed to automatically deliver mental health exercises based on an individual’s web activity and online behaviors. By monitoring and analyzing the user’s interactions, the application can determine appropriate moments to present stress-relieving practices and strategies. Applications like Breathhh are just one example of personalized interventions built on monitoring user interactions.
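
Sentiment analysis of this kind can be sketched in a few lines with NLTK’s off-the-shelf VADER analyzer. The sample posts and the flagging threshold below are our own illustrative assumptions, not anything published by Breathhh or the University of Michigan.

```python
# Minimal lexicon-based sentiment sketch using NLTK's VADER analyzer.
# The -0.5 "distress" threshold is an assumed, illustrative cut-off.
import nltk

nltk.download("vader_lexicon", quiet=True)
from nltk.sentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

posts = [
    "I feel completely hopeless about school",  # hypothetical posts
    "Can't wait for the weekend trip!",
]

for post in posts:
    scores = analyzer.polarity_scores(post)  # neg/neu/pos + compound in [-1, 1]
    if scores["compound"] <= -0.5:           # assumed threshold, not clinical
        print(f"flag for review: {post!r} (compound={scores['compound']:.2f})")
    else:
        print(f"no action: {post!r} (compound={scores['compound']:.2f})")
```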

When using AI to address mental health concerns among K-12 students, policy implications must be carefully considered.

First, developers must obtain informed consent from students, parents, guardians, and all other stakeholders before deploying such AI models. The use of AI models is a perennial concern for policymakers because of the privacy issues that accompany it. To deploy AI models safely, privacy protection policies need to be in place to safeguard sensitive information from improper use. No comprehensive legislation currently addresses these concerns, either nationally or locally.
Second, developers also need to consider and account for any bias ingrained in their algorithms through data testing and regular monitoring of data output before it reaches the user. AI has the ability to detect early signs of mental health challenges; however, without proper safeguards in place, certain groups of students risk being disproportionately impacted. When collected data reflects biases, it can lead to unfair treatment of those groups. For youth, this can result in feelings of marginalization and adversely affect their mental health.
Third, effective policy should encourage the use of AI models that provide interpretable results, and policymakers need to understand how these models reach their decisions. Policies should outline how schools will respond to alerts generated by the system. A standard of care needs to be universally recognized, whether through policy or the companies’ internal safeguards. This standard of care should outline guidelines that address situations in which AI data output conflicts with human judgment.

Responsible AI implementation can enhance student well-being, but it requires careful evaluation to ensure students’ data is protected from potential harm. Moving forward, school leaders, policymakers, and technology developers need to consider the benefits and risks of AI-based mental health monitoring programs. Balancing the intended benefits while mitigating potential harms is crucial for student well-being.

© 2024 ArentFox Schiff LLP
by: David P. Grosso and Starshine S. Chun of ArentFox Schiff LLP

For more news on Artificial Intelligence and Mental Health, visit the NLR Communications, Media & Internet section.

Supply Chains are the Next Subject of Cyberattacks

The cyberthreat landscape is evolving as threat actors develop new tactics to keep up with increasingly sophisticated corporate IT environments. In particular, threat actors are increasingly exploiting supply chain vulnerabilities to reach downstream targets.

The effects of supply chain cyberattacks are far-reaching, can extend to downstream organizations, and can last long after the attack was first deployed. According to an Identity Theft Resource Center report, “more than 10 million people were impacted by supply chain attacks targeting 1,743 entities that had access to multiple organizations’ data” in 2022. Based on an IBM analysis, the cost of a data breach averaged $4.45 million in 2023.

What is a supply chain cyberattack?

Supply chain cyberattacks are a type of cyberattack in which a threat actor targets a business offering third-party services to other companies. The threat actor will then leverage its access to the target to reach and cause damage to the business’s customers. Supply chain cyberattacks may be perpetrated in different ways.

  • Software-Enabled Attack: This occurs when a threat actor uses an existing software vulnerability to compromise the systems and data of organizations running the software containing the vulnerability. For example, Apache Log4j is an open-source library that developers use to add activity-logging functionality to their software. In November 2021, there were public reports of a remote code execution vulnerability in Log4j that allowed threat actors to infiltrate target software running outdated Log4j versions. As a result, threat actors gained access to the systems, networks, and data of many organizations in the public and private sectors that used software containing the vulnerable Log4j version. Although security upgrades (i.e., patches) have since been issued to address the Log4j vulnerability, much software is still running outdated (i.e., unpatched) versions of Log4j (a minimal version-check sketch follows this list).
  • Software Supply Chain Attack: This is the most common type of supply chain cyberattack, and it occurs when a threat actor infiltrates and compromises software with malicious code either before the software is provided to consumers or by deploying malicious software updates masquerading as legitimate patches. All users of the compromised software are affected by this type of attack. For example, Blackbaud, Inc., a software company providing cloud hosting services to for-profit and non-profit entities across multiple industries, was ground zero for a software supply chain cyberattack after a threat actor deployed ransomware in its systems that had downstream effects on Blackbaud’s customers, including 45,000 companies. Similarly, in May 2023, Progress Software’s MOVEit file-transfer tool was targeted with a ransomware attack, which allowed threat actors to steal data from customers that used the MOVEit app, including government agencies and businesses worldwide.
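
As noted above, one concrete first step against software-enabled attacks is simply inventorying bundled components. The hypothetical Python sketch below scans a directory tree for log4j-core jars older than a patched release; the 2.17.1 threshold reflects past advisories, and the current fixed version should always be confirmed against Apache’s guidance.

```python
# Minimal sketch: flag bundled log4j-core jars older than a patched release.
# The 2.17.1 threshold and the scanned path are illustrative assumptions.
import pathlib
import re

PATCHED = (2, 17, 1)
JAR_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def scan(root: str) -> None:
    root_path = pathlib.Path(root)
    if not root_path.is_dir():
        print(f"no such directory: {root}")
        return
    for jar in root_path.rglob("log4j-core-*.jar"):
        match = JAR_PATTERN.search(jar.name)
        if not match:
            continue  # non-standard version string; inspect manually
        version = tuple(int(part) for part in match.groups())
        status = "OK (patched)" if version >= PATCHED else "VULNERABLE: upgrade"
        print(f"{jar}: {'.'.join(map(str, version))} -> {status}")

scan("/opt/apps")  # hypothetical application directory
```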

Legal and Regulatory Risks

Cyberattacks can often expose personal data to unauthorized access and acquisition by a threat actor. When this occurs, companies’ notification obligations under the data breach laws of jurisdictions in which affected individuals reside are triggered. In general, data breach laws require affected companies to submit notice of the incident to affected individuals and, depending on the facts of the incident and the number of such individuals, also to regulators, the media, and consumer reporting agencies. Companies may also have an obligation to notify their customers, vendors, and other business partners based on their contracts with these parties. These reporting requirements increase the likelihood of follow-up inquiries, and in some cases, investigations by regulators. Reporting a data breach also increases a company’s risk of being targeted with private lawsuits, including class actions and lawsuits initiated by business customers, in which plaintiffs may seek different types of relief including injunctive relief, monetary damages, and civil penalties.

The legal and regulatory risks in the aftermath of a cyberattack can persist long after a company has addressed the immediate issues that caused the incident. For example, in the aftermath of the cyberattack, Blackbaud was investigated by multiple government authorities and targeted with private lawsuits. While the private suits remain ongoing, Blackbaud settled with state regulators ($49,500,000), the U.S. Federal Trade Commission, and the U.S. Securities and Exchange Commission (SEC) ($3,000,000) in 2023 and 2024, almost four years after it first experienced the cyberattack. Other companies that experienced high-profile cyberattacks have also been targeted with securities class action lawsuits by shareholders, and in at least one instance, regulators have named a company’s Chief Information Security Officer in an enforcement action, underscoring the professional risks cyberattacks pose to corporate security leaders.

What Steps Can Companies Take to Mitigate Risk?

First, threat actors will continue to refine their tactics and techniques, so all organizations must adapt and stay current with the regulations and legislation surrounding cybersecurity. The Cybersecurity and Infrastructure Security Agency (CISA) urges developer education on creating secure code and verifying third-party components.

Second, stay proactive. Organizations must re-examine not only their own security practices but also those of their vendors and third-party suppliers. If third and fourth parties have access to an organization’s data, it is imperative to ensure that those parties have good data protection practices.

Third, companies should adopt guidelines for suppliers around data and cybersecurity at the outset of a relationship, since it may be difficult to get suppliers to adhere to policies after the contract has been signed. For example, some entities have detailed processes requiring suppliers to give notice of attacks and conduct impact assessments after the fact. In addition, some entities expect suppliers to follow specific sequences of steps after a cyberattack. At the same time, some entities may also apply the same threat intelligence that they use for their own defense to their critical suppliers, and may require suppliers to implement proactive security controls, such as incident response plans, ahead of an attack.

Finally, all companies should strive to minimize threats to their software supply chains by establishing strong security strategies at the ground level.

The Increasing Role of Cybersecurity Experts in Complex Legal Disputes

The testimonies and guidance of expert witnesses have been known to play a significant role in high-stakes legal matters, whether it be the opinion of a clinical psychiatrist in a homicide case or that of a career IP analyst in a patent infringement trial. However, in today’s highly digital world—where cybercrimes like data breaches and theft of intellectual property are increasingly commonplace—cybersecurity professionals have become some of the most sought-after experts for a broadening range of legal disputes.

Below, we will explore the growing importance of cybersecurity experts to the litigation industry in more depth, including how their insights contribute to case strategies, the challenges of presenting technical and cybersecurity-related arguments in court, the specific qualifications that make an effective expert witness in the field of cybersecurity, and the best method for securing that expertise for your case.

How Cybersecurity Experts Help Shape Legal Strategies

Disputes involving highly complex cybercrimes typically require more technical expertise than most trial teams have on hand, and the contributions of a qualified cybersecurity expert can often be transformative to your ability to better understand the case, uncover critical evidence, and ultimately shape your overall strategy.

For example, in the case of a criminal data breach, defense counsel might seek an expert witness to analyze and evaluate the plaintiff’s existing cybersecurity policies and protective mechanisms at the time of the attack to determine their effectiveness and/or compliance with industry regulations or best practices. Similarly, an expert with in-depth knowledge of evolving data laws, standards, and disclosure requirements will be well-suited to determining a party’s liability in virtually any matter involving the unauthorized access of protected information. Cybersecurity experts are also beneficial during the discovery phase when their experience working with certain systems can assist in potentially uncovering evidence related to a specific attack or breach that may have been initially overlooked.

We have already seen many instances in which the testimony and involvement of cybersecurity experts have impacted the overall direction of a legal dispute. Consider the Coalition for Good Governance, for example, which recently rested its case as the plaintiffs in a six-year battle with the state of Georgia over the security of touchscreen voting machines. Throughout the process, the organization relied heavily on the testimony of multiple cybersecurity experts who claimed they identified vulnerabilities in the state’s voting technology. If these testimonies prove effective, they could not only sway the ruling in the plaintiffs’ favor but also lead to entirely new policies and change the very way Georgia voters cast their ballots as early as this year.

The Challenges of Explaining Cybersecurity in the Courtroom

While there is no denying the growing importance of cybersecurity experts in modern-day disputes, it is also important to note that many challenges still exist in presenting highly technical arguments and/or evidence in a court of law.

Perhaps most notably, there remains a significant gap between legal and technological language, and between the knowledge of cybersecurity professionals and that of the judges, lawyers, and juries tasked with parsing particularly dense information. In other words, today’s trial teams need to work carefully with cybersecurity experts to develop communication strategies that adequately illustrate their arguments without creating unnecessary confusion or a misunderstanding of the evidence being presented. Visuals are a particularly useful tool in helping both litigators and experts explain complex topics while also engaging decision-makers.

Depending on the nature of the data breach or cybercrime in question, you may be tasked with replicating a digital event to support your specific argument. In many cases, this can be incredibly challenging due to the evolving and multifaceted nature of modern cyberattacks, and it may require extensive resources within the time constraints of a given matter. Thus, it is wise to use every tool at your disposal to boost the power of your team—including custom expert witness sourcing and visual advocacy consultants.

What You Should Look for in a Cybersecurity Expert

Determining the qualifications of a cybersecurity expert is highly dependent on the details of each individual case, making it critical to identify an expert whose experience reflects your precise needs. For example, a digital forensics specialist will offer an entirely different skill set than someone with a background in data privacy regulations and compliance.

Making sure an expert has the relevant professional experience to assess your specific cybersecurity case is only one factor to consider. In addition to verifying education and professional history, you must also assess the expert’s experience in the courtroom and familiarity with relevant legal processes. Similarly, expert witnesses should be evaluated based on their individual personality and communication skills, as they will be tasked with conveying highly technical arguments to an audience that will likely have a difficult time understanding all relevant concepts in the absence of clear, simplified explanations.

Where to Find the Most Qualified Cybersecurity Experts

Safeguarding the success of your client or firm in the digital age starts with the right expertise. You need to be sure your cybersecurity expert is uniquely suited to your case and primed to share critical insights when the stakes are high.

FCC Updated Data Breach Notification Rules Go into Effect Despite Challenges

On March 13, 2024, the Federal Communications Commission’s updates to the FCC data breach notification rules (the “Rules”) went into effect. They were adopted in December 2023 pursuant to an FCC Report and Order (the “Order”).

The Rules went into effect despite challenges brought in the United States Court of Appeals for the Sixth Circuit. Two trade groups, the Ohio Telecom Association and the Texas Association of Business, petitioned the United States Courts of Appeals for the Sixth Circuit and Fifth Circuit, respectively, to vacate the FCC’s Order modifying the Rules. The Order was published in the Federal Register on February 12, 2024, and the petitions were filed shortly thereafter. The challenges, which the United States Judicial Panel on Multidistrict Litigation consolidated in the Sixth Circuit, argue that the Rules exceed the FCC’s authority and are arbitrary and capricious. The Order addresses the argument that the Rules are “substantially the same” as breach rules nullified by Congress in 2017. The challenges, however, have not progressed since the Rules went into effect.

Read our previous blog post to learn more about the Rules.


U.S. House of Representatives Passes Bill to Ban TikTok Unless Divested from ByteDance

Yesterday, with broad bipartisan support, the U.S. House of Representatives voted overwhelmingly (352-65) to support the Protecting Americans from Foreign Adversary Controlled Applications Act, designed to begin the process of banning TikTok’s use in the United States. This is music to my ears. See a previous blog post on this subject.

The Act would penalize app stores and web hosting services that host TikTok while it is owned by China-based ByteDance. However, if the app is divested from ByteDance, the Act would allow continued use of TikTok in the U.S.

National security experts have warned legislators and the public that downloading and using TikTok poses a national security threat. The threat arises because ByteDance is required by Chinese law to share users’ data with the Chinese Communist government. When the app is downloaded, TikTok obtains access to users’ microphones, cameras, and location services, essentially planting spyware that tracks over 170 million Americans’ every move (dance or not).

Lawmakers are concerned about the detailed sharing of Americans’ data with one of the country’s top adversaries and about the ability of TikTok’s algorithms to influence and launch disinformation campaigns against the American people. The Act will now make its way to the Senate, and if it passes, President Biden has indicated that he will sign it. This is a big win for privacy and national security.

Copyright © 2024 Robinson & Cole LLP. All rights reserved.
by: Linn F. Freedman of Robinson & Cole LLP

For more news on Social Media Legislation, visit the NLR Communications, Media & Internet section.

President Biden Announces Groundbreaking Restrictions on Access to Americans’ Sensitive Personal Data by Countries of Concern

The EO and forthcoming regulations will impact the use of genomic data, biometric data, personal health care data, geolocation data, financial data and some other types of personally identifiable information. The administration is taking this extraordinary step in response to the national security risks posed by access to US persons’ sensitive data by countries of concern – data that could then be used to surveil, scam, blackmail and support counterintelligence efforts, or could be exploited by artificial intelligence (AI) or be used to further develop AI. The EO, however, does not call for restrictive personal data localization and aims to balance national security concerns against the free flow of commercial data and the open internet, consistent with protection of security, privacy and human rights.

The EO tasks the US Department of Justice (DOJ) with developing rules that will address these risks and provide an opportunity for businesses and other stakeholders, including labor and human rights organizations, to provide critical input to agency officials as they draft these regulations. The EO and forthcoming regulations will not screen individual transactions. Instead, they will establish general rules regarding specific categories of data, transactions and covered persons, and will prohibit and regulate certain high-risk categories of restricted data transactions. The regime is contemplated to include licensing and advisory opinion processes. DOJ expects companies to develop and implement compliance procedures in response to the EO and subsequent implementing rules. The adequacy of such compliance programs will be considered as part of any enforcement action, which could include civil and criminal penalties. Companies should consider acting today to evaluate risk, engage in the rulemaking process and set up compliance programs around their processing of sensitive data.

Companies across industries collect and store more sensitive consumer and user data today than ever before, data that is often obtained by data brokers and other third parties. Concerns have grown around perceived foreign adversaries and other bad actors using this highly sensitive data to track and identify US persons as potential targets for espionage or blackmail, including through the training and use of AI. The increasing availability and use of sensitive personal information digitally, in concert with increased access to high-performance computing and big data analytics, has raised additional concerns around the ability of adversaries to threaten individual privacy, as well as economic and national security. These concerns have only increased as governments around the world face the privacy challenges posed by increasingly powerful AI platforms.

The EO takes significant new steps to address these concerns by expanding the role of DOJ, led by the National Security Division, in regulating the use of legal mechanisms, including data brokerage, vendor and employment contracts and investment agreements, to obtain and exploit American data. The EO does not immediately establish new rules or requirements for protection of this data. It instead directs DOJ, in consultation with other agencies, to develop regulations – but these restrictions will not enter into effect until DOJ issues a final rule.

Broadly, the EO, among other things:

  • Directs DOJ to issue regulations to protect sensitive US data from exploitation due to large scale transfer to countries of concern, or certain related covered persons and to issue regulations to establish greater protection of sensitive government-related data
  • Directs DOJ and the Department of Homeland Security (DHS) to develop security standards to prevent commercial access to US sensitive personal data by countries of concern
  • Directs federal agencies to safeguard American health data from access by countries of concern through federal grants, contracts and awards

Also on February 28, DOJ issued an Advance Notice of Proposed Rulemaking (ANPRM), providing a critical first opportunity for stakeholders to understand how DOJ is initially contemplating this new national security regime and soliciting public comment on the draft framework.

According to a DOJ fact sheet, the ANPRM:

  • Preliminarily defines “countries of concern” to include China and Russia, among others
  • Focuses on six enumerated categories of sensitive personal data: (1) covered personal identifiers, (2) geolocation and related sensor data, (3) biometric identifiers, (4) human genomic data, (5) personal health data and (6) personal financial data
  • Establishes a bulk volume threshold for the regulation of general data transactions in the enumerated categories but will also regulate transactions in US government-related data regardless of the volume of a given transaction
  • Proposes a broad prohibition on two specific categories of data transactions between US persons and covered countries or persons – data brokerage transactions and genomic data transactions.
  • Contemplates restrictions on certain vendor agreements for goods and services, including cloud service agreements; employment agreements; and investment agreements. These restricted transactions would be subject to security requirements developed by DHS’s Cybersecurity and Infrastructure Security Agency (CISA), focused on preventing access by countries of concern.

The ANPRM also proposes general and specific licensing processes that will give DOJ considerable flexibility for certain categories of transactions and narrower exceptions for specific transactions upon application by the parties involved. DOJ’s licensing decisions would be made in collaboration with DHS, the Department of State and the Department of Commerce. Companies and individuals contemplating data transactions will also be able to request advisory opinions from DOJ on the applicability of these regulations to specific transactions.

A White House fact sheet announcing these actions emphasized that they will be undertaken in a manner that does not hinder the “trusted free flow of data” that underlies US consumer, trade, economic and scientific relations with other countries. A DOJ fact sheet echoed this commitment to minimizing economic impacts by seeking to develop a program that is “carefully calibrated” and in line with “longstanding commitments to cross-border data flows.” As part of that effort, the ANPRM contemplates exemptions for four broad categories of data: (1) data incidental to financial services, payment processing and regulatory compliance; (2) ancillary business operations within multinational US companies, such as payroll or human resources; (3) activities of the US government and its contractors, employees and grantees; and (4) transactions otherwise required or authorized by federal law or international agreements.

Notably, Congress continues to debate a comprehensive federal framework for data protection. In 2022, Congress stalled on consideration of the American Data Privacy and Protection Act, a bipartisan bill introduced by House Energy and Commerce Committee leadership. Subsequent efforts to move comprehensive data privacy legislation in Congress have seen little momentum but may gain new urgency in response to the EO.

The EO lays the foundation for what will become significant new restrictions on how companies gather, store and use sensitive personal data. Notably, the ANPRM also represents recognition by the White House and agency officials that they need input from business and other stakeholders to guide the draft regulations. Impacted companies must prepare to engage in the comment process and to develop clear compliance programs so they are ready when the final restrictions are implemented.

Kate Kim Tuma contributed to this article 

An Update on the SEC’s Cybersecurity Reporting Rules

As we pass the two-month anniversary of the effectiveness of the U.S. Securities and Exchange Commission’s (“SEC’s”) Form 8-K cybersecurity reporting rules under new Item 1.05, this blog post provides a high-level summary of the filings made to date.

Six companies have now made Item 1.05 Form 8-K filings. Three of these companies also have amended their first Form 8-K filings to provide additional detail regarding subsequent events. The remainder of the filings seem self-contained such that no amendment is necessary, but these companies may amend at a later date. In general, the descriptions of the cybersecurity incidents have been written at a high level and track the requirements of the new rules without much elaboration. It is interesting, but perhaps coincidental, that the filings seem limited to two broad industry groups: technology and financial services. In particular, two of the companies are bank holding companies.

Although several companies have now made reports under the new rules, the sample size may still be too small to draw any firm conclusions or decree what is “market.” That said, several of the companies that have filed an 8-K under Item 1.05 have described incidents and circumstances that do not seem to be financially material to the particular companies. We are aware of companies that have made materiality determinations in the past on the basis of non-financial qualitative factors when the impacts of a cyber incident are otherwise quantitatively immaterial, but these situations are more the exception than the rule.

There is also a great deal of variability among the forward-looking statement disclaimers that the companies have included in the filings in terms of specificity and detail. Such a disclaimer is not required in a Form 8-K, but every company to file under Item 1.05 to date has included one. We believe this practice will continue.

Since the effectiveness of the new rules, a handful of companies have filed Form 8-K filings to describe cybersecurity incidents under Item 8.01 (“Other Events”) instead of Item 1.05. These filings have approximated the detail of what is required under Item 1.05. It is not immediately evident why these companies chose Item 8.01, but presumably the companies determined that the events were immaterial such that no filing under Item 1.05 was necessary at the time of filing. Of course, the SEC filing is one piece of a much larger puzzle when a company is working through a cyber incident and related remediation. It remains to be seen how widespread this practice will become. To date, the SEC staff has not publicly released any comment letters critiquing any Form 8-K cyber filing under the new rules, but it is still early in the process. The SEC staff usually (but not always) makes its comment letters and company responses to those comment letters public on the SEC’s EDGAR website no sooner than 20 business days after it has completed its review. With many public companies now also making the new Form 10-K disclosure on cybersecurity, we anticipate the staff will be active in providing guidance and commentary on cybersecurity disclosures in the coming year.

CNN, BREAKING NEWS: CNN Targeted In Massive CIPA Case Involving A NEW Theory Under Section 638.51!

CNN is now facing a massive CIPA class action for violating CIPA Section 638.51 by allegedly installing “Trackers” on its website. In Lesh v. Cable News Network, Inc., filed in the Superior Court of the State of California by Bursor & Fisher, the plaintiff accuses the multinational news network of installing three pieces of tracking software to invade users’ privacy and track their browsing habits in violation of Section 638.51.

More on that in a bit…

As CIPAworld readers know, we predicted the 2023 privacy litigation trends for you.

We warned you of the risky CIPA Chat Box cases.

We broke the news on the evolution of CIPA Web Session recording cases.

We notified you of major CIPA class action lawsuits against some of the world’s largest brands facing millions of dollars in potential exposure.

Now – we are reporting on a lesser-known facet of CIPA – but one that might be even more dangerous for companies using new Internet technologies.

This new focus for plaintiffs’ attorneys appears to rely on the theory that website analytics tools are “pen register” or “trap and trace” devices under CIPA §638.51. These allegations also come with a massive $5,000-per-violation penalty.

First, let’s delve into the background.

The Evolution of California Invasion of Privacy Act:

We know the California Invasion of Privacy Act is this weird little statute that was enacted decades ago and was designed to prevent eavesdropping and wiretapping because, of course, back then law enforcement was listening in on folks’ phone calls to find the communists.

Section 638.51 in particular was originally enacted back in the 80s, and traditionally, “pen-traps” were employed by law enforcement to record outgoing and/or incoming telephone numbers on a telephone line.

Over the last two years, plaintiffs have been using these decades-old statutes against companies, claiming that the use of internet technologies such as website chat boxes, web session recording tools, JavaScript, pixels, cookies and other newfangled technologies constitutes “wiretapping” or “eavesdropping” on website users.

And California courts, which love to take old statutes and apply them to these new technologies, have basically said internet communications are protected from being eavesdropped on.

Now California courts will have to address whether these newfangled technologies are also “pen-trap” “devices or processes” under 638.51. These new 638.51 cases involve technologies such as cookies, web beacons, JavaScript and pixels that obtain information about users and their devices as they browse websites and/or mobile applications. That information is then analyzed by the website operator or a third-party vendor to glean relevant insights about users’ online activities.

Section 638.51:

Section 638.51 prohibits, without first obtaining a court order, the installation or use of “pen registers” (devices or processes that record or decode dialing, routing, addressing, or signaling information, commonly known as DRAS) and “trap and trace” devices (“pen-traps,” devices or processes traditionally used by law enforcement to record the numbers dialed on outgoing calls or the numbers identifying incoming calls).

Unlike CIPA Section 631, which prohibits wiretapping (the real-time interception of the content of communications without consent), CIPA Section 638.51 prohibits the collection of DRAS.

Section 638.51 has limited exceptions, including where a service provider’s customer consents to the device’s use or where the use protects the service provider’s property rights.

Breaking Down the Terminology:

The term “pen register” means a device or process that records or decodes DRAS “transmitted by an instrument or facility from which a wire or electronic communication is transmitted, but not the contents of a communication.” §638.50(b).

The term “trap and trace” focuses on incoming, rather than outgoing numbers, and means a “device or process that captures the incoming electronic or other impulses that identify the originating number or other dialing, routing, addressing, or signaling information reasonably likely to identify the source of a wire or electronic communication, but not the contents of a communication.” §638.50(c).

Lesh v. Cable News Network, Inc. (“CNN”) and its precedent:

This new wave of CIPA litigation stems from a single recent decision, Greenley v. Kochava, where the California court allowed a “pen register” claim to move past the motion-to-dismiss stage. In Kochava, the plaintiff challenged the use of these new internet technologies, asserting that the defendant data broker’s software was able to collect a variety of data such as geolocation, search terms, purchase decisions, and spending habits. Applying the plain meaning of the word “process,” the Kochava court concluded that “software that identifies consumers, gathers data, and correlates that data through unique ‘fingerprinting’ is a process that falls within CIPA’s pen register definition.”

The Kochava court noted that no other court had interpreted Section 638.51, and while pen registers were traditionally physical machines used by law enforcement to record outbound calls from a telephone, “[t]oday pen registers take the form of software.” Accordingly, the court held that the plaintiff adequately alleged that the software could collect DRAS and was a “pen register.”

Kochava paved the way for 638.51 litigation, with hundreds of complaints filed since. The majority of these cases are being filed in Los Angeles County Superior Court by the Pacific Trial Attorneys in Newport Beach.

In Lesh v. Cable News Network, Inc., the plaintiff accuses the multinational news network of installing three pieces of tracking software to invade users’ privacy and track their browsing habits in violation of CIPA Section 638.51(a), which proscribes any “person” from “install[ing] or us[ing] a pen register or a trap and trace device without first obtaining a court order.”

Plaintiff alleges CNN uses three “Trackers” (PubMatic, Magnite, and Aniview) on its website which constitute “pen registers.” The complaint alleges that, to make CNN’s website load in a user’s browser, the browser sends an “HTTP request” or “GET” request to CNN’s servers, where the data is stored. In response to the request, CNN’s server sends an “HTTP response” back to the browser with a set of instructions on how to properly display the website, i.e., what images to load, what text should appear, or what music should play.

These instructions cause the Trackers to be installed on a user’s browser, which then causes the browser to send identifying information, including the user’s IP address, to the Trackers to analyze data, create and measure the performance of marketing campaigns, and target specific users with advertisements. Accordingly, the Trackers are “pen registers,” or so the complaint alleges.
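
To make those mechanics concrete, the sketch below fetches a page and lists the third-party hosts referenced by its script tags, roughly the way instructions in an HTTP response point a browser at outside services. The URL is a placeholder, and the script is illustrative only; it does not reproduce the specific Trackers named in the complaint.

```python
# Minimal sketch: list third-party script hosts referenced by a page's HTML.
# The page URL is a hypothetical placeholder, and the crude regex stands in
# for a real HTML parser; this is illustrative, not a litigation tool.
import re
from urllib.parse import urlparse

import requests

page_url = "https://example.com"  # hypothetical publisher site
html = requests.get(page_url, timeout=10).text

first_party = urlparse(page_url).hostname
script_srcs = re.findall(r'<script[^>]+src="([^"]+)"', html)

for src in script_srcs:
    host = urlparse(src).hostname
    if host and host != first_party:
        # Each external host here is a place the browser was instructed
        # to send a request (carrying, at minimum, the user's IP address).
        print(f"third-party script host: {host}")
```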

On this basis, the plaintiff asks the court to certify the class and seeks statutory damages in addition to attorneys’ fees. The alleged class is defined as follows:

“Pursuant to Cal. Code Civ. Proc. § 382, Plaintiff seeks to represent a class defined as all California residents who accessed the Website in California and had their IP address collected by the Trackers (the “Class”).

The following people are excluded from the Class: (i) any Judge presiding over this action and members of her or her family; (ii) Defendant, Defendant’s subsidiaries, parents, successors, predecessors, and any entity in which Defendant or their parents have a controlling interest (including current and former employees, officers, or directors); (iii) persons who properly execute and file a timely request for exclusion from the Class; (iv) persons whose claims in this matter have been finally adjudicated on the merits or otherwise released; (v) Plaintiff’s counsel and Defendant’s counsel; and (vi) the legal representatives, successors, and assigns of any such excluded persons.”

Under this expansive definition of “pen-register,” plaintiffs are alleging that almost any device that can track a user’s web session activity falls within the definition of a pen-register.

We’ll keep an eye on this one, but until more helpful case law develops, the Kochava decision will keep open the floodgates for these new CIPA suits. Companies should keep in mind that, unlike other CIPA cases under Sections 631 and 632.7, Section 638.51 allows a cause of action even where no “contents” are being “recorded,” making 638.51 easier to allege.

Additionally, companies should be mindful of CIPA’s consent exceptions and ensure they obtain consent for any technologies that may trigger CIPA.

2023 Cybersecurity Year In Review

2023 was another busy year for data event and cybersecurity litigation, with several noteworthy developments in disputes and regulator activity. Privacy World has been tracking these developments throughout the year. Read on for key trends and what to expect going into 2024.

Growth in Data Events Leads to Accompanying Increase in Claims

The number of reportable data events in the U.S. in 2023 reached an all-time high, surpassing the prior record set in 2021. At bottom, threat actors continued to target entities across industries, with litigation frequently following disclosure of data events. On the dispute front, 2023 saw several notable cybersecurity consumer class actions concerning the alleged unauthorized disclosure of sensitive personal information, including healthcare, genetic, and banking information. Large putative class actions in these areas included, among others, lawsuits against the hospital system HCA Healthcare (estimated 11 million individuals involved in the underlying data event), DNA testing provider 23andMe (estimated 6.9 million individuals involved in the underlying data event), and mortgage business Mr. Cooper (estimated 14.6 million individuals involved in the underlying data event).

JPML Creates Several Notable Cybersecurity MDLs

In 2023 the Judicial Panel on Multidistrict Litigation (“JPML”) transferred and centralized several data event and cybersecurity putative class actions. This was a departure from prior years, in which the JPML often declined requests to consolidate and coordinate pretrial proceedings in the wake of a data event. By way of example, following the largest data breach of 2023—the MOVEit hack affecting at least 55 million people—the JPML ordered that dozens of class actions regarding MOVEit software be consolidated for pretrial proceedings in the District of Massachusetts. Other data event litigations similarly received the MDL treatment in 2023, including litigations against Samsung, Overby-Seawell Company, and T-Mobile.

Significant Class Certification Rulings

Speaking of the development of precedent, 2023 saw two notable decisions addressing class certification. While they arose in the cybersecurity context, these cases have broader applicability to other putative class actions. Following a remand from the Fourth Circuit, a judge in Maryland (in an MDL) re-ordered the certification of eight classes of consumers affected by a data breach suffered by Marriott. See In Re: Marriott International, Inc., Customer Data Security Breach Litigation, No. 8:19-md-02879, 2023 WL 8247865 (D. Md. Nov. 29, 2023). As explained here on PW, the court held that a class action waiver provision in consumers’ contracts did not require decertification because (1) Marriott waived the provision by requesting consolidation of cases in an MDL outside of the contract’s chosen venue, (2) the class action waiver was unconscionable and unenforceable, and (3) contractual provisions cannot override a court’s authority to certify a class under Rule 23.

The second notable decision came out of the Eleventh Circuit, where the Court of Appeals vacated a district court’s certification of a nationwide class of restaurant customers in a data event litigation. See Green-Cooper v. Brinker Int’l, Inc., No. 21-13146, 73 F. 4th 883 (11th Cir. July 11, 2023). In a 2-1 decision, a majority of the Court held that only one of the three named plaintiffs had standing under Article III of the U.S. Constitution, and remanded to the district court to reassess whether the putative class satisfied the procedural requirements for a class. The two plaintiffs without standing dined at one of the defendant’s restaurants either before or after the time period in which the restaurant was impacted by the data event, which the Eleventh Circuit held to mean that any injury those plaintiffs suffered could not be traced back to the defendant.

Standing Challenges Persist for Plaintiffs in Data Event and Cybersecurity Litigations

Since the Supreme Court’s TransUnion decision in 2021, plaintiffs in data breach cases have continued to face challenges getting into or staying in federal court, and opinions like Brinker reiterate that Article III standing issues are relevant at every stage of litigation, including class certification. See also, e.g., Holmes v. Elephant Ins. Co., No. 3:22-cv-00487, 2023 WL 4183380 (E.D. Va. June 26, 2023) (dismissing class action complaint alleging injuries from data breach for lack of standing). Looking ahead to 2024, it is possible that more data litigation plays out in state court rather than federal court—particularly in the Eleventh Circuit, but also elsewhere—as a result.

Cases Continue to Reach Efficient Pre-Trial Resolution

Finally, in the dispute realm, several large cybersecurity litigations reached pre-trial resolutions in 2023. The second-largest data event settlement ever—T-Mobile’s $350 million settlement fund, plus $150 million in data security spending—received final approval from the trial court. And software company Blackbaud settled claims relating to a 2020 ransomware incident with 49 state Attorneys General and the District of Columbia to the tune of $49.5 million. Before the settlement, Blackbaud was hit earlier in the year with a $3 million fine from the Securities and Exchange Commission. The twin payouts by Blackbaud are cautionary reminders that litigation and regulatory enforcement on cyber incidents often go hand-in-hand, with multifaceted risks in the wake of a data event.

FTC and Cybersecurity

Regulators were active on the cybersecurity front in 2023 as well. Following shortly after the Health and Human Services Office for Civil Rights’ policy Bulletin on the use of trackers in compliance with HIPAA, the FTC announced settlements of enforcement actions against GoodRx, Premom, and BetterHelp for sharing health data with third parties via tracking technologies, resulting in breaches of Personal Health Records under the Health Breach Notification Rule. The FTC also settled enforcement actions against Chegg and Drizly for inadequate cybersecurity practices that led to data breaches. In both cases, the FTC faulted the companies for failing to implement appropriate cybersecurity policies and procedures, enforce access controls, and securely store access credentials for company databases (among other issues).

Notably, in the Drizly matter, the FTC continued a trend of holding corporate executives individually responsible for failure to implement, “or properly delegate responsibility to implement, reasonable information security practices.” Under the consent decree, Drizly’s CEO must implement a security program (whether at Drizly or at any company to which he might move that processes the personal information of 25,000 or more individuals and where he is a majority owner, CEO, or other senior officer with information security responsibilities).

SEC’s Focus on Cyber Continues

The SEC was also active in cybersecurity. In addition to the regulatory enforcement action against Blackbaud mentioned above, the SEC initiated an enforcement action against a software company for a cybersecurity incident disclosed in 2020. In its complaint, the SEC alleged that the company “defrauded…investors and customers through misstatements, omissions, and schemes that concealed both the Company’s poor cybersecurity practices and its heightened—and increasing—cybersecurity risks” through its public statements regarding its cybersecurity practices and risks. Like the Drizly matter, the SEC charged a senior company executive individually—in this case, the company’s CISO—for concealing the cybersecurity deficiencies from investors. The matter is currently pending. These cases reinforce that regulators will continue to hold senior executives responsible for oversight and implementation of appropriate cybersecurity programs.

Notable Federal Regulatory Developments

Regulators were also active in issuing new regulations on the cybersecurity front in 2023. In addition to its cybersecurity regulatory enforcement actions, the FTC amended the GLBA Safeguards Rule. Under the amended Rule, non-bank financial institutions must notify the FTC as soon as possible, and no later than 30 days after discovery, of any security breach involving the unencrypted information of 500 or more consumers.

Additionally, in March 2023, the SEC proposed revisions to Regulation S-P, Rule 10 and Form SCIR, and Regulation SCI aimed at imposing new incident reporting and cybersecurity program requirements on various covered entities. You can read PW’s coverage of the proposed amendments here. In July, the SEC also finalized its long-awaited Cybersecurity Risk Management and Incident Disclosure Regulations. Under the final Regulations, public companies are obligated to report on material cybersecurity risks, cybersecurity risk management and governance, and board of directors’ oversight of cybersecurity risks in their annual 10-K reports. Additionally, covered entities are required to report material cybersecurity incidents within four business days of determining materiality. PW’s analysis of the final Regulations is here.

New State Cybersecurity Regulations

The New York Department of Financial Services also finalized amendments to its landmark Cybersecurity Regulations in 2023. In the amended Regulations, NYDFS creates a new category of companies subject to heightened cybersecurity standards: Class A Companies. These heightened cybersecurity standards would apply only to the largest financial institutions (i.e., entities with at least $20 million in gross annual revenues over the last 2 fiscal years, and either (1) more than 2,000 employees; or (2) over $1 billion in gross annual revenue over the last 2 fiscal years). The enhanced requirements include independent cybersecurity audits, enhanced privileged access management controls, and endpoint detection and response with centralized logging (unless otherwise approved in writing by the CISO). New cybersecurity requirements for other covered entities include annual review and approval of company cybersecurity policy by a senior officer or the senior governing body (i.e., board of directors), CISO reporting to the senior governing body, senior governing body oversight, and access controls and privilege management, among others. PW’s analysis of the amended NYDFS Cybersecurity Regulations is here.

On the state front, the California Privacy Protection Agency issued draft cybersecurity audit regulations as required by the CCPA. Under the draft regulations, if a business’s “processing of consumers’ personal information presents significant risk to consumers’ security,” that business must conduct a cybersecurity audit. If adopted as proposed, companies that process a (yet undetermined) threshold number of items of personal information, sensitive personal information, or information regarding consumers under 16, as well as companies that exceed a gross revenue threshold, will be considered “high risk.” The draft regulations outline detailed criteria for evaluating a business’s cybersecurity program and documenting the audit. The draft regulations anticipate that the audit results will be reported to the business’s board of directors or governing body and that a representative of that body will certify that the signatory has reviewed and understands the findings of the audit. If adopted, businesses will be obligated to certify compliance with the audit regulations to the CPPA. You can read PW’s analysis of the implications of the proposed regulations here.

Consistent with 2023 enforcement priorities, new regulations issued this year make clear that state and federal regulators are increasingly holding senior executives and boards of directors responsible for oversight of cybersecurity programs. With regulations explicitly requiring oversight of cybersecurity risk management, the trend toward holding individual executives responsible for egregious cybersecurity lapses is likely to continue into 2024 and beyond.

Looking Forward

2023 demonstrated that “the more things change, the more they stay the same.” Cybersecurity litigation trends were a continuation of those of the prior two years. Something to keep an eye on in 2024 remains the potential for threatened individual officer and director liability in the wake of a widespread cyberattack. While the majority of cybersecurity litigations filed continue to be brought on behalf of plaintiffs whose personal information was purportedly disclosed, shareholders and regulators will increasingly look to hold executives responsible for failing to adopt reasonable security measures to prevent cyberattacks in the first instance.

Needless to say, 2024 should be another interesting year on the cybersecurity front. This is particularly so for data event litigations and for data developments more broadly.

For more news on Data Event and Cybersecurity Litigations in 2023, visit the NLR Communications, Media & Internet section.

Can Artificial Intelligence Assist with Cybersecurity Management?

AI has great capability both to harm and to protect in a cybersecurity context. As with the development of any new technology, the benefits provided through correct and successful use of AI are inevitably coupled with the need to safeguard information and to prevent misuse.

Using AI for good – key themes from the European Union Agency for Cybersecurity (ENISA) guidance

ENISA published a set of reports last year focused on AI and the mitigation of cybersecurity risks. Here we consider the main themes raised and provide our thoughts on how AI can be used advantageously*.

Using AI to bolster cybersecurity

In Womble Bond Dickinson’s 2023 global data privacy law survey, half of respondents told us they were already using AI for everyday business activities ranging from data analytics to customer service assistance, product recommendations and more. However, alongside day-to-day tasks, AI’s ‘ability to detect and respond to cyber threats and the need to secure AI-based application’ makes it a powerful tool to defend against cyber-attacks when utilized correctly. In one report, ENISA recommended a multi-layered framework that guides readers on the operational processes to be followed, coupling existing knowledge with best practices to identify missing elements. The step-by-step approach for good practice looks to ensure the trustworthiness of cybersecurity systems.

Utilizing machine-learning algorithms, AI is able to detect both known and unknown threats in real time, continuously learning and scanning for potential threats. Cybersecurity software that does not utilize AI can only detect known malicious code, making it insufficient against more sophisticated threats. By analyzing the behavior of malware, AI can pinpoint specific anomalies that standard cybersecurity programs may overlook. The deep-learning-based program NeuFuzz is considered a highly favorable platform for vulnerability searches in comparison to standard machine-learning AI, demonstrating the rapidly evolving nature of AI itself and of the products offered.
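
A minimal way to picture behavior-based (rather than signature-based) detection is an unsupervised anomaly detector that learns a baseline and flags deviations from it. The sketch below uses scikit-learn’s IsolationForest on synthetic telemetry; the features and numbers are our own illustrative assumptions, not how NeuFuzz or any ENISA-referenced product works.

```python
# Minimal sketch: flag behavior that deviates from a learned baseline,
# rather than matching known malicious signatures. All telemetry values
# here are synthetic assumptions for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline per-host telemetry: [requests/min, KB sent out, failed logins]
normal = rng.normal(loc=[30, 200, 1], scale=[5, 40, 1], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New observations: one typical, one resembling data exfiltration.
new = np.array([[32.0, 210.0, 0.0], [31.0, 9000.0, 25.0]])
for row, verdict in zip(new, detector.predict(new)):
    label = "anomalous: investigate" if verdict == -1 else "normal"
    print(row, "->", label)
```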

A key recommendation is that AI systems should be used as an additional element to existing ICT, security systems and practices. Businesses must be aware of the continuous responsibility to have effective risk management in place with AI assisting alongside for further mitigation. The reports do not set new standards or legislative perimeters but instead emphasize the need for targeted guidelines, best practices and foundations which help cybersecurity and in turn, the trustworthiness of AI as a tool.

Amongst other factors, cybersecurity management should consider accountability, accuracy, privacy, resiliency, safety and transparency. It is not enough to rely on traditional cybersecurity software, especially where AI can be readily implemented for the prevention, detection and mitigation of threats such as spam, intrusion and malware. Traditional models do exist, but as ENISA highlights, they are usually designed to target or ‘address specific types of attack’, which ‘makes it increasingly difficult for users to determine which are most appropriate for them to adopt/implement.’ The report highlights that businesses need a pre-existing foundation of cybersecurity processes which AI can work alongside to reveal additional vulnerabilities. A collaborative network of traditional methods and new AI-based recommendations allows businesses to be best prepared against the ever-developing nature of malware and technology-based threats.

In the US in October 2023, the Biden administration issued an executive order with significant data security implications. Amongst other things, the executive order requires that developers of the most powerful AI systems share safety test results with the US government, that the government will prepare guidance for content authentication and watermarking to clearly label AI-generated content and that the administration will establish an advanced cybersecurity program to develop AI tools and fix vulnerabilities in critical AI models. This order is the latest in a series of AI regulations designed to make models developed in the US more trustworthy and secure.

Implementing security by design

A security-by-design approach centers efforts on security protocols from the basic building blocks of IT infrastructure. Privacy-enhancing technologies, including AI, assist security-by-design structures and effectively allow businesses to integrate necessary safeguards for the protection of data and processing activity, but they should not be considered a ‘silver bullet’ for meeting all requirements under data protection compliance.

This will be most effective for start-ups and businesses in the initial stages of developing or implementing their cybersecurity procedures, as conceiving a project built around security by design will take less effort than adding security to an existing one. However, we are seeing rapid growth in the number of businesses using AI. More than one in five of our survey respondents (22%), for instance, started to use AI in the past year alone.

However, existing structures should not be overlooked, and the addition of AI into current cybersecurity systems should improve functionality, processing and performance. This is evidenced by AI’s capability to analyze huge amounts of data at speed to provide a clear, granular assessment of key performance metrics. This high-level, high-speed analysis allows businesses to offer tailored products and improved accessibility, resulting in a smoother retail experience for consumers.

Risks

Despite the benefits, AI is by no means a perfect solution. Machine-learning AI will act on what it has been told through its programming, leaving the potential for its results to reflect an unconscious bias in its interpretation of data. It is also important that businesses comply with applicable regulations, such as the EU GDPR, the Data Protection Act 2018, the anticipated Artificial Intelligence Act and general consumer duty principles.

Cost benefits

Alongside reducing the cost of reputational damage from cybersecurity incidents, it is estimated that UK businesses that use some form of AI in their cybersecurity management reduced costs related to data breaches by £1.6m on average. Using AI or automated responses within cybersecurity systems was also found to have shortened the average ‘breach lifecycle’ by 108 days, saving time, cost and significant business resources. Further development of penetration testing tools that specifically focus on AI is required to explore vulnerabilities and assess behaviors, which is particularly important where personal data is involved, as a company’s integrity and confidentiality are at risk.

Moving forward

AI can be used to our advantage, but it should not be seen as entirely replacing existing or traditional models for managing cybersecurity. While AI is an excellent long-term assistant that can save users time and money, it cannot be relied upon alone to make decisions directly. In this transitional period away from more traditional systems, it is important to have a secure IT foundation. As WBD suggests in our 2023 report, having established governance frameworks and controls for the use of AI tools is critical for data protection compliance and an effective cybersecurity framework.

Despite suggestions that AI’s reputation is degrading, it is a powerful and evolving tool which could not only improve your business’ approach to cybersecurity and privacy but, through analysis of data, could also help assess behaviors and predict trends. The use of AI should be exercised with caution, but used correctly it could have immeasurable benefits.

___

* While a portion of ENISA’s commentary is focused around the medical and energy sectors, the principles are relevant to all sectors.