Incorporating AI to Address Mental Health Challenges in K-12 Students

The National Institute of Mental Health reported that 16.32% of youth (aged 12-17) in the District of Columbia (DC) experience at least one major depressive episode (MDE).
Although the prevalence of youth with MDE in DC is lower than in some states, such as Oregon (where it reached 21.13%), it is important to address mental health challenges in youth early, as untreated mental health challenges can persist into adulthood. Further, the number of youths with MDE has climbed nationally each year, including last year, when it rose by almost 2% to approximately 300,000 youth.

It is important to note that there are programs specifically designed to help and treat youth who have experienced trauma and are living with mental health challenges. In DC, several mental health and professional counseling services are available to residents. Most importantly, there is a broad-reaching school-based mental health program that aims to place a behavioral health expert in every school building. Additionally, the DC government’s website maintains a list of available mental health services programs.

In conjunction with the mental health programs, early identification of students at risk for suicide, self-harm, and behavioral issues can help states, including DC, ensure access to mental health care and support for these young individuals. In response to the widespread youth mental health crisis, K-12 schools are employing artificial intelligence (AI)-based tools to identify students at risk for suicide and self-harm. Through AI-based suicide risk monitoring, natural language processing, sentiment analysis, predictive models, early intervention, and surveillance and evaluation, AI is playing a crucial role in addressing the mental health challenges faced by youth.

AI systems, developed by companies like Bark, Gaggle, and GoGuardian, aim to monitor students’ digital footprint through various data inputs, such as online interactions and behavioral patterns, for signs of distress or risk. These programs identify students who may be at risk for self-harm or suicide and alert the school and parents accordingly.

Proposals are being introduced to use AI models to enhance mental health surveillance in school settings by deploying chatbots that interact with students. The chatbot conversation logs serve as the source of raw data for machine learning. According to Using AI for Mental Health Analysis and Prediction in School Surveys, existing survey results evaluated by health experts can be used to create a test dataset to validate the machine learning models. Supervised learning can then be deployed to classify specific behaviors and mental health patterns. However, there are concerns about how these programs work and what safeguards the companies have in place to prevent youths’ data from being sold to other platforms. Additionally, there are concerns about whether these companies are complying with relevant laws (e.g., the Family Educational Rights and Privacy Act [FERPA]).
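Purely as an illustration of the supervised-learning pipeline described above, the sketch below trains a toy text classifier using scikit-learn. Every message and label here is a hypothetical stand-in; real systems would train on far larger datasets and validate against the expert-scored surveys the article mentions.

```python
# Toy sketch of supervised learning over chat logs: text is vectorized with
# TF-IDF and classified against expert-derived labels. All data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical chat-log snippets paired with hypothetical expert labels.
messages = [
    "I can't sleep and nothing feels worth doing anymore",
    "everything is hopeless and I want to disappear",
    "excited about the science fair project this week",
    "had a great time at practice, see you tomorrow",
    "I feel so alone, nobody would notice if I was gone",
    "looking forward to the weekend trip with my family",
]
labels = ["at_risk", "at_risk", "typical", "typical", "at_risk", "typical"]

# TF-IDF features feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# In practice, a held-out, expert-validated survey dataset would be used for
# evaluation; here we just classify one new (hypothetical) message.
print(model.predict(["I feel hopeless and alone lately"])[0])
```

The key design point from the article is the validation step: labels come from health-expert review of survey results, not from the model itself.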

The University of Michigan identified AI technologies, such as natural language processing (NLP) and sentiment analysis, that can analyze user interactions, such as posts and comments, to identify signs of distress, anxiety, or depression. For example, Breathhh is an AI-powered Chrome extension designed to automatically deliver mental health exercises based on an individual’s web activity and online behaviors. By monitoring and analyzing the user’s interactions, the application can determine appropriate moments to present stress-relieving practices and strategies. Applications like Breathhh are just one example of personalized interventions built by monitoring user interaction.
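The tools described above use sophisticated NLP models; to illustrate only the basic idea behind lexicon-style sentiment scoring, here is a deliberately simple scorer. The word lists are invented for the example and bear no relation to any real product’s lexicon.

```python
# Toy lexicon-based distress scorer: counts hypothetical "distress" and
# "positive" terms and returns a score in [-1, 1], where positive values
# suggest distress. Real sentiment models are far richer than this.
DISTRESS_TERMS = {"hopeless", "worthless", "alone", "anxious", "overwhelmed"}
POSITIVE_TERMS = {"happy", "excited", "grateful", "calm", "proud"}

def distress_score(text: str) -> float:
    """Score text; +1.0 = all distress terms, -1.0 = all positive terms."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits_neg = sum(w in DISTRESS_TERMS for w in words)
    hits_pos = sum(w in POSITIVE_TERMS for w in words)
    total = hits_neg + hits_pos
    if total == 0:
        return 0.0  # no signal either way
    return (hits_neg - hits_pos) / total

print(distress_score("I feel hopeless and alone"))
print(distress_score("So excited and grateful today"))
```

Even this toy version shows why bias auditing matters: whatever vocabulary the lexicon omits (slang, code-switching, non-English terms) is invisible to the scorer.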

When using AI to address mental health concerns among K-12 students, policy implications must be carefully considered.

First, developers must obtain informed consent from students, parents, guardians, and all other stakeholders before deploying such AI models. The use of AI models is a perennial concern for policymakers because of the privacy issues that come with it. To safely deploy AI models, privacy protection policies need to be in place to safeguard sensitive information from improper use. No comprehensive legislation currently addresses those concerns, either nationally or locally.
Second, developers need to account for any bias ingrained in their algorithms through data testing and regular monitoring of data output before it reaches the user. AI has the ability to detect early signs of mental health challenges. However, without proper safeguards in place, certain groups of students risk being disproportionately impacted. When collected data reflects biases, it can lead to unfair treatment of certain groups. For youth, this can result in feelings of marginalization and adversely affect their mental health.
Third, effective policies should encourage the use of AI models that provide interpretable results, so that policymakers can understand how decisions are made. Policies should also outline how schools will respond to alerts generated by these systems. A standard of care needs to be universally recognized, whether through policy or through companies’ internal safeguards, and it should include guidelines for situations in which AI data output conflicts with human judgment.

Responsible AI implementation can enhance student well-being, but it requires careful evaluation to ensure students’ data is protected from potential harm. Moving forward, school leaders, policymakers, and technology developers need to consider the benefits and risks of AI-based mental health monitoring programs. Balancing the intended benefits while mitigating potential harms is crucial for student well-being.

© 2024 ArentFox Schiff LLP
by: David P. Grosso and Starshine S. Chun of ArentFox Schiff LLP

For more news on Artificial Intelligence and Mental Health, visit the NLR Communications, Media & Internet section.

Supply Chains are the Next Subject of Cyberattacks

The cyberthreat landscape is evolving as threat actors develop new tactics to keep up with increasingly sophisticated corporate IT environments. In particular, threat actors are increasingly exploiting supply chain vulnerabilities to reach downstream targets.

The effects of supply chain cyberattacks are far-reaching, extending to downstream organizations and lasting long after the attack is first deployed. According to an Identity Theft Resource Center report, “more than 10 million people were impacted by supply chain attacks targeting 1,743 entities that had access to multiple organizations’ data” in 2022. According to an IBM analysis, the cost of a data breach averaged $4.45 million in 2023.

What is a supply chain cyberattack?

Supply chain cyberattacks are a type of cyberattack in which a threat actor targets a business offering third-party services to other companies. The threat actor will then leverage its access to the target to reach and cause damage to the business’s customers. Supply chain cyberattacks may be perpetrated in different ways.

  • Software-Enabled Attack: This occurs when a threat actor uses an existing software vulnerability to compromise the systems and data of organizations running the software containing the vulnerability. For example, Apache Log4j is an open-source logging library that developers use to maintain records of system activity. In November 2021, there were public reports of a Log4j remote code execution vulnerability that allowed threat actors to infiltrate target software running outdated Log4j code versions. As a result, threat actors gained access to the systems, networks, and data of many organizations in the public and private sectors that used software containing the vulnerable Log4j version. Although security upgrades (i.e., patches) have since been issued to address the Log4j vulnerability, many applications are still running outdated (i.e., unpatched) versions of Log4j.
  • Software Supply Chain Attack: This is the most common type of supply chain cyberattack, and occurs when a threat actor infiltrates and compromises software with malicious code either before the software is provided to consumers or by deploying malicious software updates masquerading as legitimate patches. All users of the compromised software are affected by this type of attack. For example, Blackbaud, Inc., a software company providing cloud hosting services to for-profit and non-profit entities across multiple industries, was ground zero for a software supply chain cyberattack after a threat actor deployed ransomware in its systems that had downstream effects on Blackbaud’s customers, including 45,000 companies. Similarly, in May 2023, Progress Software’s MOVEit file-transfer tool was targeted with a ransomware attack, which allowed threat actors to steal data from customers that used the MOVEit app, including government agencies and businesses worldwide.
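The remediation problem described in the Log4j example above is, at its core, a version-inventory check. The sketch below illustrates the idea with a hypothetical inventory; the 2.17.1 cutoff reflects the final round of 2021 Log4j 2 patches, but any real scan should confirm thresholds against current security advisories rather than a hard-coded constant.

```python
# Minimal sketch of a dependency-version check: flag Log4j 2 versions older
# than a patched release. Inventory data and app names are hypothetical.
PATCHED = (2, 17, 1)  # final round of the 2021 Log4j 2 fixes; verify per-CVE

def parse_version(v: str) -> tuple:
    """Turn '2.14.1' into (2, 14, 1) so tuples compare numerically."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(version: str) -> bool:
    return parse_version(version) < PATCHED

# Hypothetical inventory of Log4j versions found in deployed software.
inventory = {
    "billing-service": "2.14.1",
    "auth-gateway": "2.17.1",
    "reports": "2.16.0",
}
for app, ver in inventory.items():
    status = "UPGRADE" if is_vulnerable(ver) else "ok"
    print(f"{app}: log4j {ver} -> {status}")
```

Numeric tuple comparison avoids the classic string-comparison bug where "2.9.1" sorts after "2.17.1".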

Legal and Regulatory Risks

Cyberattacks can often expose personal data to unauthorized access and acquisition by a threat actor. When this occurs, companies’ notification obligations under the data breach laws of jurisdictions in which affected individuals reside are triggered. In general, data breach laws require affected companies to submit notice of the incident to affected individuals and, depending on the facts of the incident and the number of such individuals, also to regulators, the media, and consumer reporting agencies. Companies may also have an obligation to notify their customers, vendors, and other business partners based on their contracts with these parties. These reporting requirements increase the likelihood of follow-up inquiries, and in some cases, investigations by regulators. Reporting a data breach also increases a company’s risk of being targeted with private lawsuits, including class actions and lawsuits initiated by business customers, in which plaintiffs may seek different types of relief including injunctive relief, monetary damages, and civil penalties.
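The jurisdiction-by-jurisdiction notification logic described above can be pictured as a simple lookup, though real statutes vary widely and change often; the jurisdictions and thresholds below are hypothetical placeholders, and counsel, not code, determines actual obligations.

```python
# Illustrative-only model of breach-notification triggers. "State A" and
# "State B" and their thresholds are invented for the example.
RULES = {
    "State A": {"regulator_threshold": 500, "media_threshold": 1000},
    "State B": {"regulator_threshold": 1000, "media_threshold": None},
}

def notification_duties(state: str, affected: int) -> list:
    """Return who must be notified for a breach affecting `affected` residents."""
    duties = ["affected individuals"]  # individual notice is the baseline duty
    rule = RULES[state]
    if affected >= rule["regulator_threshold"]:
        duties.append("regulator")
    media = rule["media_threshold"]
    if media is not None and affected >= media:
        duties.append("media")
    return duties

print(notification_duties("State A", 1500))
print(notification_duties("State B", 600))
```

The point of the sketch is structural: the same incident can trigger different duties in each jurisdiction where affected individuals reside, which is why multistate breaches require a state-by-state analysis.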

The legal and regulatory risks in the aftermath of a cyberattack can persist long after a company has addressed the immediate issues that caused the incident initially. For example, in the aftermath of the cyberattack, Blackbaud was investigated by multiple government authorities and targeted with private lawsuits. While the private suits remain ongoing, Blackbaud settled with state regulators ($49,500,000), the U.S. Federal Trade Commission, and the U.S. Securities and Exchange Commission (SEC) ($3,000,000) in 2023 and 2024, almost four years after it first experienced the cyberattack. Other companies that experienced high-profile cyberattacks have also been targeted with securities class action lawsuits by shareholders, and in at least one instance, regulators have named a company’s Chief Information Security Officer in an enforcement action, underscoring the professional risks cyberattacks pose to corporate security leaders.

What Steps Can Companies Take to Mitigate Risk?

First, threat actors will continue to refine their tactics and techniques, so all organizations must adapt and stay current with the regulations and legislation surrounding cybersecurity. The Cybersecurity and Infrastructure Security Agency (CISA) urges organizations to educate developers on creating secure code and verifying third-party components.

Second, stay proactive. Organizations must re-examine not only their own security practices but also those of their vendors and third-party suppliers. If third and fourth parties have access to an organization’s data, it is imperative to ensure that those parties have good data protection practices.

Third, companies should adopt guidelines for suppliers around data and cybersecurity at the outset of a relationship, since it may be difficult to get suppliers to adhere to policies after the contract has been signed. For example, some entities have detailed processes requiring suppliers to report attacks and conduct impact assessments after the fact. In addition, some entities expect suppliers to follow specific sequences of steps after a cyberattack. Some entities also apply the same threat intelligence that they use for their own defense to their critical suppliers, and may require suppliers to implement proactive security controls, such as incident response plans, ahead of an attack.

Finally, all companies should strive to minimize threats to their software supply chains by establishing strong security strategies at the ground level.

The Increasing Role of Cybersecurity Experts in Complex Legal Disputes

The testimonies and guidance of expert witnesses have been known to play a significant role in high-stakes legal matters, whether it be the opinion of a clinical psychiatrist in a homicide case or that of a career IP analyst in a patent infringement trial. However, in today’s highly digital world—where cybercrimes like data breaches and theft of intellectual property are increasingly commonplace—cybersecurity professionals have become some of the most sought-after experts for a broadening range of legal disputes.

Below, we will explore the growing importance of cybersecurity experts to the litigation industry in more depth, including how their insights contribute to case strategies, the challenges of presenting technical and cybersecurity-related arguments in court, the specific qualifications that make an effective expert witness in the field of cybersecurity, and the best method for securing that expertise for your case.

How Cybersecurity Experts Help Shape Legal Strategies

Disputes involving highly complex cybercrimes typically require more technical expertise than most trial teams have on hand, and the contributions of a qualified cybersecurity expert can often be transformative to your ability to better understand the case, uncover critical evidence, and ultimately shape your overall strategy.

For example, in the case of a criminal data breach, defense counsel might seek an expert witness to analyze and evaluate the plaintiff’s existing cybersecurity policies and protective mechanisms at the time of the attack to determine their effectiveness and/or compliance with industry regulations or best practices. Similarly, an expert with in-depth knowledge of evolving data laws, standards, and disclosure requirements will be well-suited to determining a party’s liability in virtually any matter involving the unauthorized access of protected information. Cybersecurity experts are also beneficial during the discovery phase when their experience working with certain systems can assist in potentially uncovering evidence related to a specific attack or breach that may have been initially overlooked.

We have already seen many instances in which the testimony and involvement of cybersecurity experts have shaped the overall direction of a legal dispute. Consider the Coalition for Good Governance, for example, which recently rested its case as the plaintiffs in a six-year battle with the state of Georgia over the security of touchscreen voting machines. Throughout the process, the organization relied heavily on the testimony of multiple cybersecurity experts who claimed to have identified vulnerabilities in the state’s voting technology. If these testimonies prove persuasive, they could not only sway the ruling in favor of the plaintiffs but also lead to entirely new policies and change the very way Georgia voters cast their ballots as early as this year.

The Challenges of Explaining Cybersecurity in the Courtroom

While there is no denying the growing importance of cybersecurity experts in modern-day disputes, it is also important to note that many challenges still exist in presenting highly technical arguments and/or evidence in a court of law.

Perhaps most notably, there remains a significant gap between legal and technological language, and between the understanding of cybersecurity professionals and that of the judges, lawyers, and juries tasked with parsing particularly dense information. In other words, today’s trial teams need to work carefully with cybersecurity experts to develop communication strategies that adequately illustrate their arguments without causing unnecessary confusion or a misunderstanding of the evidence being presented. Visuals are a particularly useful tool in helping both litigators and experts explain complex topics while also engaging decision-makers.

Depending on the nature of the data breach or cybercrime in question, you may be tasked with replicating a digital event to support your specific argument. In many cases, this can be incredibly challenging due to the evolving and multifaceted nature of modern cyberattacks, and it may require extensive resources within the time constraints of a given matter. Thus, it is wise to use every tool at your disposal to boost the power of your team—including custom expert witness sourcing and visual advocacy consultants.

What You Should Look for in a Cybersecurity Expert

Determining the qualifications of a cybersecurity expert is highly dependent on the details of each individual case, making it critical to identify an expert whose experience reflects your precise needs. For example, a digital forensics specialist will offer an entirely different skill set than someone with a background in data privacy regulations and compliance.

Making sure an expert has the relevant professional experience to assess your specific cybersecurity case is only one factor to consider. In addition to verifying education and professional history, you must also assess the expert’s experience in the courtroom and familiarity with relevant legal processes. Similarly, expert witnesses should be evaluated based on their individual personality and communication skills, as they will be tasked with conveying highly technical arguments to an audience that will likely have a difficult time understanding all relevant concepts in the absence of clear, simplified explanations.

Where to Find the Most Qualified Cybersecurity Experts

Safeguarding the success of your client or firm in the digital age starts with the right expertise. You need to be sure your cybersecurity expert is uniquely suited to your case and primed to share critical insights when the stakes are high.

U.S. House of Representatives Passes Bill to Ban TikTok Unless Divested from ByteDance

Yesterday, with broad bipartisan support, the U.S. House of Representatives voted overwhelmingly (352-65) to support the Protecting Americans from Foreign Adversary Controlled Applications Act, designed to begin the process of banning TikTok’s use in the United States. This is music to my ears. See a previous blog post on this subject.

The Act would penalize app stores and web hosting services that host TikTok while it is owned by Chinese-based ByteDance. However, if the app is divested from ByteDance, the Act will allow use of TikTok in the U.S.

National security experts have warned legislators and the public that downloading and using TikTok poses a national security threat, because ByteDance, TikTok’s owner, is required by Chinese law to share users’ data with the Chinese Communist government. When users download the app, TikTok obtains access to their microphones, cameras, and location services, essentially placing spyware on over 170 million Americans’ every move (dance or not).

Lawmakers are concerned about the detailed sharing of Americans’ data with one of its top adversaries and the ability of TikTok’s algorithms to influence and launch disinformation campaigns against the American people. The Act will make its way through the Senate, and if passed, President Biden has indicated that he will sign it. This is a big win for privacy and national security.

Copyright © 2024 Robinson & Cole LLP. All rights reserved.
by: Linn F. Freedman of Robinson & Cole LLP

For more news on Social Media Legislation, visit the NLR Communications, Media & Internet section.

President Biden Announces Groundbreaking Restrictions on Access to Americans’ Sensitive Personal Data by Countries of Concern

The executive order (EO) and forthcoming regulations will impact the use of genomic data, biometric data, personal health care data, geolocation data, financial data, and certain other types of personally identifiable information. The administration is taking this extraordinary step in response to the national security risks posed by countries of concern having access to US persons’ sensitive data – data that could then be used to surveil, scam, blackmail, and support counterintelligence efforts, or could be exploited by artificial intelligence (AI) or used to further develop AI. The EO, however, does not call for restrictive personal data localization and aims to balance national security concerns against the free flow of commercial data and the open internet, consistent with protection of security, privacy, and human rights.

The EO tasks the US Department of Justice (DOJ) with developing rules that will address these risks and provide an opportunity for businesses and other stakeholders, including labor and human rights organizations, to provide critical input to agency officials as they draft these regulations. The EO and forthcoming regulations will not screen individual transactions. Instead, they will establish general rules regarding specific categories of data, transactions, and covered persons, and will prohibit and regulate certain high-risk categories of restricted data transactions. The regime is contemplated to include licensing and advisory opinion processes. DOJ expects companies to develop and implement compliance procedures in response to the EO and its subsequent implementing rules. The adequacy of such compliance programs will be considered as part of any enforcement action – action that could include civil and criminal penalties. Companies should consider acting today to evaluate risk, engage in the rulemaking process, and set up compliance programs around their processing of sensitive data.

Companies across industries collect and store more sensitive consumer and user data today than ever before; data that is often obtained by data brokers and other third parties. Concerns have grown around perceived foreign adversaries and other bad actors using this highly sensitive data to track and identify US persons as potential targets for espionage or blackmail, including through the training and use of AI. The increasing availability and use of sensitive personal information digitally, in concert with increased access to high-performance computing and big data analytics, has raised additional concerns around the ability of adversaries to threaten individual privacy, as well as economic and national security. These concerns have only increased as governments around the world face the privacy challenges posed by increasingly powerful AI platforms.

The EO takes significant new steps to address these concerns by expanding the role of DOJ, led by the National Security Division, in regulating the use of legal mechanisms, including data brokerage, vendor and employment contracts and investment agreements, to obtain and exploit American data. The EO does not immediately establish new rules or requirements for protection of this data. It instead directs DOJ, in consultation with other agencies, to develop regulations – but these restrictions will not enter into effect until DOJ issues a final rule.

Broadly, the EO, among other things:

  • Directs DOJ to issue regulations to protect sensitive US data from exploitation due to large-scale transfer to countries of concern or certain related covered persons, and to issue regulations establishing greater protection of sensitive government-related data
  • Directs DOJ and the Department of Homeland Security (DHS) to develop security standards to prevent commercial access to US sensitive personal data by countries of concern
  • Directs federal agencies to safeguard American health data from access by countries of concern through federal grants, contracts and awards

Also on February 28, DOJ issued an Advance Notice of Proposed Rulemaking (ANPRM), providing a critical first opportunity for stakeholders to understand how DOJ is initially contemplating this new national security regime and soliciting public comment on the draft framework.

According to a DOJ fact sheet, the ANPRM:

  • Preliminarily defines “countries of concern” to include China and Russia, among others
  • Focuses on six enumerated categories of sensitive personal data: (1) covered personal identifiers, (2) geolocation and related sensor data, (3) biometric identifiers, (4) human genomic data, (5) personal health data and (6) personal financial data
  • Establishes a bulk volume threshold for the regulation of general data transactions in the enumerated categories but will also regulate transactions in US government-related data regardless of the volume of a given transaction
  • Proposes a broad prohibition on two specific categories of data transactions between US persons and covered countries or persons – data brokerage transactions and genomic data transactions.
  • Contemplates restrictions on certain vendor agreements for goods and services, including cloud service agreements; employment agreements; and investment agreements. These cybersecurity requirements would be developed by DHS’s Cybersecurity and Infrastructure Security Agency and would focus on security requirements that would prevent access by countries of concern.
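The bulk-volume concept in the ANPRM summarized above can be sketched as a simple screening rule: category-specific volume thresholds for general transactions, with government-related data regulated at any volume. The numeric thresholds below are hypothetical placeholders, not the figures DOJ proposed.

```python
# Illustrative-only model of bulk-threshold screening. Categories track the
# six enumerated in the ANPRM; the record-count thresholds are invented.
BULK_THRESHOLDS = {
    "human genomic data": 100,
    "biometric identifiers": 1_000,
    "geolocation data": 1_000,
    "personal health data": 10_000,
    "personal financial data": 10_000,
    "covered personal identifiers": 100_000,
}

def is_regulated(category: str, record_count: int,
                 government_related: bool = False) -> bool:
    """Screen a data transaction against hypothetical bulk thresholds."""
    if government_related:
        return True  # government-related data is regulated at any volume
    return record_count >= BULK_THRESHOLDS[category]

print(is_regulated("human genomic data", 250))
print(is_regulated("personal health data", 50, government_related=True))
```

The structure mirrors the ANPRM’s two-track design: volume matters for general data, but not for US government-related data.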

The ANPRM also proposes general and specific licensing processes that will give DOJ considerable flexibilities for certain categories of transactions and more narrow exceptions for specific transactions upon application by the parties involved. DOJ’s licensing decisions would be made in collaboration with DHS, the Department of State and the Department of Commerce. Companies and individuals contemplating data transactions will also be able to request advisory opinions from DOJ on the applicability of these regulations to specific transactions.

A White House fact sheet announcing these actions emphasized that they will be undertaken in a manner that does not hinder the “trusted free flow of data” that underlies US consumer, trade, economic and scientific relations with other countries. A DOJ fact sheet echoed this commitment to minimizing economic impacts by seeking to develop a program that is “carefully calibrated” and in line with “longstanding commitments to cross-border data flows.” As part of that effort, the ANPRM contemplates exemptions for four broad categories of data: (1) data incidental to financial services, payment processing and regulatory compliance; (2) ancillary business operations within multinational US companies, such as payroll or human resources; (3) activities of the US government and its contractors, employees and grantees; and (4) transactions otherwise required or authorized by federal law or international agreements.

Notably, Congress continues to debate a comprehensive federal framework for data protection. In 2022, Congress stalled on the consideration of the American Data Privacy and Protection Act, a bipartisan bill introduced by House Energy and Commerce Committee leadership. Subsequent efforts to move comprehensive data privacy legislation in Congress have seen little momentum but may gain new urgency in response to the EO.

The EO lays the foundation for what will become significant new restrictions on how companies gather, store and use sensitive personal data. Notably, the ANPRM also represents recognition by the White House and agency officials that they need input from business and other stakeholders to guide the draft regulations. Impacted companies must prepare to engage in the comment process and to develop clear compliance programs so they are ready when the final restrictions are implemented.

Kate Kim Tuma contributed to this article 

An Update on the SEC’s Cybersecurity Reporting Rules

As we pass the two-month anniversary of the effectiveness of the U.S. Securities and Exchange Commission’s (“SEC’s”) Form 8-K cybersecurity reporting rules under new Item 1.05, this blog post provides a high-level summary of the filings made to date.

Six companies have now made Item 1.05 Form 8-K filings. Three of these companies have also amended their first Form 8-K filings to provide additional detail regarding subsequent events. The remaining filings seem self-contained such that no amendment appears necessary, though these companies may amend at a later date. In general, the descriptions of the cybersecurity incidents have been written at a high level and track the requirements of the new rules without much elaboration. It is interesting, but perhaps coincidental, that the filings seem limited to two broad industry groups: technology and financial services. In particular, two of the companies are bank holding companies.

Although several companies have now made reports under the new rules, the sample size may still be too small to draw any firm conclusions or decree what is “market.” That said, several of the companies that have filed an 8-K under Item 1.05 have described incidents and circumstances that do not seem to be financially material to the particular companies. We are aware of companies that have made materiality determinations in the past on the basis of non-financial qualitative factors when the impacts of a cyber incident are otherwise quantitatively immaterial, but these situations are more the exception than the rule.

There is also a great deal of variability among the forward-looking statement disclaimers that the companies have included in the filings in terms of specificity and detail. Such a disclaimer is not required in a Form 8-K, but every company to file under Item 1.05 to date has included one. We believe this practice will continue.

Since the effectiveness of the new rules, a handful of companies have filed Form 8-K filings to describe cybersecurity incidents under Item 8.01 (“Other Events”) instead of Item 1.05. These filings have approximated the detail of what is required under Item 1.05. It is not immediately evident why these companies chose Item 8.01, but presumably the companies determined that the events were immaterial such that no filing under Item 1.05 was necessary at the time of filing. Of course, the SEC filing is one piece of a much larger puzzle when a company is working through a cyber incident and related remediation. It remains to be seen how widespread this practice will become. To date, the SEC staff has not publicly released any comment letters critiquing any Form 8-K cyber filing under the new rules, but it is still early in the process. The SEC staff usually (but not always) makes its comment letters and company responses to those comment letters public on the SEC’s EDGAR website no sooner than 20 business days after it has completed its review. With many public companies now also making the new Form 10-K disclosure on cybersecurity, we anticipate the staff will be active in providing guidance and commentary on cybersecurity disclosures in the coming year.

2023 Cybersecurity Year In Review

2023 was another busy year for data event and cybersecurity litigation, with several noteworthy developments in disputes and regulator activity. Privacy World has been tracking these developments throughout the year. Read on for key trends and what to expect going into 2024.

Growth in Data Events Leads to Accompanying Increase in Claims

The number of reportable data events in the U.S. in 2023 reached an all-time high, surpassing the prior record set in 2021. At bottom, threat actors continued to target entities across industries, with litigation frequently following disclosure of data events. On the dispute front, 2023 saw several notable cybersecurity consumer class actions concerning the alleged unauthorized disclosure of sensitive personal information, including healthcare, genetic, and banking information. Large putative class actions in these areas included, among others, lawsuits against the hospital system HCA Healthcare (estimated 11 million individuals involved in the underlying data event), DNA testing provider 23andMe (estimated 6.9 million individuals involved in the underlying data event), and mortgage business Mr. Cooper (estimated 14.6 million individuals involved in the underlying data event).

JPML Creates Several Notable Cybersecurity MDLs

In 2023, the Judicial Panel on Multidistrict Litigation (“JPML”) transferred and centralized several data event and cybersecurity putative class actions. This was a departure from prior years, in which the JPML often declined requests to consolidate and coordinate pretrial proceedings in the wake of a data event. By way of example, following the largest data breach of 2023—the MOVEit hack affecting at least 55 million people—the JPML ordered that dozens of class actions regarding MOVEit software be consolidated for pretrial proceedings in the District of Massachusetts. Other data event litigations similarly received the MDL treatment in 2023, including litigations against Samsung, Overby-Seawell Company, and T‑Mobile.

Significant Class Certification Rulings

Speaking of the development of precedent, 2023 saw two notable decisions addressing class certification. While they arose in the cybersecurity context, these cases have broader applicability to other putative class actions. Following a remand from the Fourth Circuit, a judge in Maryland (in an MDL) re-certified eight classes of consumers affected by a data breach suffered by Marriott. See In Re: Marriott International, Inc., Customer Data Security Breach Litigation, No. 8:19-md-02879, 2023 WL 8247865 (D. Md. Nov. 29, 2023). As explained here on PW, the court held that a class action waiver provision in consumers’ contracts did not require decertification because (1) Marriott waived the provision by requesting consolidation of cases in an MDL outside of the contract’s chosen venue, (2) the class action waiver was unconscionable and unenforceable, and (3) contractual provisions cannot override a court’s authority to certify a class under Rule 23.

The second notable decision came out of the Eleventh Circuit, where the Court of Appeals vacated a district court’s certification of a nationwide class of restaurant customers in a data event litigation. See Green-Cooper v. Brinker Int’l, Inc., No. 21-13146, 73 F.4th 883 (11th Cir. July 11, 2023). In a 2-1 decision, a majority of the Court held that only one of the three named plaintiffs had standing under Article III of the U.S. Constitution, and remanded to the district court to reassess whether the putative class satisfied the procedural requirements for certification. The two plaintiffs without standing dined at one of the defendant’s restaurants either before or after the period during which the restaurant was impacted by the data event, which the Eleventh Circuit held to mean that any injury those plaintiffs suffered could not be traced back to the defendant.

Standing Challenges Persist for Plaintiffs in Data Event and Cybersecurity Litigations

Since the Supreme Court’s TransUnion decision in 2021, plaintiffs in data breach cases have continued to face challenges getting into, or staying in, federal court, and opinions like Brinker reiterate that Article III standing issues are relevant at every stage of litigation, including class certification. See also, e.g., Holmes v. Elephant Ins. Co., No. 3:22-cv-00487, 2023 WL 4183380 (E.D. Va. June 26, 2023) (dismissing class action complaint alleging injuries from data breach for lack of standing). Looking ahead to 2024, it is possible that more data litigation plays out in state court rather than federal court—particularly in the Eleventh Circuit but also elsewhere—as a result.

Cases Continue to Reach Efficient Pre-Trial Resolution

Finally, in the disputes realm, several large cybersecurity litigations reached pre-trial resolutions in 2023. The second-largest data event settlement ever—T-Mobile’s $350 million settlement fund, plus $150 million in committed data security spending—received final approval from the trial court. And software company Blackbaud settled claims relating to a 2020 ransomware incident with 49 state Attorneys General and the District of Columbia to the tune of $49.5 million. Before the settlement, Blackbaud was hit earlier in the year with a $3 million fine from the Securities and Exchange Commission. The twin payouts by Blackbaud are a cautionary reminder that litigation and regulatory enforcement on cyber incidents often go hand-in-hand, with multifaceted risks in the wake of a data event.

FTC and Cybersecurity

Regulators were active on the cybersecurity front in 2023 as well. Following shortly after the Health and Human Services Office for Civil Rights Bulletin on the use of trackers in compliance with HIPAA, the FTC announced settlements of enforcement actions against GoodRx, Premom, and BetterHelp for sharing health data with third parties via tracking technologies, resulting in breaches of Personal Health Records under the Health Breach Notification Rule. The FTC also settled enforcement actions against Chegg and Drizly for inadequate cybersecurity practices that led to data breaches. In both cases, the FTC faulted the companies for failing to implement appropriate cybersecurity policies and procedures, enforce access controls, and securely store access credentials for company databases (among other issues).

Notably, in the Drizly matter, the FTC continued a trend of holding corporate executives individually responsible, faulting Drizly’s CEO for failure to implement “or properly delegate responsibility to implement, reasonable information security practices.” Under the consent decree, Drizly’s CEO must implement a security program, whether at Drizly or at any company to which he might move that processes the personal information of 25,000 or more individuals and where he is a majority owner, CEO, or other senior officer with information security responsibilities.

SEC’s Focus on Cyber Continues

The SEC was also active in cybersecurity. In addition to the regulatory enforcement action against Blackbaud mentioned above, the SEC initiated an enforcement action against a software company for a cybersecurity incident disclosed in 2020. In its complaint, the SEC alleged that the company “defrauded…investors and customers through misstatements, omissions, and schemes that concealed both the Company’s poor cybersecurity practices and its heightened—and increasing—cybersecurity risks” through its public statements regarding its cybersecurity practices and risks. As in the Drizly matter, the SEC charged a senior company executive individually—in this case, the company’s CISO—for concealing the cybersecurity deficiencies from investors. The matter is currently pending. These cases reinforce that regulators will continue to hold senior executives responsible for oversight and implementation of appropriate cybersecurity programs.

Notable Federal Regulatory Developments

Regulators were also active in issuing new cybersecurity regulations in 2023. In addition to its cybersecurity enforcement actions, the FTC amended the GLBA Safeguards Rule. Under the amended Rule, non-bank financial institutions must notify the FTC as soon as possible, and no later than 30 days after discovery, of any security breach involving the unencrypted information of 500 or more consumers.
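The amended Rule’s notice trigger can be reduced to two conditions. As a minimal sketch (a hypothetical helper function, assuming only the two triggers summarized above and none of the Rule’s further definitional nuances):

```python
# Hypothetical sketch of the amended GLBA Safeguards Rule notice trigger,
# as summarized above: unencrypted information + 500 or more consumers.
def ftc_notice_required(consumers_affected: int, data_encrypted: bool) -> bool:
    """Return True if a non-bank financial institution must notify the FTC.

    Simplified illustration only; the Rule's actual definitions (e.g., of
    "notification event" and encryption safe harbors) are more detailed.
    """
    return (not data_encrypted) and consumers_affected >= 500


# When the trigger is met, notice is due as soon as possible and no later
# than 30 days after discovery of the event.
print(ftc_notice_required(750, data_encrypted=False))   # True
print(ftc_notice_required(750, data_encrypted=True))    # False
print(ftc_notice_required(120, data_encrypted=False))   # False
```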

Additionally, in March 2023, the SEC proposed revisions to Regulation S-P, Rule 10 and Form SCIR, and Regulation SCI, aimed at imposing new incident reporting and cybersecurity program requirements on various covered entities. You can read PW’s coverage of the proposed amendments here. In July, the SEC also finalized its long-awaited Cybersecurity Risk Management and Incident Disclosure Regulations. Under the final Regulations, public companies are obligated to report on material cybersecurity risks, cybersecurity risk management and governance, and board of directors’ oversight of cybersecurity risks in their annual Form 10-K reports. Additionally, covered entities are required to report material cybersecurity incidents within four business days of determining materiality. PW’s analysis of the final Regulations is here.
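Note that the four-business-day clock runs from the materiality determination, not from discovery of the incident. A simplified illustration of counting that deadline (a hypothetical helper; it skips weekends only and ignores federal holidays, which can extend the actual deadline):

```python
from datetime import date, timedelta


def form_8k_deadline(materiality_determined: date) -> date:
    """Count four business days after the materiality determination.

    Simplified sketch: treats Monday-Friday as business days and ignores
    federal holidays, which would push the real deadline later.
    """
    d, remaining = materiality_determined, 4
    while remaining:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday (0) through Friday (4)
            remaining -= 1
    return d


# Materiality determined on Thursday, Jan 4, 2024 -> due Wednesday, Jan 10.
print(form_8k_deadline(date(2024, 1, 4)))  # 2024-01-10
```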

New State Cybersecurity Regulations

The New York Department of Financial Services also finalized amendments to its landmark Cybersecurity Regulations in 2023. In the amended Regulations, NYDFS creates a new category of companies subject to heightened cybersecurity standards: Class A Companies. These heightened standards apply only to the largest financial institutions (i.e., entities with at least $20 million in gross annual revenues over the last 2 fiscal years, and either (1) more than 2,000 employees; or (2) over $1 billion in gross annual revenue over the last 2 fiscal years). The enhanced requirements include independent cybersecurity audits, enhanced privileged access management controls, and endpoint detection and response with centralized logging (unless otherwise approved in writing by the CISO). New cybersecurity requirements for other covered entities include annual review and approval of the company cybersecurity policy by a senior officer or the senior governing body (i.e., board of directors), CISO reporting to the senior governing body, senior governing body oversight, and access controls and privilege management, among others. PW’s analysis of the amended NYDFS Cybersecurity Regulations is here.
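The Class A test as summarized above combines a revenue floor with an employee-count or higher-revenue alternative. A rough sketch (hypothetical function; the regulation’s actual definitions of revenue, affiliates, and employee averaging are more nuanced):

```python
def is_class_a(min_annual_revenue_musd: float, employees: int) -> bool:
    """Rough sketch of the NYDFS Class A test as summarized above.

    min_annual_revenue_musd: the lower of the last two fiscal years'
    gross annual revenue, in millions of USD.
    Simplified: the amended Regulations' definitions (e.g., counting
    affiliates' employees and revenue) are more detailed.
    """
    revenue_floor = min_annual_revenue_musd >= 20           # $20M each year
    size_alternative = (employees > 2000                    # headcount prong
                        or min_annual_revenue_musd > 1000)  # $1B revenue prong
    return revenue_floor and size_alternative


print(is_class_a(25, 3000))    # True  (revenue floor + headcount)
print(is_class_a(25, 500))     # False (revenue floor but neither prong)
print(is_class_a(1500, 100))   # True  (revenue floor + $1B prong)
print(is_class_a(10, 5000))    # False (misses the revenue floor)
```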

On the state front, the California Privacy Protection Agency issued draft cybersecurity audit regulations as required by the CCPA. Under the draft regulations, if a business’s “processing of consumers’ personal information presents significant risk to consumers’ security,” that business must conduct a cybersecurity audit. If adopted as proposed, companies that process a (yet-to-be-determined) threshold number of items of personal information, sensitive personal information, or information regarding consumers under 16, as well as companies that exceed a gross revenue threshold, will be considered “high risk.” The draft regulations outline detailed criteria for evaluating a business’s cybersecurity program and documenting the audit. The draft regulations anticipate that the audit results will be reported to the business’s board of directors or governing body and that a representative of that body will certify that the signatory has reviewed and understands the findings of the audit. If adopted, businesses will be obligated to certify compliance with the audit regulations to the CPPA. You can read PW’s analysis of the implications of the proposed regulations here.

Consistent with 2023 enforcement priorities, new regulations issued this year make clear that state and federal regulators are increasingly holding senior executives and boards of directors responsible for oversight of cybersecurity programs. With regulations explicitly requiring oversight of cybersecurity risk management, the trend toward holding individual executives responsible for egregious cybersecurity lapses is likely to continue into 2024 and beyond.

Looking Forward

2023 demonstrated that “the more things change, the more they stay the same.” Cybersecurity litigation trends were a continuation of those of the prior two years. Something to keep an eye on in 2024 remains the potential for threatened individual officer and director liability in the wake of a widespread cyberattack. While the majority of cybersecurity litigations filed continue to be brought on behalf of plaintiffs whose personal information was purportedly disclosed, shareholders and regulators will increasingly look to hold executives responsible for failing to adopt reasonable security measures to prevent cyberattacks in the first instance.

Needless to say, 2024 should be another interesting year on the cybersecurity front. This is particularly so for data event litigations and for data developments more broadly.

For more news on Data Event and Cybersecurity Litigations in 2023, visit the NLR Communications, Media & Internet section.

Exploring the Future of Information Governance: Key Predictions for 2024

Information governance has evolved rapidly, with technology driving the pace of change. Looking ahead to 2024, we anticipate technology playing an even larger role in data management and protection. In this blog post, we’ll delve into the key predictions for information governance in 2024 and how they’ll impact businesses of all sizes.

  1. Embracing AI and Automation: Artificial intelligence and automation are revolutionizing industries, bringing about significant changes in information governance practices. Over the next few years, it is anticipated that an increasing number of companies will harness the power of AI and automation to drive efficient data analysis, classification, and management. This transformative approach will not only enhance risk identification and compliance but also streamline workflows and alleviate administrative burdens, leading to improved overall operational efficiency and effectiveness. As organizations adapt and embrace these technological advancements, they will be better equipped to navigate the evolving landscape of data governance and stay ahead in an increasingly competitive business environment.
  2. Prioritizing Data Privacy and Security: In recent years, data breaches and cyber-attacks have significantly increased concerns regarding the usage and protection of personal data. As we look ahead to 2024, the importance of data privacy and security will be paramount. This heightened emphasis is driven by regulatory measures such as the California Consumer Privacy Act (CCPA) and the European Union’s General Data Protection Regulation (GDPR). These regulations necessitate that businesses take proactive measures to protect sensitive data and provide transparency in their data practices. By doing so, businesses can instill trust in their customers and ensure the responsible handling of personal information.
  3. Fostering Collaboration Across Departments: In today’s rapidly evolving digital landscape, information governance has become a collective responsibility. Looking ahead to 2024, we can anticipate a significant shift towards closer collaboration between the legal, compliance, risk management, and IT departments. This collaborative effort aims to ensure comprehensive data management and robust protection practices across the entire organization. By adopting a holistic approach and providing cross-functional training, companies can empower their workforce to navigate the complexities of information governance with confidence, enabling them to make informed decisions and mitigate potential risks effectively. Embracing this collaborative mindset will be crucial for organizations to adapt and thrive in an increasingly data-driven world.
  4. Exploring Blockchain Technology: Blockchain technology, with its decentralized and immutable nature, has the tremendous potential to revolutionize information governance across industries. By 2024, as businesses continue to recognize the benefits, we can expect a significant increase in the adoption of blockchain for secure and transparent transaction ledgers. This transformative technology not only enhances data integrity but also mitigates the risks of tampering, ensuring trust and accountability in the digital age. With its ability to provide a robust and reliable framework for data management, blockchain is poised to reshape the way we handle and secure information, paving the way for a more efficient and trustworthy future.
  5. Prioritizing Data Ethics: As data-driven decision-making becomes increasingly crucial in the business landscape, the importance of ethical data usage cannot be overstated. In the year 2024, businesses will place even greater emphasis on data ethics, recognizing the need to establish clear guidelines and protocols to navigate potential ethical dilemmas that may arise. To ensure responsible and ethical data practices, organizations will invest in enhancing data literacy among their workforce, prioritizing education and training initiatives. Additionally, there will be a growing focus on transparency in data collection and usage, with businesses striving to build trust and maintain the privacy of individuals while harnessing the power of data for informed decision-making.

The future of information governance will be shaped by technology, regulations, and ethical considerations. Businesses that adapt to these changes will thrive in a data-driven world. By investing in AI and automation, prioritizing data privacy and security, fostering collaboration, exploring blockchain technology, and upholding data ethics, companies can prepare for the challenges and opportunities of 2024 and beyond.

Jim Merrifield, Robinson+Cole’s Director of Information Governance & Business Intake, contributed to this report.

Software as a Medical Device: Challenges Facing the Industry

SaMD Blog Series: Introduction

Editor’s Note: We are excited to announce that this article is the first of a series addressing Software as a Medical Device and the issues that plague digital health companies, investors, clinicians and other organizations that utilize software and medical devices. We will be addressing various considerations including technology, data, intellectual property, licensing, and contracting.

The intersection of software, technology, and health care, and the proliferation of software as a medical device in the health care arena, have become commonplace and have spurred significant innovation. The term Software as a Medical Device (SaMD) is defined by the International Medical Device Regulators Forum as “software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device.” In other words, SaMD need not be part of a physical device to achieve its intended purpose. For instance, SaMD could be an application on a mobile phone and not be connected to a physical medical device.

With the proliferation of SaMD also comes the need for those building and using it to firmly grasp legal and regulatory considerations to ensure successful use and commercialization. Over the next several weeks, we will be addressing some of the more common issues faced by digital health companies, investors, innovators, and clinicians when developing, utilizing, or commercializing SaMD. The Food and Drug Administration (FDA) has already cleared a significant number of SaMD products, including more than 500 algorithms employing artificial intelligence (AI). Some notable examples of FDA-cleared SaMD include wearable technology for remote patient monitoring; doctor-prescribed video game treatment for children with ADHD; fully immersive virtual reality tools for both physical therapy and mental wellness; and end-to-end software that generates 3D-printed models to better plan surgery and reduce operation time. With this rapid innovation comes a host of legal and regulatory considerations, which will be discussed over the course of this SaMD Blog Series.

General Intellectual Property (IP) Considerations for SaMD

This edition will discuss the sophisticated IP strategies that can be used to protect innovations for the three categories of software for biomedical applications: SaMD, software in a medical device, and software used in the manufacture or maintenance of a medical device, including clinical trial collaboration and sponsored research agreements, filing patent applications, and pursuing other forms of protection, such as trade secrets.

Licensing and Contracting with Third Parties for SaMD

This edition will unpack engaging with third parties practically and comprehensively, whether in the context of (i) developing new SaMD or (ii) refining or testing existing SaMD. Data and IP can be effectively either owned or licensed, provided such licenses protect the future interests of the licensee. Such ownership and licensing are particularly important in the AI and machine learning space, which is one area of focus for this edition.

FDA Considerations for SaMD

This edition will explore how FDA is regulating SaMD, which will include a discussion of what constitutes a regulated device, legislative actions to spur innovation, and how FDA is approaching regulation of specific categories of SaMD such as clinical decision support software, general wellness applications, and other mobile medical devices. It will also examine the different regulatory pathways for SaMD and FDA’s current focus on cybersecurity issues for software.

Health Care Regulatory and Reimbursement Considerations for SaMD

This edition will discuss the intersection of remote monitoring services and SaMD, prescription digital therapeutics and how they intersect with SaMD, licensure and distributor considerations associated with commercializing SaMD, and the growing trend to seek out device specific codes for SaMD.

Our hope is that this series will be a starting point for digital health companies, investors, innovators, and clinicians as each approaches development and use of SaMD as part of their business and clinical offerings.

© 2023 Foley & Lardner LLP

For more information on Healthcare, click here to visit the National Law Review.


Privacy Tip #359 – GoodRx Settles with FTC for Sharing Health Information for Advertising

The Federal Trade Commission (FTC) announced on February 1, 2023 that it had settled, for $1.5 million, its first enforcement action under its Health Breach Notification Rule, against GoodRx Holdings, Inc., a telehealth and prescription drug discount provider.

According to the press release, the FTC alleged that GoodRx failed “to notify consumers and others of its unauthorized disclosures of consumers’ personal health information to Facebook, Google, and other companies.”

In the proposed federal court order (the Order), GoodRx will be “prohibited from sharing user health data with applicable third parties for advertising purposes.” The complaint alleged that GoodRx told consumers that it would not share personal health information, and it monetized users’ personal health information by sharing consumers’ information with third parties such as Facebook and Instagram to help target users with ads for personalized health and medication-specific ads.

The complaint also alleged that GoodRx “compiled lists of its users who had purchased particular medications such as those used to treat heart disease and blood pressure, and uploaded their email addresses, phone numbers, and mobile advertising IDs to Facebook so it could identify their profiles. GoodRx then used that information to target these users with health-related advertisements.” It also alleges that those third parties then used the information received from GoodRx for their own internal purposes to improve the effectiveness of the advertising.

The proposed Order must be approved by a federal court before it can take effect. To address the FTC’s allegations, the Order prohibits the sharing of health data for ads; requires user consent for any other sharing; requires the company to direct third parties to delete consumer health data; limits data retention; and mandates implementation of a privacy program. Click here to read the press release.

Copyright © 2023 Robinson & Cole LLP. All rights reserved.