Mandatory Cybersecurity Incident Reporting: The Dawn of a New Era for Businesses

A significant shift in cybersecurity compliance is on the horizon, and businesses need to prepare. Organizations will soon face new requirements to report cybersecurity incidents and ransomware payments to the federal government. This change stems from the U.S. Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) issuing a Notice of Proposed Rulemaking (NPRM) on April 4, 2024, to implement the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA). In essence, “covered entities” will be required to report specific cyber incidents and ransom payments to CISA within defined timeframes.

Background

Back in March 2022, President Joe Biden signed CIRCIA into law. This was a big step towards improving America’s cybersecurity. The law requires CISA to create and enforce regulations mandating that covered entities report cyber incidents and ransom payments. The goal is to help CISA quickly assist victims, analyze trends across different sectors, and share crucial information with network defenders to prevent other potential attacks.

The proposed rule is open for public comments until July 3, 2024. After the comment period closes, CISA has 18 months to finalize the rule, meaning a final rule is expected around October 4, 2025, with the requirements likely taking effect in early 2026. This document provides an overview of the NPRM, highlighting its key points from the detailed Federal Register notice.

Cyber Incident Reporting Initiatives

CIRCIA includes several key requirements for mandatory cyber incident reporting:

  • Cyber Incident Reporting Requirements – CIRCIA mandates that CISA develop regulations requiring covered entities to report any covered cyber incidents within 72 hours from the time the entity reasonably believes the incident occurred.
  • Federal Incident Report Sharing – Any federal entity receiving a report on a cyber incident after the final rule’s effective date must share that report with CISA within 24 hours. CISA will also need to make information received under CIRCIA available to certain federal agencies within the same timeframe.
  • Cyber Incident Reporting Council – The Department of Homeland Security (DHS) must establish and chair an intergovernmental Cyber Incident Reporting Council to coordinate, deconflict, and harmonize federal incident reporting requirements.

Ransomware Initiatives

CIRCIA also authorizes or mandates several initiatives to combat ransomware:

  • Ransom Payment Reporting Requirements – CISA must develop regulations requiring covered entities to report to CISA within 24 hours of making any ransom payments due to a ransomware attack. These reports must be shared with federal agencies similarly to cyber incident reports.
  • Ransomware Vulnerability Warning Pilot Program – CISA must establish a pilot program to identify systems vulnerable to ransomware attacks and may notify the owners of these systems.
  • Joint Ransomware Task Force – CISA has announced the launch of the Joint Ransomware Task Force to build on existing efforts to coordinate a nationwide campaign against ransomware attacks. This task force will work closely with the Federal Bureau of Investigation and the Office of the National Cyber Director.

Scope of Applicability

The regulation targets many “covered entities” within critical infrastructure sectors. CISA clarifies that “covered entities” encompass more than just owners and operators of critical infrastructure systems and assets. Entities actively participating in these sectors might be considered “in the sector,” even if they are not critical infrastructure themselves. Entities uncertain about their status are encouraged to contact CISA.

Critical Infrastructure Sectors

CISA’s interpretation includes entities within one of the 16 sectors defined by Presidential Policy Directive 21 (PPD-21): Chemical; Commercial Facilities; Communications; Critical Manufacturing; Dams; Defense Industrial Base; Emergency Services; Energy; Financial Services; Food and Agriculture; Government Facilities; Healthcare and Public Health; Information Technology; Nuclear Reactors, Materials, and Waste; Transportation Systems; and Water and Wastewater Systems.

Covered Entities

CISA aims to include small businesses that own and operate critical infrastructure by setting additional sector-based criteria. The proposed rule applies to organizations falling into one of two categories:

  1. Entities operating within critical infrastructure sectors, except small businesses
  2. Entities in critical infrastructure sectors that meet sector-based criteria, even if they are small businesses

Size-Based Criteria

The size-based criteria use Small Business Administration (SBA) standards, which vary by industry and are based on annual revenue and number of employees. Entities in critical infrastructure sectors exceeding these thresholds are “covered entities.” The SBA standards are updated periodically, so organizations must stay informed about the current thresholds applicable to their industry.

Sector-Based Criteria

The sector-based criteria target essential entities within a sector, regardless of size, based on the potential consequences of disruption. The proposed rule outlines specific criteria for nearly all 16 critical infrastructure sectors. For instance, in the information technology sector, the criteria include:

  • Entities providing IT services for the federal government
  • Entities developing, licensing, or maintaining critical software
  • Manufacturers, vendors, or integrators of operational technology hardware or software
  • Entities involved in election-related information and communications technology

In the healthcare and public health sector, the criteria include:

  • Hospitals with 100 or more beds
  • Critical access hospitals
  • Manufacturers of certain drugs or medical devices

Covered Cyber Incidents

Covered entities must report “covered cyber incidents,” which include significant loss of confidentiality, integrity, or availability of an information system, serious impacts on operational system safety and resiliency, disruption of business or industrial operations, and unauthorized access due to third-party service provider compromises or supply chain breaches.

Significant Incidents

This definition covers substantial cyber incidents regardless of their cause, such as third-party compromises, denial-of-service attacks, and vulnerabilities in open-source code. However, mere threats, as well as activity performed in good faith at an owner’s or operator’s request (such as authorized security testing), are not included. Substantial incidents include encryption of core systems, exploitation causing extended downtime, and ransomware attacks on industrial control systems.

Reporting Requirements

Covered entities must report cyber incidents to CISA within 72 hours of reasonably believing an incident has occurred. Reports must be submitted via a web-based “CIRCIA Incident Reporting Form” on CISA’s website and include extensive details about the incident and ransom payments.

Report Types and Timelines

  • Covered Cyber Incident Reports – due within 72 hours of reasonably believing a covered cyber incident has occurred
  • Ransom Payment Reports – due within 24 hours of making a ransom payment following a ransomware attack
  • Joint Covered Cyber Incident and Ransom Payment Reports – due within 72 hours for covered incidents that involve a ransom payment
  • Supplemental Reports – due within 24 hours if substantial new information becomes available or additional ransom payments are made

Entities must retain data used for reports for at least two years. They can authorize a third party to submit reports on their behalf but remain responsible for compliance.

Exemptions for Similar Reporting

Covered entities may be exempt from CIRCIA reporting if they have already reported to another federal agency, provided an agreement exists between CISA and that agency. This agreement must ensure the reporting requirements are substantially similar, and the agency must share information with CISA. Federal agencies that report to CISA under the Federal Information Security Modernization Act (FISMA) are exempt from CIRCIA reporting.

These agreements are still being developed. Entities reporting to other federal agencies should monitor the progress of these agreements to understand how they will affect reporting obligations under CIRCIA.

Enforcement and Penalties

The CISA director can issue a request for information (RFI) if an entity fails to submit a required report. Non-compliance can lead to civil action or court orders, including penalties such as debarment and restrictions on future government contracts. False statements in reports may result in criminal penalties.

Information Protection

CIRCIA protects reports and RFI responses, including immunity from enforcement actions based solely on report submissions and protections against legal discovery and use in proceedings. Reports are exempt from Freedom of Information Act (FOIA) disclosures, and entities can designate reports as “commercial, financial, and proprietary information.” Information can be shared with federal agencies for cybersecurity purposes or specific threats.

Business Takeaways

Although the rule is not expected to take effect until late 2025 or early 2026, companies should begin preparing now. Entities should review the proposed rule to determine if they qualify as covered entities and understand the reporting requirements, then adjust their security programs and incident response plans accordingly. Creating a regulatory notification chart can help track various incident reporting obligations. Proactive measures and potential formal comments on the proposed rule can aid in compliance once the rules are finalized.

These steps are designed to guide companies in preparing for CIRCIA, though each company must assess its own needs and procedures within its specific operational, business, and regulatory context.


On July 1, 2024, Texas May Have the Strongest Consumer Data Privacy Law in the United States

It’s Bigger. But is it Better?

They say everything is bigger in Texas, and that now includes privacy protection. After HB 4, the Texas Data Privacy and Security Act (“TDPSA”), was signed into law on June 18, 2023, Texas became the eleventh state to enact comprehensive privacy legislation.[1]

Like many state consumer data privacy laws enacted this year, TDPSA is largely modeled after the Virginia Consumer Data Protection Act.[2] However, the law contains several unique differences and drew significant pieces from recently enacted consumer data privacy laws in Colorado and Connecticut, which generally include “stronger” provisions than the more “business-friendly” laws passed in states like Utah and Iowa.

Some of the more notable provisions of the bill are described below:

More Scope Than You Can Shake a Stick At!

  • The TDPSA applies much more broadly than any other pending or effective state consumer data privacy act, pulling in individuals as well as businesses regardless of their revenues or the number of individuals whose personal data is processed or sold.
  • The TDPSA applies to any individual or business that meets all of the following criteria:
    • conducts business in Texas (or produces goods or services consumed in Texas); and
    • processes or sells personal data.
      • The “processing or sale of personal data” further expands the applicability of the TDPSA to include individuals and businesses that engage in any operations involving personal data, such as the “collection, use, storage, disclosure, analysis, deletion, or modification of personal data.”
      • In short, collecting, storing, or otherwise handling the personal data of any Texas resident, or transferring that data for any consideration, will likely meet this standard.
  • Uniquely, the carveout for “small businesses” excludes from coverage those entities that meet the definition of “a small business as defined by the United States Small Business Administration.”[3]
  • The law requires all businesses, including small businesses, to obtain opt-in consent before processing sensitive personal data.
  • Similar to other state comprehensive privacy laws, TDPSA excludes state agencies or political subdivisions of Texas, financial institutions subject to Title V of the Gramm-Leach-Bliley Act, covered entities and business associates governed by HIPAA, nonprofit organizations, and institutions of higher education. But, TDPSA uniquely excludes electric utilities, power generation companies, and retail electric providers, as defined under Section 31.002 of the Texas Utilities Code.
  • Certain categories of information are also excluded, including health information protected by HIPAA or used in connection with human clinical trials, and information covered by the Fair Credit Reporting Act, the Driver’s Privacy Protection Act, the Family Educational Rights and Privacy Act of 1974, the Farm Credit Act of 1971, emergency contact information used for emergency contact purposes, and data necessary to administer benefits.

Don’t Mess with Texas Consumers

Texas’s longstanding libertarian roots are evidenced in the TDPSA’s strong menu of individual consumer privacy rights, including the right to:

  • Confirm whether a controller is processing the consumer’s personal data and to access that data;
  • Correct inaccuracies in the consumer’s personal data, considering the nature of the data and the purposes of the processing;
  • Delete personal data provided by or obtained about the consumer;
  • Obtain a copy of the consumer’s personal data that the consumer previously provided to a controller in a portable and readily usable format, if the data is available digitally and it is technically feasible; and
  • Opt-out of the processing of personal data for purposes of targeted advertising, the sale of personal data, or profiling in furtherance of a decision that produces legal or similarly significant effects concerning the consumer.

Data controllers are required to respond to consumer requests within 45 days, which may be extended by 45 days when reasonably necessary. The bill would also give consumers a right to appeal a controller’s refusal to respond to a request.

Controller Hospitality

The Texas bill imposes a number of obligations on data controllers, most of which are similar to other state consumer data privacy laws:

  • Data Minimization – Controllers should limit data collection to what is “adequate, relevant, and reasonably necessary” to achieve the purposes of collection that have been disclosed to a consumer. Consent is required before processing information in ways that are not reasonably necessary or not compatible with the purposes disclosed to a consumer.
  • Nondiscrimination – Controllers may not discriminate against a consumer for exercising individual rights under the TDPSA, including by denying goods or services, charging different rates, or providing different levels of quality.
  • Sensitive Data – Consent is required before processing sensitive data, which includes personal data revealing racial or ethnic origin, religious beliefs, mental or physical health diagnosis, citizenship or immigration status, genetic or biometric data processed for purposes of uniquely identifying an individual; personal data collected from a child known to be under the age of 13, and precise geolocation data.
    • The Senate version of the bill excludes data revealing “sexual orientation” from the categories of sensitive information, which differs from all other state consumer data privacy laws.
  • Privacy Notice – Controllers must post a privacy notice (e.g., website policy) that includes (1) the categories of personal data processed by the controller (including any sensitive data), (2) the purposes for the processing, (3) how consumers may exercise their individual rights under the Act, including the right of appeal, (4) any categories of personal data that the controller shares with third parties and the categories of those third parties, and (5) a description of the methods available to consumers to exercise their rights (e.g., website form or email address).
  • Targeted Advertising – A controller that sells personal data to third parties for purposes of targeted advertising must clearly and conspicuously disclose to consumers their right to opt-out.

Assessing the Privacy of Texans

Unlike some of the “business-friendly” privacy laws in Utah and Iowa, the Texas bill requires controllers to conduct data protection assessments (“Data Privacy Protection Assessments” or “DPPAs”) for certain types of processing that pose heightened risks to consumers. The assessments must identify and weigh the benefits of the processing to the controller, the consumer, other stakeholders, and the public against the potential risks to the consumer, as mitigated by any safeguards that could reduce those risks. In Texas, the categories that require assessments are identical to those required by Connecticut’s consumer data privacy law and include:

  • Processing personal data for targeted advertising;
  • The sale of personal data;
  • Processing personal data for profiling consumers, if such profiling presents a reasonably foreseeable risk to consumers of unfair or deceptive treatment, disparate impact, financial, physical or reputational injury, physical or other intrusion upon seclusion of private affairs, or “other substantial injury;”
  • Processing of sensitive data; and
  • Any processing activities involving personal data that present a “heightened risk of harm to consumers.”

Opting Out and About

Businesses are required to recognize a universal opt-out mechanism for consumers, such as the Global Privacy Control signal, similar to provisions in Colorado, Connecticut, California, and Montana. The law, however, gives businesses more leeway to ignore those signals if they cannot verify the consumer’s identity or lack the technical ability to receive them; a minimal sketch of detecting the signal appears below.
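
To make the mechanics concrete: the Global Privacy Control signal is transmitted by a consumer’s browser as the HTTP request header Sec-GPC: 1. Below is a minimal Python sketch, assuming a hypothetical Flask endpoint and consent hook, of how a site might detect and honor the signal; it is illustrative only, not a prescribed TDPSA compliance mechanism.

```python
# Minimal sketch, not production code: honoring a Global Privacy Control
# (GPC) signal server-side. GPC arrives as the HTTP header "Sec-GPC: 1".
# The Flask route and record_opt_out() helper are hypothetical stand-ins
# for a real consent-management integration.
from flask import Flask, request

app = Flask(__name__)

def record_opt_out(session_id: str) -> None:
    # Hypothetical hook: persist an opt-out of sale/targeted advertising.
    print(f"Opt-out recorded for session {session_id}")

@app.route("/")
def index():
    if request.headers.get("Sec-GPC") == "1":
        # Treat the universal signal as an opt-out of the sale of personal
        # data and targeted advertising, as the TDPSA contemplates.
        record_opt_out(session_id="anonymous-session")
        return "GPC signal honored: sale and targeted advertising disabled."
    return "No GPC signal detected."

if __name__ == "__main__":
    app.run()
```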

Show Me Some Swagger!

The Attorney General has exclusive authority to enforce the law, with civil penalties of up to $7,500 per violation. Businesses have a 30-day right to cure violations upon written notice from the Attorney General. Unlike several other laws, the right to cure has no sunset provision and remains a permanent part of the law. The law does not include a private right of action.

Next Steps for TDPSA Compliance

For businesses that have already developed a state privacy compliance program, especially those modeled around Colorado and Connecticut, making room for TDPSA will be a streamlined exercise. However, businesses that are starting from ground zero, especially “small businesses” defined in the law, need to get moving.

If TDPSA is your first ride in a state consumer privacy compliance rodeo, some first steps we recommend are:

  1. Update your website privacy policy for facial compliance with the law and make sure that notice is being given at or before the time of collection.
  2. Put procedures in place to respond to consumer privacy requests and ask for consent before processing sensitive information.
  3. Gather necessary information to complete data protection assessments.
  4. Identify vendor contracts that should be updated with mandatory data protection terms.

Footnotes

[1] As of the date of publication, there are now 17 states that have passed state consumer data privacy laws (California, Colorado, Connecticut, Delaware, Florida, Indiana, Iowa, Kentucky, Maryland, Montana, New Jersey, New Hampshire, Oregon, Tennessee, Texas, Utah, Virginia) and two (Vermont and Minnesota) that are pending.

[2] See Code of Virginia, Title 59.1, Chapter 53 (Consumer Data Protection Act).

[3] This is notably broader than other state privacy laws, which establish threshold requirements based on revenues or the amount of personal data that a business processes. It will also make it more difficult to know what businesses are covered because SBA definitions vary significantly from one industry vertical to another. As a quick rule of thumb, under the current SBA size standards, a U.S. business with annual average receipts of less than $2.25 million and fewer than 100 employees will likely be small, and therefore exempt from the TDPSA’s primary requirements.


Bidding Farewell, For Now: Google’s Ad Auction Class Certification Victory

A federal judge in the Northern District of California delivered a blow to a potential class action lawsuit against Google over its ad auction practices. The lawsuit, brought on behalf of a proposed class of tens of millions of Google account holders, claimed Google’s practices in its real-time bidding (RTB) auctions violated users’ privacy rights. But U.S. District Judge Yvonne Gonzalez Rogers declined to certify the class of consumers, pointing to deficiencies in the plaintiffs’ proposed class definition.

According to plaintiffs, Google’s RTB auctions share highly specific personal information about individuals with auction participants, including device identifiers, location data, IP addresses, biometric data, and demographic details such as age and gender. This, the plaintiffs argued, directly contradicted Google’s promises to protect users’ data. The plaintiffs therefore proposed a class definition that included all Google account holders subject to the company’s U.S. terms of service whose personal information was allegedly sold or shared by Google in its ad auctions after June 28, 2016.

But Google challenged this definition on the basis that it “embed[ded] the concept of personal information” and therefore subsumed a dispositive issue on the merits, i.e., whether Google actually shared account holders’ personal information. Google argued that the definition amounted to a fail-safe class since it would include even uninjured members. The Court agreed. As noted by Judge Gonzalez Rogers, Plaintiffs’ broad class definition included a significant number of potentially uninjured class members, thus warranting the denial of their certification motion.

Google further argued that merely striking the reference to “personal information,” as proposed by plaintiffs, would not fix this problem. While the Court acknowledged this point, it concluded that it did not yet have enough information to make that determination. Because the Court denied plaintiffs’ certification motion with leave to amend, it encouraged the parties to address these concerns in any subsequent rounds of briefing.

In addition, Judge Gonzalez Rogers noted that plaintiffs would need to demonstrate that the RTB data produced in the matter thus far was representative of the class as a whole. While the Court agreed with plaintiffs’ argument and supporting evidence that Google “share[d] so much information about named plaintiffs that its RTB data constitute[d] ‘personal information,’” Judge Gonzalez Rogers was not persuaded by their assertion that the collected RTB data would necessarily also provide common evidence for the rest of the class. The Court thus determined that plaintiffs needed to affirmatively demonstrate through additional evidence that the RTB data was representative of all putative class members, and noted that Google could not refuse to produce such data and then assert that plaintiffs had failed to meet their burden.

This decision underscores the growing complexity of litigating privacy issues in the digital age, and previews new challenges plaintiffs may face in demonstrating commonality and typicality among a proposed class in privacy litigation. The decision is also instructive for modern companies that amass various kinds of data insofar as it demonstrates that seemingly harmless pieces of that data may, in the aggregate, still be traceable to specific persons and thus qualify as personally identifying information mandating compliance with the patchwork of privacy laws throughout the U.S.

Protect Yourself: Action Steps Following the Largest-Ever IRS Data Breach

On January 29, 2024, Charles E. Littlejohn was sentenced to five years in prison for committing one of the largest heists in the history of the federal government. Littlejohn did not steal gold or cash, but rather, confidential data held by the Internal Revenue Service (IRS) concerning the United States’ wealthiest individuals and families.

Last week, more than four years after Littlejohn committed his crime, the IRS began notifying affected taxpayers that their personal data had been compromised. If you received a notice from the IRS, it means you are a victim of the data breach and should take proactive steps to protect yourself from fraud.

IN DEPTH

Littlejohn’s crime is the largest known data theft in the history of the IRS. He pulled it off while working for the IRS in 2020, using his access to IRS computer systems to illegally copy tax returns (and documents attached to those tax returns) filed by thousands of the wealthiest individuals in the United States and entities in which they have an interest. Upon obtaining these returns, Littlejohn sent them to ProPublica, an online nonprofit newsroom, which published more than 50 stories using the data.

Under federal law, the IRS was required to notify each taxpayer affected by the data breach “as soon as practicable.” However, the IRS did not send notifications to the affected taxpayers until April 12, 2024 – more than four years after the data breach occurred, and months after Littlejohn’s sentencing hearing.

TAKE ACTION

If you received a letter from the IRS (Letter 6613-A) enclosing a copy of the criminal charges against Littlejohn, it means you were a victim of his illegal actions. To protect yourself from this unprecedented breach of the public trust, we recommend the following actions:

  1. Consider Applying for an Identity Protection PIN. A common crime following data theft involves using a taxpayer’s social security number to file fraudulent tax returns requesting large refunds. An Identity Protection PIN (IP PIN) can help protect you from this scheme. After you obtain an IP PIN, criminals cannot file an income tax return under your name without knowing your identification number, which changes annually. Learn more and apply for an IP PIN here.
  2. Request and Review Your Tax Transcript. The IRS maintains a transcript of all your tax-related matters, including filings, payments, refunds, extensions and official notices. Regularly reviewing your tax transcript (e.g., every six to 12 months) can reveal fraudulent activity while there is still time to take remedial action. Request a copy of your tax transcript here. If you have questions about your transcript or need help obtaining it, we are available to assist you.
  3. Obtain Identity Protection Monitoring Services. Applying for an IP PIN and regularly reviewing your tax transcript will help protect you from tax fraud, but it will not protect you from other criminal activities, such as fraudulent loan applications. To protect yourself from these other risks, you should obtain identity protection monitoring services from a reputable provider.
  4. Evaluate Legal Action. Data breach victims should consider taking legal action against Littlejohn, the IRS and anyone else complicit in his wrongdoing. Justifiably, most victims will not want to suffer the cost, aggravation and publicity of litigation, but for those concerned with the public tax system’s integrity, litigation is an option.

In fact, litigation against the IRS is already underway. On December 13, 2022, Kenneth Griffin, the founder and CEO of Citadel, filed a lawsuit against the IRS in the US District Court for the Southern District of Florida after discovering his personal tax information was unlawfully disclosed to ProPublica. In his complaint, Griffin alleges that the IRS willfully failed to establish adequate safeguards over confidential tax return information – notwithstanding repeated warnings from the Treasury Inspector General for Tax Administration and the US Government Accountability Office that the IRS’s existing systems were wholly inadequate. Griffin is seeking an order directing the IRS “to formulate, adopt, and implement a data security plan” to protect taxpayer information.

The future of Griffin’s lawsuit is uncertain. Recently, the judge in his case dismissed one of his two claims and cast doubt on the theories underpinning his remaining claim. It could be years before a final decision is entered.

Although Griffin is leading the charge, joining the fight would bolster his efforts and promote the goal of ensuring the public tax system’s integrity. A final order in Griffin’s case will be appealable to the US Court of Appeals for the Eleventh Circuit. A decision there will be binding on both the IRS and taxpayers who live in Alabama, Florida and Georgia. However, the IRS could also be bound by orders entered by other federal courts arising from lawsuits filed by taxpayers who live elsewhere. Because other courts may disagree with the Eleventh Circuit, taxpayers living in other states could file their own lawsuits against the IRS in case Griffin does not prevail.

Victims of the IRS data breach who are interested in taking legal action should act quickly. Under the Internal Revenue Code, a lawsuit must be filed within two years after the date the taxpayer discovered the data breach.

Incorporating AI to Address Mental Health Challenges in K-12 Students

The National Institute of Mental Health reported that 16.32% of youth (aged 12-17) in the District of Columbia (DC) experience at least one major depressive episode (MDE).
Although the prevalence of youth with MDE in DC is lower compared to some states, such as Oregon (where it reached 21.13%), it is important to address mental health challenges in youth early, as untreated mental health challenges can persist into adulthood. Further, the number of youths with MDE climbs nationally each year, including last year when it rose by almost 2% to approximately 300,000 youth.

It is important to note that there are programs specifically designed to help and treat youth who have experienced trauma and are living with mental health challenges. In DC, several mental health services and professional counseling services are available to residents. Most importantly, there is a broad-reaching school-based mental health program that aims to place a behavioral health expert in every school building. Additionally, the DC government’s website provides a list of available mental health services programs, which can be found here.

In conjunction with the mental health programs, early identification of students at risk for suicide, self-harm, and behavioral issues can help states, including DC, ensure access to mental health care and support for these young individuals. In response to the widespread youth mental health crisis, K-12 schools are employing artificial intelligence (AI)-based tools to identify students at risk for suicide and self-harm. Through AI-based suicide risk monitoring, natural language processing, sentiment analysis, predictive models, early intervention, and surveillance and evaluation, AI is playing a crucial role in addressing the mental health challenges faced by youth.

AI systems, developed by companies like Bark, Gaggle, and GoGuardian, aim to monitor students’ digital footprint through various data inputs, such as online interactions and behavioral patterns, for signs of distress or risk. These programs identify students who may be at risk for self-harm or suicide and alert the school and parents accordingly.

Proposals are being introduced to use AI models to enhance mental health surveillance in school settings by deploying chatbots that interact with students. The chatbot conversation logs serve as the source of raw data for machine learning. According to Using AI for Mental Health Analysis and Prediction in School Surveys, existing survey results evaluated by health experts can be used to create a test dataset to validate the machine learning models. Supervised learning can then be deployed to classify specific behaviors and mental health patterns, as sketched below. However, there are concerns about how these programs work and what safeguards the companies have in place to protect youths’ data from being sold to other platforms. Additionally, there are concerns about whether these companies are complying with relevant laws (e.g., the Family Educational Rights and Privacy Act [FERPA]).
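
The following Python sketch illustrates that supervised-learning step under stated assumptions: the handful of inline messages and binary labels are invented for demonstration, and any real deployment would require far larger, expert-validated datasets and human review of every alert.

```python
# Illustrative sketch only: a supervised text classifier of the kind
# described above. The tiny inline dataset and labels are invented;
# real systems need expert-reviewed data, rigorous validation, and
# human review of any flagged message.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled chat messages (1 = possible distress, 0 = neutral).
messages = [
    "I feel hopeless about everything lately",
    "had a great time at practice today",
    "nobody would care if I was gone",
    "excited for the weekend trip",
    "I can't sleep and I cry every night",
    "finished my homework early for once",
]
labels = [1, 0, 1, 0, 1, 0]

# TF-IDF features plus logistic regression: a common text-classification baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Score a new message; in the deployments described above, a high score
# would route the message to a counselor for review, never trigger
# automated action on its own.
new_message = ["I don't see the point of anything anymore"]
prob = model.predict_proba(new_message)[0][1]
print(f"distress probability: {prob:.2f}")
```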

The University of Michigan identified AI technologies, such as natural language processing (NLP) and sentiment analysis, that can analyze user interactions, including posts and comments, to identify signs of distress, anxiety, or depression. For example, Breathhh is an AI-powered Chrome extension designed to automatically deliver mental health exercises based on an individual’s web activity and online behaviors. By monitoring and analyzing the user’s interactions, the application can determine appropriate moments to present stress-relieving practices and strategies. Applications like Breathhh are just one example of personalized interventions built on monitoring user interaction; a minimal sentiment-analysis sketch follows.
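
As a concrete illustration of the sentiment-analysis technique mentioned above, here is a minimal sketch using NLTK’s VADER analyzer; the sample posts are invented, and commercial tools like Breathhh presumably combine many more signals than raw sentiment alone.

```python
# Minimal rule-based sentiment analysis sketch using NLTK's VADER analyzer.
# The sample posts are invented for demonstration.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

posts = [
    "I love how this project turned out!",
    "I'm so stressed I can't think straight.",
]
for post in posts:
    scores = sia.polarity_scores(post)  # neg/neu/pos plus compound in [-1, 1]
    print(f"{scores['compound']:+.2f}  {post}")
```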

When using AI to address mental health concerns among K-12 students, policy implications must be carefully considered.

First, developers must obtain informed consent from students, parents, guardians, and all stakeholders before deploying such AI models. The use of AI models is always a topic of concern for policymakers because of the accompanying privacy risks. To safely deploy AI models, privacy protection policies must be in place to safeguard sensitive information from improper use. There is currently no comprehensive legislation that addresses those concerns, either nationally or locally.
Second, developers need to consider and account for any bias ingrained in their algorithms through data testing and regular monitoring of data output before it reaches the user. AI has the ability to detect early signs of mental health challenges, but without proper safeguards in place, we risk failing to protect students from being disproportionately impacted. When collected data reflects biases, it can lead to unfair treatment of certain groups. For youth, this can result in feelings of marginalization and adversely affect their mental health.
Third, effective policy should encourage the use of AI models that provide interpretable results, and policymakers need to understand how these decisions are made. Policies should outline how schools will respond to alerts generated by the system. A standard of care needs to be universally recognized, whether through policy or the companies’ internal safeguards. This standard of care should include guidelines for situations in which AI data output conflicts with human judgment.

Responsible AI implementation can enhance student well-being, but it requires careful evaluation to ensure students’ data is protected from potential harm. Moving forward, school leaders, policymakers, and technology developers need to consider the benefits and risks of AI-based mental health monitoring programs. Balancing the intended benefits while mitigating potential harms is crucial for student well-being.

© 2024 ArentFox Schiff LLP
by: David P. Grosso and Starshine S. Chun of ArentFox Schiff LLP


Supply Chains are the Next Subject of Cyberattacks

The cyberthreat landscape is evolving as threat actors develop new tactics to keep up with increasingly sophisticated corporate IT environments. In particular, threat actors are increasingly exploiting supply chain vulnerabilities to reach downstream targets.

The effects of supply chain cyberattacks are far-reaching, rippling out to downstream organizations and lasting long after the attack is first deployed. According to an Identity Theft Resource Center report, “more than 10 million people were impacted by supply chain attacks targeting 1,743 entities that had access to multiple organizations’ data” in 2022. Based upon an IBM analysis, the cost of a data breach averaged $4.45 million in 2023.

What is a supply chain cyberattack?

Supply chain cyberattacks are a type of cyberattack in which a threat actor targets a business that provides third-party services to other companies. The threat actor then leverages its access to that business to reach and damage the business’s customers. Supply chain cyberattacks may be perpetrated in different ways.

  • Software-Enabled Attack: This occurs when a threat actor uses an existing software vulnerability to compromise the systems and data of organizations running the software containing the vulnerability. For example, Apache Log4j is an open-source logging library that developers use to record system activity. In November 2021, there were public reports of a Log4j remote code execution vulnerability that allowed threat actors to infiltrate target software running outdated Log4j versions. As a result, threat actors gained access to the systems, networks, and data of many organizations in the public and private sectors that used software containing the vulnerable Log4j version. Although security upgrades (i.e., patches) have since been issued to address the Log4j vulnerability, much software is still running outdated (i.e., unpatched) versions of Log4j (see the version-check sketch following this list).
  • Software Supply Chain Attack: This is the most common type of supply chain cyberattack, and occurs when a threat actor infiltrates and compromises software with malicious code either before the software is provided to consumers or by deploying malicious software updates masquerading as legitimate patches. All users of the compromised software are affected by this type of attack. For example, Blackbaud, Inc., a software company providing cloud hosting services to for-profit and non-profit entities across multiple industries, was ground zero for a software supply chain cyberattack after a threat actor deployed ransomware in its systems that had downstream effects on Blackbaud’s customers, including 45,000 companies. Similarly, in May 2023, Progress Software’s MOVEit file-transfer tool was targeted by a ransomware group that exploited a vulnerability in the tool to steal data from customers that used the MOVEit app, including government agencies and businesses worldwide.
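
To make the patching point concrete, the following Python sketch shows the kind of simple version audit an organization might run against a software inventory; the component names, versions, and the 2.17.1 safe-floor assumption are illustrative only, and real programs would rely on dedicated software composition analysis tooling.

```python
# Illustrative sketch only: flagging inventory components that run log4j-core
# versions below 2.17.1, the version generally recommended after
# CVE-2021-44228 and its follow-up CVEs. Inventory entries are invented.
from packaging.version import Version  # pip install packaging

# Hypothetical inventory: (component name, log4j-core version in use).
inventory = [
    ("billing-service", "2.14.1"),
    ("report-engine", "2.17.1"),
    ("legacy-portal", "2.10.0"),
]

PATCHED = Version("2.17.1")

for component, ver in inventory:
    if Version(ver) < PATCHED:
        print(f"PATCH NEEDED: {component} runs log4j-core {ver}")
    else:
        print(f"ok: {component} ({ver})")
```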

Legal and Regulatory Risks

Cyberattacks can often expose personal data to unauthorized access and acquisition by a threat actor. When this occurs, companies’ notification obligations under the data breach laws of jurisdictions in which affected individuals reside are triggered. In general, data breach laws require affected companies to submit notice of the incident to affected individuals and, depending on the facts of the incident and the number of such individuals, also to regulators, the media, and consumer reporting agencies. Companies may also have an obligation to notify their customers, vendors, and other business partners based on their contracts with these parties. These reporting requirements increase the likelihood of follow-up inquiries, and in some cases, investigations by regulators. Reporting a data breach also increases a company’s risk of being targeted with private lawsuits, including class actions and lawsuits initiated by business customers, in which plaintiffs may seek different types of relief including injunctive relief, monetary damages, and civil penalties.

The legal and regulatory risks in the aftermath of a cyberattack can persist long after a company has addressed the immediate issues that caused the incident. For example, in the aftermath of its cyberattack, Blackbaud was investigated by multiple government authorities and targeted with private lawsuits. While the private suits remain ongoing, Blackbaud settled with state regulators ($49,500,000), the U.S. Federal Trade Commission, and the U.S. Securities and Exchange Commission (SEC) ($3,000,000) in 2023 and 2024, almost four years after it first experienced the cyberattack. Other companies that experienced high-profile cyberattacks have also been targeted with securities class action lawsuits by shareholders, and in at least one instance, regulators named a company’s Chief Information Security Officer in an enforcement action, underscoring the professional risks cyberattacks pose to corporate security leaders.

What Steps Can Companies Take to Mitigate Risk?

First, threat actors will continue to refine their tactics and techniques, so all organizations must adapt and stay current with the regulations and legislation surrounding cybersecurity. The Cybersecurity and Infrastructure Security Agency (CISA) urges developer education on creating secure code and verifying third-party components.

Second, stay proactive. Organizations must re-examine not only their own security practices but also those of their vendors and third-party suppliers. If third and fourth parties have access to an organization’s data, it is imperative to ensure that those parties have good data protection practices.

Third, companies should adopt guidelines for suppliers around data and cybersecurity at the outset of a relationship, since it may be difficult to get suppliers to adhere to policies after the contract has been signed. For example, some entities have detailed processes requiring suppliers to give notice of attacks and conduct impact assessments after the fact. In addition, some entities expect suppliers to follow specific sequences of steps after a cyberattack. Some entities may also apply the same threat intelligence that they use for their own defense to their critical suppliers, and may require suppliers to implement proactive security controls, such as incident response plans, ahead of an attack.

Finally, all companies should strive to minimize threats to their software supply by establishing strong security strategies at the ground level.

The Increasing Role of Cybersecurity Experts in Complex Legal Disputes

The testimonies and guidance of expert witnesses have been known to play a significant role in high-stakes legal matters, whether it be the opinion of a clinical psychiatrist in a homicide case or that of a career IP analyst in a patent infringement trial. However, in today’s highly digital world—where cybercrimes like data breaches and theft of intellectual property are increasingly commonplace—cybersecurity professionals have become some of the most sought-after experts for a broadening range of legal disputes.

Below, we will explore the growing importance of cybersecurity experts to the litigation industry in more depth, including how their insights contribute to case strategies, the challenges of presenting technical and cybersecurity-related arguments in court, the specific qualifications that make an effective expert witness in the field of cybersecurity, and the best method for securing that expertise for your case.

How Cybersecurity Experts Help Shape Legal Strategies

Disputes involving highly complex cybercrimes typically require more technical expertise than most trial teams have on hand, and the contributions of a qualified cybersecurity expert can often be transformative to your ability to better understand the case, uncover critical evidence, and ultimately shape your overall strategy.

For example, in the case of a criminal data breach, defense counsel might seek an expert witness to analyze and evaluate the plaintiff’s existing cybersecurity policies and protective mechanisms at the time of the attack to determine their effectiveness and/or compliance with industry regulations or best practices. Similarly, an expert with in-depth knowledge of evolving data laws, standards, and disclosure requirements will be well-suited to determining a party’s liability in virtually any matter involving the unauthorized access of protected information. Cybersecurity experts are also beneficial during the discovery phase when their experience working with certain systems can assist in potentially uncovering evidence related to a specific attack or breach that may have been initially overlooked.

We have already seen many instances in which the testimony and involvement of cybersecurity experts have impacted the overall direction of a legal dispute. Consider, for example, the Coalition for Good Governance, which recently rested its case as the plaintiffs in a six-year battle with the state of Georgia over the security of touchscreen voting machines. Throughout the process, the organization relied heavily on the testimony of multiple cybersecurity experts who claimed they identified vulnerabilities in the state’s voting technology. If these testimonies prove effective, they will not only sway the ruling in favor of the plaintiffs but could also lead to entirely new policies and affect the very way in which Georgia voters cast their ballots as early as this year.

The Challenges of Explaining Cybersecurity in the Courtroom

While there is no denying the growing importance of cybersecurity experts in modern-day disputes, it is also important to note that many challenges still exist in presenting highly technical arguments and/or evidence in a court of law.

Perhaps most notably, there remains a significant gap between legal and technological language, and between the knowledge of cybersecurity professionals and that of the judges, lawyers, and juries tasked with parsing particularly dense information. In other words, today’s trial teams need to work carefully with cybersecurity experts to develop communication strategies that adequately illustrate their arguments without creating unnecessary confusion or a misunderstanding of the evidence being presented. Visuals are a particularly useful tool in helping both litigators and experts explain complex topics while also engaging decision-makers.

Depending on the nature of the data breach or cybercrime in question, you may be tasked with replicating a digital event to support your specific argument. In many cases, this can be incredibly challenging due to the evolving and multifaceted nature of modern cyberattacks, and it may require extensive resources within the time constraints of a given matter. Thus, it is wise to use every tool at your disposal to boost the power of your team—including custom expert witness sourcing and visual advocacy consultants.

What You Should Look for in a Cybersecurity Expert

Determining the qualifications of a cybersecurity expert is highly dependent on the details of each individual case, making it critical to identify an expert whose experience reflects your precise needs. For example, a digital forensics specialist will offer an entirely different skill set than someone with a background in data privacy regulations and compliance.

Making sure an expert has the relevant professional experience to assess your specific cybersecurity case is only one factor to consider. In addition to verifying education and professional history, you must also assess the expert’s experience in the courtroom and familiarity with relevant legal processes. Similarly, expert witnesses should be evaluated based on their individual personality and communication skills, as they will be tasked with conveying highly technical arguments to an audience that will likely have a difficult time understanding all relevant concepts in the absence of clear, simplified explanations.

Where to Find the Most Qualified Cybersecurity Experts

Safeguarding the success of your client or firm in the digital age starts with the right expertise. You need to be sure your cybersecurity expert is uniquely suited to your case and primed to share critical insights when the stakes are high.

FCC Updated Data Breach Notification Rules Go into Effect Despite Challenges

On March 13, 2024, the Federal Communications Commission’s (FCC) updated data breach notification rules (the “Rules”) went into effect. The Rules were adopted in December 2023 pursuant to an FCC Report and Order (the “Order”).

The Rules went into effect despite challenges brought in the United States Court of Appeals for the Sixth Circuit. Two trade groups, the Ohio Telecom Association and the Texas Association of Business, petitioned the United States Court of Appeals for the Sixth Circuit and Fifth Circuit, respectively, to vacate the FCC’s Order modifying the Rules. The Order was published in the Federal Register on February 12, 2024, and the petitions were filed shortly thereafter. The challenges, which the United States Panel on Multidistrict Litigation consolidated to the Sixth Circuit, argue that the Rules exceed the FCC’s authority and are arbitrary and capricious. The Order addresses the argument that the Rules are “substantially the same” as breach rules nullified by Congress in 2017. The challenges, however, have not progressed since the Rules went into effect.

Read our previous blog post to learn more about the Rules.


U.S. House of Representatives Passes Bill to Ban TikTok Unless Divested from ByteDance

Yesterday, with broad bipartisan support, the U.S. House of Representatives voted overwhelmingly (352-65) to support the Protecting Americans from Foreign Adversary Controlled Applications Act, designed to begin the process of banning TikTok’s use in the United States. This is music to my ears. See a previous blog post on this subject.

The Act would penalize app stores and web hosting services that host TikTok while it is owned by Chinese-based ByteDance. However, if the app is divested from ByteDance, the Act will allow use of TikTok in the U.S.

National security experts have warned legislators and the public that downloading and using TikTok poses a national security threat. The threat stems from the fact that ByteDance, TikTok’s parent company, is required by Chinese law to share users’ data with the Chinese Communist government. When the app is downloaded, TikTok obtains access to users’ microphones, cameras, and location services, essentially placing spyware on over 170 million Americans’ every move (dance or not).

Lawmakers are concerned about the detailed sharing of Americans’ data with one of its top adversaries and the ability of TikTok’s algorithms to influence and launch disinformation campaigns against the American people. The Act will make its way through the Senate, and if passed, President Biden has indicated that he will sign it. This is a big win for privacy and national security.

Copyright © 2024 Robinson & Cole LLP. All rights reserved.
by: Linn F. Freedman of Robinson & Cole LLP


President Biden Announces Groundbreaking Restrictions on Access to Americans’ Sensitive Personal Data by Countries of Concern

On February 28, 2024, President Biden issued an Executive Order (EO) aimed at restricting access to Americans’ sensitive personal data by countries of concern. The EO and forthcoming regulations will impact the use of genomic data, biometric data, personal health care data, geolocation data, financial data and some other types of personally identifiable information. The administration is taking this extraordinary step in response to the national security risks posed by access to US persons’ sensitive data by countries of concern – data that could then be used to surveil, scam, blackmail and support counterintelligence efforts, or could be exploited by artificial intelligence (AI) or be used to further develop AI. The EO, however, does not call for restrictive personal data localization and aims to balance national security concerns against the free flow of commercial data and the open internet, consistent with protection of security, privacy and human rights.

The EO tasks the US Department of Justice (DOJ) with developing rules that will address these risks and provide an opportunity for businesses and other stakeholders, including labor and human rights organizations, to give critical input to agency officials as they draft the regulations. The EO and forthcoming regulations will not screen individual transactions. Instead, they will establish general rules regarding specific categories of data, transactions and covered persons, and will prohibit and regulate certain high-risk categories of restricted data transactions. The program is contemplated to include a licensing and advisory opinion regime. DOJ expects companies to develop and implement compliance procedures in response to the EO and the subsequent implementing rules. The adequacy of such compliance programs will be considered as part of any enforcement action – action that could include civil and criminal penalties. Companies should consider taking action today to evaluate risk, engage in the rulemaking process and set up compliance programs around their processing of sensitive data.

Companies across industries collect and store more sensitive consumer and user data today than ever before; data that is often obtained by data brokers and other third parties. Concerns have grown around perceived foreign adversaries and other bad actors using this highly sensitive data to track and identify US persons as potential targets for espionage or blackmail, including through the training and use of AI. The increasing availability and use of sensitive personal information digitally, in concert with increased access to high-performance computing and big data analytics, has raised additional concerns around the ability of adversaries to threaten individual privacy, as well as economic and national security. These concerns have only increased as governments around the world face the privacy challenges posed by increasingly powerful AI platforms.

The EO takes significant new steps to address these concerns by expanding the role of DOJ, led by the National Security Division, in regulating the use of legal mechanisms, including data brokerage, vendor and employment contracts and investment agreements, to obtain and exploit American data. The EO does not immediately establish new rules or requirements for protection of this data. It instead directs DOJ, in consultation with other agencies, to develop regulations – but these restrictions will not enter into effect until DOJ issues a final rule.

Broadly, the EO, among other things:

  • Directs DOJ to issue regulations to protect sensitive US data from exploitation due to large-scale transfers to countries of concern or certain related covered persons, and to issue regulations establishing greater protection of sensitive government-related data
  • Directs DOJ and the Department of Homeland Security (DHS) to develop security standards to prevent commercial access to US sensitive personal data by countries of concern
  • Directs federal agencies to safeguard American health data from access by countries of concern through federal grants, contracts and awards

Also on February 28, DOJ issued an Advance Notice of Proposed Rulemaking (ANPRM), providing a critical first opportunity for stakeholders to understand how DOJ is initially contemplating this new national security regime and soliciting public comment on the draft framework.

According to a DOJ fact sheet, the ANPRM:

  • Preliminarily defines “countries of concern” to include China and Russia, among others
  • Focuses on six enumerated categories of sensitive personal data: (1) covered personal identifiers, (2) geolocation and related sensor data, (3) biometric identifiers, (4) human genomic data, (5) personal health data and (6) personal financial data
  • Establishes a bulk volume threshold for the regulation of general data transactions in the enumerated categories but will also regulate transactions in US government-related data regardless of the volume of a given transaction
  • Proposes a broad prohibition on two specific categories of data transactions between US persons and covered countries or persons – data brokerage transactions and genomic data transactions.
  • Contemplates restrictions on certain vendor agreements for goods and services, including cloud service agreements; employment agreements; and investment agreements. These cybersecurity requirements would be developed by DHS’s Cybersecurity and Infrastructure Security Agency and would focus on security requirements designed to prevent access by countries of concern.

The ANPRM also proposes general and specific licensing processes that would give DOJ considerable flexibility for certain categories of transactions, along with narrower exceptions for specific transactions upon application by the parties involved. DOJ’s licensing decisions would be made in collaboration with DHS, the Department of State and the Department of Commerce. Companies and individuals contemplating data transactions will also be able to request advisory opinions from DOJ on the applicability of these regulations to specific transactions.

A White House fact sheet announcing these actions emphasized that they will be undertaken in a manner that does not hinder the “trusted free flow of data” that underlies US consumer, trade, economic and scientific relations with other countries. A DOJ fact sheet echoed this commitment to minimizing economic impacts by seeking to develop a program that is “carefully calibrated” and in line with “longstanding commitments to cross-border data flows.” As part of that effort, the ANPRM contemplates exemptions for four broad categories of data: (1) data incidental to financial services, payment processing and regulatory compliance; (2) ancillary business operations within multinational US companies, such as payroll or human resources; (3) activities of the US government and its contractors, employees and grantees; and (4) transactions otherwise required or authorized by federal law or international agreements.

Notably, Congress continues to debate a comprehensive federal framework for data protection. In 2022, Congress stalled on consideration of the American Data Privacy and Protection Act, a bipartisan bill introduced by House Energy and Commerce Committee leadership. Subsequent efforts to move comprehensive data privacy legislation in Congress have seen little momentum but may gain new urgency in response to the EO.

The EO lays the foundation for what will become significant new restrictions on how companies gather, store and use sensitive personal data. Notably, the ANPRM also represents recognition by the White House and agency officials that they need input from business and other stakeholders to guide the draft regulations. Impacted companies must prepare to engage in the comment process and to develop clear compliance programs so they are ready when the final restrictions are implemented.

Kate Kim Tuma contributed to this article