Mid-Year Recap: Think Beyond US State Laws!

Much of the focus on US privacy has been US state laws, and the potential of a federal privacy law. This focus can lead one to forget, however, that US privacy and data security law follows a patchwork approach both at a state level and a federal level. “Comprehensive” privacy laws are thus only one piece of the puzzle. There are federal and state privacy and security laws that apply based on a company’s (1) industry (financial services, health care, telecommunications, gaming, etc.), (2) activity (making calls, sending emails, collecting information at point of purchase, etc.), and (3) the type of individual from whom information is being collected (children, students, employees, etc.). There have been developments this year in each of these areas.

On the industry law, there has been activity focused on data brokers, those in the health space, and for those that sell motor vehicles. The FTC has focused on the activities of data brokers this year, beginning the year with a settlement with lead-generation company Response Tree. It also settled with X-Mode Social over the company’s collection and use of sensitive information. There have also been ongoing regulation and scrutiny of companies in the health space, including HHS’s new AI transparency rule. Finally, in this area is a new law in Utah, with a Motor Vehicle Data Protection Act applicable to data systems used by car dealers to house consumer information.

On the activity side, there has been less news, although in this area the “activity” of protecting information (or failing to do so) has continued to receive regulatory focus. This includes the SEC’s new cybersecurity reporting obligations for public companies, as well as minor modifications to Utah’s data breach notification law.

Finally, there have been new laws directed to particular individuals. In particular, laws intended to protect children. These include social media laws in Florida and Utah, effective January 1, 2025 and October 1, 2024 respectively. These are similar to attempts to regulate social media’s collection of information from children in Arkansas, California, Ohio and Texas, but the drafters hope sufficiently different to survive challenges currently being faced by those laws. The FTC is also exploring updates to its decades’ old Children’s Online Privacy Protection Act.

Putting It Into Practice: As we approach the mid-point of the year, now is a good time to look back at privacy developments over the past six months. There have been many developments in the privacy patchwork, and companies may want to take the time now to ensure that their privacy programs have incorporated and addressed those laws’ obligations.

Listen to this post

NIST Releases Risk ‘Profile’ for Generative AI

A year ago, we highlighted the National Institute of Standards and Technology’s (“NIST”) release of a framework designed to address AI risks (the “AI RMF”). We noted how it is abstract, like its central subject, and is expected to evolve and change substantially over time, and how NIST frameworks have a relatively short but significant history that shapes industry standards.

As support for the AI RMF, last month NIST released in draft form the Generative Artificial Intelligence Profile (the “Profile”).The Profile identifies twelve risks posed by Generative AI (“GAI”) including several that are novel or expected to be exacerbated by GAI. Some of the risks are exotic and new, such as confabulation, toxicity, and homogenization.

The Profile also identifies risks that are familiar, such as those for data privacy and cybersecurity. For the latter, the Profile details two types of cybersecurity risks: (1) those with the potential to discover or enable the lowering of barriers for offensive capabilities, and (2) those that can expand the overall attack surface by exploiting vulnerabilities as novel attacks.

For offensive capabilities and novel attack risks, the Profile includes these examples:

  • Large language models (a subset of GAI) that discover vulnerabilities in data and write code to exploit them.
  • GAI-powered co-pilots that proactively inform threat actors on how to evade detection.
  • Prompt-injections that steal data and run code remotely on a machine.
  • Compromised datasets that have been ‘poisoned’ to undermine the integrity of outputs.

In the past, the Federal Trade Commission (“FTC”) has referred to NIST when investigating companies’ data breaches. In settlement agreements, the FTC has required organizations to implement security measures through the NIST Cybersecurity Framework. It is reasonable to assume then, that NIST guidance on GAI will also be recommended or eventually required.

But it’s not all bad news – despite the risks when in the wrong hands, GAI will also improve cybersecurity defenses. As recently noted by Microsoft’s recent report on the GDPR & GAI, GAI can already: (1) support cybersecurity teams and protect organizations from threats, (2) train models to review applications and code for weaknesses, and (3) review and deploy new code more quickly by automating vulnerability detection.

Before ‘using AI to fight AI’ becomes legally required, just as multi-factor authentication, encryption, and training have become legally required for cybersecurity, the Profile should be considered to mitigate GAI risks. From pages 11-52, the Profile examines four hundred ways to use the Profile for GAI risks. Grouping them together, some of the recommendations include:

  • Refine existing incident response plans and risk assessments if acquiring, embedding, incorporating, or using open-source or proprietary GAI systems.
  • Implement regular adversary testing of the GAI, along with regular tabletop exercises with stakeholders and the incident response team to better inform improvements.
  • Carefully review and revise contracts and service level agreements to identify who is liable for a breach and responsible for handling an incident in case one is identified.
  • Document everything throughout the GAI lifecycle, including changes to any third parties’ GAI systems, and where audited data is stored.

“Cybersecurity is the mother of all problems. If you don’t solve it, all the other technology stuff just doesn’t happen” said Charlie Bell, Microsoft’s Chief of Security, in 2022. To that end, the AM RMF and now the Profile provide useful and early guidance on how to manage GAI Risks. The Profile is open for public comment until June 2, 2024.

Congress Introduces Promising Bipartisan Privacy Bill

U.S. Senator Maria Cantwell (D-WA) and U.S. Representative Cathy McMorris Rodgers (R-WA) have made a breakthrough by agreeing on a bipartisan data privacy legislation proposal. The legislation aims to address concerns related to consumer data collection by technology companies and empower individuals to have control over their personal information.

The proposed legislation aims to restrict the amount of data technology companies can gather from consumers. This step is particularly important given the large amount of data these technology companies possess. It would grant Americans the authority to prevent the sale of their personal information or request its deletion. This step gives individuals more control over their personal data. The Federal Trade Commission (FTC) and state attorneys general would be given significant authority to monitor and regulate matters related to consumer privacy. This measure will ensure that the government has a say in matters associated with consumer privacy. The bill includes robust enforcement measures, such as granting individuals the right to take legal action. This step is necessary to ensure that any violations of the legislation are dealt with effectively. While targeted advertising would not be prohibited, the proposed legislation would allow consumers to opt out of it. This step gives consumers more control over the ads they receive. The privacy violations listed in the legislation would also be applicable to telecommunications companies. This measure ensures that no company is exempt from consumer privacy laws. Annual assessments of algorithms would be conducted to ensure that they do not harm individuals, particularly young people. This is an important, step given the rise of technology and its impact on consumers, especially among younger generations.

The bipartisan proposal for data privacy legislation is a positive step forward in terms of consumer privacy in America. While there is still work to be done, it is essential that the government takes proactive steps to ensure that individuals have greater control over their personal data. This is a positive development for the tech industry and consumers alike.

However, as we reported on before, this is not the first time Congress has made strides towards comprehensive data privacy legislation,). Hopefully, this new bipartisan bill will enjoy more success than past efforts and bring the United States closer in line with international data privacy standards.

Supply Chains are the Next Subject of Cyberattacks

The cyberthreat landscape is evolving as threat actors develop new tactics to keep up with increasingly sophisticated corporate IT environments. In particular, threat actors are increasingly exploiting supply chain vulnerabilities to reach downstream targets.

The effects of supply chain cyberattacks are far-reaching, and can affect downstream organizations. The effects can also last long after the attack was first deployed. According to an Identity Theft Resource Center report, “more than 10 million people were impacted by supply chain attacks targeting 1,743 entities that had access to multiple organizations’ data” in 2022. Based upon an IBM analysis, the cost of a data breach averaged $4.45 million in 2023.

What is a supply chain cyberattack?

Supply chain cyberattacks are a type of cyberattack in which a threat actor targets a business offering third-party services to other companies. The threat actor will then leverage its access to the target to reach and cause damage to the business’s customers. Supply chain cyberattacks may be perpetrated in different ways.

  • Software-Enabled Attack: This occurs when a threat actor uses an existing software vulnerability to compromise the systems and data of organizations running the software containing the vulnerability. For example, Apache Log4j is an open source code used by developers in software to add a function for maintaining records of system activity. In November 2021, there were public reports of a Log4j remote execution code vulnerability that allowed threat actors to infiltrate target software running on outdated Log4j code versions. As a result, threat actors gained access to the systems, networks, and data of many organizations in the public and private sectors that used software containing the vulnerable Log4j version. Although security upgrades (i.e., patches) have since been issued to address the Log4j vulnerability, many software and apps are still running with outdated (i.e., unpatched) versions of Log4j.
  • Software Supply Chain Attack: This is the most common type of supply chain cyberattack, and occurs when a threat actor infiltrates and compromises software with malicious code either before the software is provided to consumers or by deploying malicious software updates masquerading as legitimate patches. All users of the compromised software are affected by this type of attack. For example, Blackbaud, Inc., a software company providing cloud hosting services to for-profit and non-profit entities across multiple industries, was ground zero for a software supply chain cyberattack after a threat actor deployed ransomware in its systems that had downstream effects on Blackbaud’s customers, including 45,000 companies. Similarly in May 2023, Progress Software’s MOVEit file-transfer tool was targeted with a ransomware attack, which allowed threat actors to steal data from customers that used the MOVEit app, including government agencies and businesses worldwide.

Legal and Regulatory Risks

Cyberattacks can often expose personal data to unauthorized access and acquisition by a threat actor. When this occurs, companies’ notification obligations under the data breach laws of jurisdictions in which affected individuals reside are triggered. In general, data breach laws require affected companies to submit notice of the incident to affected individuals and, depending on the facts of the incident and the number of such individuals, also to regulators, the media, and consumer reporting agencies. Companies may also have an obligation to notify their customers, vendors, and other business partners based on their contracts with these parties. These reporting requirements increase the likelihood of follow-up inquiries, and in some cases, investigations by regulators. Reporting a data breach also increases a company’s risk of being targeted with private lawsuits, including class actions and lawsuits initiated by business customers, in which plaintiffs may seek different types of relief including injunctive relief, monetary damages, and civil penalties.

The legal and regulatory risks in the aftermath of a cyberattack can persist long after a company has addressed the immediate issues that caused the incident initially. For example, in the aftermath of the cyberattack, Blackbaud was investigated by multiple government authorities and targeted with private lawsuits. While the private suits remain ongoing, Blackbaud settled with state regulators ($49,500,000), the U.S. Federal Trade Commission, and the U.S. Securities Exchange Commission (SEC) ($3,000,000) in 2023 and 2024, almost four years after it first experienced the cyberattack. Other companies that experienced high-profile cyberattacks have also been targeted with securities class action lawsuits by shareholders, and in at least one instance, regulators have named a company’s Chief Information Security Officer in an enforcement action, underscoring the professional risks cyberattacks pose to corporate security leaders.

What Steps Can Companies Take to Mitigate Risk?

First, threat actors will continue to refine their tactics and techniques. Thus, all organizations must adapt and stay current with all regulations and legislation surrounding cybersecurity. Cybersecurity and Infrastructure Security Agency (CISA) urges developer education for creating secure code and verifying third-party components.

Second, stay proactive. Organizations must re-examine not only their own security practices but also those of their vendors and third-party suppliers. If third and fourth parties have access to an organization’s data, it is imperative to ensure that those parties have good data protection practices.

Third, companies should adopt guidelines for suppliers around data and cybersecurity at the outset of a relationship since it may be difficult to get suppliers to adhere to policies after the contract has been signed. For example, some entities have detailed processes requiring suppliers to inform of attacks and conduct impact assessments after the fact. In addition, some entities expect suppliers to follow specific sequences of steps after a cyberattack. At the same time, some entities may also apply the same threat intelligence that it uses for its own defense to its critical suppliers, and may require suppliers to implement proactive security controls, such as incident response plans, ahead of an attack.

Finally, all companies should strive to minimize threats to their software supply by establishing strong security strategies at the ground level.

The Increasing Role of Cybersecurity Experts in Complex Legal Disputes

The testimonies and guidance of expert witnesses have been known to play a significant role in high-stakes legal matters, whether it be the opinion of a clinical psychiatrist in a homicide case or that of a career IP analyst in a patent infringement trial. However, in today’s highly digital world—where cybercrimes like data breaches and theft of intellectual property are increasingly commonplace—cybersecurity professionals have become some of the most sought-after experts for a broadening range of legal disputes.

Below, we will explore the growing importance of cybersecurity experts to the litigation industry in more depth, including how their insights contribute to case strategies, the challenges of presenting technical and cybersecurity-related arguments in court, the specific qualifications that make an effective expert witness in the field of cybersecurity, and the best method for securing that expertise for your case.

How Cybersecurity Experts Help Shape Legal Strategies

Disputes involving highly complex cybercrimes typically require more technical expertise than most trial teams have on hand, and the contributions of a qualified cybersecurity expert can often be transformative to your ability to better understand the case, uncover critical evidence, and ultimately shape your overall strategy.

For example, in the case of a criminal data breach, defense counsel might seek an expert witness to analyze and evaluate the plaintiff’s existing cybersecurity policies and protective mechanisms at the time of the attack to determine their effectiveness and/or compliance with industry regulations or best practices. Similarly, an expert with in-depth knowledge of evolving data laws, standards, and disclosure requirements will be well-suited to determining a party’s liability in virtually any matter involving the unauthorized access of protected information. Cybersecurity experts are also beneficial during the discovery phase when their experience working with certain systems can assist in potentially uncovering evidence related to a specific attack or breach that may have been initially overlooked.

We have already seen many instances in which the testimony and involvement of cybersecurity experts have impacted the overall direction of a legal dispute. Consider the Coalition for Good Governance, for example, that recently rested its case(Opens an external site in a new window) as the plaintiffs in a six-year battle with the state of Georgia over the security of touchscreen voting machines. Throughout the process, the organization relied heavily on the testimony of multiple cybersecurity experts who claimed they identified vulnerabilities in the state’s voting technology. If these testimonies prove effective, it will not only sway the ruling in the favor of the plaintiffs but also lead to entirely new policies and impact the very way in which Georgia voters cast their ballots as early as this year.

The Challenges of Explaining Cybersecurity in the Courtroom

While there is no denying the growing importance of cybersecurity experts in modern-day disputes, it is also important to note that many challenges still exist in presenting highly technical arguments and/or evidence in a court of law.

Perhaps most notably, there remains a significant gap in both legal and technological language, as well as in the knowledge and understanding of cybersecurity professionals and judges, lawyers, and the juries tasked with parsing particularly dense information. In other words, today’s trial teams need to work carefully with cybersecurity experts to develop communication strategies that adequately illustrate their arguments but do not result in unnecessary confusion or a misunderstanding of the evidence being presented. Visuals are a particularly useful tool in helping both litigators and experts explain complex topics while also engaging decision-makers.

Depending on the nature of the data breach or cybercrime in question, you may be tasked with replicating a digital event to support your specific argument. In many cases, this can be incredibly challenging due to the evolving and multifaceted nature of modern cyberattacks, and it may require extensive resources within the time constraints of a given matter. Thus, it is wise to use every tool at your disposal to boost the power of your team—including custom expert witness sourcing and visual advocacy consultants.

What You Should Look for in a Cybersecurity Expert

Determining the qualifications of a cybersecurity expert is highly dependent on the details of each individual case, making it critical to identify an expert whose experience reflects your precise needs. For example, a digital forensics specialist will offer an entirely different skill set than someone with a background in data privacy regulations and compliance.

Making sure an expert has the relevant professional experience to assess your specific cybersecurity case is only one factor to consider. In addition to verifying education and professional history, you must also assess the expert’s experience in the courtroom and familiarity with relevant legal processes. Similarly, expert witnesses should be evaluated based on their individual personality and communication skills, as they will be tasked with conveying highly technical arguments to an audience that will likely have a difficult time understanding all relevant concepts in the absence of clear, simplified explanations.

Where to Find the Most Qualified Cybersecurity Experts

Safeguarding the success of your client or firm in the digital age starts with the right expertise. You need to be sure your cybersecurity expert is uniquely suited to your case and primed to share critical insights when the stakes are high.

FCC Updated Data Breach Notification Rules Go into Effect Despite Challenges

On March 13, 2024, the Federal Communications Commission’s updates to the FCC data breach notification rules (the “Rules”) went into effect. They were adopted in December 2023 pursuant to an FCC Report and Order (the “Order”).

The Rules went into effect despite challenges brought in the United States Court of Appeals for the Sixth Circuit. Two trade groups, the Ohio Telecom Association and the Texas Association of Business, petitioned the United States Court of Appeals for the Sixth Circuit and Fifth Circuit, respectively, to vacate the FCC’s Order modifying the Rules. The Order was published in the Federal Register on February 12, 2024, and the petitions were filed shortly thereafter. The challenges, which the United States Panel on Multidistrict Litigation consolidated to the Sixth Circuit, argue that the Rules exceed the FCC’s authority and are arbitrary and capricious. The Order addresses the argument that the Rules are “substantially the same” as breach rules nullified by Congress in 2017. The challenges, however, have not progressed since the Rules went into effect.

Read our previous blog post to learn more about the Rules.

Listen to this post

2023 Cybersecurity Year In Review

2023 was another busy year in the realm of data event and cybersecurity litigations, with several noteworthy developments in the realm of disputes and regulator activity. Privacy World has been tracking these developments throughout the year. Read on for key trends and what to expect going into the 2024.

Growth in Data Events Leads to Accompanying Increase in Claims

The number of reportable data events in the U.S. in 2023 reached an all-time high, surpassing the prior record set in 2021. At bottom, threat actors continued to target entities across industries, with litigation frequently following disclosure of data events. On the dispute front, 2023 saw several notable cybersecurity consumer class actions concerning the alleged unauthorized disclosure of sensitive personal information, including healthcare, genetic, and banking information. Large putative class actions in these areas included, among others, lawsuits against the hospital system HCA Healthcare (estimated 11 million individuals involved in the underlying data event), DNA testing provider 23andMe (estimated 6.9 million individuals involved in the underlying data event), and mortgage business Mr. Cooper (estimated 14.6 million individuals involved in the underlying data event).

JPML Creates Several Notable Cybersecurity MDLs

In 2023 the Judicial Panel on Multidistrict Litigation (“JPML”) transferred and centralized several data event and cybersecurity putative class actions. This was a departure from prior years in which the JPML often declined requests to consolidate and coordinate pretrial proceedings in the wake of a data event. By way of example, following the largest data breach of 2023—the MOVEit hack affecting at least 55 million people—the JPML ordered that dozens of class actions regarding MOVEit software be consolidated for pretrial proceedings in the District of Massachusetts. Other data event litigations similarly received the MDL treatment in 2023, including litigations against SamsungOverby-Seawell Company, and T‑Mobile.

Significant Class Certification Rulings

Speaking of the development of precedent, 2023 had two notable decisions addressing class certification. While they arose in the cybersecurity context, these cases have broader applicability in other putative class actions. Following a remand from the Fourth Circuit, a judge in Maryland (in a MDL) re-ordered the certification of eight classes of consumers affected by a data breach suffered by Mariott. See In Re: Marriott International, Inc., Customer Data Security Breach Litigation,No. 8:19-md-02879, 2023 WL 8247865 (D. Md. Nov. 29, 2023). As explained here on PW, the court held that a class action waiver provision in consumers’ contracts did not require decertification because (1) Marriott waived the provision by requesting consolidation of cases in an MDL outside of the contract’s chosen venue, (2) the class action waiver was unconscionable and unenforceable, and (3) contractual provisions cannot override a court’s authority to certify a class under Rule 23.

The second notable decision came out of the Eleventh Circuit, where the Court of Appeals vacated a district court’s certification of a nationwide class of restaurant customers in a data event litigation. See Green-Cooper v. Brinker Int’l, Inc., No. 21-13146, 73 F. 4th 883 (11th Cir. July 11, 2023). In a 2-1 decision, a majority of the Court held that only one of the three named plaintiffs had standing under Article III of the U.S. Constitution, and remanded to the district court to reassess whether the putative class satisfied procedural requirements for a class. The two plaintiffs without standing dined at one of the defendant’s restaurants either before or after the time period that the restaurant was impacted by the data event, which the Fourth Circuit held to mean that any injury the plaintiffs suffered could not be traced back to defendant.

Standing Challenges Persist for Plaintiffs in Data Event and Cybersecurity Litigations

Since the Supreme Court’s TransUnion decision in 2021, plaintiffs in data breach cases have continued to face challenges getting into or staying in federal court, and opinions like Brinker reiterate that Article III standing issues are relevant at every stage in litigation, including class certification. See, also, e.g.Holmes v. Elephant Ins. Co., No. 3:22-cv-00487, 2023 WL 4183380 (E.D. Va. June 26, 2023) (dismissing class action complaint alleging injuries from data breach for lack of standing). Looking ahead to 2024, it is possible that more data litigation plays out in state court rather than federal court—particularly in the Eleventh Circuit but also elsewhere—as a result.

Cases Continue to Reach Efficient Pre-Trial Resolution

Finally in the dispute realm, several large cybersecurity litigations reached pre-trial resolutions in 2023. The second-largest data event settlement ever—T-Mobile’s $350 million settlement fund with $150 million in data spend—received final approval from the trial court. And software company Blackbaud settled claims relating to a 2020 ransomware incident with 49 states Attorneys General and the District of Columbia to the tune of $49.5 million. Before the settlement, Blackbaud was hit earlier in the year with a $3 million fine from the Securities and Exchange Commission. The twin payouts by Blackbaud are cautionary reminders that litigation and regulatory enforcement on cyber incidents often go-hand-in-hand, with multifaceted risks in the wake of a data event.

FTC and Cybersecurity

Regulators were active on the cybersecurity front in 2023, as well. Following shortly after a policy statement by the Health and Human Resources Office of Civil Rights policy Bulletin on use of trackers in compliance with HIPAA, the FTC announced settlement of enforcement actions against GoodRxPremom, and BetterHelp for sharing health data via tracking technologies with third parties resulting in a breach of Personal Health Records under the Health Breach Notification Rule. The FTC also settled enforcement actions against Chegg and Drizly for inadequate cybersecurity practices which led to data breaches. In both cases, the FTC faulted the companies for failure to implement appropriate cybersecurity policies and procedures, access controls, and securely store access credentials for company databases (among other issues).

Notably, in Drizly matter, the FTC continued ta trend of holding corporate executives responsible individually for his failure to implement “or properly delegate responsibility to implement, reasonable information security practices.” Under the consent decree, Drizly’s CEO must implement a security program (either at Drizly or any company to which he might move that processes personal information of 25,000 or more individuals and where he is a majority owner, CEO, or other senior officer with information security responsibilities).

SEC’s Focus on Cyber Continues

The SEC was also active in cybersecurity. In addition to the regulatory enforcement action against Blackbaud mentioned above, the SEC initiated an enforcement action against a software company for a cybersecurity incident disclosed in 2020. In its complaint, the SEC alleged that the company “defrauded…investors and customers through misstatements, omissions, and schemes that concealed both the Company’s poor cybersecurity practices and its heightened—and increasing—cybersecurity risks” through its public statements regarding its cybersecurity practices and risks. Like the Drizly matter, the SEC charged a senior company executive individually—in this case, the company’s CISO—for concealing the cybersecurity deficiencies from investors. The matter is currently pending. These cases reinforce that regulators will continue to hold senior executives responsible for oversight and implementation of appropriate cybersecurity programs.

Notable Federal Regulatory Developments

Regulators were also active in issuing new regulations on the cybersecurity front in 2023. In addition to its cybersecurity regulatory enforcement actions, the FTC amended the GLBA Safeguards Rule. Under the amended Rule, non-bank financial institutions must provide notice to notify the FTC as soon as possible, and no later than 30 days after discovery, of any security breach involving the unencrypted information of 500 or more consumers.

Additionally, in March 2024, the SEC proposed revisions to Regulation S-P, Rule 10 and form SCIR, and Regulation SCI aimed at imposing new incident reporting and cybersecurity program requirements for various covered entities. You can read PW’s coverage of the proposed amendments here. In July, the SEC also finalized its long-awaited Cybersecurity Risk Management and Incident Disclosure Regulations. Under the final Regulations, public companies are obligated to report regarding material cybersecurity risks, cybersecurity risk management and governance, and board of directors’ oversight of cybersecurity risks in their annual 10-K reports. Additionally, covered entities are required to report material cybersecurity incidents within four business days of determining materiality. PW’s analysis of the final Regulations are here.

New State Cybersecurity Regulations

The New York Department of Financial Services also finalized amendments to its landmark Cybersecurity Regulations in 2023. In the amended Regulations, NYDFS creates a new category of companies subject to heightened cybersecurity standards: Class A Companies. These heightened cybersecurity standards would apply only to the largest financial institutions (i.e., entities with at least $20 million in gross annual revenues over the last 2 fiscal years, and either (1) more than 2,000 employees; or (2) over $1 billion in gross annual revenue over the last 2 fiscal years). The enhanced requirements include independent cybersecurity audits, enhanced privileged access management controls, and endpoint detection and response with centralized logging (unless otherwise approved in writing by the CISO). New cybersecurity requirements for other covered entities include annual review and approval of company cybersecurity policy by a senior officer or the senior governing body (i.e., board of directors), CISO reporting to the senior governing body, senior governing body oversight, and access controls and privilege management, among others. PW’s analysis of the amended NYDFS Cybersecurity Regulations is here.

On the state front, California Privacy Protection Agency issued draft cybersecurity assessment regulations as required by the CCPA. Under the draft regulations, if a business’s “processing of consumers’ personal information presents significant risk to consumers’ security”, that business must conduct a cybersecurity audit. If adopted as proposed, companies that process a (yet undetermined) threshold number of items of personal information, sensitive personal information, or information regarding consumers under 16, as well as companies that exceed a gross revenue threshold will be considered “high risk.” The draft regulations outline detailed criteria for evaluating businesses’ cybersecurity program and documenting the audit. The draft regulations anticipate that the audit results will be reported to the business’s board of directors or governing body and that a representative of that body will certify that the signatory has reviewed and understands the findings of the audit. If adopted, businesses will be obligated to certify compliance with the audit regulations to the CPPA. You can read PW’s analysis of the implications of the proposed regulations here.

Consistent with 2023 enforcement priorities, new regulations issued this year make clear that state and federal regulators are increasingly holding senior executives and boards of directors responsible for oversight of cybersecurity programs. With regulations explicitly requiring oversight of cybersecurity risk management, the trend toward holding individual executives responsible for egregious cybersecurity lapses is likely to continue into 2024 and beyond.

Looking Forward

2023 demonstrated “the more things change, the more they stay the same.” Cybersecurity litigation trends were a continuation the prior two years. Something to keep an eye on in 2024 remains the potential for threatened individual officer and director liability in the wake of a widespread cyberattack. While the majority of cybersecurity litigations filed continue to be brought on behalf of plaintiffs whose personal information was purportedly disclosed, shareholders and regulators will increasingly look to hold executives responsible for failing to adopt reasonable security measures to prevent cyberattacks in the first instance.

Needless to say, 2024 should be another interesting year on the cybersecurity front. This is particularly so for data event litigations and for data developments more broadly.

For more news on Data Event and Cybersecurity Litigations in 2023, visit the NLR Communications, Media & Internet section.

Can Artificial Intelligence Assist with Cybersecurity Management?

AI has great capability to both harm and to protect in a cybersecurity context. As with the development of any new technology, the benefits provided through correct and successful use of AI are inevitably coupled with the need to safeguard information and to prevent misuse.

Using AI for good – key themes from the European Union Agency for Cybersecurity (ENISA) guidance

ENISA published a set of reports earlier last year focused on AI and the mitigation of cybersecurity risks. Here we consider the main themes raised and provide our thoughts on how AI can be used advantageously*.

Using AI to bolster cybersecurity

In Womble Bond Dickinson’s 2023 global data privacy law survey, half of respondents told us they were already using AI for everyday business activities ranging from data analytics to customer service assistance and product recommendations and more. However, alongside day-to-day tasks, AI’s ‘ability to detect and respond to cyber threats and the need to secure AI-based application’ makes it a powerful tool to defend against cyber-attacks when utilized correctly. In one report, ENISA recommended a multi-layered framework which guides readers on the operational processes to be followed by coupling existing knowledge with best practices to identify missing elements. The step-by-step approach for good practice looks to ensure the trustworthiness of cybersecurity systems.

Utilizing machine-learning algorithms, AI is able to detect both known and unknown threats in real time, continuously learning and scanning for potential threats. Cybersecurity software which does not utilize AI can only detect known malicious codes, making it insufficient against more sophisticated threats. By analyzing the behavior of malware, AI can pin-point specific anomalies that standard cybersecurity programs may overlook. Deep-learning based program NeuFuzz is considered a highly favorable platform for vulnerability searches in comparison to standard machine learning AI, demonstrating the rapidly evolving nature of AI itself and the products offered.

A key recommendation is that AI systems should be used as an additional element to existing ICT, security systems and practices. Businesses must be aware of the continuous responsibility to have effective risk management in place with AI assisting alongside for further mitigation. The reports do not set new standards or legislative perimeters but instead emphasize the need for targeted guidelines, best practices and foundations which help cybersecurity and in turn, the trustworthiness of AI as a tool.

Amongst other factors, cybersecurity management should consider accountability, accuracy, privacy, resiliency, safety and transparency. It is not enough to rely on traditional cybersecurity software especially where AI can be readily implemented for prevention, detection and mitigation of threats such as spam, intrusion and malware detection. Traditional models do exist, but as ENISA highlights they are usually designed to target or’address specific types of attack’ which, ‘makes it increasingly difficult for users to determine which are most appropriate for them to adopt/implement.’ The report highlights that businesses need to have a pre-existing foundation of cybersecurity processes which AI can work alongside to reveal additional vulnerabilities. A collaborative network of traditional methods and new AI based recommendations allow businesses to be best prepared against the ever-developing nature of malware and technology based threats.

In the US in October 2023, the Biden administration issued an executive order with significant data security implications. Amongst other things, the executive order requires that developers of the most powerful AI systems share safety test results with the US government, that the government will prepare guidance for content authentication and watermarking to clearly label AI-generated content and that the administration will establish an advanced cybersecurity program to develop AI tools and fix vulnerabilities in critical AI models. This order is the latest in a series of AI regulations designed to make models developed in the US more trustworthy and secure.

Implementing security by design

A security by design approach centers efforts around security protocols from the basic building blocks of IT infrastructure. Privacy-enhancing technologies, including AI, assist security by design structures and effectively allow businesses to integrate necessary safeguards for the protection of data and processing activity, but should not be considered as a ‘silver bullet’ to meet all requirements under data protection compliance.

This will be most effective for start-ups and businesses in the initial stages of developing or implementing their cybersecurity procedures, as conceiving a project built around security by design will take less effort than adding security to an existing one. However, we are seeing rapid growth in the number of businesses using AI. More than one in five of our survey respondents (22%), for instance, started to use AI in the past year alone.

However, existing structures should not be overlooked and the addition of AI into current cybersecurity system should improve functionality, processing and performance. This is evidenced by AI’s capability to analyze huge amounts of data at speed to provide a clear, granular assessment of key performance metrics. This high-level, high-speed analysis allows businesses to offer tailored products and improved accessibility, resulting in a smoother retail experience for consumers.

Risks

Despite the benefits, AI is by no-means a perfect solution. Machine-learning AI will act on what it has been told under its programming, leaving the potential for its results to reflect an unconscious bias in its interpretation of data. It is also important that businesses comply with regulations (where applicable) such as the EU GDPR, Data Protection Act 2018, the anticipated Artificial Intelligence Act and general consumer duty principles.

Cost benefits

Alongside reducing the cost of reputational damage from cybersecurity incidents, it is estimated that UK businesses who use some form of AI in their cybersecurity management reduced costs related to data breaches by £1.6m on average. Using AI or automated responses within cybersecurity systems was also found to have shortened the average ‘breach lifecycle’ by 108 days, saving time, cost and significant business resource. Further development of penetration testing tools which specifically focus on AI is required to explore vulnerabilities and assess behaviors, which is particularly important where personal data is involved as a company’s integrity and confidentiality is at risk.

Moving forward

AI can be used to our advantage but it should not been seen to entirely replace existing or traditional models to manage cybersecurity. While AI is an excellent long-term assistant to save users time and money, it cannot be relied upon alone to make decisions directly. In this transitional period from more traditional systems, it is important to have a secure IT foundation. As WBD suggests in our 2023 report, having established governance frameworks and controls for the use of AI tools is critical for data protection compliance and an effective cybersecurity framework.

Despite suggestions that AI’s reputation is degrading, it is a powerful and evolving tool which could not only improve your business’ approach to cybersecurity and privacy but with an analysis of data, could help to consider behaviors and predict trends. The use of AI should be exercised with caution, but if done correctly could have immeasurable benefits.

___

* While a portion of ENISA’s commentary is focused around the medical and energy sectors, the principles are relevant to all sectors.

2024 Regulatory Update for Investment Advisers

In 2023, the Securities and Exchange Commission issued various proposed rules on regulatory changes that will affect SEC-registered investment advisers (RIAs). Since these rules are likely to be put into effect, RIAs should consider taking preliminary steps to start integrating the new requirements into their compliance policies and procedures.

1. Updates to the Custody Rule

The purpose of the custody rule, rule 206(4)-2 of the Investment Advisers Act of 1940 (Advisers Act), is to protect client funds and securities from potential loss and misappropriation by custodians. The SEC’s recommended updates to the custody rule would:

  • Expand the scope of the rule to not only include client funds and securities but all of a client’s assets over which an RIA has custody
  • Expand the definition of custody to include discretionary authority
  • Require RIAs to enter into written agreements with qualified custodians, including certain reasonable assurances regarding protections of client assets

2. Internet Adviser Exemption

The SEC also proposed to modernize rule 203A-2(e) of the Advisers Act, whose purpose is to permit internet investment advisers to register with the SEC even if such advisers do not meet the other statutory requirements for SEC registration. Under the proposed rule:

  • Advisers relying on this exemption would at all times be required to have an operational interactive website through which the adviser provides investment advisory services
  • The de minimis exception would be eliminated, hence requiring advisers relying on rule 203A-2(e) to provide advice to all of their clients exclusively through an operational interactive website

3. Conflicts of Interest Related to Predictive Data Analytics and Similar Technologies

The SEC proposes new rules under the Adviser’s Act to regulate RIAs’ use of technologies that optimize for, predict, guide, forecast or direct investment-related behaviors or outcomes. Specifically, the new rules aim to minimize the risk that RIAs could prioritize their own interest over the interests of their clients when designing or using such technology. The new rules would require RIAs:

  • To evaluate their use of such technologies and identify and eliminate, or neutralize the effect of, any potential conflicts of interest
  • To adopt written policies and procedures to prevent violations of the rule and maintain books and records relating to their compliance with the new rules

4. Cybersecurity Risk Management and Outsourcing to Third Parties

The SEC has yet to issue a final rule on the 2022 proposed new rule 206(4)-9 to the Adviser’s Act which would require RIAs to adequately address cybersecurity risks and incidents. Similarly, the SEC still has to issue the final language for new rule 206(4)-11 that would establish oversight obligations for RIAs that outsource certain functions to third parties. A summary of the proposed rules can be found here: 2023 Regulatory Update for Investment Advisers: Miller Canfield

5 Trends to Watch: 2024 Emerging Technology

  1. Increased Adoption of Generative AI and Push to Minimize Algorithmic Biases – Generative AI took center stage in 2023 and popularity of this technology will continue to grow. The importance behind the art of crafting nuanced and effective prompts will heighten, and there will be greater adoption across a wider variety of industries. There should be advancements in algorithms, increasing accessibility through more user-friendly platforms. These can lead to increased focus on minimizing algorithmic biases and the establishment of guardrails governing AI policies. Of course, a keen awareness of the ethical considerations and policy frameworks will help guide generative AI’s responsible use.
  2. Convergence of AR/VR and AI May Result in “AR/VR on steroids” The fusion of Augmented Reality (AR) and Virtual Reality (VR) technologies with AI unlocks a new era of customization and promises enhanced immersive experiences, blurring the lines between the digital and physical worlds. We expect to see further refining and personalizing of AR/VR to redefine gaming, education, and healthcare, along with various industrial applications.
  3. EV/Battery Companies Charge into Greener Future. With new technologies and chemistries, advancements in battery efficiency, energy density, and sustainability can move the adoption of electric vehicles (EVs) to new heights. Decreasing prices for battery metals canbatter help make EVs more competitive with traditional vehicles. AI may providenew opportunities in optimizing EV performance and help solve challenges in battery development, reliability, and safety.
  4. “Rosie the Robot” is Closer than You Think. With advancements in machine learning algorithms, sensor technologies, and integration of AI, the intelligence and adaptability of robotics should continue to grow. Large language models (LLMs) will likely encourage effective human-robot collaboration, and even non-technical users will find it easy to employ robotics to accomplish a task. Robotics is developing into a field where machines can learn, make decisions, and work in unison with people. It is no longer limited to monotonous activities and repetitive tasks.
  5. Unified Defense in Battle Against Cyber-Attacks. Digital threats are expected to only increase in 2024, including more sophisticated AI-powered attacks. As the international battle against hackers wages on, threat detection, response, and mitigation will play a crucial role in staying ahead of rapidly evolving cyber-attacks. As risks to national security and economic growth, there should be increased collaboration between industries and governments to establish standardized cybersecurity frameworks to protect data and privacy.