Washington Shake-Up: Vice President Harris to Lead Democratic Nomination for 2024 Presidency

Following President Biden’s withdrawal from the 2024 presidential race on Sunday, the nation’s capital has experienced another political shock, leading to swift mobilization within the Democratic Party. President Biden quickly endorsed Vice President (VP) Kamala Harris as the Democratic nominee, triggering a rapid wave of support from Congressional leaders, governors, stakeholders, and party donors including former Speaker Nancy Pelosi (D-CA), Senate Majority Leader Chuck Schumer (D-NY), House Democratic Leader Hakeem Jeffries (D-NY), all 24 Democratic governors, EMILYs List, and the United Auto Workers.

VP Harris has secured enough backing from Democratic delegates to clinch her party’s nomination to challenge former president Donald Trump in November. With the election a little over 100 days away, we have highlighted VP Harris’ stance on key issues during her tenure in Congress and her 2020 Presidential bid.

Technology

VP Harris is very familiar with the tech industry due to her roots in Silicon Valley as San Francisco’s district attorney, and her subsequent roles as Attorney General and US Senator from California. Although she hasn’t called for the breakup of big tech like some of her former colleagues in the Senate, she has criticized tech CEOs for the data privacy practices and targeted advertising tactics that their companies deploy, and voiced support for general regulation of big tech firms. In the White House, she serves as President Biden’s lead on AI initiatives and has actively promoted policies aimed at mitigating AI risks such as algorithmic bias, disinformation, and privacy concerns, while maximizing its benefits for Americans.

Climate Change

VP Harris has a long history of challenging the oil industry for its role in pollution and is likely to take it a step further than President Biden in tackling climate change. In the 2020 Presidential race, Harris proposed a $10 trillion climate plan aimed at achieving a carbon-neutral US economy by 2045, featuring initiatives such as a climate pollution fee and the elimination of fossil fuel subsidies.

In the Senate, Harris authored legislation that would have authorized grants to fund projects that address the specific climate-related challenges faced by vulnerable communities and invest in critical upgrades to the nation’s water infrastructure.

As California’s attorney general, VP Harris brought lawsuits against major oil companies, including British Petroleum (BP) for failing to stop underground storage tanks from leaking gasoline at 800 sites across the state, and also filed an investigation into ExxonMobil over its climate change disclosures.

Health Care

Maternal health was at the forefront of Harris’ health care priorities during her tenure in the Senate and has continued in her current role as Vice President. She sponsored landmark legislation such as the Black Maternal Health Momnibus Act, aimed at tackling the crisis facing Black maternal health care. This legislation enhances data collection, expands access to prenatal, postpartum, and doula care in underserved communities, promotes implicit bias training for health care professionals, and funds research and innovation to improve health outcomes and reduce disparities for Black women. Although the bill was not enacted, it remained a priority in both chambers of Congress after Harris’ departure from the Senate. It is also the centerpiece bill of the Congressional Black Maternal Health Caucus. Harris also championed legislation aimed at addressing the impact of uterine fibroids on women’s health through initiatives such as research funding, patient support tactics, and health care provider training. Additionally, she supported legislation to establish a loan repayment program for mental health professionals working in areas with critical workforce shortages.

In her 2020 presidential campaign, Harris introduced a health care plan that proposed a gradual transition toward Medicare-for-All over a decade. Her plan allowed individuals and employers to initially buy into Medicare while maintaining strict regulations for private insurance options. She also consistently opposed efforts to restrict access to reproductive health care services.

Tax

With numerous tax provisions under former President Trump’s Tax Cuts and Jobs Act set to expire in 2025, all eyes are on VP Harris’ anticipated tax policy proposals. During her tenure in Congress, she championed a significant tax reform bill that would have introduced the LIFT credit—a refundable tax credit of $3,000 for single filers and $6,000 for married couples—benefiting a large portion of middle- and working-class Americans. Unlike the Earned Income Tax Credit (EITC), this credit’s amount would not depend on the number of children reported on a taxpayer’s return but would phase out as income increased. Harris emphasized that this credit aimed to boost families’ after-tax income to help them cope with rising living costs.

Additionally, she sponsored legislation in Congress aimed at protecting workers from harassment and discrimination, funding earthquake mitigation efforts, and providing housing assistance to low-income families. During her 2020 presidential campaign, Harris advocated strongly for repealing Trump’s tax law. She proposed implementing a financial transaction tax to expand Medicare coverage and advocated for taxing capital gains as part of her broader economic platform.

A Look Ahead

With midterm elections looming in the House and 33 Senate seats up for election, the impact of VP Harris’ nomination on Congressional races will be watched closely. As the first woman of color and the highest-ranking woman in US history to hold the office of Vice President, Harris’ nomination marks a pivotal moment in American politics. It may influence voter behavior, candidate strategies across the aisle, and the broader political landscape leading up to the November elections.

The Democratic National Convention (DNC) is scheduled to be held in Chicago, Illinois, from August 19 to August 22. However, due to upcoming state ballot deadlines which precede the convention date, a virtual roll call where delegates formally select Kamala Harris as the nominee will conclude by August 7. Harris is expected to choose her running mate in the coming days, as her campaign team has sent vetting materials to Arizona Sen. Mark Kelly, Michigan Gov. Gretchen Whitmer, Minnesota Gov. Tim Walz, North Carolina Gov. Roy Cooper, and Pennsylvania Gov. Josh Shapiro.

NIST Releases Risk ‘Profile’ for Generative AI

A year ago, we highlighted the National Institute of Standards and Technology’s (“NIST”) release of a framework designed to address AI risks (the “AI RMF”). We noted how it is abstract, like its central subject, and is expected to evolve and change substantially over time, and how NIST frameworks have a relatively short but significant history that shapes industry standards.

As support for the AI RMF, last month NIST released in draft form the Generative Artificial Intelligence Profile (the “Profile”).The Profile identifies twelve risks posed by Generative AI (“GAI”) including several that are novel or expected to be exacerbated by GAI. Some of the risks are exotic and new, such as confabulation, toxicity, and homogenization.

The Profile also identifies risks that are familiar, such as those for data privacy and cybersecurity. For the latter, the Profile details two types of cybersecurity risks: (1) those with the potential to discover or enable the lowering of barriers for offensive capabilities, and (2) those that can expand the overall attack surface by exploiting vulnerabilities as novel attacks.

For offensive capabilities and novel attack risks, the Profile includes these examples:

  • Large language models (a subset of GAI) that discover vulnerabilities in data and write code to exploit them.
  • GAI-powered co-pilots that proactively inform threat actors on how to evade detection.
  • Prompt-injections that steal data and run code remotely on a machine.
  • Compromised datasets that have been ‘poisoned’ to undermine the integrity of outputs.

In the past, the Federal Trade Commission (“FTC”) has referred to NIST when investigating companies’ data breaches. In settlement agreements, the FTC has required organizations to implement security measures through the NIST Cybersecurity Framework. It is reasonable to assume then, that NIST guidance on GAI will also be recommended or eventually required.

But it’s not all bad news – despite the risks when in the wrong hands, GAI will also improve cybersecurity defenses. As recently noted by Microsoft’s recent report on the GDPR & GAI, GAI can already: (1) support cybersecurity teams and protect organizations from threats, (2) train models to review applications and code for weaknesses, and (3) review and deploy new code more quickly by automating vulnerability detection.

Before ‘using AI to fight AI’ becomes legally required, just as multi-factor authentication, encryption, and training have become legally required for cybersecurity, the Profile should be considered to mitigate GAI risks. From pages 11-52, the Profile examines four hundred ways to use the Profile for GAI risks. Grouping them together, some of the recommendations include:

  • Refine existing incident response plans and risk assessments if acquiring, embedding, incorporating, or using open-source or proprietary GAI systems.
  • Implement regular adversary testing of the GAI, along with regular tabletop exercises with stakeholders and the incident response team to better inform improvements.
  • Carefully review and revise contracts and service level agreements to identify who is liable for a breach and responsible for handling an incident in case one is identified.
  • Document everything throughout the GAI lifecycle, including changes to any third parties’ GAI systems, and where audited data is stored.

“Cybersecurity is the mother of all problems. If you don’t solve it, all the other technology stuff just doesn’t happen” said Charlie Bell, Microsoft’s Chief of Security, in 2022. To that end, the AM RMF and now the Profile provide useful and early guidance on how to manage GAI Risks. The Profile is open for public comment until June 2, 2024.

Congress Introduces Promising Bipartisan Privacy Bill

U.S. Senator Maria Cantwell (D-WA) and U.S. Representative Cathy McMorris Rodgers (R-WA) have made a breakthrough by agreeing on a bipartisan data privacy legislation proposal. The legislation aims to address concerns related to consumer data collection by technology companies and empower individuals to have control over their personal information.

The proposed legislation aims to restrict the amount of data technology companies can gather from consumers. This step is particularly important given the large amount of data these technology companies possess. It would grant Americans the authority to prevent the sale of their personal information or request its deletion. This step gives individuals more control over their personal data. The Federal Trade Commission (FTC) and state attorneys general would be given significant authority to monitor and regulate matters related to consumer privacy. This measure will ensure that the government has a say in matters associated with consumer privacy. The bill includes robust enforcement measures, such as granting individuals the right to take legal action. This step is necessary to ensure that any violations of the legislation are dealt with effectively. While targeted advertising would not be prohibited, the proposed legislation would allow consumers to opt out of it. This step gives consumers more control over the ads they receive. The privacy violations listed in the legislation would also be applicable to telecommunications companies. This measure ensures that no company is exempt from consumer privacy laws. Annual assessments of algorithms would be conducted to ensure that they do not harm individuals, particularly young people. This is an important, step given the rise of technology and its impact on consumers, especially among younger generations.

The bipartisan proposal for data privacy legislation is a positive step forward in terms of consumer privacy in America. While there is still work to be done, it is essential that the government takes proactive steps to ensure that individuals have greater control over their personal data. This is a positive development for the tech industry and consumers alike.

However, as we reported on before, this is not the first time Congress has made strides towards comprehensive data privacy legislation,). Hopefully, this new bipartisan bill will enjoy more success than past efforts and bring the United States closer in line with international data privacy standards.

Supply Chains are the Next Subject of Cyberattacks

The cyberthreat landscape is evolving as threat actors develop new tactics to keep up with increasingly sophisticated corporate IT environments. In particular, threat actors are increasingly exploiting supply chain vulnerabilities to reach downstream targets.

The effects of supply chain cyberattacks are far-reaching, and can affect downstream organizations. The effects can also last long after the attack was first deployed. According to an Identity Theft Resource Center report, “more than 10 million people were impacted by supply chain attacks targeting 1,743 entities that had access to multiple organizations’ data” in 2022. Based upon an IBM analysis, the cost of a data breach averaged $4.45 million in 2023.

What is a supply chain cyberattack?

Supply chain cyberattacks are a type of cyberattack in which a threat actor targets a business offering third-party services to other companies. The threat actor will then leverage its access to the target to reach and cause damage to the business’s customers. Supply chain cyberattacks may be perpetrated in different ways.

  • Software-Enabled Attack: This occurs when a threat actor uses an existing software vulnerability to compromise the systems and data of organizations running the software containing the vulnerability. For example, Apache Log4j is an open source code used by developers in software to add a function for maintaining records of system activity. In November 2021, there were public reports of a Log4j remote execution code vulnerability that allowed threat actors to infiltrate target software running on outdated Log4j code versions. As a result, threat actors gained access to the systems, networks, and data of many organizations in the public and private sectors that used software containing the vulnerable Log4j version. Although security upgrades (i.e., patches) have since been issued to address the Log4j vulnerability, many software and apps are still running with outdated (i.e., unpatched) versions of Log4j.
  • Software Supply Chain Attack: This is the most common type of supply chain cyberattack, and occurs when a threat actor infiltrates and compromises software with malicious code either before the software is provided to consumers or by deploying malicious software updates masquerading as legitimate patches. All users of the compromised software are affected by this type of attack. For example, Blackbaud, Inc., a software company providing cloud hosting services to for-profit and non-profit entities across multiple industries, was ground zero for a software supply chain cyberattack after a threat actor deployed ransomware in its systems that had downstream effects on Blackbaud’s customers, including 45,000 companies. Similarly in May 2023, Progress Software’s MOVEit file-transfer tool was targeted with a ransomware attack, which allowed threat actors to steal data from customers that used the MOVEit app, including government agencies and businesses worldwide.

Legal and Regulatory Risks

Cyberattacks can often expose personal data to unauthorized access and acquisition by a threat actor. When this occurs, companies’ notification obligations under the data breach laws of jurisdictions in which affected individuals reside are triggered. In general, data breach laws require affected companies to submit notice of the incident to affected individuals and, depending on the facts of the incident and the number of such individuals, also to regulators, the media, and consumer reporting agencies. Companies may also have an obligation to notify their customers, vendors, and other business partners based on their contracts with these parties. These reporting requirements increase the likelihood of follow-up inquiries, and in some cases, investigations by regulators. Reporting a data breach also increases a company’s risk of being targeted with private lawsuits, including class actions and lawsuits initiated by business customers, in which plaintiffs may seek different types of relief including injunctive relief, monetary damages, and civil penalties.

The legal and regulatory risks in the aftermath of a cyberattack can persist long after a company has addressed the immediate issues that caused the incident initially. For example, in the aftermath of the cyberattack, Blackbaud was investigated by multiple government authorities and targeted with private lawsuits. While the private suits remain ongoing, Blackbaud settled with state regulators ($49,500,000), the U.S. Federal Trade Commission, and the U.S. Securities Exchange Commission (SEC) ($3,000,000) in 2023 and 2024, almost four years after it first experienced the cyberattack. Other companies that experienced high-profile cyberattacks have also been targeted with securities class action lawsuits by shareholders, and in at least one instance, regulators have named a company’s Chief Information Security Officer in an enforcement action, underscoring the professional risks cyberattacks pose to corporate security leaders.

What Steps Can Companies Take to Mitigate Risk?

First, threat actors will continue to refine their tactics and techniques. Thus, all organizations must adapt and stay current with all regulations and legislation surrounding cybersecurity. Cybersecurity and Infrastructure Security Agency (CISA) urges developer education for creating secure code and verifying third-party components.

Second, stay proactive. Organizations must re-examine not only their own security practices but also those of their vendors and third-party suppliers. If third and fourth parties have access to an organization’s data, it is imperative to ensure that those parties have good data protection practices.

Third, companies should adopt guidelines for suppliers around data and cybersecurity at the outset of a relationship since it may be difficult to get suppliers to adhere to policies after the contract has been signed. For example, some entities have detailed processes requiring suppliers to inform of attacks and conduct impact assessments after the fact. In addition, some entities expect suppliers to follow specific sequences of steps after a cyberattack. At the same time, some entities may also apply the same threat intelligence that it uses for its own defense to its critical suppliers, and may require suppliers to implement proactive security controls, such as incident response plans, ahead of an attack.

Finally, all companies should strive to minimize threats to their software supply by establishing strong security strategies at the ground level.

CNN, BREAKING NEWS: CNN Targeted In Massive CIPA Case Involving A NEW Theory Under Section 638.51!

CNN is now facing a massive CIPA class action for violating CIPA Section 638.51 by allegedly installing “Trackers” on its website. In  Lesh v. Cable News Network, Inc, filed in the Superior Court of the State of California by Bursor & Fisher, plaintiff accuses the multinational news network of installing 3 tracking software to invade users’ privacy and track their browsing habits in violation of Section 638.51.

More on that in a bit…

As CIPAworld readers know, we predicted the 2023 privacy litigation trends for you.

We warned you of the risky CIPA Chat Box cases.

We broke the news on the evolution of CIPA Web Session recording cases.

We notified you of major CIPA class action lawsuits against some of the world’s largest brands facing millions of dollars in potential exposure.

Now – we are reporting on a lesser-known facet of CIPA – but one that might be even more dangerous for companies using new Internet technologies.

This new focus for plaintiff’s attorneys appears to rely on the theory that website analytic tools are “pen register” or “trap and trace” devices under CIPA §638.51. These allegations also come with a massive $5,000 per violation penalty.

First, let’s delve into the background.

The Evolution of California Invasion of Privacy Act:

We know the California Invasion of Privacy Act is this weird little statute that was enacted decades ago and was designed to prevent ease dropping and wiretapping because — of course back then law enforcements were listening into folks phone calls to find the communist.

638.51 in particular was originally enacted back in the 80s and traditionally, “pen-traps” were employed by law enforcement to record outgoing and/or incoming telephone numbers from a telephone line.

The last two years, plaintiffs have been using these decades-old statues against companies claiming that the use of internet technologies such as website chat boxes, web session recording tools, java scripts, pixels, cookies and other newfangled technologies constitute “wire tapping” or “eavesdropping” on website users.

And California courts who love to take old statutes and apply it to these new technologies – have basically said internet communications are protected from being ease dropped on.

Now California courts will have to address whether these new fangled technologies are also “pen-trap” “devices or processes” under 638.51. These new 638.51 cases involve technologies such as cookies, web beacons, java scripts, and pixels to obtain information about users and their devices as they browse websites and or mobile applications. The users are then analyzed by the website operator or a third party vendor to gather relevant information users’ online activities.

Section 638.51:

Section 638.51 prohibits the usage or installation of “pen registers” – a device or process that records or decodes dialing, routing, addressing, or signaling information (commonly known as DRAS) and “trap and trace” (pen-traps) – devices or processes traditionally used by law enforcement that allow one to record all numbers dialed on outgoing calls or numbers identifying incoming calls — without first obtaining a court order.

Unlike CIPA’s 631, which prohibits wiretapping — the real-time interception of the content of the communications without consent, CIPA 638.51 prohibits the collection of DRAS.

638.51 has limited exceptions including where a service provider’s customer consents to the device’s use or to protect the rights of a service provider’s property.

Breaking Down the Terminology:

The term “pen register” means a device or process that records or decodes DRAs “transmitted by an instrument or facility from which a wire or electronic communication is transmitted, but not the contents of a communication.” §638.50(b).

The term “trap and trace” focuses on incoming, rather than outgoing numbers, and means a “device or process that captures the incoming electronic or other impulses that identify the originating number or other dialing, routing, addressing, or signaling information reasonably likely to identify the source of a wire or electronic communication, but not the contents of a communication.” §638.50(c).

Lesh v. Cable News Network, Inc “CNN” and its precedent:

This new wave of CIPA litigation stems from a single recent decision, Greenley v. Kochava, where the CA court –allowed a “pen register” claim to move pass the motion to dismiss stage. In Kochava, plaintiff challenged the use of these new internet technologies and asserting that the defendant data broker’s software was able to collect a variety of data such as geolocation, search terms, purchase decisions, and spending habits. Applying the plain meaning to the word “process” the Kochava court concluded that “software that identifies consumers, gathers data, and correlates that data through unique ‘fingerprinting’ is a process that falls within CIPA’s pen register definition.”

The Kochava court noted that no other court had interpreted Section 638.51, and while pen registers were traditionally physical machines used by law enforcement to record outbound call from a telephone, “[t]oday pen registers take the form of software.” Accordingly the court held that the plaintiff adequately alleged that the software could collect DRAs and was a “pen register.”

Kochava paved the wave for 638.51 litigation – with hundreds of complaints filed since. The majority of these cases are being filed in Los Angeles Country Superior Court by the Pacific Trial Attorneys in Newport Beach.

In  Lesh v. Cable News Network, Inc, plaintiff accuses the multinational news network of installing 3 tracking software to invade users’ privacy and track their browsing habits in violation of CIPA Section 638.51(a) which proscribes any “person” from “install[ing] or us[ing] a pen register or a trap and trace device without first obtaining a court order.”

Plaintiff alleges CNN uses three “Trackers” (PubMatic, Magnite, and Aniview), on its website which constitute “pen registers.” The complaint alleges to make CNN’s website load on a user’s browser, the browser sends “HTTP request” or “GET” request to CNN’s servers where the data is stored. In response to the request, CNN’s service sends an “HTTP response” back to the browser with a set of instructions how to properly display the website – i.e. what images to load, what text should appear, or what music should play.

These instructions cause the Trackers to be installed on a user’s browsers which then cause the browser to send identifying information – including users’ IP addresses to the Trackers to analyze data, create and analyze the performance of marketing campaigns, and target specific users for advertisements. Accordingly the Trackers are “pen registers” – so the complaint alleges.

On this basis, the Plaintiff is asking the court for an order to certify the class, and statutory damages in addition to attorney fees. The alleged class is as follows:

“Pursuant to Cal. Code Civ. Proc. § 382, Plaintiff seeks to represent a class defined as all California residents who accessed the Website in California and had their IP address collected by the Trackers (the “Class”).

The following people are excluded from the Class: (i) any Judge presiding over this action and members of her or her family; (ii) Defendant, Defendant’s subsidiaries, parents, successors, predecessors, and any entity in which Defendant or their parents have a controlling interest (including current and former employees, officers, or directors); (iii) persons who properly execute and file a timely request for exclusion from the Class; (iv) persons whose claims in this matter have been finally adjudicated on the merits or otherwise released; (v) Plaintiff’s counsel and Defendant’s counsel; and (vi) the legal representatives, successors, and assigns of any such excluded persons.”

Under this expansive definition of “pen-register,” plaintiffs are alleging that almost any device that can track a user’s web session activity falls within the definition of a pen-register.

We’ll keep an eye out on this one – but until more helpful case law develops, the Kochava decision will keep open the floodgate of these new CIPA suits. Companies should keep in mind that unlike the other CIPA cases under Section 631 and 632.7, 638.51 allows for a cause of action even where no “contents” are being “recorded” – making 638.51 easier to allege.

Additionally, companies should be mindful of CIPA’s consent exceptions and ensure they are obtaining consent to any technologies that may trigger CIPA.

2023 Cybersecurity Year In Review

2023 was another busy year in the realm of data event and cybersecurity litigations, with several noteworthy developments in the realm of disputes and regulator activity. Privacy World has been tracking these developments throughout the year. Read on for key trends and what to expect going into the 2024.

Growth in Data Events Leads to Accompanying Increase in Claims

The number of reportable data events in the U.S. in 2023 reached an all-time high, surpassing the prior record set in 2021. At bottom, threat actors continued to target entities across industries, with litigation frequently following disclosure of data events. On the dispute front, 2023 saw several notable cybersecurity consumer class actions concerning the alleged unauthorized disclosure of sensitive personal information, including healthcare, genetic, and banking information. Large putative class actions in these areas included, among others, lawsuits against the hospital system HCA Healthcare (estimated 11 million individuals involved in the underlying data event), DNA testing provider 23andMe (estimated 6.9 million individuals involved in the underlying data event), and mortgage business Mr. Cooper (estimated 14.6 million individuals involved in the underlying data event).

JPML Creates Several Notable Cybersecurity MDLs

In 2023 the Judicial Panel on Multidistrict Litigation (“JPML”) transferred and centralized several data event and cybersecurity putative class actions. This was a departure from prior years in which the JPML often declined requests to consolidate and coordinate pretrial proceedings in the wake of a data event. By way of example, following the largest data breach of 2023—the MOVEit hack affecting at least 55 million people—the JPML ordered that dozens of class actions regarding MOVEit software be consolidated for pretrial proceedings in the District of Massachusetts. Other data event litigations similarly received the MDL treatment in 2023, including litigations against SamsungOverby-Seawell Company, and T‑Mobile.

Significant Class Certification Rulings

Speaking of the development of precedent, 2023 had two notable decisions addressing class certification. While they arose in the cybersecurity context, these cases have broader applicability in other putative class actions. Following a remand from the Fourth Circuit, a judge in Maryland (in a MDL) re-ordered the certification of eight classes of consumers affected by a data breach suffered by Mariott. See In Re: Marriott International, Inc., Customer Data Security Breach Litigation,No. 8:19-md-02879, 2023 WL 8247865 (D. Md. Nov. 29, 2023). As explained here on PW, the court held that a class action waiver provision in consumers’ contracts did not require decertification because (1) Marriott waived the provision by requesting consolidation of cases in an MDL outside of the contract’s chosen venue, (2) the class action waiver was unconscionable and unenforceable, and (3) contractual provisions cannot override a court’s authority to certify a class under Rule 23.

The second notable decision came out of the Eleventh Circuit, where the Court of Appeals vacated a district court’s certification of a nationwide class of restaurant customers in a data event litigation. See Green-Cooper v. Brinker Int’l, Inc., No. 21-13146, 73 F. 4th 883 (11th Cir. July 11, 2023). In a 2-1 decision, a majority of the Court held that only one of the three named plaintiffs had standing under Article III of the U.S. Constitution, and remanded to the district court to reassess whether the putative class satisfied procedural requirements for a class. The two plaintiffs without standing dined at one of the defendant’s restaurants either before or after the time period that the restaurant was impacted by the data event, which the Fourth Circuit held to mean that any injury the plaintiffs suffered could not be traced back to defendant.

Standing Challenges Persist for Plaintiffs in Data Event and Cybersecurity Litigations

Since the Supreme Court’s TransUnion decision in 2021, plaintiffs in data breach cases have continued to face challenges getting into or staying in federal court, and opinions like Brinker reiterate that Article III standing issues are relevant at every stage in litigation, including class certification. See, also, e.g.Holmes v. Elephant Ins. Co., No. 3:22-cv-00487, 2023 WL 4183380 (E.D. Va. June 26, 2023) (dismissing class action complaint alleging injuries from data breach for lack of standing). Looking ahead to 2024, it is possible that more data litigation plays out in state court rather than federal court—particularly in the Eleventh Circuit but also elsewhere—as a result.

Cases Continue to Reach Efficient Pre-Trial Resolution

Finally in the dispute realm, several large cybersecurity litigations reached pre-trial resolutions in 2023. The second-largest data event settlement ever—T-Mobile’s $350 million settlement fund with $150 million in data spend—received final approval from the trial court. And software company Blackbaud settled claims relating to a 2020 ransomware incident with 49 states Attorneys General and the District of Columbia to the tune of $49.5 million. Before the settlement, Blackbaud was hit earlier in the year with a $3 million fine from the Securities and Exchange Commission. The twin payouts by Blackbaud are cautionary reminders that litigation and regulatory enforcement on cyber incidents often go-hand-in-hand, with multifaceted risks in the wake of a data event.

FTC and Cybersecurity

Regulators were active on the cybersecurity front in 2023, as well. Following shortly after a policy statement by the Health and Human Resources Office of Civil Rights policy Bulletin on use of trackers in compliance with HIPAA, the FTC announced settlement of enforcement actions against GoodRxPremom, and BetterHelp for sharing health data via tracking technologies with third parties resulting in a breach of Personal Health Records under the Health Breach Notification Rule. The FTC also settled enforcement actions against Chegg and Drizly for inadequate cybersecurity practices which led to data breaches. In both cases, the FTC faulted the companies for failure to implement appropriate cybersecurity policies and procedures, access controls, and securely store access credentials for company databases (among other issues).

Notably, in Drizly matter, the FTC continued ta trend of holding corporate executives responsible individually for his failure to implement “or properly delegate responsibility to implement, reasonable information security practices.” Under the consent decree, Drizly’s CEO must implement a security program (either at Drizly or any company to which he might move that processes personal information of 25,000 or more individuals and where he is a majority owner, CEO, or other senior officer with information security responsibilities).

SEC’s Focus on Cyber Continues

The SEC was also active in cybersecurity. In addition to the regulatory enforcement action against Blackbaud mentioned above, the SEC initiated an enforcement action against a software company for a cybersecurity incident disclosed in 2020. In its complaint, the SEC alleged that the company “defrauded…investors and customers through misstatements, omissions, and schemes that concealed both the Company’s poor cybersecurity practices and its heightened—and increasing—cybersecurity risks” through its public statements regarding its cybersecurity practices and risks. Like the Drizly matter, the SEC charged a senior company executive individually—in this case, the company’s CISO—for concealing the cybersecurity deficiencies from investors. The matter is currently pending. These cases reinforce that regulators will continue to hold senior executives responsible for oversight and implementation of appropriate cybersecurity programs.

Notable Federal Regulatory Developments

Regulators were also active in issuing new regulations on the cybersecurity front in 2023. In addition to its cybersecurity regulatory enforcement actions, the FTC amended the GLBA Safeguards Rule. Under the amended Rule, non-bank financial institutions must provide notice to notify the FTC as soon as possible, and no later than 30 days after discovery, of any security breach involving the unencrypted information of 500 or more consumers.

Additionally, in March 2024, the SEC proposed revisions to Regulation S-P, Rule 10 and form SCIR, and Regulation SCI aimed at imposing new incident reporting and cybersecurity program requirements for various covered entities. You can read PW’s coverage of the proposed amendments here. In July, the SEC also finalized its long-awaited Cybersecurity Risk Management and Incident Disclosure Regulations. Under the final Regulations, public companies are obligated to report regarding material cybersecurity risks, cybersecurity risk management and governance, and board of directors’ oversight of cybersecurity risks in their annual 10-K reports. Additionally, covered entities are required to report material cybersecurity incidents within four business days of determining materiality. PW’s analysis of the final Regulations are here.

New State Cybersecurity Regulations

The New York Department of Financial Services also finalized amendments to its landmark Cybersecurity Regulations in 2023. In the amended Regulations, NYDFS creates a new category of companies subject to heightened cybersecurity standards: Class A Companies. These heightened cybersecurity standards would apply only to the largest financial institutions (i.e., entities with at least $20 million in gross annual revenues over the last 2 fiscal years, and either (1) more than 2,000 employees; or (2) over $1 billion in gross annual revenue over the last 2 fiscal years). The enhanced requirements include independent cybersecurity audits, enhanced privileged access management controls, and endpoint detection and response with centralized logging (unless otherwise approved in writing by the CISO). New cybersecurity requirements for other covered entities include annual review and approval of company cybersecurity policy by a senior officer or the senior governing body (i.e., board of directors), CISO reporting to the senior governing body, senior governing body oversight, and access controls and privilege management, among others. PW’s analysis of the amended NYDFS Cybersecurity Regulations is here.

On the state front, California Privacy Protection Agency issued draft cybersecurity assessment regulations as required by the CCPA. Under the draft regulations, if a business’s “processing of consumers’ personal information presents significant risk to consumers’ security”, that business must conduct a cybersecurity audit. If adopted as proposed, companies that process a (yet undetermined) threshold number of items of personal information, sensitive personal information, or information regarding consumers under 16, as well as companies that exceed a gross revenue threshold will be considered “high risk.” The draft regulations outline detailed criteria for evaluating businesses’ cybersecurity program and documenting the audit. The draft regulations anticipate that the audit results will be reported to the business’s board of directors or governing body and that a representative of that body will certify that the signatory has reviewed and understands the findings of the audit. If adopted, businesses will be obligated to certify compliance with the audit regulations to the CPPA. You can read PW’s analysis of the implications of the proposed regulations here.

Consistent with 2023 enforcement priorities, new regulations issued this year make clear that state and federal regulators are increasingly holding senior executives and boards of directors responsible for oversight of cybersecurity programs. With regulations explicitly requiring oversight of cybersecurity risk management, the trend toward holding individual executives responsible for egregious cybersecurity lapses is likely to continue into 2024 and beyond.

Looking Forward

2023 demonstrated “the more things change, the more they stay the same.” Cybersecurity litigation trends were a continuation the prior two years. Something to keep an eye on in 2024 remains the potential for threatened individual officer and director liability in the wake of a widespread cyberattack. While the majority of cybersecurity litigations filed continue to be brought on behalf of plaintiffs whose personal information was purportedly disclosed, shareholders and regulators will increasingly look to hold executives responsible for failing to adopt reasonable security measures to prevent cyberattacks in the first instance.

Needless to say, 2024 should be another interesting year on the cybersecurity front. This is particularly so for data event litigations and for data developments more broadly.

For more news on Data Event and Cybersecurity Litigations in 2023, visit the NLR Communications, Media & Internet section.

Exploring the Future of Information Governance: Key Predictions for 2024

Information governance has evolved rapidly, with technology driving the pace of change. Looking ahead to 2024, we anticipate technology playing an even larger role in data management and protection. In this blog post, we’ll delve into the key predictions for information governance in 2024 and how they’ll impact businesses of all sizes.

  1. Embracing AI and Automation: Artificial intelligence and automation are revolutionizing industries, bringing about significant changes in information governance practices. Over the next few years, it is anticipated that an increasing number of companies will harness the power of AI and automation to drive efficient data analysis, classification, and management. This transformative approach will not only enhance risk identification and compliance but also streamline workflows and alleviate administrative burdens, leading to improved overall operational efficiency and effectiveness. As organizations adapt and embrace these technological advancements, they will be better equipped to navigate the evolving landscape of data governance and stay ahead in an increasingly competitive business environment.
  2. Prioritizing Data Privacy and Security: In recent years, data breaches and cyber-attacks have significantly increased concerns regarding the usage and protection of personal data. As we look ahead to 2024, the importance of data privacy and security will be paramount. This heightened emphasis is driven by regulatory measures such as the California Consumer Privacy Act (CCPA) and the European Union’s General Data Protection Regulation (GDPR). These regulations necessitate that businesses take proactive measures to protect sensitive data and provide transparency in their data practices. By doing so, businesses can instill trust in their customers and ensure the responsible handling of personal information.
  3. Fostering Collaboration Across Departments: In today’s rapidly evolving digital landscape, information governance has become a collective responsibility. Looking ahead to 2024, we can anticipate a significant shift towards closer collaboration between the legal, compliance, risk management, and IT departments. This collaborative effort aims to ensure comprehensive data management and robust protection practices across the entire organization. By adopting a holistic approach and providing cross-functional training, companies can empower their workforce to navigate the complexities of information governance with confidence, enabling them to make informed decisions and mitigate potential risks effectively. Embracing this collaborative mindset will be crucial for organizations to adapt and thrive in an increasingly data-driven world.
  4. Exploring Blockchain Technology: Blockchain technology, with its decentralized and immutable nature, has the tremendous potential to revolutionize information governance across industries. By 2024, as businesses continue to recognize the benefits, we can expect a significant increase in the adoption of blockchain for secure and transparent transaction ledgers. This transformative technology not only enhances data integrity but also mitigates the risks of tampering, ensuring trust and accountability in the digital age. With its ability to provide a robust and reliable framework for data management, blockchain is poised to reshape the way we handle and secure information, paving the way for a more efficient and trustworthy future.
  5. Prioritizing Data Ethics: As data-driven decision-making becomes increasingly crucial in the business landscape, the importance of ethical data usage cannot be overstated. In the year 2024, businesses will place even greater emphasis on data ethics, recognizing the need to establish clear guidelines and protocols to navigate potential ethical dilemmas that may arise. To ensure responsible and ethical data practices, organizations will invest in enhancing data literacy among their workforce, prioritizing education and training initiatives. Additionally, there will be a growing focus on transparency in data collection and usage, with businesses striving to build trust and maintain the privacy of individuals while harnessing the power of data for informed decision-making.

The future of information governance will be shaped by technology, regulations, and ethical considerations. Businesses that adapt to these changes will thrive in a data-driven world. By investing in AI and automation, prioritizing data privacy and security, fostering collaboration, exploring blockchain technology, and upholding data ethics, companies can prepare for the challenges and opportunities of 2024 and beyond.

Jim Merrifield, Robinson+Cole’s Director of Information Governance & Business Intake, contributed to this report.

Cybersecurity Awareness Dos and Donts Refresher

As we have adjusted to a combination of hybrid, in-person and remote work conditions, bad actors continue to exploit the vulnerabilities associated with our work and home environments. Below are a few tips to help employers and employees address the security threats and challenges of our new normal:

  • Monitoring and awareness of cybersecurity threats as well as risk mitigation;
  • Use of secure Wi-Fi networks, strong passwords, secure VPNs, network infrastructure devices and other remote working devices;
  • Use of company-issued or approved laptops and sandboxed virtual systems instead of personal computers and accounts, as well as careful handling of sensitive and confidential materials; and
  • Preparing to handle security incidents while remote.

Be on the lookout for phishing and other hacking attempts.

Be on high alert for cybersecurity attacks, as cybercriminals are always searching for security vulnerabilities to exploit. A malicious hacker could target employees working remotely by creating a fake coronavirus notice, phony request for charitable contributions or even go so far as impersonating someone from the company’s Information Technology (IT) department. Employers should educate employees on the red flags of phishing emails and continuously remind employees to remain vigilant of potential scams, exercise caution when handling emails and report any suspicious communications.

Maintain a secure Wi-Fi connection.

Information transmitted over public and unsecured networks (such as a free café, store or building Wi-Fi) can be viewed or accessed by others. Employers should configure VPN for telework and enable multi-factor authentication for remote access. To increase security at home, employers should advise employees to take additional precautions, such as using secure Wi-Fi settings and changing default Wi-Fi passwords.

Change and create strong passwords.

Passwords that use pet or children names, birthdays or any other information that can be found on social media can be easily guessed by hackers. Employers should require account and device passwords to be sufficiently long and complex and include capital and lower case letters, numbers and special characters. As an additional precaution, employees should consider changing their passwords before transitioning to remote work.

Update and secure devices. 

To reduce system flaws and vulnerabilities, employers should regularly update VPNs, network infrastructure devices and devices being used to for remote work environments, as well as advise employees to promptly accept updates to operating systems, software and applications on personal devices. When feasible, employers should consider implementing additional safeguards, such as keystroke encryption and mobile-device-management (MDM) on employee personal devices.

Use of personal devices and deletion of electronic files.

Home computers may not have deployed critical security updates, may not be password protected and may not have an encrypted hard drive. To the extent possible, employers should urge employees to use company-issued laptops or sandboxed virtual systems. Where this is not possible, employees should use secure personal computers and employers should advise employees to create a separate user account on personal computers designated for work purposes and to empty trash or recycle bins and download folders.

Prohibit use of personal email for work purposes.

To avoid unauthorized access, personal email accounts should not be used for work purposes. Employers should remind employees to avoid forwarding work emails to personal accounts and to promptly delete emails in personal accounts as they may contain sensitive information.

Secure collaboration tools.

Employees and teams working from home need to stay connected and often rely on instant-messaging and web-conferencing tools (e.g., Slack and Zoom). Employers should ensure company-provided collaboration tools, if any, are secure and should restrict employees from downloading any non-company approved tools. If new collaboration tools are required, IT personnel should review the settings of such tools (as they may not be secure or may record conversations by default), and employers should consider training employees on appropriate use of such tools.

Handle physical documents with care.

Remote work arrangements may require employees to take sensitive or confidential materials offsite that they would not otherwise. Employees should be advised to handle these documents with the appropriate levels of care and avoid printing sensitive or confidential materials on public printers. These documents should be securely shredded or returned to the office for proper disposal.

Develop clear guidelines and train employees on cyberhygiene.

To ensure employees are aware of remote work responsibilities and obligations, employers should prepare clear telework guidelines (and incorporate any standards required by applicable regulatory schemes) and post the guidelines on the organization’s intranet and/or circulate the guidelines to employees via email. A list of key company contacts, including Human Resources and IT security personnel, should be distributed to employees in the event of an actual or suspected security incident.

Prepare for remote activation of incident response and crisis management plans.

Employers should review existing incident response, crisis management and business continuity plans, as well as ensure relevant stakeholders are prepared for remote activation of these plans, such as having hard copies of relevant plans and contact information at home.

DO DON’T
 

  • DO create complex passphrases
  • DO change home Wi-Fi passwords
  • DO create a separate Wi-Fi network for guests
  • DO install anti-malware and anti-virus software for internet-enabled devices
  • DO keep software (including anti-virus/anti-malware software), web browsers, and operating systems up-to-date
  • DO delete files from download folders and trash bins
  • DO immediately report lost or stolen devices
  • DO log off accounts and close windows and browsers on shared devices
  • DO review mobile app settings on shared devices
  • DO handle physical documents with sensitive and/or confidential information in a secure manner

 

 

  • Do NOT use public or unsecure Wi-Fi networks without using VPN
  • Do NOT access or send confidential information over unsecured Wi-Fi networks
  • Do NOT leave electronic or paper documents out in the open
  • Do NOT allow family or friends to use company-provided devices
  • Do NOT leave devices logged-in
  • Do NOT select “remember me” on shared devices
  • Do NOT share passwords with family members
  • Do NOT use names or birthdays in passwords
  • Do NOT save work documents locally on shared devices
  • Do NOT store confidential information on portable storage devices, such as USB or hard drives

 

Under the GDPR, Are Companies that Utilize Personal Information to Train Artificial Intelligence (AI) Controllers or Processors?

The EU’s General Data Protection Regulation (GDPR) applies to two types of entities – “controllers” and “processors.”

A “controller” refers to an entity that “determines the purposes and means” of how personal information will be processed.[1] Determining the “means” of processing refers to deciding “how” information will be processed.[2] That does not necessitate, however, that a controller makes every decision with respect to information processing. The European Data Protection Board (EDPB) distinguishes between “essential means” and “non-essential means.[3] “Essential means” refers to those processing decisions that are closely linked to the purpose and the scope of processing and, therefore, are considered “traditionally and inherently reserved to the controller.”[4] “Non-essential means” refers to more practical aspects of implementing a processing activity that may be left to third parties – such as processors.[5]

A “processor” refers to a company (or a person such as an independent contractor) that “processes personal data on behalf of [a] controller.”[6]

Data typically is needed to train and fine-tune modern artificial intelligence models. They use data – including personal information – in order to recognize patterns and predict results.

Whether an organization that utilizes personal information to train an artificial intelligence engine is a controller or a processor depends on the degree to which the organization determines the purpose for which the data will be used and the essential means of processing. The following chart discusses these variables in the context of training AI:

The following chart discusses these variables in the context of training AI:

Function

Activities Indicative of a Controller

Activities Indicative of a Processor

Purpose of processing

Why the AI is being trained.

If an organization makes its own decision to utilize personal information to train an AI, then the organization will likely be considered a “controller.”

If an organization is using personal information provided by a third party to train an AI, and is doing so at the direction of the third party, then the organization may be considered a processor.

Essential means

Data types used in training.

If an organization selects which data fields will be used to train an AI, the organization will likely be considered a “controller.”

If an organization is instructed by a third party to utilize particular data types to train an AI, the organization may be a processor.

Duration personal information is held within the training engine

If an organization determines how long the AI can retain training data, it will likely be considered a “controller.”

If an organization is instructed by a third party to use data to train an AI, and does not control how long the AI may access the training data, the organization may be a processor.

Recipients of the personal information

If an organization determines which third parties may access the training data that is provided to the AI, that organization will likely be considered a “controller.”

If an organization is instructed by a third party to use data to train an AI, but does not control who will be able to access the AI (and the training data to which the AI has access), the organization may be a processor.

Individuals whose information is included

If an organization is selecting whose personal information will be used as part of training an AI, the organization will likely be considered a “controller.”

If an organization is being instructed by a third party to utilize particular individuals’ data to train an AI, the organization may be a processor.

 

[1] GDPR, Article 4(7).

[1] GDPR, Article 4(7).

[2] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 33.

[3] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[4] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[5] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[6] GDPR, Article 4(8).

©2023 Greenberg Traurig, LLP. All rights reserved.

For more Privacy Legal News, click here to visit the National Law Review.

Montana Passes 9th Comprehensive Consumer Privacy Law in the U.S.

On May 19, 2023, Montana’s Governor signed Senate Bill 384, the Consumer Data Privacy Act. Montana joins California, Colorado, Connecticut, Indiana, Iowa, Tennessee, Utah, and Virginia in enacting a comprehensive consumer privacy law. The law is scheduled to take effect on October 1, 2024.

When does the law apply?

The law applies to a person who conducts business in the state of Montana and:

  • Controls or processes the personal data of not less than 50,000 consumers (defined as Montana residents), excluding data controlled or processed solely to complete a payment transaction.
  • Controls and processes the personal data of not less than 25,000 consumers and derive more than 25% of gross revenue from the sale of personal data.

Hereafter these covered persons are referred to as controllers.

The following entities are exempt from coverage under the law:

  • Body, authority, board, bureau, commission, district, or agency of this state or any political subdivision of this state;
  • Nonprofit organization;
  • Institution of higher education;
  • National securities association that is registered under 15 U.S.C. 78o-3 of the federal Securities Exchange Act of 1934;
  • A financial institution or an affiliate of a financial institution governed by Title V of the Gramm- Leach-Bliley Act;
  • Covered entity or business associate as defined in the privacy regulations of the federal Health Insurance Portability and Accountability Act (HIPAA);

Who is protected by the law?

Under the law, a protected consumer is defined as an individual who resides in the state of Montana.

However, the term consumer does not include an individual acting in a commercial or employment context or as an employee, owner, director, officer, or contractor of a company partnership, sole proprietorship, nonprofit, or government agency whose communications or transactions with the controller occur solely within the context of that individual’s role with the company, partnership, sole proprietorship, nonprofit, or government agency.

What data is protected by the law?

The statute protects personal data defined as information that is linked or reasonably linkable to an identified or identifiable individual.

There are several exemptions to protected personal data, including for data protected under HIPAA and other federal statutes.

What are the rights of consumers?

Under the new law, consumers have the right to:

  • Confirm whether a controller is processing the consumer’s personal data
  • Access Personal Data processed by a controller
  • Delete personal data
  • Obtain a copy of personal data previously provided to a controller.
  • Opt-out of the processing of the consumer’s personal data for the purpose of targeted advertising, sales of personal data, and profiling in furtherance of solely automated decisions that produce legal or similarly significant effects.

What obligations do businesses have?

The controller shall comply with requests by a consumer set forth in the statute without undue delay but no later than 45 days after receipt of the request.

If a controller declines to act regarding a consumer’s request, the business shall inform the consumer without undue delay, but no later than 45 days after receipt of the request, of the reason for declining.

The controller shall also conduct and document a data protection assessment for each of their processing activities that present a heightened risk of harm to a consumer.

How is the law enforced?

Under the statute, the state attorney general has exclusive authority to enforce violations of the statute. There is no private right of action under Montana’s statute.

Jackson Lewis P.C. © 2023

For more Privacy Legal News, click here to visit the National Law Review.