New Poll Underscores Growing Support for National Data Privacy Legislation

Over half of registered U.S. voters would support a federal data privacy law, according to a recent poll from Politico and Morning Consult. The poll found that 56 percent of registered voters would either strongly or somewhat support a proposal to “make it illegal for social media companies to use personal data to recommend content via algorithms.” Democrats were most likely to support the proposal at 62 percent, compared to 54 percent of Republicans and 50 percent of Independents. Still, the numbers suggest that bipartisan action is possible.

The poll is indicative of Americans’ increasing data privacy awareness and concerns. Colorado, Virginia, and California all passed or updated data privacy laws within the last year, and nearly every state is considering similar legislation. Additionally, Congress held several high-profile hearings last year soliciting testimony from tech industry leaders and whistleblower Frances Haugen. In the private sector, Meta CEO Mark Zuckerberg has come out in favor of a national data privacy standard similar to the EU’s General Data Protection Regulation (GDPR).

Politico and Morning Consult released the poll results days after Senator Ron Wyden (D-OR) accepted a 24,000-signature petition calling for Congress to pass a federal data protection law. Senator Wyden, who recently introduced his own data privacy proposal called the “Mind Your Own Business Act,” said it was “past time” for Congress to act.

He may be right: U.S./EU data flows have been on borrowed time since 2020. The GDPR prohibits data flows from the EU to countries with inadequate data protection laws, including the United States. The EU–U.S. Privacy Shield framework provided a lawful mechanism for such transfers, but an EU court invalidated the agreement in 2020, and data flows between the U.S. and the EU have been in legal limbo ever since. Eventually, Congress and the EU will need to address the situation, and a federal data protection law would be a long-term solution.

This post was authored by C. Blair Robinson, legal intern at Robinson+Cole. Blair is not yet admitted to practice law. Click here to read more about the Data Privacy and Cybersecurity practice at Robinson & Cole LLP.

For more data privacy and cybersecurity news, click here to visit the National Law Review.

Copyright © 2022 Robinson & Cole LLP. All rights reserved.

BREAKING: Seventh Circuit Certifies BIPA Accrual Question to Illinois Supreme Court in White Castle

Yesterday the Seventh Circuit issued a much-awaited ruling in the Cothron v. White Castle litigation, punting to the Illinois Supreme Court on the pivotal question of when a claim under the Illinois Biometric Information Privacy Act (“BIPA”) accrues.  No. 20-3202 (7th Cir.).  Read on to learn more and what it may mean for other biometric and data privacy litigations.

First, a brief recap of the facts of the dispute.  After Plaintiff started working at a White Castle in Illinois in 2004, White Castle began using an optional, consent-based finger-scan system for employees to sign documents and access their paystubs and computers.  Plaintiff consented in 2007 to the collection of her biometric data and then 11 years later—in 2018—filed suit against White Castle for purported violation of BIPA.

Plaintiff alleged that White Castle did not obtain BIPA-compliant consent to collect or disclose her fingerprints when the collection first occurred, because BIPA did not exist in 2007.  Plaintiff asserted that she was “required” to scan her finger each time she accessed her work computer and weekly paystubs with White Castle and that her prior consent to the collection of biometric data did not satisfy BIPA’s requirements.  According to Plaintiff, White Castle violated BIPA Sections 15(b) and 15(d) by collecting, then “systematically and automatically” disclosing her biometric information without adhering to BIPA’s requirements (she claimed she did not consent under BIPA to the collection of her information until 2018). She sought statutory damages for “each” violation on behalf of herself and a putative class.

Before the district court, White Castle had moved to dismiss the Complaint and for judgment on the pleadings—both of which motions were denied.  The district court sided with Plaintiff, holding that “[o]n the facts set forth in the pleadings, White Castle violated Section 15(b) when it first scanned [Plaintiff’s] fingerprint and violated Section 15(d) when it first disclosed her biometric information to a third party.”  The district court also held that under Section 20 of BIPA, Plaintiff could recover for “each violation.”  The court rejected White Castle’s argument that this was an absurd interpretation of the statute not in keeping with legislative intent, commenting that “[i]f the Illinois legislature agrees that this reading of BIPA is absurd, it is of course free to modify the statute” but “it is not the role of a court—particularly a federal court—to rewrite a state statute to avoid a construction that may penalize violations severely.”

White Castle filed an appeal of the district court’s ruling with the Seventh Circuit.  As presented by White Castle, the issue before the Seventh Circuit was “[w]hether, when conduct that allegedly violates BIPA is repeated, that conduct gives rise to a single claim under Sections 15(b) and 15(d) of BIPA, or multiple claims.”

In ruling yesterday that this issue was appropriate for certification to the Illinois Supreme Court, the Seventh Circuit held that “[w]hether a claim accrues only once or repeatedly is an important and recurring question of Illinois law implicating state accrual principles as applied to this novel state statute.  It requires authoritative guidance that only the state’s highest court can provide.”  Here, the accrual issue is dispositive for purposes of Plaintiff’s BIPA claim.  As the Seventh Circuit recognized, “[t]he timeliness of the suit depends on whether a claim under the Act accrued each time [Plaintiff] scanned her fingerprint to access a work computer or just the first time.”

Interestingly, the Seventh Circuit drew a comparison to data privacy litigations outside the context of BIPA, stating that the parties’ “disagreement, framed differently, is whether the Act should be treated like a junk-fax statute for which a claim accrues for each unsolicited fax, [], or instead like certain privacy and reputational torts that accrue only at the initial publication of defamatory material.”

Several BIPA litigations had been stayed pending the Seventh Circuit’s ruling in White Castle, and these cases will remain on pause going into 2022 while the Illinois Supreme Court takes up the question.  While some had hoped for clarity on this area of BIPA jurisprudence by the end of the year, the Seventh Circuit’s ruling means that this litigation will remain a must-watch privacy case going forward.

Article By Kristin L. Bryan of Squire Patton Boggs (US) LLP


© Copyright 2021 Squire Patton Boggs (US) LLP

Patch Up – Log4j and How to Avoid a Cybercrime Christmas

A vulnerability so dangerous that Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly called it “one of the most serious [she’s] seen in [her] entire career, if not the most serious” arrived just in time for the holidays. On December 10, 2021, CISA and the director of cybersecurity at the National Security Agency (NSA) began alerting the public of a critical vulnerability within the Apache Log4j Java logging framework. Civilian government agencies have been instructed to mitigate against the vulnerability by Christmas Eve, and companies should follow suit.

The Log4j vulnerability allows threat actors to remotely execute code both on-premises and within cloud-based application servers, thereby obtaining control of the impacted servers. CISA expects the vulnerability to affect hundreds of millions of devices. This is a widespread critical vulnerability and companies should quickly assess whether, and to what extent, they or their service providers are using Log4j.

Immediate Recommendations

  • Immediately upgrade all versions of Apache Log4j to 2.15.0.
  • Ask your service providers whether their products or environment use Log4j, and if so, whether they have patched to the latest version. Helpfully, CISA sponsors a community-sourced GitHub repository with a list of software related to the vulnerability as a reference guide.
  • Confirm your security operations are monitoring internet-facing systems for indicators of compromise.
  • Review your incident response plan and ensure all response team information is up to date.
  • If your company is involved in an acquisition, discuss the security steps taken within the target company to address the Log4j vulnerability.
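As a starting point for the first two recommendations, a short script can inventory the Log4j versions present on a file system. The sketch below is illustrative only: the function name and the 2.15.0 threshold reflect the guidance quoted above, not any official CISA tool, and it only catches jars named in the standard `log4j-core-x.y.z.jar` pattern (bundled or shaded copies require dedicated scanners).

```python
import re
from pathlib import Path

# 2.15.0 was the patched release at the time of the advisory; check
# current CISA/Apache guidance, as later releases superseded it.
PATCHED = (2, 15, 0)
JAR_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def find_vulnerable_jars(root):
    """Return paths of log4j-core jars under `root` older than PATCHED."""
    vulnerable = []
    for jar in Path(root).rglob("log4j-core-*.jar"):
        match = JAR_PATTERN.search(jar.name)
        # Compare versions as integer tuples: (2, 14, 1) < (2, 15, 0)
        if match and tuple(int(n) for n in match.groups()) < PATCHED:
            vulnerable.append(str(jar))
    return sorted(vulnerable)
```

A clean run of such a scan is evidence for, not proof of, remediation; Log4j can also be embedded inside application archives and vendor products, which is why the service-provider inquiry above remains essential.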

The versatility of this vulnerability has already attracted the attention of malicious nation-state actors. For example, government-affiliated cybercriminals in Iran and China have a “wish list” (no holiday pun intended) of entities that they are aggressively targeting with the Log4j vulnerability. Due to this malicious nation-state activity, if your company experiences a ransomware attack related to the Log4j vulnerability, it is particularly important to pay attention to potential sanctions-related issues.

Companies with additional questions about the Log4j vulnerability and its potential impact on technical threats and potential regulatory scrutiny or commercial liability are encouraged to contact counsel.

© 2021 Bracewell LLP

In the Coming ‘Metaverse’, There May Be Excitement but There Certainly Will Be Legal Issues

The concept of the “metaverse” has garnered much press coverage of late, addressing such topics as the new appetite for metaverse investment opportunities, a recent virtual land boom, or just the promise of it all, where “crypto, gaming and capitalism collide.”  The term “metaverse,” which comes from Neal Stephenson’s 1992 science fiction novel “Snow Crash,” is generally used to refer to the development of virtual reality (VR) and augmented reality (AR) technologies, featuring a mashup of massive multiplayer gaming, virtual worlds, virtual workspaces, and remote education to create a decentralized wonderland and collaborative space. The grand concept is that the metaverse will be the next iteration of the mobile internet and a major part of both digital and real life.

Don’t feel like going out tonight in the real world? Why not stay “in” and catch a show or meet people/avatars/smart bots in the metaverse?

As currently conceived, the metaverse, “Web 3.0,” would feature a synchronous environment giving users a seamless experience across different realms, even if such discrete areas of the virtual world are operated by different developers. It would boast its own economy where users and their avatars interact socially and use digital assets based in both virtual and actual reality, a place where commerce would presumably be heavily based in decentralized finance, DeFi. No single company or platform would operate the metaverse, but rather, it would be administered by many entities in a decentralized manner (presumably on some open source metaverse OS) and work across multiple computing platforms. At the outset, the metaverse would look like a virtual world featuring enhanced experiences interfaced via VR headsets, mobile devices, gaming consoles and haptic gear that makes you “feel” virtual things. Later, the contours of the metaverse would be shaped by user preferences, monetary opportunities and incremental innovations by developers building on what came before.

In short, the vision is that multiple companies, developers and creators will come together to create one metaverse (as opposed to proprietary, closed platforms) and have it evolve into an embodied mobile internet, one that is open and interoperable and would include many facets of life (i.e., work, social interactions, entertainment) in one hybrid space.

For the metaverse to become a reality – that is, to successfully link current gaming and communications platforms with other new technologies into a massive new online destination – many obstacles will have to be overcome, even beyond the hardware, software and integration issues. The legal issues stand out, front and center. Indeed, the concept of the metaverse presents a law school final exam’s worth of legal questions to sort out.  Meanwhile, we are still trying to resolve the myriad legal issues presented by “Web 2.0,” the internet as we know it today. Adding the metaverse to the picture will certainly make things even more complicated.

At the heart of it is the question of what legal underpinnings we need for the metaverse infrastructure – an infrastructure that will allow disparate developers and studios, e-commerce marketplaces, platforms and service providers to all coexist within one virtual world.  To make it even more interesting, it is envisioned to be an interoperable, seamless experience for shoppers, gamers, social media users or just curious internet-goers armed with wallets full of crypto to spend and virtual assets to flaunt.  Currently, we have some well-established web platforms that are closed digital communities and some emerging ones that are open, each with varying business models that will have to be adapted, in some way, to the metaverse. Simply put, the greater the immersive experience and features and interactions, the more complex the related legal issues will be.

Contemplating the metaverse, these are just a few of the legal issues that come to mind:

  • Personal Data, Privacy and Cybersecurity – Privacy and data security lawyers are already challenged with addressing the global concerns presented by varying international approaches to privacy and growing threats to data security. If the metaverse fulfills the hype and develops into a 3D web-based hub for our day-to-day lives, the volume of data that will be collected will be exponentially greater than the reams of data already collected, and the threats to that data will expand as well. Questions to consider will include:
    • Data and privacy – What’s collected? How sensitive is it? Who owns or controls it? The sharing of data will be the cornerstone of a seamless, interoperable environment where users and their digital personas and assets will be usable and tradeable across the different arenas of the metaverse.  How will the collection, sharing and use of such data be regulated?  What laws will govern the collection of data across the metaverse? The laws of a particular state?  Applicable federal privacy laws? The GDPR or other international regulations? Will there be a single overarching “privacy policy” governing the metaverse under a user and merchant agreement, or will there be varying policies depending on which realm of the metaverse you are in? Could some developers create a more “privacy-focused” experience or would the personal data of avatars necessarily flow freely in every realm? How will children’s privacy be handled and will there be “roped off,” adults-only spaces that require further authentication to enter? Will the concepts that we talk about today – “personal information” or “personally identifiable information” – carry over to a world where the scope of available information expands exponentially as activities are tracked across the metaverse?
    • Cybersecurity: How will cybersecurity be managed in the metaverse? What requirements will apply with respect to keeping data secure? How will regulation or site policies evolve to address deep fakes, avatar impersonation, trolling, stolen biometric data, digital wallet hacks and all of the other cyberthreats that we already face today and are likely to be exacerbated in the metaverse? What laws will apply and how will the various players collaborate in addressing this issue?
  • Technology Infrastructure: The metaverse will be a robust computing-intensive experience, highlighting the importance of strong contractual agreements concerning cloud computing, IoT, web hosting, and APIs, as well as software licenses and hardware agreements, and technology service agreements with developers, providers and platform operators involved in the metaverse stack. Performance commitments and service levels will take on heightened importance in light of the real-time interactions that users will expect. What is a meaningful remedy for a service level failure when the metaverse (or a part of the metaverse) freezes? A credit or other traditional remedy?  Lawyers and technologists will have to think creatively to find appropriate and practical approaches to this issue.  And while SaaS and other “as a service” arrangements will grow in importance, perhaps the entire process will spawn MaaS, or “Metaverse as a Service.”
  • Open Source – Open source, already ubiquitous, promises to play a huge role in metaverse development by allowing developers to improve on what has come before. Whether or not the obligations of common open source licenses will be triggered will depend on the technical details of implementation. It is also possible that new open source licenses will be created to contemplate development for the metaverse.
  • Quantum Computing – Quantum computing promises to dramatically increase the capabilities of computers and is likely to continue to do so over the coming years. It will certainly be one of the technologies deployed to provide the computing speed to allow the metaverse to function. However, with the awesome power of quantum computing comes threats to certain legacy protections we use today. Passwords and traditional security protocols may be meaningless (requiring the development of post-quantum cryptography that is secure against both quantum and traditional computers). With raw, unchecked quantum computing power, the metaverse may be subject to manipulation and misuse. Regulation of quantum computing, as applied to the metaverse and elsewhere, may be needed.
  • Antitrust: Collaboration is a key to the success of the metaverse, as it is, by definition, a multi-tenant environment. Of course collaboration amongst competitors may invoke antitrust concerns. Also, to the extent that larger technology companies may be perceived as leveraging their position to assert unfair control in any virtual world, there may be additional concerns.
  • Intellectual Property Issues: A host of IP issues will certainly arise, including infringement, licensing (and breaches thereof), IP protection and anti-piracy efforts, patent issues, joint ownership concerns, safe harbors, potential formation of patent cross-licensing organizations (which also may invoke antitrust concerns), trademark and advertising issues, and entertaining new brand licensing opportunities. The scope of content and technology licenses will have to be delicately negotiated with forethought to the potential breadth of the metaverse (e.g., it’s easy to limit a licensee’s rights based on territory, for example, but what about for a virtual world with no borders or some borders that haven’t been drawn yet?). Rightsholders must also determine their particular tolerance level for unauthorized digital goods or creations. One can envision a need for a DMCA-like safe harbor and takedown process for the metaverse. Also, akin to the litigation that sprouted from the use of athletes’ or celebrities’ likenesses (and their tattoos) in videogames, it’s likely that IP issues and rights of publicity disputes will go way up as people’s virtual avatars take on commercial value in ways that their real human selves never did.
  • Content Moderation. Section 230 of the Communications Decency Act (CDA) has been the target of bipartisan criticism for several years now, yet it remains in effect despite its application in some distasteful ways. How will the CDA be applied to the metaverse, where the exchange of third party content is likely to be even more robust than what we see today on social media?  How will “bad actors” be treated, and what does an account termination look like in the metaverse? Much like the legal issues surrounding offensive content on today’s social media platforms, and barring a change in the law, the same kinds of issues surrounding user-generated content will persist, and the same Section 230 defenses will be raised.
  • Blockchain, DAOs, Smart Contract and Digital Assets: Since the metaverse is planned as a single forum with disparate operators and users, the use of a blockchain (or blockchains) would seem to be one solution to act as a trusted, immutable ledger of virtual goods, in-world currencies and identity authentication, particularly when interactions may be somewhat anonymous or between individuals who may or may not trust each other and in the absence of a centralized clearinghouse or administrator for transactions. The use of smart contracts may be pervasive in the metaverse.  Investors or developers may also decide that DAOs (decentralized autonomous organizations) can be useful to crowdsource and fund opportunities within that environment as well.  Overall, a decentralized metaverse with its own discrete economy would feature the creation, sale and holding of sovereign digital assets (and their free use, display and exchange using blockchain-based payment networks within the metaverse). This would presumably give NFTs a role beyond mere digital collectibles and investment opportunities as well as a role for other forms of digital currency (e.g., cryptocurrency, utility tokens, stablecoins, e-money, virtual “in game” money as found in some videogames, or a system of micropayments for virtual goods, services or experiences).  How else will our avatars be able to build a new virtual wardrobe for what is to come?

With this shift to blockchain-based economic structures comes the potential regulatory issues behind digital currencies. How will securities laws view digital assets that retain and form value in the metaverse?  Also, as in life today, visitors to the metaverse must be wary of digital currency schemes and meme coin scams, with regulators not too far behind policing the fraudsters and unlawful actors that will seek opportunities in the metaverse. While regulators and lawmakers are struggling to keep up with the current crop of issues, and despite any progress they may make in that regard, many open issues will remain and new issues will be of concern as digital tokens and currency (and the contracts underlying them) take on new relevance in a virtual world.

Big ideas are always exciting. Watching the metaverse come together is no different, particularly as it all is happening alongside additional innovations surrounding the web, blockchain and cryptocurrency (and, more than likely, updated laws and regulations). However, it’s still early. And we’ll have to see if the current vision of the metaverse will translate into long-term, concrete commercial and civic-minded opportunities for businesses, service providers, developers and individual artists and creators.  Ultimately, these parties will need to sort through many legal issues, both novel and commonplace, before creating and participating in a new virtual world concept that goes beyond the massive multi-user videogame platforms and virtual worlds we have today.

Article By Jeffrey D. Neuburger of Proskauer Rose LLP. Co-authored by  Jonathan Mollod.


© 2021 Proskauer Rose LLP.

Privacy Tip #309 – Women Poised to Fill Gap of Cybersecurity Talent

I have been advocating for gender equality in cybersecurity for years [related podcast and post].

The statistics on the participation of women in the field of cybersecurity continue to be bleak, despite significant outreach efforts, including “Girls Who Code” and programs to encourage girls to explore STEM (Science, Technology, Engineering and Mathematics) subjects.

Women are just now rising to positions from which they can help other women break into the field, land high-paying jobs, and combat the dearth of talent in technology. Judy Dinn, the new Chief Information Officer of TD Bank NA, is doing just that. One of her priorities is to encourage women to pursue tech careers. She recently told the Wall Street Journal that she “really, really always wants to make sure that female representation—whether they’re in grade school, high school, universities—that that funnel is always full.”

The Wall Street Journal article states that a study by AnitaB.org found that “women made up about 29% of the U.S. tech workforce in 2020.”  It is well known that companies are fighting for tech and cybersecurity talent and that there are many more open positions than talent to fill them. The tech and cybersecurity fields are growing with unlimited possibilities.

This is where women should step in. With increased support, and prioritized recruiting efforts that encourage women to enter fields focused on technology, we can tap more talent and begin to fill the gap of cybersecurity talent in the U.S.

Article By Linn F. Freedman of Robinson & Cole LLP


Copyright © 2021 Robinson & Cole LLP. All rights reserved.

Continuing Effort to Protect National Security Data and Networks

CMMC 2.0 – Simplification and Flexibility of DoD Cybersecurity Requirements

Evolving and increasing threats to U.S. defense data and national security networks have necessitated changes and refinements to the U.S. regulatory requirements intended to protect them.

In 2016, the U.S. Department of Defense (DoD) issued a Defense Federal Acquisition Regulation Supplement (DFARS) clause intended to better protect defense data and networks. In 2017, DoD began issuing a series of memoranda to further enhance protection of defense data and networks via the Cybersecurity Maturity Model Certification (CMMC). In December 2019, the Department of State, Directorate of Defense Trade Controls (DDTC) issued long-awaited guidance governing, in part, the minimum encryption requirements for storage, transport and/or transmission of controlled unclassified information (CUI) and technical defense information (TDI) otherwise restricted by ITAR.

DFARS initiated the government’s efforts to protect national security data and networks by implementing specific NIST cyber requirements for all DoD contractors with access to CUI, TDI or a DoD network. DFARS compliance was self-assessed in nature.

CMMC provided a broad framework to enhance cybersecurity protection for the Defense Industrial Base (DIB). CMMC proposed a verification program to ensure that NIST-compliant cybersecurity protections were in place to protect CUI and TDI that reside on DoD and DoD contractors’ networks. Unlike DFARS, CMMC initially required certification of compliance by an independent cybersecurity expert.

The DoD has announced an updated cybersecurity framework, referred to as CMMC 2.0. The announcement comes after a months-long internal review of the proposed CMMC framework. It still could take nine to 24 months for the final rule to take shape. But for now, CMMC 2.0 promises to be simpler to understand and easier to comply with.

Three Goals of CMMC 2.0

Broadly, CMMC 2.0 is similar to the earlier-proposed framework. Familiar elements include a tiered model, required assessments, and contractual implementation. But the new framework is intended to facilitate three goals identified by DoD’s internal review.

  • Simplify the CMMC standard and provide additional clarity on cybersecurity regulations, policy, and contracting requirements.
  • Focus on the most advanced cybersecurity standards and third-party assessment requirements for companies supporting the highest priority programs.
  • Increase DoD oversight of professional and ethical standards in the assessment ecosystem.

Key Changes under CMMC 2.0

The most impactful changes of CMMC 2.0 are:

  • A reduction from five to three security levels.
  • Reduced requirements for third-party certifications.
  • Allowances for plans of actions and milestones (POA&Ms).

CMMC 2.0 has only three levels of cybersecurity

An innovative feature of CMMC 1.0 had been the five-tiered model that tailored a contractor’s cybersecurity requirements according to the type and sensitivity of the information it would handle. CMMC 2.0 keeps this model, but eliminates the two “transitional” levels in order to reduce the total number of security levels to three. This change also makes it easier to predict which level will apply to a given contractor. At this time, it appears that:

  • Level 1 (Foundational) will apply to federal contract information (FCI) and will be similar to the old first level;
  • Level 2 (Advanced) will apply to controlled unclassified information (CUI) and will mirror NIST SP 800-171 (similar to, but simpler than, the old third level); and
  • Level 3 (Expert) will apply to more sensitive CUI and will be partly based on NIST SP 800-172 (possibly similar to the old fifth level).

Significantly, CMMC 2.0 focuses on cybersecurity practices, eliminating the few so-called “maturity processes” that had baffled many DoD contractors.

CMMC 2.0 relieves many certification requirements

Another feature of CMMC 1.0 had been the requirement that all DoD contractors undergo third-party assessment and certification. CMMC 2.0 is much less ambitious and allows Level 1 contractors — and even a subset of Level 2 contractors — to conduct only an annual self-assessment. It is worth noting that a subset of Level 2 contractors — those having “critical national security information” — will still be required to seek triennial third-party certification.

CMMC 2.0 reinstitutes POA&Ms

An initial objective of CMMC 1.0 had been that — by October 2025 — contractual requirements would be fully implemented by DoD contractors. There was no option for partial compliance. CMMC 2.0 reinstitutes a regime that will be familiar to many, by allowing for submission of Plans of Actions and Milestones (POA&Ms). The DoD still intends to specify a baseline number of non-negotiable requirements. But a remaining subset will be addressable by a POA&M with clearly defined timelines. The announced framework even contemplates waivers “to exclude CMMC requirements from acquisitions for select mission-critical requirements.”

Operational takeaways for the defense industrial base

For many DoD contractors, CMMC 2.0 will not significantly impact their required cybersecurity practices — for FCI, focus on basic cyber hygiene; and for CUI, focus on NIST SP 800-171. But the new CMMC 2.0 framework dramatically reduces the number of DoD contractors that will need third-party assessments. It could also allow contractors to delay full compliance through the use of POA&Ms beyond 2025.

Increased Risk of Enforcement

Regardless of the proposed simplicity and flexibility of CMMC 2.0, DoD contractors need to remain vigilant to meet their respective CMMC 2.0 level cybersecurity obligations.

Immediately preceding the CMMC 2.0 announcement, the U.S. Department of Justice (DOJ) announced a new Civil Cyber-Fraud Initiative on October 6 to combat emerging cyber threats to the security of sensitive information and critical systems. In its announcement, the DOJ advised that it would pursue government contractors who fail to follow required cybersecurity standards.

As Bradley has previously reported in more detail, the DOJ plans to utilize the False Claims Act to pursue cybersecurity-related fraud by government contractors or involving government programs, where entities or individuals put U.S. information or systems at risk by knowingly:

  • Providing deficient cybersecurity products or services
  • Misrepresenting their cybersecurity practices or protocols, or
  • Violating obligations to monitor and report cybersecurity incidents and breaches.

The DOJ also expressed its intent to work closely on the initiative with other federal agencies, subject matter experts, and law enforcement partners throughout the government.

As a result, while CMMC 2.0 will provide some simplicity and flexibility in implementation and operations, U.S. government contractors must remain mindful of their cybersecurity obligations to avoid heightened enforcement risk.

© 2021 Bradley Arant Boult Cummings LLP

For more articles about cybersecurity, visit the NLR Cybersecurity, Media & FCC section.

Legal Implications of Facebook Hearing for Whistleblowers & Employers – Privacy Issues on Many Levels

On Sunday, October 3, Facebook whistleblower Frances Haugen publicly revealed her identity on the CBS television show 60 Minutes. Formerly a member of Facebook’s civic misinformation team, she had previously reported the company to the Securities and Exchange Commission (SEC) for a variety of concerning business practices, including lying to investors and amplifying the January 6 Capitol Hill attack via Facebook’s platform.

Like all instances of whistleblowing, Ms. Haugen’s actions have a considerable array of legal implications, not only for Facebook but for the technology sector and for labor practices in general. Especially notable is the fact that Ms. Haugen reportedly signed a confidentiality agreement, sometimes called a non-disclosure agreement (NDA), with Facebook, which may complicate the legal process.

What are the Legal Implications of Breaking a Non-Disclosure Agreement?

After secretly copying thousands of internal documents and memos detailing these practices, Ms. Haugen left Facebook in May and testified before a Senate subcommittee on October 5. Because she revealed information from the documents she took, Facebook could take legal action against her by accusing her of stealing confidential information. Ms. Haugen’s actions raise questions about the enforceability of non-disclosure and confidentiality agreements when it comes to filing whistleblower complaints.

“Paradoxically, Big Tech’s attack on whistleblower-insiders is often aimed at the whistleblower’s disclosure of so-called confidential inside information of the company.  Yet, the very concerns expressed by the Facebook whistleblower and others inside Big Tech go to the heart of these same allegations—violations of privacy of the consuming public whose own personal data has been used in a way that puts a target on their backs,” said Renée Brooker, a partner with Tycko & Zavareei LLP, a law firm specializing in representing whistleblowers.

Since Ms. Haugen came forward, Facebook has stated that it will not retaliate against her for filing a whistleblower complaint. It is unclear whether such protection from legal action extends to other former employees in Ms. Haugen’s position.

“Other employees like Frances Haugen with information about corporate or governmental misconduct should know that they do not have to quit their jobs to be protected. There are over 100 federal laws that protect whistleblowers, each with its own focus on a particular industry or a particular whistleblower issue,” said Richard R. Renner of Kalijarvi, Chuzi, Newman & Fitch, PC, a long-time employment lawyer.

According to the Wall Street Journal, Ms. Haugen’s confidentiality agreement permits her to disclose information to regulators but not to share proprietary information, a tricky balancing act to navigate.

“Big Tech’s attempts to silence whistleblowers are antithetical to the principles that underlie federal laws and federal whistleblower programs that seek to ferret out illegal activity,” Ms. Brooker said. “Those reporting laws include federal and state False Claims Acts, and the SEC Whistleblower Program, which typically feature whistleblower rewards and anti-retaliation provisions.”

Legal Implications for Facebook & Whistleblowers

Large tech organizations like Facebook have an overarching influence on digital information and how it is shared with the public. Whistleblowers like Ms. Haugen expose how companies accused of harmful practices act against their own consumers, but they also risk disclosing proprietary business information that may or may not be harmful to consumers.

Some of the most significant concerns Haugen expressed to Congress were only the tip of the iceberg, according to those familiar with whistleblower reports on Big Tech. Aside from the burden of proof required for such disclosures to Congress, the threats of employer retaliation and legal repercussions may prevent internal concerns from coming to light.

“Facebook should not be singled out as a lone actor. Big Tech needs to be held accountable and insiders can and should be encouraged to come forward and be prepared to back up their allegations with hard evidence sufficient to allow governments to conduct appropriate investigations,” Ms. Brooker said.

As cybersecurity and data protection continue to hold the public’s interest, more whistleblower disclosures that could hold Big Tech and other companies accountable are coming to light.

Haugen’s testimony at the October 5, 2021 congressional hearing revealed a possibly expanding definition of media regulation versus consumer censorship. Although these allegations are only the latest against a large company such as Facebook, more whistleblowers may continue to come forward with similar accusations, bringing additional implications for privacy, employment law, and whistleblower protections.

“The Facebook whistleblower’s revelations have opened the door just a crack on how Big Tech is exploiting American consumers,” Ms. Brooker said.

This article was written by Rachel Popa, Chandler Ford and Jessica Scheck of the National Law Review. To read more articles about privacy, please visit our cybersecurity section.

Ransom Demands: To Pay or Not to Pay?

As the threat of ransomware attacks against companies has skyrocketed, so has the burden on companies forced to decide whether to pay cybercriminals a ransom demand. Corporate management increasingly is faced with balancing myriad legal and business factors in making real-time, high-stakes “bet the company” decisions with little or no precedent to follow. In a recent advisory, the U.S. Department of the Treasury (Treasury) has once again discouraged companies from making ransom payments, warning that those who do risk potential sanctions.

OFAC Ransom Advisory

On September 21, 2021, the Treasury’s Office of Foreign Assets Control (OFAC) issued an Advisory that updates and supersedes OFAC’s Advisory on Potential Sanctions Risks for Facilitating Ransomware Payments, issued on October 1, 2020. This updated OFAC Advisory follows on the heels of the Biden Administration’s heightened interest in combating the growing risk and reality of cyber threats that may adversely impact national security and the economy.

According to Federal Bureau of Investigation (FBI) statistics, reported ransomware attacks increased 21 percent from 2019 to 2020, and associated losses increased 225 percent. All organizations across all industry sectors, public and private, are potential targets of such attacks. As noted by OFAC, cybercriminals often target particularly vulnerable entities, such as schools and hospitals, among others.

While some cybercriminals are linked to foreign state actors primarily motivated by political interests, many threat actors are simply in it “for the money.” Every day cybercriminals launch ransomware attacks to wreak havoc on vulnerable organizations, disrupting their business operations by encrypting and potentially stealing their data. These cybercriminals often demand ransom payments in the millions of dollars in exchange for a “decryptor” key to unlock encrypted files and/or a “promise” not to use or publish stolen data on the Dark Web.

The recent OFAC Advisory states in no uncertain terms that the “U.S. government strongly discourages all private companies and citizens from paying ransom or extortion demands.” OFAC notes that such ransomware payments could be “used to fund activities adverse to the national security and foreign policy objectives of the United States.” The Advisory further states that ransom payments may perpetuate future cyber-attacks by incentivizing cybercriminals. In addition, OFAC cautions that in exchange for payments to cybercriminals “there is no guarantee that companies will regain access to their data or be free from further attacks.”

The OFAC Advisory also underscores the potential risk of violating sanctions associated with ransom payments by organizations. As a reminder, various U.S. federal laws, including the International Emergency Economic Powers Act and the Trading with the Enemy Act, prohibit U.S. persons or entities from engaging in financial or other transactions with certain blacklisted individuals, organizations or countries – including those listed on OFAC’s Specially Designated Nationals and Blocked Persons List or countries subject to embargoes (such as Cuba, the Crimea region of Ukraine, North Korea and Syria).

Penalties & Mitigating Factors

If a ransom payment is deemed to have been made to a cybercriminal with a nexus to a blacklisted organization or country, OFAC may impose civil monetary penalties for violations of sanctions based on strict liability, even if a person or organization did not know it was engaging in a prohibited transaction.

However, OFAC will consider various mitigating factors in deciding whether to impose penalties against organizations for sanctioned transactions, including if the organizations adopted enhanced cybersecurity practices to reduce the risk of cyber-attacks, or promptly reported ransomware attacks to law enforcement and regulatory authorities (including the FBI, U.S. Secret Service and/or Treasury’s Office of Cybersecurity and Critical Infrastructure Protection).

“OFAC also will consider a company’s full and ongoing cooperation with law enforcement both during and after a ransomware attack” as a “significant” mitigating factor. In encouraging organizations to self-report ransomware attacks to federal authorities, OFAC notes that information shared with law enforcement may aid in tracking cybercriminals and disrupting or preventing future attacks.

Conclusion

In short, payment of a ransom is not illegal per se, so long as the transaction does not involve a sanctioned party on OFAC’s blacklist. Moreover, the recent ransomware Advisory “is explanatory only and does not have the force of law.” Nonetheless, organizations should consider carefully OFAC’s advice and guidance in deciding whether to pay a ransom demand.

In addition to the OFAC Advisory, management should consider the following:

  • Ability to restore systems from viable (unencrypted) backups

  • Marginal time savings in restoring systems with a decryptor versus backups

  • Preservation of infected systems in order to conduct a forensics investigation

  • Ability to determine whether data was accessed or exfiltrated (stolen)

  • Reputational harm if data is published by the threat actor

  • Likelihood that the organization will be legally required to notify individuals of the attack regardless of whether their data is published on the Dark Web.

Should an organization decide it has no choice other than to make a ransom payment, it should facilitate the transaction through a reputable company that first performs and documents an OFAC sanctions check.

© 2021 Wilson Elser

For more articles about ransomware attacks, visit the NLR Cybersecurity, Media & FCC section.

Privilege Dwindles for Data Breach Reports

Data privacy lawyers and cyber security incident response professionals are losing sleep over the growing number of federal courts ordering disclosure of post-data breach forensic reports.  Following the decisions in Capital One and Clark Hill, another district court has recently ordered the defendant in a data breach litigation to turn over the forensic report it believed was protected under the attorney-client privilege and work product doctrines. These three decisions help underscore that maintaining privilege over forensic reports may come down to the thinnest of margins—something organizations should keep in mind given the ever-increasing risk of litigation that can follow a cybersecurity incident.

In May 2019, convenience store and gas station chain Rutter’s received two alerts signaling a possible breach of its internal systems. The same day, Rutter’s hired outside counsel to advise on potential breach notification obligations. Outside counsel immediately hired a forensic investigator to perform an analysis to determine the character and scope of the incident. Once litigation ensued, Rutter’s withheld the forensic report from production on the basis of the attorney-client privilege and work product doctrines. Rutter’s argued that both it and its outside counsel understood the report to be privileged because it was made in anticipation of litigation. The Court rejected this notion.

With respect to the work product doctrine, the Court stated that the doctrine only applies where identifiable or impending litigation is the “primary motivating purpose” of creating the document. The Court found that the forensic report, in this case, was not prepared for the prospect of litigation. The Court relied on the forensic investigator’s statement of work which stated that the purpose of the investigation was to “determine whether unauthorized activity . . . resulted in the compromise of sensitive data.” The Court decided that because Rutter’s did not know whether a breach had even occurred when the forensic investigator was engaged, it could not have unilaterally believed that litigation would result.

The Court was also unpersuaded by the attorney-client privilege argument. Because the forensic report only discussed facts and did not involve “opinions and tactics,” the Court held that the report and related communications were not protected by the attorney-client privilege. The Court emphasized that the attorney-client privilege does not protect communications of fact, nor communications merely because a legal issue can be identified.

The Rutter’s decision comes on the heels of the Capital One and Clark Hill rulings, which both held that the defendants failed to show that the forensic reports were prepared solely in anticipation of litigation. In Capital One, the company hired outside counsel to manage the cybersecurity vendor’s investigation after the breach; however, the company already had a longstanding relationship and pre-existing agreement with the vendor. The Court found that the vendor’s services and the terms of its new agreement were essentially the same both before and after the outside counsel’s involvement. The Court also relied on the fact that the forensic report was eventually shared with Capital One’s internal response team, demonstrating that the report was created for various business purposes.

In response to the data breach in the Clark Hill case, the company hired a vendor to investigate and remediate the systems after the attack. The company also hired outside counsel, who in turn hired a second cybersecurity vendor to assist with litigation stemming from the attack. During the litigation, the company refused to turn over the forensic report prepared by the outside counsel’s vendor. The Court rejected this “two-track” approach, finding that the report had not been prepared exclusively for use in preparation for litigation. As in Capital One, the Court found, among other things, that the forensic report was shared not only with inside and outside counsel, but also with employees inside the company, the IT department, and the FBI.

As these cases demonstrate, the legal landscape around responding to security incidents has become filled with traps for the unwary.  A coordinated response led by outside counsel is key to mitigating a data breach and ensuring the lines are not blurred between “ordinary course of business” factual reports and incident reports that are prepared for litigation purposes.

© 2021 Bracewell LLP

For more articles on cybersecurity, visit the NLR Communications, Media, Internet, and Privacy Law News section.

Ransomware Payments Can Lead to Sanctions and Reporting Obligations for Financial Institutions

With cybercrime on the rise, two U.S. Treasury Department components, the Office of Foreign Assets Control (“OFAC”) and the Financial Crimes Enforcement Network (“FinCEN”), issued advisories on one of the most insidious forms of cyberattack – ransomware.

Ransomware is a form of malicious software designed to block access to a system or data.  The targets of ransomware attacks are required to pay a ransom to regain access to their information or system, or to prevent the publication of their sensitive information.  Ransomware attackers usually demand payment in the form of convertible virtual currency (“CVC”), which can be more difficult to trace.  Although ransomware attacks were already on the rise (there was a 37% annual increase in reported cases and a 147% increase in associated losses from 2018 to 2019), the COVID-19 pandemic has exacerbated the problem, as cyber actors target online systems that U.S. persons rely on to continue conducting business.

OFAC

The OFAC advisory focuses on the potential sanctions risks for those companies and financial institutions that are involved in ransomware payments to bad actors, including ransomware victims and those acting on their behalf, such as “financial institutions, cyber insurance firms, and companies involved in digital forensics and incident response.”  OFAC stresses that these payments may violate US sanctions laws or OFAC regulations, and encourage future attacks.

OFAC maintains a consolidated list of sanctioned persons, which includes numerous malicious cyber actors and the digital currency addresses connected to them.[1]  Any payment to those organizations or their digital currency wallets or addresses, including the payment of a ransom itself, is a violation of economic sanctions laws regardless of whether the parties involved in the payment knew or had reason to know that the transaction involved a sanctioned party.  The advisory states that “OFAC has imposed, and will continue to impose, sanctions on these actors and others who materially assist, sponsor, or provide financial, material, or technological support for these activities.”

In addition to violating sanctions laws, OFAC warned that ransomware payments with a sanctions nexus threaten national security interests.  These payments enable criminals to profit and advance their illicit aims, including funding activities adverse to U.S. national security and foreign policy objectives.  Ransomware payments also embolden cyber criminals and provide no guarantee that the victim will regain access to their stolen data.

OFAC encourages financial institutions to implement a risk-based compliance program to mitigate exposure to potential sanctions violations.  Accordingly, these sanctions compliance programs should account for the risk that a ransomware payment may involve a Specially Designated National, blocked person, or embargoed jurisdiction.  OFAC encouraged victims of ransomware attacks to contact law enforcement immediately, and listed the contact information for relevant government agencies.  OFAC wrote that it considers the “self-initiated, timely, and complete report of a ransomware attack to law enforcement to be a significant mitigating factor in determining an appropriate enforcement outcome if the situation is later determined to have a sanctions nexus.”  OFAC will also consider a company’s cooperation efforts both during and after the ransomware attack when evaluating a possible outcome.

Such cooperation may also be a “significant mitigating factor” in determining whether and to what extent enforcement is necessary.

FinCEN

FinCEN’s advisory also encourages entities that process payments potentially related to ransomware to report to and cooperate with law enforcement.  The FinCEN advisory arms these institutions with information about the role of financial intermediaries in payments, ransomware trends and typologies, related financial red flags, and effective reporting and information sharing related to ransomware attacks.

According to FinCEN, ransomware attacks are growing in size, scope, and sophistication.  The attacks have increasingly targeted larger enterprises for bigger payouts, and cybercriminals are sharing resources to increase the effectiveness of their attacks.  The demand for payment in anonymity-enhanced cryptocurrencies has also been on the rise.

FinCEN touted “[p]roactive prevention through effective cyber hygiene, cybersecurity controls, and business continuity resiliency” as the best ransomware defense.  The advisory lists numerous red flags designed to assist financial institutions in detecting, preventing, and ultimately reporting suspicious transactions associated with ransomware payments.  These red flags include, among others: (1) IT activity that shows the existence of ransomware software, including system log files, network traffic, and file information; (2) a customer’s CVC address that appears on open sources or is linked to past ransomware attacks; (3) transactions that occur between a high-risk organization and digital forensics and incident response companies or cyber insurance companies; and (4) customers that request payment in CVC, but show limited knowledge about the form of currency.

Finally, FinCEN reminded financial institutions about their obligations under the Bank Secrecy Act to report suspicious activity, including ransomware payments.  A financial institution is required to file a suspicious activity report (“SAR”) with FinCEN if it knows, suspects, or has reason to suspect that the attempted or completed transaction involves $5,000 or more derived from illegal activity.  “Reportable activity can involve transactions . . . related to criminal activity like extortion and unauthorized electronic intrusions,” the advisory says.  Given this, suspected ransomware payments and attempted payments should be reported to FinCEN in SARs.  The advisory provides information on how financial institutions and others should report and share the details related to ransomware attacks to increase the utility and effectiveness of the SARs.  For example, those filing ransomware-related SARs should provide all pertinent available information.  In keeping with FinCEN’s previous guidance on SAR filings relating to cyber-enabled crime, FinCEN expects SARs to include detailed cyber indicators.  Information, including “relevant email addresses, Internet Protocol (IP) addresses with their respective timestamps, virtual currency wallet addresses, mobile device information (such as device International Mobile Equipment Identity (IMEI) numbers), malware hashes, malicious domains, and descriptions and timing of suspicious electronic communications,” will assist FinCEN in protecting the U.S. financial system from ransomware threats.

[1] https://home.treasury.gov/news/press-releases/sm556


© Copyright 2020 Squire Patton Boggs (US) LLP
For more articles on cybersecurity, visit the National Law Review Communications, Media & Internet section.