Two Recent Developments Promise to Shed Light on Accrual of BIPA Claims

In the aftermath of two recent appellate court decisions addressing when claims under the Illinois Biometric Information Privacy Act (“BIPA” or the “Act”) (740 ILCS 14/1 et seq.) accrue, it appears likely that the Illinois Supreme Court will need to provide clarity on this critical question. First, the Appellate Court of Illinois, First District, found in Watson v. Legacy Healthcare Financial Services, LLC, et al. that claims under sections 15(a) and (b) of the Act accrue with each and every capture and use of a plaintiff’s biometric identifier or information. Second, in Cothron v. White Castle System, Inc., the Seventh Circuit Court of Appeals declined to address directly when a claim under BIPA accrues and instead certified the question for review by the Illinois Supreme Court. While the holding in Watson provides some clarity as to when certain BIPA claims accrue, it leaves open critical questions regarding how to calculate: (i) the number of BIPA violations; and (ii) monetary damages under the Act.

The Watson v. Legacy Healthcare Financial Services, LLC, et al. Decision

Plaintiff Brandon Watson sued Legacy Healthcare Financial Services, LLC, Lincoln Park Skilled Nursing Facility, LLC, and South Loop Skilled Nursing Facility, LLC (collectively, the “Defendants”) in March 2019, alleging that the Defendants violated BIPA by scanning the fingers or hands of their respective employees, including plaintiff, for timekeeping purposes. Plaintiff alleged that the scanning violated sections 15(a) and (b) of the Act, which place both restrictions and affirmative obligations on private entities related to biometric identifiers (such as fingerprints, voiceprints, retinal scans and facial geometry) and biometric information (e.g., information based on biometric identifiers to the extent used to identify an individual):

  • Private entities in possession of biometric identifiers or biometric information must develop a written policy, made available to the public, establishing a retention schedule and guidelines for destroying the information.  740 ILCS 14/15(a).
  • Private entities which collect, capture, purchase, receive or otherwise obtain biometrics must first inform the subject of that fact in writing, as well as the specific purpose and length of time for which the information will be retained, and must obtain a written release executed by the subject.  740 ILCS 14/15(b).

Plaintiff alleged that he began working for at least one of the Defendants in December 2012. Because the Act contains no provision as to when claims accrue or the applicable limitations period, Defendants moved to dismiss, arguing that Plaintiff’s claims accrued on the first day the Defendants allegedly collected his biometric information and Plaintiff’s claims were thus time-barred. In response, Plaintiff argued that his suit was not time-barred because his claims accrued with each alleged capture of his biometric information that Defendants obtained without providing notice and obtaining consent. The trial court granted the Defendants’ motion to dismiss, finding that Plaintiff’s claims accrued with the initial scan of his finger or hand in December 2012. Thereafter, the trial court granted Plaintiff’s Rule 304(a) motion for an interlocutory appeal.

The Appellate Court reversed and remanded, finding that a claim under the Act accrues after “each and every capture and use of plaintiff’s fingerprint or hand scan.” In reaching this result, the Appellate Court analyzed the plain language and legislative history of the Act and accepted as true that the Defendants captured Plaintiff’s biometric information twice per day when he clocked in and out of work.

The Cothron v. White Castle System, Inc. Decision

Plaintiff Latrina Cothron sued White Castle System, Inc. (“White Castle”) alleging that White Castle violated BIPA when it required plaintiff to scan her finger in order to access work computers. Moreover, plaintiff alleged that White Castle disclosed the scans of her fingers to its third-party vendor as part of the process to authenticate the finger scan and ultimately grant access to the work computers. Based on these allegations, plaintiff asserted claims under sections 15(b) and (d) of the Act. In addition to the obligations of section 15(b), outlined above, section 15(d) prohibits a private entity from disclosing, redisclosing or otherwise disseminating biometric information without consent.  740 ILCS 14/15(d).

White Castle moved for judgment on the pleadings, arguing that the suit was untimely since plaintiff’s claims accrued in 2008 when BIPA was enacted. The trial court denied White Castle’s motion, but certified its order for immediate appeal to the Seventh Circuit. In turn, the Seventh Circuit examined the arguments of both parties and ultimately concluded that the question of when a claim accrues under BIPA is a novel question which has not yet been addressed by the Illinois Supreme Court. As a result, the Seventh Circuit stayed proceedings in the Cothron matter and certified the question of when claims accrue under BIPA to the Illinois Supreme Court.

The Rulings’ Impact on Your Business

It is likely that it will take a ruling from the Illinois Supreme Court to provide further clarity on when claims under the Act accrue. In the interim, the Watson decision will obviously impact early BIPA case evaluations. It also, however, raises at least two unrelated issues that will likely be the subject of debate and litigation going forward.

First, Watson was based on the allegations in the complaint, without the benefit of discovery and additional information regarding the operation of the finger/hand scanning device(s) utilized by the Defendants. Key to the decision is the Watson court’s conclusion that every use of the scanning device(s) results in the capture of Plaintiff’s biometric information, and the Court’s description of that capture as resulting in a permanent record. While that statement is likely based on allegations made in the complaint, it is possible, or even probable, that it is not factually accurate. Although variations exist, the scanning technology used in many biometric timekeeping devices creates only a single permanent record, generated from the very first scan of the individual’s finger or hand. Commonly, later scans do not collect or store new information; they exist only fleetingly, as comparisons against the permanent, initial scan data. As a result, the applicability of the Watson decision may vary based on the actual operation of the scanning devices at issue in any single case.

Second, in response to Defendants’ concerns about the “ruinous” monetary damage awards that may result from the ruling in Watson, the Appellate Court went out of its way to note “that damages are discretionary[,] not mandatory” under BIPA. In so holding, the Appellate Court found that Section 20 of BIPA provides a list of possible damages, but noted that the list constitutes what a “prevailing party may recover.” 740 ILCS 14/20 (emphasis added). The Appellate Court’s decision to highlight the discretionary nature of an award of monetary damages under BIPA stands in stark contrast to the position often taken by the plaintiffs’ bar. Indeed, the plaintiffs’ bar consistently asserts that the right to recover liquidated damages under BIPA is absolute given the Illinois Supreme Court’s 2019 decision in Rosenbach v. Six Flags Entm’t Corp. However, the Rosenbach decision merely found that once a plaintiff meets the basic statutory requirement of being “aggrieved,” he or she is “entitled to seek recovery” under Section 20. The Watson Court’s emphasis that monetary damages are discretionary under BIPA is likely to open new lines of discovery and argument regarding the calculation of damages, if any, sustained by a particular BIPA plaintiff and whether those damages justify the imposition of the discretionary liquidated damages set forth in the Act.

Ultimately, every business should perform a critical analysis as to any business practice that potentially concerns biometrics (including employee timekeeping, identification procedures or security protocols). The failure to fully comply with BIPA, even when such a failure results in no actual injury to an individual, may lead to significant liability. Vedder Price attorneys are at the forefront in defending BIPA claims and counseling clients on BIPA-related policy and disclosure language.

© 2022 Vedder Price

For more articles on BIPA, visit the NLR Cybersecurity, Media & FCC section.

BREAKING: Seventh Circuit Certifies BIPA Accrual Question to Illinois Supreme Court in White Castle

Yesterday the Seventh Circuit issued a much-awaited ruling in the Cothron v. White Castle litigation, punting to the Illinois Supreme Court on the pivotal question of when a claim under the Illinois Biometric Information Privacy Act (“BIPA”) accrues.  No. 20-3202 (7th Cir.).  Read on to learn more about the decision and what it may mean for other biometric and data privacy litigations.

First, a brief recap of the facts of the dispute.  After Plaintiff started working at a White Castle in Illinois in 2004, White Castle began using an optional, consent-based finger-scan system for employees to sign documents and access their paystubs and computers.  Plaintiff consented in 2007 to the collection of her biometric data and then 11 years later—in 2018—filed suit against White Castle for purported violation of BIPA.

Plaintiff alleged that White Castle did not obtain BIPA-compliant consent to collect or disclose her fingerprints at the first instance collection occurred under BIPA, because BIPA did not exist in 2007.  Plaintiff asserted that she was “required” to scan her finger each time she accessed her work computer and weekly paystubs with White Castle and that her prior consent to the collection of biometric data did not satisfy BIPA’s requirements.  According to Plaintiff, White Castle violated BIPA Sections 15(b) and 15(d) by collecting, then “systematically and automatically” disclosing her biometric information without adhering to BIPA’s requirements (she claimed she did not consent under BIPA to the collection of her information until 2018). She sought statutory damages for “each” violation on behalf of herself and a putative class.

Before the district court, White Castle had moved to dismiss the Complaint and for judgment on the pleadings; both motions were denied.  The district court sided with Plaintiff, holding that “[o]n the facts set forth in the pleadings, White Castle violated Section 15(b) when it first scanned [Plaintiff’s] fingerprint and violated Section 15(d) when it first disclosed her biometric information to a third party.”  The district court also held that under Section 20 of BIPA, Plaintiff could recover for “each violation.”  The court rejected White Castle’s argument that this was an absurd interpretation of the statute not in keeping with legislative intent, commenting that “[i]f the Illinois legislature agrees that this reading of BIPA is absurd, it is of course free to modify the statute” but “it is not the role of a court—particularly a federal court—to rewrite a state statute to avoid a construction that may penalize violations severely.”

White Castle filed an appeal of the district court’s ruling with the Seventh Circuit.  As presented by White Castle, the issue before the Seventh Circuit was “[w]hether, when conduct that allegedly violates BIPA is repeated, that conduct gives rise to a single claim under Sections 15(b) and 15(d) of BIPA, or multiple claims.”

In ruling yesterday that this issue was appropriate for the Illinois Supreme Court, the Seventh Circuit held that “[w]hether a claim accrues only once or repeatedly is an important and recurring question of Illinois law implicating state accrual principles as applied to this novel state statute.  It requires authoritative guidance that only the state’s highest court can provide.”  Here, the accrual issue is dispositive for purposes of Plaintiff’s BIPA claim.  As the Seventh Circuit recognized, “[t]he timeliness of the suit depends on whether a claim under the Act accrued each time [Plaintiff] scanned her fingerprint to access a work computer or just the first time.”

Interestingly, the Seventh Circuit drew a comparison to data privacy litigations outside the context of BIPA, stating that the parties’ “disagreement, framed differently, is whether the Act should be treated like a junk-fax statute for which a claim accrues for each unsolicited fax, [], or instead like certain privacy and reputational torts that accrue only at the initial publication of defamatory material.”

Several BIPA litigations have been stayed pending a ruling from the Seventh Circuit in White Castle and these cases will remain on pause going into 2022 pending a ruling from the Illinois Supreme Court.  While some had hoped for clarity on this area of BIPA jurisprudence by the end of the year, the Seventh Circuit’s ruling means that this litigation will remain a must-watch privacy case going forward.

Article By Kristin L. Bryan of Squire Patton Boggs (US) LLP

For more data privacy and cybersecurity legal news, click here to visit the National Law Review.

© Copyright 2021 Squire Patton Boggs (US) LLP

Patch Up – Log4j and How to Avoid a Cybercrime Christmas

A vulnerability so dangerous that Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly called it “one of the most serious [she’s] seen in [her] entire career, if not the most serious” arrived just in time for the holidays. On December 10, 2021, CISA and the director of cybersecurity at the National Security Agency (NSA) began alerting the public to a critical vulnerability within the Apache Log4j Java logging framework. Civilian government agencies have been instructed to mitigate the vulnerability by Christmas Eve, and companies should follow suit.

The Log4j vulnerability allows threat actors to remotely execute code both on-premises and within cloud-based application servers, thereby obtaining control of the impacted servers. CISA expects the vulnerability to affect hundreds of millions of devices. This is a widespread critical vulnerability and companies should quickly assess whether, and to what extent, they or their service providers are using Log4j.

Immediate Recommendations

  • Immediately upgrade all instances of Apache Log4j to version 2.15.0 (a simple inventory-scan sketch follows this list).
  • Ask your service providers whether their products or environment use Log4j, and if so, whether they have patched to the latest version. Helpfully, CISA sponsors a community-sourced GitHub repository with a list of software related to the vulnerability as a reference guide.
  • Confirm your security operations are monitoring internet-facing systems for indicators of compromise.
  • Review your incident response plan and ensure all response team information is up to date.
  • If your company is involved in an acquisition, discuss the security steps taken within the target company to address the Log4j vulnerability.
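
For companies starting the assessment described above, a practical first step is simply inventorying where log4j-core sits on disk. The following is a minimal, illustrative Python sketch, not an authoritative scanner: it assumes vulnerable copies can be identified from standard log4j-core-X.Y.Z.jar file names and from the presence of the JndiLookup class inside the archive, and it will miss renamed or embedded ("shaded") copies. Vendor confirmations and the CISA-sponsored repository referenced above remain the better guide.

```python
"""
Minimal inventory sketch: walk a directory tree, flag Apache Log4j 2 core
JARs older than 2.15.0, and note whether the JndiLookup class is present
inside the archive. Illustrative only; it will not find shaded or renamed
copies and is no substitute for vendor advisories or CISA guidance.
"""
import re
import sys
import zipfile
from pathlib import Path

FIXED_VERSION = (2, 15, 0)  # version recommended in the advisory discussed above
JAR_NAME = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+).*\.jar$", re.IGNORECASE)
JNDI_CLASS = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def scan(root: str) -> None:
    """Print an upgrade-needed line for each log4j-core JAR below the fixed version."""
    for path in Path(root).rglob("*.jar"):
        match = JAR_NAME.search(path.name)
        if not match:
            continue
        version = tuple(int(part) for part in match.groups())
        try:
            with zipfile.ZipFile(path) as jar:
                has_jndi = JNDI_CLASS in jar.namelist()
        except zipfile.BadZipFile:
            has_jndi = None  # unreadable archive; flag for manual review
        if version < FIXED_VERSION:
            print(f"UPGRADE NEEDED: {path} (version {'.'.join(map(str, version))}, "
                  f"JndiLookup present: {has_jndi})")
        else:
            print(f"OK: {path}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```

Anything a scan like this flags should be patched (or mitigated per vendor guidance) and then monitored for indicators of compromise, consistent with the recommendations above.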

The versatility of this vulnerability has already attracted the attention of malicious nation-state actors. For example, government-affiliated cybercriminals in Iran and China have a “wish list” (no holiday pun intended) of entities that they are aggressively targeting with the Log4j vulnerability. Due to this malicious nation-state activity, if your company experiences a ransomware attack related to the Log4j vulnerability, it is particularly important to pay attention to potential sanctions-related issues.

Companies with additional questions about the Log4j vulnerability and its potential impact on technical threats and potential regulatory scrutiny or commercial liability are encouraged to contact counsel.

© 2021 Bracewell LLP

In the Coming ‘Metaverse’, There May Be Excitement but There Certainly Will Be Legal Issues

The concept of the “metaverse” has garnered much press coverage of late, addressing such topics as the new appetite for metaverse investment opportunities, a recent virtual land boom, or just the promise of it all, where “crypto, gaming and capitalism collide.”  The term “metaverse,” which comes from Neal Stephenson’s 1992 science fiction novel “Snow Crash,” is generally used to refer to the development of virtual reality (VR) and augmented reality (AR) technologies, featuring a mashup of massive multiplayer gaming, virtual worlds, virtual workspaces, and remote education to create a decentralized wonderland and collaborative space. The grand concept is that the metaverse will be the next iteration of the mobile internet and a major part of both digital and real life.

Don’t feel like going out tonight in the real world? Why not stay “in” and catch a show or meet people/avatars/smart bots in the metaverse?

As currently conceived, the metaverse, “Web 3.0,” would feature a synchronous environment giving users a seamless experience across different realms, even if such discrete areas of the virtual world are operated by different developers. It would boast its own economy where users and their avatars interact socially and use digital assets based in both virtual and actual reality, a place where commerce would presumably be heavily based in decentralized finance, DeFi. No single company or platform would operate the metaverse, but rather, it would be administered by many entities in a decentralized manner (presumably on some open source metaverse OS) and work across multiple computing platforms. At the outset, the metaverse would look like a virtual world featuring enhanced experiences interfaced via VR headsets, mobile devices, gaming consoles and haptic gear that makes you “feel” virtual things. Later, the contours of the metaverse would be shaped by user preferences, monetary opportunities and incremental innovations by developers building on what came before.

In short, the vision is that multiple companies, developers and creators will come together to create one metaverse (as opposed to proprietary, closed platforms) and have it evolve into an embodied mobile internet, one that is open and interoperable and would include many facets of life (i.e., work, social interactions, entertainment) in one hybrid space.

For the metaverse to become a reality (that is, to successfully link current gaming and communications platforms with other new technologies into a massive new online destination), many obstacles will have to be overcome, even beyond the hardware, software and integration issues. The legal issues stand out, front and center. Indeed, the concept of the metaverse presents a law school final exam’s worth of legal questions to sort out. Meanwhile, we are still trying to resolve the myriad legal issues presented by “Web 2.0,” the Internet as we know it today. Adding the metaverse to the picture will certainly make things even more complicated.

At the heart of it is the question of what legal underpinnings we need for the metaverse infrastructure – an infrastructure that will allow disparate developers and studios, e-commerce marketplaces, platforms and service providers to all coexist within one virtual world.  To make it even more interesting, it is envisioned to be an interoperable, seamless experience for shoppers, gamers, social media users or just curious internet-goers armed with wallets full of crypto to spend and virtual assets to flaunt.  Currently, we have some well-established web platforms that are closed digital communities and some emerging ones that are open, each with varying business models that will have to be adapted, in some way, to the metaverse. Simply put, the greater the immersive experience and features and interactions, the more complex the related legal issues will be.

Contemplating the metaverse, these are just a few of the legal issues that come to mind:

  • Personal Data, Privacy and Cybersecurity – Privacy and data security lawyers are already challenged with addressing the global concerns presented by varying international approaches to privacy and growing threats to data security. If the metaverse fulfills the hype and develops into a 3D web-based hub for our day-to-day lives, the volume of data that will be collected will be exponentially greater than the reams of data already collected, and the threats to that data will expand as well. Questions to consider will include:
    • Data and privacy – What’s collected? How sensitive is it? Who owns or controls it? The sharing of data will be the cornerstone of a seamless, interoperable environment where users and their digital personas and assets will be usable and tradeable across the different arenas of the metaverse.  How will the collection, sharing and use of such data be regulated?  What laws will govern the collection of data across the metaverse? The laws of a particular state?  Applicable federal privacy laws? The GDPR or other international regulations? Will there be a single overarching “privacy policy” governing the metaverse under a user and merchant agreement, or will there be varying policies depending on which realm of the metaverse you are in? Could some developers create a more “privacy-focused” experience or would the personal data of avatars necessarily flow freely in every realm? How will children’s privacy be handled and will there be “roped off,” adults-only spaces that require further authentication to enter? Will the concepts that we talk about today – “personal information” or “personally identifiable information” – carry over to a world where the scope of available information expands exponentially as activities are tracked across the metaverse?
    • Cybersecurity: How will cybersecurity be managed in the metaverse? What requirements will apply with respect to keeping data secure? How will regulation or site policies evolve to address deep fakes, avatar impersonation, trolling, stolen biometric data, digital wallet hacks and all of the other cyberthreats that we already face today and are likely to be exacerbated in the metaverse? What laws will apply and how will the various players collaborate in addressing this issue?
  • Technology Infrastructure: The metaverse will be a robust computing-intensive experience, highlighting the importance of strong contractual agreements concerning cloud computing, IoT, web hosting, and APIs, as well as software licenses and hardware agreements, and technology service agreements with developers, providers and platform operators involved in the metaverse stack. Performance commitments and service levels will take on heightened importance in light of the real-time interactions that users will expect. What is a meaningful remedy for a service level failure when the metaverse (or a part of the metaverse) freezes? A credit or other traditional remedy?  Lawyers and technologists will have to think creatively to find appropriate and practical approaches to this issue.  And while SaaS and other “as a service” arrangements will grow in importance, perhaps the entire process will spawn MaaS, or “Metaverse as a Service.”
  • Open Source – Open source, already ubiquitous, promises to play a huge role in metaverse development by allowing developers to improve on what has come before. Whether or not the obligations of common open source licenses will be triggered will depend on the technical details of implementation. It is also possible that new open source licenses will be created to contemplate development for the metaverse.
  • Quantum Computing – Quantum computing has dramatically increased the capabilities of computers and is likely to continue to do so over the coming years. It will certainly be one of the technologies deployed to provide the computing speed to allow the metaverse to function. However, with the awesome power of quantum computing come threats to certain legacy protections we use today. Passwords and traditional security protocols may be meaningless (requiring the development of post-quantum cryptography that is secure against both quantum and traditional computers). With raw, unchecked quantum computing power, the metaverse may be subject to manipulation and misuse. Regulation of quantum computing, as applied to the metaverse and elsewhere, may be needed.
  • Antitrust: Collaboration is a key to the success of the metaverse, as it is, by definition, a multi-tenant environment. Of course collaboration amongst competitors may invoke antitrust concerns. Also, to the extent that larger technology companies may be perceived as leveraging their position to assert unfair control in any virtual world, there may be additional concerns.
  • Intellectual Property Issues: A host of IP issues will certainly arise, including infringement, licensing (and breaches thereof), IP protection and anti-piracy efforts, patent issues, joint ownership concerns, safe harbors, potential formation of patent cross-licensing organizations (which also may invoke antitrust concerns), trademark and advertising issues, and entertaining new brand licensing opportunities. The scope of content and technology licenses will have to be delicately negotiated with forethought to the potential breadth of the metaverse (e.g., it’s easy to limit a licensee’s rights based on territory, for example, but what about for a virtual world with no borders or some borders that haven’t been drawn yet?). Rightsholders must also determine their particular tolerance level for unauthorized digital goods or creations. One can envision a need for a DMCA-like safe harbor and takedown process for the metaverse. Also, akin to the litigation that sprouted from the use of athletes’ or celebrities’ likenesses (and their tattoos) in videogames, it’s likely that IP issues and rights of publicity disputes will go way up as people’s virtual avatars take on commercial value in ways that their real human selves never did.
  • Content Moderation. Section 230 of the Communications Decency Act (CDA) has been the target of bipartisan criticism for several years now, yet it remains in effect despite its application in some distasteful ways. How will the CDA be applied to the metaverse, where the exchange of third party content is likely to be even more robust than what we see today on social media?  How will “bad actors” be treated, and what does an account termination look like in the metaverse? Much like the legal issues surrounding offensive content present on today’s social media platforms, and barring a change in the law, the same kinds of issues surrounding user-generated content will persist and the same defenses under Section 230 of the Communications Decency Act will be raised.
  • Blockchain, DAOs, Smart Contract and Digital Assets: Since the metaverse is planned as a single forum with disparate operators and users, the use of a blockchain (or blockchains) would seem to be one solution to act as a trusted, immutable ledger of virtual goods, in-world currencies and identity authentication, particularly when interactions may be somewhat anonymous or between individuals who may or may not trust each other and in the absence of a centralized clearinghouse or administrator for transactions. The use of smart contracts may be pervasive in the metaverse.  Investors or developers may also decide that DAOs (decentralized autonomous organizations) can be useful to crowdsource and fund opportunities within that environment as well.  Overall, a decentralized metaverse with its own discrete economy would feature the creation, sale and holding of sovereign digital assets (and their free use, display and exchange using blockchain-based payment networks within the metaverse). This would presumably give NFTs a role beyond mere digital collectibles and investment opportunities as well as a role for other forms of digital currency (e.g., cryptocurrency, utility tokens, stablecoins, e-money, virtual “in game” money as found in some videogames, or a system of micropayments for virtual goods, services or experiences).  How else will our avatars be able to build a new virtual wardrobe for what is to come?

With this shift to blockchain-based economic structures comes the potential regulatory issues behind digital currencies. How will securities laws view digital assets that retain and form value in the metaverse?  Also, as in life today, visitors to the metaverse must be wary of digital currency schemes and meme coin scams, with regulators not too far behind policing the fraudsters and unlawful actors that will seek opportunities in the metaverse. While regulators and lawmakers are struggling to keep up with the current crop of issues, and despite any progress they may make in that regard, many open issues will remain and new issues will be of concern as digital tokens and currency (and the contracts underlying them) take on new relevance in a virtual world.

Big ideas are always exciting. Watching the metaverse come together is no different, particularly as it all is happening alongside additional innovations surrounding the web, blockchain and cryptocurrency (and, more than likely, updated laws and regulations). However, it’s still early. And we’ll have to see if the current vision of the metaverse will translate into long-term, concrete commercial and civic-minded opportunities for businesses, service providers, developers and individual artists and creators.  Ultimately, these parties will need to sort through many legal issues, both novel and commonplace, before creating and participating in a new virtual world concept that goes beyond the massive multi-user videogame platforms and virtual worlds we have today.

Article By Jeffrey D. Neuburger of Proskauer Rose LLP. Co-authored by  Jonathan Mollod.

For more legal news regarding data privacy and cybersecurity, click here to visit the National Law Review.

© 2021 Proskauer Rose LLP.

Privacy Tip #309 – Women Poised to Fill Gap of Cybersecurity Talent

I have been advocating for gender equality in Cybersecurity for years [related podcast and post].

The statistics on the participation of women in the field of cybersecurity continue to be bleak, despite significant outreach efforts, including “Girls Who Code” and programs to encourage girls to explore STEM (Science, Technology, Engineering and Mathematics) subjects.

Women are just now rising to positions from which they can help other women break into the field, land high-paying jobs, and combat the dearth of talent in technology. Judy Dinn, the new Chief Information Officer of TD Bank NA, is doing just that. One of her priorities is to encourage women to pursue tech careers. She recently told the Wall Street Journal that she “really, really always wants to make sure that female representation—whether they’re in grade school, high school, universities—that that funnel is always full.”

The Wall Street Journal article states that a study by AnitaB.org found that “women made up about 29% of the U.S. tech workforce in 2020.”  It is well known that companies are fighting for tech and cybersecurity talent and that there are many more open positions than talent to fill them. The tech and cybersecurity fields are growing with unlimited possibilities.

This is where women should step in. With increased support, and prioritized recruiting efforts that encourage women to enter fields focused on technology, we can tap more talent and begin to fill the gap of cybersecurity talent in the U.S.

Article By Linn F. Freedman of Robinson & Cole LLP

For more privacy and cybersecurity legal news, click here to visit the National Law Review.

Copyright © 2021 Robinson & Cole LLP. All rights reserved.

Continuing Effort to Protect National Security Data and Networks

CMMC 2.0 – Simplification and Flexibility of DoD Cybersecurity Requirements

Evolving and increasing threats to U.S. defense data and national security networks have necessitated changes and refinements to the U.S. regulatory requirements intended to protect them.

In 2016, the U.S. Department of Defense (DoD) issued a Defense Federal Acquisition Regulation Supplement (DFARs) intended to better protect defense data and networks. In 2017, DoD began issuing a series of memoranda to further enhance protection of defense data and networks via Cybersecurity Maturity Model Certification (CMMC). In December 2019, the Department of State, Directorate of Defense Trade Controls (DDTC) issued long-awaited guidance in part governing the minimum encryption requirements for storage, transport and/or transmission of controlled unclassified information (CUI) and technical defense information (TDI) otherwise restricted by the International Traffic in Arms Regulations (ITAR).

DFARs initiated the government’s efforts to protect national security data and networks by implementing specific NIST cyber requirements for all DoD contractors with access to CUI, TDI or a DoD network. Compliance with DFARs was based on contractor self-assessment.

CMMC provided a broad framework to enhance cybersecurity protection for the Defense Industrial Base (DIB). CMMC proposed a verification program to ensure that NIST-compliant cybersecurity protections were in place to protect CUI and TDI that reside on DoD and DoD contractors’ networks. Unlike DFARs, CMMC initially required certification of compliance by an independent cybersecurity expert.

The DoD has announced an updated cybersecurity framework, referred to as CMMC 2.0. The announcement comes after a months-long internal review of the proposed CMMC framework. It still could take nine to 24 months for the final rule to take shape. But for now, CMMC 2.0 promises to be simpler to understand and easier to comply with.

Three Goals of CMMC 2.0

Broadly, CMMC 2.0 is similar to the earlier-proposed framework. Familiar elements include a tiered model, required assessments, and contractual implementation. But the new framework is intended to facilitate three goals identified by DoD’s internal review.

  • Simplify the CMMC standard and provide additional clarity on cybersecurity regulations, policy, and contracting requirements.
  • Focus on the most advanced cybersecurity standards and third-party assessment requirements for companies supporting the highest priority programs.
  • Increase DoD oversight of professional and ethical standards in the assessment ecosystem.

Key Changes under CMMC 2.0

The most impactful changes of CMMC 2.0 are:

  • A reduction from five to three security levels.
  • Reduced requirements for third-party certifications.
  • Allowances for plans of actions and milestones (POA&Ms).

CMMC 2.0 has only three levels of cybersecurity

An innovative feature of CMMC 1.0 had been the five-tiered model that tailored a contractor’s cybersecurity requirements according to the type and sensitivity of the information it would handle. CMMC 2.0 keeps this model, but eliminates the two “transitional” levels in order to reduce the total number of security levels to three. This change also makes it easier to predict which level will apply to a given contractor. At this time, it appears that:

  • Level 1 (Foundational) will apply to federal contract information (FCI) and will be similar to the old first level;
  • Level 2 (Advanced) will apply to controlled unclassified information (CUI) and will mirror NIST SP 800-171 (similar to, but simpler than, the old third level); and
  • Level 3 (Expert) will apply to more sensitive CUI and will be partly based on NIST SP 800-172 (possibly similar to the old fifth level).

Significantly, CMMC 2.0 focuses on cybersecurity practices, eliminating the few so-called “maturity processes” that had baffled many DoD contractors.

CMMC 2.0 relieves many certification requirements

Another feature of CMMC 1.0 had been the requirement that all DoD contractors undergo third-party assessment and certification. CMMC 2.0 is much less ambitious and allows Level 1 contractors — and even a subset of Level 2 contractors — to conduct only an annual self-assessment. It is worth noting that a subset of Level 2 contractors — those having “critical national security information” — will still be required to seek triennial third-party certification.

CMMC 2.0 reinstitutes POA&Ms

An initial objective of CMMC 1.0 had been that — by October 2025 — contractual requirements would be fully implemented by DoD contractors. There was no option for partial compliance. CMMC 2.0 reinstitutes a regime that will be familiar to many, by allowing for submission of Plans of Actions and Milestones (POA&Ms). The DoD still intends to specify a baseline number of non-negotiable requirements. But a remaining subset will be addressable by a POA&M with clearly defined timelines. The announced framework even contemplates waivers “to exclude CMMC requirements from acquisitions for select mission-critical requirements.”

Operational takeaways for the defense industrial base

For many DoD contractors, CMMC 2.0 will not significantly impact their required cybersecurity practices — for FCI, focus on basic cyber hygiene; and for CUI, focus on NIST SP 800-171. But the new CMMC 2.0 framework dramatically reduces the number of DoD contractors that will need third-party assessments. It could also allow contractors to delay full compliance through the use of POA&Ms beyond 2025.

Increased Risk of Enforcement

Regardless of the proposed simplicity and flexibility of CMMC 2.0, DoD contractors need to remain vigilant to meet their respective CMMC 2.0 level cybersecurity obligations.

Immediately preceding the CMMC 2.0 announcement, the U.S. Department of Justice (DOJ) announced a new Civil Cyber-Fraud Initiative on October 6 to combat emerging cyber threats to the security of sensitive information and critical systems. In its announcement, the DOJ advised that it would pursue government contractors who fail to follow required cybersecurity standards.

As Bradley has previously reported in more detail, the DOJ plans to utilize the False Claims Act to pursue cybersecurity-related fraud by government contractors or involving government programs, where entities or individuals put U.S. information or systems at risk by knowingly:

  • Providing deficient cybersecurity products or services
  • Misrepresenting their cybersecurity practices or protocols, or
  • Violating obligations to monitor and report cybersecurity incidents and breaches.

The DOJ also expressed its intent to work closely on the initiative with other federal agencies, subject matter experts and its law enforcement partners throughout the government.

As a result, while CMMC 2.0 will provide some simplicity and flexibility in implementation and operations, U.S. government contractors need to be mindful of their cybersecurity obligations to avoid new heightened enforcement risks.

© 2021 Bradley Arant Boult Cummings LLP

For more articles about cybersecurity, visit the NLR Cybersecurity, Media & FCC section.

OFAC Reaffirms Focus on Virtual Currency With Updated Sanctions Law Guidance

On October 15, 2021, the US Department of the Treasury’s Office of Foreign Assets Control (OFAC) announced updated guidance for virtual currency companies in meeting their obligations under US sanctions laws. On the same day, OFAC also issued guidance clarifying various cryptocurrency-related definitions.

Coming on the heels of the Anti-Money Laundering Act of 2020—and in the context of the Biden administration’s effort to crack down on ransomware attacks—the recent guidance is the latest indication that regulators are increasingly focusing on virtual currency and blockchain. In light of these developments, virtual currency market participants and service providers should ensure they are meeting their respective sanctions obligations by employing a “risk-based” anti-money laundering and sanctions compliance program.

This update highlights the government’s continued movement toward subjecting the virtual currency industry to the same requirements, scrutiny and consequences in cases of noncompliance as applicable to traditional financial institutions.

IN DEPTH

The release of OFAC’s Sanctions Compliance Guidance for the Virtual Currency Industry indicates an increasing expectation of diligence, as OFAC has now made clear on several occasions that sanctions compliance “obligations are the same” for virtual currency companies, which must employ an unspecified “risk-based” program (See: OFAC Consolidated Frequently Asked Questions 560). OFAC published the guidance with the stated goal of “help[ing] the virtual currency industry prevent exploitation by sanctioned persons and other illicit actors.”

With this release, OFAC also provided some answers and updates to two of its published sets of “Frequently Asked Questions.”

FAQ UPDATES (FAQ 559 AND 646)

All persons and entities, including those in the virtual currency and blockchain community, are required to comply with US sanctions programs. OFAC has said time and again that a “risk-based” compliance program is required but that “there is no single compliance program or solution suitable for all circumstances” (See: FAQ 560). While market participants and service providers in the virtual currency industry must all comply, the risk of violating US sanctions is most acute for certain key service providers, such as cryptocurrency exchanges and over-the-counter (OTC) desks that facilitate large volumes of virtual currency transactions.

OFAC previously used the term “digital currency” when it issued its first FAQ and guidance on the subject (FAQ 560), which stated that sanctions compliance is applicable to “digital currency” and that OFAC “may include as identifiers on the [Specially Designated Nationals and Blocked Persons] SDN List specific digital currency addresses associated with blocked persons.” Subsequently, OFAC placed certain digital currency addresses on the SDN List as identifiers.

While OFAC previously used the term “digital currency,” in more recent FAQs and guidance, it has used a combination of the terms “digital currency” and “virtual currency” without defining those terms until it released FAQ 559.

In FAQ 559, OFAC defines “virtual currency” as “a digital representation of value that functions as (i) a medium of exchange; (ii) a unit of account; and/or (iii) a store of value; and is neither issued nor guaranteed by any jurisdiction.” This is a broad definition, but it likely encompasses most of the assets commonly referred to as “cryptocurrency” or “tokens,” as most of these assets may be considered “mediums of exchange.”

OFAC also defines “digital currency” as “sovereign cryptocurrency, virtual currency (non-fiat), and a digital representation of fiat currency.” This definition appears to be an obvious effort by OFAC to make clear that its definitions include virtual currencies issued or backed by foreign governments and stablecoins.

The reference to “sovereign cryptocurrency” is focused on cryptocurrency issued by foreign governments, such as Venezuela. This is not the first time OFAC has focused on sovereign cryptocurrency; it has identified the use of sovereign-backed cryptocurrencies as a high-risk vector for US sanctions circumvention. Executive Order (EO) 13827, which was issued on March 19, 2018, explicitly stated:

In light of recent actions taken by the Maduro regime to attempt to circumvent U.S. sanctions by issuing a digital currency in a process that Venezuela’s democratically elected National Assembly has denounced as unlawful, hereby order as follows: Section 1. (a) All transactions related to, provision of financing for, and other dealings in, by a United States person or within the United States, any digital currency, digital coin, or digital token, that was issued by, for, or on behalf of the Government of Venezuela on or after January 9, 2018, are prohibited as of the effective date of this order.

On March 19, 2018, OFAC issued FAQs 564, 565 and 566, which were specifically focused on Venezuela-issued cryptocurrencies, stating that “petro” and “petro gold” are considered a “digital currency, digital coin, or digital token” subject to EO 13827. While OFAC has not issued specific FAQs or guidance on other sovereign-backed cryptocurrencies, it may be concerned that a series of countries, including Russia, Iran, China, Japan, England, Sweden, Australia, the Netherlands, Singapore and India, have stated publicly that they plan to test and launch sovereign-backed digital currencies. With the release of its most recent FAQs, OFAC is reaffirming that it views sovereign cryptocurrencies as highly risky and well within the scope of US sanctions programs.

The reference to a “digital representation of fiat currency” appears to be a reference to “stablecoins.” In theory, each stablecoin is worth a specified value in fiat currency (usually one US dollar). Most stablecoins were touted as being completely backed by fiat currency stored in segregated bank accounts. The viability and safety of stablecoins, however, have recently been called into question. One of the biggest players in the stablecoin industry is Tether, which was recently fined $41 million by the US Commodity Futures Trading Commission for failing to have the appropriate fiat reserves backing its highly popular stablecoin US Dollar Token (USDT). OFAC appears to have taken notice and states in its FAQ that “digital representations of fiat currency” are covered by its regulations and FAQs.

FAQ 646 provides some guidance on how cryptocurrency exchanges and other service providers should implement a “block” on virtual currency. Any US persons (or persons subject to US jurisdiction), including financial institutions, are required under US sanctions programs to “block” assets, which requires freezing assets and notifying OFAC within 10 days. (See: 31 C.F.R. § 501.603 (b)(1)(i).) FAQ 646 makes clear that “blocking” obligations apply to virtual currency and also indicates that OFAC expects cryptocurrency exchanges and other service providers to “block” the virtual currency at issue and freeze all other virtual currency wallets “in which a blocked person has an interest.”

Depending on the strength of the anti-money laundering/know-your-customer (AML/KYC) policies employed, it will likely prove difficult for cryptocurrency exchanges and other service providers to be sure that they have identified all associated virtual currency wallets in which a “blocked person has an interest.” It is possible that a cryptocurrency exchange could onboard a customer who complied with an appropriate risk-based AML/KYC policy and, unbeknownst to the cryptocurrency exchange, a blocked person “has an interest” in one of the virtual currency wallets. It remains to be seen how OFAC will employ this “has an interest” standard and whether it will take any cryptocurrency exchanges or other service providers to task for not blocking virtual currency wallets in which a blocked person “has an interest.” It is important for cryptocurrency exchanges and other service providers to implement an appropriate risk-based AML/KYC policy to defend against any inquiries from OFAC as to whether they have complied with the various US sanctions programs, including by having the ability to identify other virtual currency wallets in which a blocked person “has an interest.”
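
To make the screening obligation more concrete, the sketch below illustrates one narrow piece of it: checking customer wallet addresses against digital currency address identifiers drawn from the SDN List. The file name, format and example data are hypothetical assumptions for illustration only; an actual program would typically rely on a commercial screening vendor or OFAC's own search tools, would screen names and aliases as well as addresses, and would route any potential match to a compliance officer for blocking, freezing and reporting rather than acting automatically.

```python
"""
Illustrative sanctions-screening sketch: compare customer wallet addresses
against digital currency address identifiers extracted from the SDN List.
The input file name and format are hypothetical; real programs generally use
dedicated screening tools and escalate hits to compliance staff.
"""
from pathlib import Path
from typing import Iterable, List, Set

def load_sdn_addresses(path: str = "sdn_digital_currency_addresses.txt") -> Set[str]:
    """Load one flagged address per line (hypothetical export from the SDN List)."""
    return {
        line.strip().lower()
        for line in Path(path).read_text().splitlines()
        if line.strip()
    }

def screen_wallets(wallets: Iterable[str], flagged: Set[str]) -> List[str]:
    """Return the customer wallet addresses that match flagged identifiers."""
    return [wallet for wallet in wallets if wallet.strip().lower() in flagged]

if __name__ == "__main__":
    flagged_addresses = load_sdn_addresses()
    customer_wallets = ["1ExampleCustomerWalletAddress"]  # placeholder data
    for address in screen_wallets(customer_wallets, flagged_addresses):
        # A hit would trigger blocking, freezing and an OFAC report within the
        # 10-day window described above; those steps sit outside this sketch.
        print(f"Potential SDN match, escalate to compliance: {address}")
```

Even an exact-match check like this addresses only directly listed addresses; identifying other wallets "in which a blocked person has an interest" requires the kind of blockchain analytics and risk-based AML/KYC diligence discussed above.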

UPDATED SANCTIONS COMPLIANCE GUIDANCE

OFAC’s recent framework for OFAC Compliance Commitments outlines five essential components for a virtual currency operator’s sanctions compliance program. These components generally track those applicable to more traditional financial institutions and include:

  1. Senior management should ensure that adequate resources are devoted to the support of compliance, that a competent sanctions compliance officer is appointed and that adequate independence is granted to the compliance unit to carry out its role.
  2. An operative risk assessment should be fashioned to reflect the unique exposure of the company. OFAC maintains both a public use sanctions list and a free search tool for that list which should be employed to identify and prevent sanctioned individuals and entities from accessing the company’s services.
  3. Internal controls must be put in place that address the unique risks recognized by the company’s risk assessment. OFAC does not have a specific software or hardware requirement regarding internal controls.
    1. Although OFAC does not specify required internal controls, it does provide recommended best practices. These include geolocation tools with IP address blocking controls, KYC procedures for both individuals and entities, transaction monitoring and investigation software that can review historically identified bad actors, the implementation of remedial measures upon internal discovery of weakness in sanction compliance, sanction screening and establishing risk indicators or red flags that require additional scrutiny when triggered.
    2. Additionally, information should be obtained upon the formation of each new customer relationship. A formal due diligence plan should be in place and operated sufficiently to alert the service provider to possible sanctions-related alarms. Customer data should be maintained and updated through the lifecycle of that customer relationship.
  4. To ensure its sanctions compliance program is effective and efficient, an entity should regularly subject that program to independent, objective testing and auditing.
  5. Proper training must be provided to a company’s workforce. For a company’s sanctions compliance program to be effective, its workforce must be properly outfitted with the hard and soft skills required to execute its compliance program. Although training programs may vary, OFAC training should be provided annually for all employees.

KEY TAKEAWAYS

As noted in OFAC’s press release issued simultaneously with the updated FAQs, “[t]hese actions are a part of the Biden Administration’s focused, integrated effort to counter the ransomware threat.” The Biden administration’s increased focus on regulatory and enforcement action in the virtual currency space highlights the importance for market participants and service providers of implementing a robust compliance program. Cryptocurrency exchanges and other service providers must take special care in drafting and implementing their respective AML/KYC policies and in ensuring the existence of risk-based AML and sanctions compliance programs, including a periodic training program. When responding to inquiries from OFAC or other regulators, it will be critical to have documented evidence of the implementation of a risk-based AML/KYC program and proof that employees have been appropriately trained on all applicable policies, including a sanctions compliance policy.

Ethan Heller, a law clerk in the firm’s New York office, also contributed to this article.

© 2021 McDermott Will & Emery
For the latest in Financial, Securities, and Banking legal news, read more at the National Law Review.

Legal Implications of Facebook Hearing for Whistleblowers & Employers – Privacy Issues on Many Levels

On Sunday, October 3rd, Facebook whistleblower Frances Haugen publicly revealed her identity on the CBS television show 60 Minutes. Formerly a member of Facebook’s civic misinformation team, she had previously reported the company to the Securities and Exchange Commission (SEC) for a variety of concerning business practices, including lying to investors and amplifying the January 6th Capitol Hill attack via Facebook’s platform.

Like all instances of whistleblowing, Ms. Haugen’s actions have a considerable array of legal implications — not only for Facebook, but for the technology sector and for labor practices in general. Especially notable is the fact that Ms. Haugen reportedly signed a confidentiality agreement, sometimes called a non-disclosure agreement (NDA), with Facebook, which may complicate the legal process.

What are the Legal Implications of Breaking a Non-Disclosure Agreement?

After secretly copying thousands of internal documents and memos detailing these practices, Ms. Haugen left Facebook in May and testified before a Senate subcommittee on October 5th.  Because she revealed information from the documents she took, Facebook could take legal action against Ms. Haugen if it accuses her of stealing confidential information. Ms. Haugen’s actions raise questions about the enforceability of non-disclosure and confidentiality agreements when it comes to filing whistleblower complaints.

“Paradoxically, Big Tech’s attack on whistleblower-insiders is often aimed at the whistleblower’s disclosure of so-called confidential inside information of the company.  Yet, the very concerns expressed by the Facebook whistleblower and others inside Big Tech go to the heart of these same allegations—violations of privacy of the consuming public whose own personal data has been used in a way that puts a target on their backs,” said Renée Brooker, a partner with Tycko & Zavareei LLP, a law firm specializing in representing whistleblowers.

Since Ms. Haugen came forward, Facebook has stated that it will not retaliate against her for filing a whistleblower complaint. It is unclear whether such protections from legal action extend to other former employees in Ms. Haugen’s position.

“Other employees like Frances Haugen with information about corporate or governmental misconduct should know that they do not have to quit their jobs to be protected. There are over 100 federal laws that protect whistleblowers – each with its own focus on a particular industry, or a particular whistleblower issue,” said Richard R. Renner of Kalijarvi, Chuzi, Newman & Fitch, PC, a long-time employment lawyer.

According to the Wall Street Journal, Ms. Haugen’s confidentiality agreement permits her to disclose information to regulators, but not to share proprietary information. A tricky balancing act to navigate.

“Big Tech’s attempt to silence whistleblowers are antithetical to the principles that underlie federal laws and federal whistleblower programs that seek to ferret out illegal activity,” Ms. Brooker said. “Those reporting laws include federal and state False Claims Acts, and the SEC Whistleblower Program, which typically feature whistleblower rewards and anti-retaliation provisions.”

Legal Implications for Facebook & Whistleblowers

Large tech organizations like Facebook have an overarching influence on digital information and how it is shared with the public. Whistleblowers like Ms. Haugen can expose information about how companies accused of harmful practices act against their own consumers, but they also risk disclosing proprietary business information that may or may not be harmful to consumers.

Some of the most significant concerns Haugen expressed to Congress were the tip of the iceberg according to those familiar with whistleblowing reports on Big Tech. Aside from the burden of proof required for such releases to Congress, the threats of employer retaliation and legal repercussions may prevent internal concerns from coming to light.

“Facebook should not be singled out as a lone actor. Big Tech needs to be held accountable and insiders can and should be encouraged to come forward and be prepared to back up their allegations with hard evidence sufficient to allow governments to conduct appropriate investigations,” Ms. Brooker said.

As cybersecurity and data protection continue to hold the public’s interest, more whistleblower disclosures against Big Tech and other companies that could hold them accountable are coming to light.

Haugen’s testimony during the October 5, 2021 Congressional hearing revealed a possibly expanding definition of media regulation versus consumer censorship. Although these allegations were the latest against a large company such as Facebook, more whistleblowers may continue to come forward with similar accusations, bringing additional implications for privacy, employment law and whistleblower protections.

“The Facebook whistleblower’s revelations have opened the door just a crack on how Big Tech is exploiting American consumers,” Ms. Brooker said.

This article was written by Rachel Popa, Chandler Ford and Jessica Scheck of the National Law Review. To read more articles about privacy, please visit our cybersecurity section.

Ransom Demands: To Pay or Not to Pay?

As the threat of ransomware attacks against companies has skyrocketed, so has the burden on companies forced to decide whether to pay cybercriminals a ransom demand. Corporate management increasingly is faced with balancing myriad legal and business factors in making real-time, high-stakes “bet the company” decisions with little or no precedent to follow. In a recent advisory, the U.S. Department of the Treasury (Treasury) has once again discouraged companies from making ransom payments, warning that those who do risk potential sanctions.

OFAC Ransom Advisory

On September 21, 2021, the Treasury’s Office of Foreign Assets Control (OFAC) issued an Advisory that updates and supersedes OFAC’s Advisory on Potential Sanctions Risks for Facilitating Ransomware Payments, issued on October 1, 2020. This updated OFAC Advisory follows on the heels of the Biden Administration’s heightened interest in combating the growing risk and reality of cyber threats that may adversely impact national security and the economy.

According to Federal Bureau of Investigation (FBI) statistics, reported ransomware attacks increased 21 percent from 2019 to 2020, and associated losses increased 225 percent. All organizations across all industry sectors in the private and public arenas are potential targets of such attacks. As noted by OFAC, cybercriminals often target particularly vulnerable entities, such as schools and hospitals, among others.

While some cybercriminals are linked to foreign state actors primarily motivated by political interests, many threat actors are simply in it “for the money.” Every day cybercriminals launch ransomware attacks to wreak havoc on vulnerable organizations, disrupting their business operations by encrypting and potentially stealing their data. These cybercriminals often demand ransom payments in the millions of dollars in exchange for a “decryptor” key to unlock encrypted files and/or a “promise” not to use or publish stolen data on the Dark Web.

The recent OFAC Advisory states in no uncertain terms that the “U.S. government strongly discourages all private companies and citizens from paying ransom or extortion demands.” OFAC notes that such ransomware payments could be “used to fund activities adverse to the national security and foreign policy objectives of the United States.” The Advisory further states that ransom payments may perpetuate future cyber-attacks by incentivizing cybercriminals. In addition, OFAC cautions that in exchange for payments to cybercriminals “there is no guarantee that companies will regain access to their data or be free from further attacks.”

The OFAC Advisory also underscores the potential risk of violating sanctions associated with ransom payments by organizations. As a reminder, various U.S. federal laws, including the International Emergency Economic Powers Act and the Trading with the Enemy Act, prohibit U.S. persons or entities from engaging in financial or other transactions with certain blacklisted individuals, organizations or countries – including those listed on OFAC’s Specially Designated Nationals and Blocked Persons List or countries subject to embargoes (such as Cuba, the Crimea region of Ukraine, North Korea and Syria).

Penalties & Mitigating Factors

If a ransom payment is deemed to have been made to a cybercriminal with a nexus to a blacklisted organization or country, OFAC may impose civil monetary penalties for violations of sanctions based on strict liability, even if a person or organization did not know it was engaging in a prohibited transaction.

However, OFAC will consider various mitigating factors in deciding whether to impose penalties against organizations for sanctioned transactions, including if the organizations adopted enhanced cybersecurity practices to reduce the risk of cyber-attacks, or promptly reported ransomware attacks to law enforcement and regulatory authorities (including the FBI, U.S. Secret Service and/or Treasury’s Office of Cybersecurity and Critical Infrastructure Protection).

“OFAC also will consider a company’s full and ongoing cooperation with law enforcement both during and after a ransomware attack” as a “significant” mitigating factor. In encouraging organizations to self-report ransomware attacks to federal authorities, OFAC notes that information shared with law enforcement may aid in tracking cybercriminals and disrupting or preventing future attacks.

Conclusion

In short, payment of a ransom is not illegal per se, so long as the transaction does not involve a sanctioned party on OFAC’s blacklist. Moreover, the recent ransomware Advisory “is explanatory only and does not have the force of law.” Nonetheless, organizations should consider carefully OFAC’s advice and guidance in deciding whether to pay a ransom demand.

In addition to the OFAC Advisory, management should consider the following:

  • Ability to restore systems from viable (unencrypted) backups

  • Marginal time savings in restoring systems with a decryptor versus backups

  • Preservation of infected systems in order to conduct a forensics investigation

  • Ability to determine whether data was accessed or exfiltrated (stolen)

  • Reputational harm if data is published by the threat actor

  • Likelihood that the organization will be legally required to notify individuals of the attack regardless of whether their data is published on the Dark Web.

Should an organization decide it has no choice other than to make a ransom payment, it should facilitate the transaction through a reputable company that first performs and documents an OFAC sanctions check.

© 2021 Wilson Elser

For more articles about ransomware attacks, visit the NLR Cybersecurity, Media & FCC section.

Illinois Appellate Panel Splits the Difference for BIPA Statute of Limitations in Closely Watched Decision

Currently pending before the Seventh Circuit Court of Appeals is the important question of when a claim under the Illinois Biometric Information Privacy Act (“BIPA”) accrues.  Cothron v. White Castle, No. 20-3202 (7th Cir.).  In another litigation CPW previously identified, a panel of the Illinois Court of Appeals recently addressed whether BIPA claims are potentially subject to a one-, two-, or five-year statute of limitations.  Tims v. Black Horse Carriers, Inc., 2021 IL App (1st) 200563 (Sept. 17, 2021).  The answer is apparently “it depends,” based on the particular claims a plaintiff asserts under the statute.

The underlying facts of the case, as with many BIPA litigations, arose in the employer-employee context.  Plaintiff filed a putative class action Complaint in March 2019.  Plaintiff alleged that he worked for Defendant from June 2017 until January 2018. Plaintiff alleged that Defendant “scanned and was still scanning the fingerprints of all employees, including Plaintiff, and was using and had used fingerprint scanning in its employee timekeeping,” in violation of BIPA.

Count I of the Complaint alleged that Defendant violated Section 15(a) of BIPA by failing to institute, maintain, and adhere to a retention schedule for biometric data.  Count II of the Complaint alleged that Defendant violated BIPA Section 15(b) by failing to obtain an informed written consent and release before obtaining biometric data.  Finally, Count III of the Complaint alleged that Defendant violated BIPA Section 15(d) by disclosing or disseminating biometric data without first obtaining consent.

Defendant subsequently moved to dismiss the Complaint in its entirety, asserting that Plaintiff’s Complaint was filed outside BIPA’s limitation period.  The motion noted that BIPA itself has no limitation provision and argued that the one-year limitation period for privacy actions under Illinois Code Section 13-201 applies to causes of action under the BIPA.

Plaintiff opposed, arguing that: (1) BIPA’s purpose is (in part) to prevent or deter security breaches regarding biometric data, and therefore (2) in the absence of a limitation period expressly contained in BIPA itself, the five-year period in Illinois Code Section 13-205 for all civil actions not otherwise provided for should apply.  Plaintiff also argued that the one-year limitations period applied only to actions involving publication of information, which was not implicated by all claims under BIPA.

The statute of limitations issue was eventually certified to a panel of the Illinois Court of Appeals.  The Court noted at the outset that Section 15 of BIPA “imposes various duties upon which an aggrieved person may bring an action” and “[t]hough all relate to protecting biometric data, each duty is separate and distinct.”

The Court ultimately found the publication-based distinction raised in the parties’ briefing a useful construct for categorizing claims under BIPA: “[a] plaintiff could therefore bring an action under the Act alleging violations of section 15(a), (b), and/or (e) without having to allege or prove that the defendant private entity published or disclosed any biometric data to any person or entity beyond or outside itself.  Stated another way, an action under section 15(a), (b), or (e) of the Act is not an action ‘for publication of matter violating the right of privacy.’” (quotation omitted).

Ultimately, the Court held that Section 13-201 (the one-year limitations period) governs BIPA actions under Sections 15(c) and (d), while Section 13-205 (the five-year limitations period) governs BIPA actions under Sections 15(a), (b), and (e).

Although the shorter limitations period adopted for BIPA claims under Sections 15(c) and 15(d) is a welcome ruling for defendants named in BIPA class actions, the decision will likely have a limited impact on pending and future-filed BIPA cases.  Given the statute’s generous liquidated damages, class actions (even if the class period is limited to one year, depending on the claims asserted) will still potentially bring a significant payoff for determined class counsel.  The bigger question, pending before the Seventh Circuit, is when BIPA claims accrue in the first place.  For more on this, stay tuned.  CPW will be there to keep you in the loop.

© Copyright 2021 Squire Patton Boggs (US) LLP


For more on BIPA, visit the NLR Communications, Media & Internet section.