Fitness App Agrees to Pay $56 Million to Settle Class Action Alleging Dark Pattern Practices

On February 14, 2022, Noom Inc., a popular weight loss and fitness app, agreed to pay $56 million, and provide an additional $6 million in subscription credits to settle a putative class action in New York federal court. The class is seeking conditional certification and has urged the court to preliminarily approve the settlement.

The suit was filed in May 2020 when a group of Noom users alleged that Noom “actively misrepresents and/or fails to accurately disclose the true characteristics of its trial period, its automatic enrollment policy, and the actual steps customers need to follow in attempting to cancel a 14-day trial and avoid automatic enrollment.” More specifically, users alleged that Noom engaged in an unlawful auto-renewal subscription business model by luring customers in with the opportunity to “try” its programs, then imposing significant barriers to the cancellation process (e.g., only allowing customers to cancel their subscriptions through their virtual coach), resulting in customers paying a nonrefundable advance lump-sum payment for up to eight (8) months at a time. According to the proposed settlement, Noom will have to substantially enhance its auto-renewal disclosures, require customers to take a separate action (e.g., a check box or digital signature) to accept auto-renewal, and provide a button on the customer’s account page for easier cancellation.

Regulators at the federal and state level have recently made clear their focus on enforcement actions against “dark patterns.” We previously summarized the FTC’s enforcement policy statement from October 2021 warning companies against using dark patterns that trick consumers into subscription services. More recently, several state attorneys general (e.g., in Indiana, Texas, the District of Columbia, and Washington State) announced their commitment to ramp up enforcement against “dark patterns” used to obtain consumers’ location data.

Article By: Privacy and Cybersecurity Practice Group at Hunton Andrews Kurth

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

Texas AG Sues Meta Over Collection and Use of Biometric Data

On February 14, 2022, Texas Attorney General Ken Paxton brought suit against Meta, the parent company of Facebook and Instagram, over the company’s collection and use of biometric data. The suit alleges that Meta collected and used Texans’ facial geometry data in violation of the Texas Capture or Use of Biometric Identifier Act (“CUBI”) and the Texas Deceptive Trade Practices Act (“DTPA”). The lawsuit is significant because it represents the first time the Texas Attorney General’s Office has brought suit under CUBI.

The suit focuses on Meta’s “tag suggestions” feature, which the company has since retired. The feature scanned faces in users’ photos and videos to suggest “tagging” (i.e., identifying by name) users who appeared in the photos and videos. In the complaint, Attorney General Paxton alleged that Meta collected and analyzed individuals’ facial geometry data (which constitutes biometric data under CUBI) without their consent, shared the data with third parties, and failed to destroy the data in a timely manner, all in violation of CUBI and the DTPA. CUBI regulates the collection and use of biometric data for commercial purposes, and the DTPA prohibits false, misleading, or deceptive acts or practices in the conduct of any trade or commerce.

Among other forms of relief, the complaint seeks an injunction enjoining Meta from violating these laws, a $25,000 civil penalty for each violation of CUBI, and a $10,000 civil penalty for each violation of the DTPA. The suit follows Facebook’s $650 million class-action settlement over alleged violations of Illinois’ Biometric Information Privacy Act and the company’s discontinuance of the tag suggestions feature last year.

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

New Poll Underscores Growing Support for National Data Privacy Legislation

Over half of all Americans would support a federal data privacy law, according to a recent poll from Politico and Morning Consult. The poll found that 56 percent of registered voters would either strongly or somewhat support a proposal to “make it illegal for social media companies to use personal data to recommend content via algorithms.” Democrats were most likely to support the proposal at 62 percent, compared to 54 percent of Republicans and 50 percent of Independents. Still, the numbers may show that bipartisan action is possible.

The poll is indicative of Americans’ increasing data privacy awareness and concerns. Colorado, Virginia, and California all passed or updated data privacy laws within the last year, and nearly every state is considering similar legislation. Additionally, Congress held several high-profile hearings last year soliciting testimony from several tech industry leaders and whistleblower Frances Haugen. In the private sector, Meta CEO Mark Zuckerberg has come out in favor of a national data privacy standard similar to the EU’s General Data Protection Regulation (GDPR).

Politico and Morning Consult released the poll results days after Senator Ron Wyden (D-OR) accepted a 24,000-signature petition calling for Congress to pass a federal data protection law. Senator Wyden, who recently introduced his own data privacy proposal called the “Mind Your Own Business Act,” said it was “past time” for Congress to act.

He may be right: U.S./EU data flows have been on borrowed time since 2020. The GDPR restricts data flows from the EU to countries that lack adequate data protection laws, including the United States. The EU-U.S. Privacy Shield framework allowed those transfers to continue, but an EU court invalidated the agreement in 2020, and data flows between the U.S. and the EU have been in legal limbo ever since. Eventually, Congress and the EU will need to address the situation, and a federal data protection law would be a long-term solution.

This post was authored by C. Blair Robinson, legal intern at Robinson+Cole. Blair is not yet admitted to practice law. Click here to read more about the Data Privacy and Cybersecurity practice at Robinson & Cole LLP.

For more data privacy and cybersecurity news, click here to visit the National Law Review.

Copyright © 2022 Robinson & Cole LLP. All rights reserved.

BREAKING: Seventh Circuit Certifies BIPA Accrual Question to Illinois Supreme Court in White Castle

Yesterday the Seventh Circuit issued a much-awaited ruling in the Cothron v. White Castle litigation, punting to the Illinois Supreme Court on the pivotal question of when a claim under the Illinois Biometric Information Privacy Act (“BIPA”) accrues.  No. 20-3202 (7th Cir.).  Read on to learn more about the ruling and what it may mean for other biometric and data privacy litigations.

First, a brief recap of the facts of the dispute.  After Plaintiff started working at a White Castle in Illinois in 2004, White Castle began using an optional, consent-based finger-scan system for employees to sign documents and access their paystubs and computers.  Plaintiff consented in 2007 to the collection of her biometric data and then 11 years later—in 2018—filed suit against White Castle for purported violation of BIPA.

Plaintiff alleged that White Castle did not obtain consent to collect or disclose her fingerprints at the first instance the collection occurred under BIPA because BIPA did not exist in 2007.  Plaintiff asserted that she was “required” to scan her finger each time she accessed her work computer and weekly paystubs with White Castle and that her prior consent to the collection of biometric data did not satisfy BIPA’s requirements.  According to Plaintiff, White Castle violated BIPA Sections 15(b) and 15(d) by collecting, then “systematically and automatically” disclosing her biometric information without adhering to BIPA’s requirements (she claimed she did not consent under BIPA to the collection of her information until 2018). She sought statutory damages for “each” violation on behalf of herself and a putative class.

Before the district court, White Castle had moved to dismiss the Complaint and for judgment on the pleadings; both motions were denied.  The district court sided with Plaintiff, holding that “[o]n the facts set forth in the pleadings, White Castle violated Section 15(b) when it first scanned [Plaintiff’s] fingerprint and violated Section 15(d) when it first disclosed her biometric information to a third party.”  The district court also held that under Section 20 of BIPA, Plaintiff could recover for “each violation.”  The court rejected White Castle’s argument that this was an absurd interpretation of the statute not in keeping with legislative intent, commenting that “[i]f the Illinois legislature agrees that this reading of BIPA is absurd, it is of course free to modify the statute” but “it is not the role of a court—particularly a federal court—to rewrite a state statute to avoid a construction that may penalize violations severely.”

White Castle filed an appeal of the district court’s ruling with the Seventh Circuit.  As presented by White Castle, the issue before the Seventh Circuit was “[w]hether, when conduct that allegedly violates BIPA is repeated, that conduct gives rise to a single claim under Sections 15(b) and 15(d) of BIPA, or multiple claims.”

In ruling yesterday that this issue was appropriate for the Illinois Supreme Court, the Seventh Circuit held that “[w]hether a claim accrues only once or repeatedly is an important and recurring question of Illinois law implicating state accrual principles as applied to this novel state statute.  It requires authoritative guidance that only the state’s highest court can provide.”  Here, the accrual issue is dispositive for purposes of Plaintiff’s BIPA claim.  As the Seventh Circuit recognized, “[t]he timeliness of the suit depends on whether a claim under the Act accrued each time [Plaintiff] scanned her fingerprint to access a work computer or just the first time.”

Interestingly, the Seventh Circuit drew a comparison to data privacy litigations outside the context of BIPA, stating that the parties’ “disagreement, framed differently, is whether the Act should be treated like a junk-fax statute for which a claim accrues for each unsolicited fax, [], or instead like certain privacy and reputational torts that accrue only at the initial publication of defamatory material.”

Several BIPA litigations have been stayed pending a ruling from the Seventh Circuit in White Castle, and these cases will remain on pause going into 2022 pending a ruling from the Illinois Supreme Court.  While some had hoped for clarity on this area of BIPA jurisprudence by the end of the year, the Seventh Circuit’s ruling means that this litigation will remain a must-watch privacy case going forward.

Article By Kristin L. Bryan of Squire Patton Boggs (US) LLP

© Copyright 2021 Squire Patton Boggs (US) LLP

In the Coming ‘Metaverse’, There May Be Excitement but There Certainly Will Be Legal Issues

The concept of the “metaverse” has garnered much press coverage of late, addressing such topics as the new appetite for metaverse investment opportunities, a recent virtual land boom, or just the promise of it all, where “crypto, gaming and capitalism collide.”  The term “metaverse,” which comes from Neal Stephenson’s 1992 science fiction novel “Snow Crash,” is generally used to refer to the development of virtual reality (VR) and augmented reality (AR) technologies, featuring a mashup of massive multiplayer gaming, virtual worlds, virtual workspaces, and remote education to create a decentralized wonderland and collaborative space. The grand concept is that the metaverse will be the next iteration of the mobile internet and a major part of both digital and real life.

Don’t feel like going out tonight in the real world? Why not stay “in” and catch a show or meet people/avatars/smart bots in the metaverse?

As currently conceived, the metaverse, “Web 3.0,” would feature a synchronous environment giving users a seamless experience across different realms, even if such discrete areas of the virtual world are operated by different developers. It would boast its own economy where users and their avatars interact socially and use digital assets based in both virtual and actual reality, a place where commerce would presumably be heavily based in decentralized finance, DeFi. No single company or platform would operate the metaverse, but rather, it would be administered by many entities in a decentralized manner (presumably on some open source metaverse OS) and work across multiple computing platforms. At the outset, the metaverse would look like a virtual world featuring enhanced experiences interfaced via VR headsets, mobile devices, gaming consoles and haptic gear that makes you “feel” virtual things. Later, the contours of the metaverse would be shaped by user preferences, monetary opportunities and incremental innovations by developers building on what came before.

In short, the vision is that multiple companies, developers and creators will come together to create one metaverse (as opposed to proprietary, closed platforms) and have it evolve into an embodied mobile internet, one that is open and interoperable and would include many facets of life (i.e., work, social interactions, entertainment) in one hybrid space.

In order for the metaverse to become a reality – that is, to successfully link current gaming and communications platforms with other new technologies into a massive new online destination – many obstacles will have to be overcome, even beyond the hardware, software and integration issues. The legal issues stand out, front and center. Indeed, the concept of the metaverse presents a law school final exam’s worth of legal questions to sort out.  Meanwhile, we are still trying to resolve the myriad legal issues presented by “Web 2.0,” the Internet as we know it today. Adding the metaverse to the picture will certainly make things even more complicated.

At the heart of it is the question of what legal underpinnings we need for the metaverse infrastructure – an infrastructure that will allow disparate developers and studios, e-commerce marketplaces, platforms and service providers to all coexist within one virtual world.  To make it even more interesting, it is envisioned to be an interoperable, seamless experience for shoppers, gamers, social media users or just curious internet-goers armed with wallets full of crypto to spend and virtual assets to flaunt.  Currently, we have some well-established web platforms that are closed digital communities and some emerging ones that are open, each with varying business models that will have to be adapted, in some way, to the metaverse. Simply put, the greater the immersive experience and features and interactions, the more complex the related legal issues will be.

Contemplating the metaverse, these are just a few of the legal issues that come to mind:

  • Personal Data, Privacy and Cybersecurity – Privacy and data security lawyers are already challenged with addressing the global concerns presented by varying international approaches to privacy and growing threats to data security. If the metaverse fulfills the hype and develops into a 3D web-based hub for our day-to-day lives, the volume of data that will be collected will be exponentially greater than the reams of data already collected, and the threats to that data will expand as well. Questions to consider will include:
    • Data and privacy – What’s collected? How sensitive is it? Who owns or controls it? The sharing of data will be the cornerstone of a seamless, interoperable environment where users and their digital personas and assets will be usable and tradeable across the different arenas of the metaverse.  How will the collection, sharing and use of such data be regulated?  What laws will govern the collection of data across the metaverse? The laws of a particular state?  Applicable federal privacy laws? The GDPR or other international regulations? Will there be a single overarching “privacy policy” governing the metaverse under a user and merchant agreement, or will there be varying policies depending on which realm of the metaverse you are in? Could some developers create a more “privacy-focused” experience or would the personal data of avatars necessarily flow freely in every realm? How will children’s privacy be handled and will there be “roped off,” adults-only spaces that require further authentication to enter? Will the concepts that we talk about today – “personal information” or “personally identifiable information” – carry over to a world where the scope of available information expands exponentially as activities are tracked across the metaverse?
    • Cybersecurity: How will cybersecurity be managed in the metaverse? What requirements will apply with respect to keeping data secure? How will regulation or site policies evolve to address deep fakes, avatar impersonation, trolling, stolen biometric data, digital wallet hacks and all of the other cyberthreats that we already face today and are likely to be exacerbated in the metaverse? What laws will apply and how will the various players collaborate in addressing this issue?
  • Technology Infrastructure: The metaverse will be a robust computing-intensive experience, highlighting the importance of strong contractual agreements concerning cloud computing, IoT, web hosting, and APIs, as well as software licenses and hardware agreements, and technology service agreements with developers, providers and platform operators involved in the metaverse stack. Performance commitments and service levels will take on heightened importance in light of the real-time interactions that users will expect. What is a meaningful remedy for a service level failure when the metaverse (or a part of the metaverse) freezes? A credit or other traditional remedy?  Lawyers and technologists will have to think creatively to find appropriate and practical approaches to this issue.  And while SaaS and other “as a service” arrangements will grow in importance, perhaps the entire process will spawn MaaS, or “Metaverse as a Service.”
  • Open Source – Open source, already ubiquitous, promises to play a huge role in metaverse development by allowing developers to improve on what has come before. Whether or not the obligations of common open source licenses will be triggered will depend on the technical details of implementation. It is also possible that new open source licenses will be created to contemplate development for the metaverse.
  • Quantum Computing – Quantum computing promises to dramatically increase the capabilities of computers over the coming years. It will certainly be one of the technologies deployed to provide the computing speed to allow the metaverse to function. However, with the awesome power of quantum computing come threats to certain legacy protections we use today. Passwords and traditional security protocols may become meaningless (requiring the development of post-quantum cryptography that is secure against both quantum and traditional computers). With raw, unchecked quantum computing power, the metaverse may be subject to manipulation and misuse. Regulation of quantum computing, as applied to the metaverse and elsewhere, may be needed.
  • Antitrust: Collaboration is a key to the success of the metaverse, as it is, by definition, a multi-tenant environment. Of course collaboration amongst competitors may invoke antitrust concerns. Also, to the extent that larger technology companies may be perceived as leveraging their position to assert unfair control in any virtual world, there may be additional concerns.
  • Intellectual Property Issues: A host of IP issues will certainly arise, including infringement, licensing (and breaches thereof), IP protection and anti-piracy efforts, patent issues, joint ownership concerns, safe harbors, potential formation of patent cross-licensing organizations (which also may invoke antitrust concerns), trademark and advertising issues, and entertaining new brand licensing opportunities. The scope of content and technology licenses will have to be delicately negotiated with forethought to the potential breadth of the metaverse (e.g., it’s easy to limit a licensee’s rights based on territory, for example, but what about for a virtual world with no borders or some borders that haven’t been drawn yet?). Rightsholders must also determine their particular tolerance level for unauthorized digital goods or creations. One can envision a need for a DMCA-like safe harbor and takedown process for the metaverse. Also, akin to the litigation that sprouted from the use of athletes’ or celebrities’ likenesses (and their tattoos) in videogames, it’s likely that IP issues and rights of publicity disputes will go way up as people’s virtual avatars take on commercial value in ways that their real human selves never did.
  • Content Moderation. Section 230 of the Communications Decency Act (CDA) has been the target of bipartisan criticism for several years now, yet it remains in effect despite its application in some distasteful ways. How will the CDA be applied to the metaverse, where the exchange of third-party content is likely to be even more robust than what we see today on social media?  How will “bad actors” be treated, and what does an account termination look like in the metaverse? Barring a change in the law, the same kinds of issues surrounding user-generated content that arise on today’s social media platforms will persist, and the same Section 230 defenses will be raised.
  • Blockchain, DAOs, Smart Contracts and Digital Assets: Since the metaverse is planned as a single forum with disparate operators and users, the use of a blockchain (or blockchains) would seem to be one solution to act as a trusted, immutable ledger of virtual goods, in-world currencies and identity authentication, particularly when interactions may be somewhat anonymous or between individuals who may or may not trust each other and in the absence of a centralized clearinghouse or administrator for transactions. The use of smart contracts may be pervasive in the metaverse.  Investors or developers may also decide that DAOs (decentralized autonomous organizations) can be useful to crowdsource and fund opportunities within that environment as well.  Overall, a decentralized metaverse with its own discrete economy would feature the creation, sale and holding of sovereign digital assets (and their free use, display and exchange using blockchain-based payment networks within the metaverse). This would presumably give NFTs a role beyond mere digital collectibles and investment opportunities as well as a role for other forms of digital currency (e.g., cryptocurrency, utility tokens, stablecoins, e-money, virtual “in game” money as found in some videogames, or a system of micropayments for virtual goods, services or experiences).  How else will our avatars be able to build a new virtual wardrobe for what is to come?

With this shift to blockchain-based economic structures comes the potential regulatory issues behind digital currencies. How will securities laws view digital assets that retain and form value in the metaverse?  Also, as in life today, visitors to the metaverse must be wary of digital currency schemes and meme coin scams, with regulators not too far behind policing the fraudsters and unlawful actors that will seek opportunities in the metaverse. While regulators and lawmakers are struggling to keep up with the current crop of issues, and despite any progress they may make in that regard, many open issues will remain and new issues will be of concern as digital tokens and currency (and the contracts underlying them) take on new relevance in a virtual world.

Big ideas are always exciting. Watching the metaverse come together is no different, particularly as it all is happening alongside additional innovations surrounding the web, blockchain and cryptocurrency (and, more than likely, updated laws and regulations). However, it’s still early. And we’ll have to see if the current vision of the metaverse will translate into long-term, concrete commercial and civic-minded opportunities for businesses, service providers, developers and individual artists and creators.  Ultimately, these parties will need to sort through many legal issues, both novel and commonplace, before creating and participating in a new virtual world concept that goes beyond the massive multi-user videogame platforms and virtual worlds we have today.

Article By Jeffrey D. Neuburger of Proskauer Rose LLP. Co-authored by  Jonathan Mollod.

© 2021 Proskauer Rose LLP.

Colorado Privacy Act: New Protections for Consumers in the Centennial State

On July 1, 2023, the Colorado Privacy Act (CPA) will go into effect.  Enacted in 2021, it is the third state law generally governing consumer data privacy and was the second such law enacted that year.  If you do business with consumers in Colorado, regardless of your location, you should begin familiarizing yourself with the requirements of the CPA now.  While the CPA is similar to the California Privacy Rights Act (CPRA) and Virginia’s Consumer Data Protection Act (VCDPA), certain elements distinguish the Colorado law from its counterparts.  Unlike the California law, the CPA does not apply to personal data in the employee or business-to-business context.  This client alert provides a breakdown of the general requirements and obligations on businesses and key distinctions from other state data privacy laws.

Covered Businesses and Applicability

Covered Controllers

The CPA applies to any business, called a “controller” under the statute, that “alone, or jointly with others, determines the purposes for and means of processing personal data,” and “conducts business in Colorado or produces or delivers commercial products or services that are intentionally targeted to residents of Colorado” and:

  • Controls or processes the personal data of 100,000 consumers or more during a calendar year; or
  • Derives revenue or receives a discount on the price of goods or services from the sale of personal data and processes or controls the personal data of 25,000 consumers or more.

There are a number of exemptions to the applicability provision that should be considered as part of the analysis.  First, the definition of consumers does not include “individual[s] acting in a commercial or employment context, as a job applicant, or as a beneficiary of someone acting in an employment context.” Second, the Act does not apply to certain types of personal data, defined either by the type of data (such as patient data) or by the statute that regulates the data’s collection and use (such as the Gramm-Leach-Bliley Act).  Third, the Act does not apply to certain types of businesses, such as air carriers, public utilities (as defined by Colorado law), or those subject to Gramm-Leach-Bliley. Notably, there is no revenue threshold requirement, meaning an applicability analysis begins by looking at the number of records processed.

Covered Individuals

To reiterate, the CPA does not apply to employee data; like the VCDPA, it defines a consumer as a Colorado resident acting only in an individual or household context.

Personal Data

The CPA defines personal data as “information that is linked or reasonably linkable to an identified or identifiable individual,” but does not include “de-identified data or publicly available information,” including data “that a controller has a reasonable basis to believe the consumer has lawfully made available to the general public.”  This definition is similar to the VCDPA’s.

Controller and Processor Obligations

If the CPA applies to a controller, then the controller and its processors (persons that process personal data on behalf of a controller) must adhere to a set of obligations.  The CPA sets out an analysis for determining whether a person is acting as a controller or a processor.

Obligations and Duties of Controllers

Under the Act, controllers must:

  • Implement appropriate technical and organizational measures to ensure a level of security appropriate to the risk;
  • Comply with the duty of transparency by providing notice of the sale of personal data and the ability to opt out and by providing “a reasonably accessible, clear, and meaningful privacy notice” that includes:
    • Categories of personal data collected/processed;
    • Purpose(s) of processing;
    • How consumers may exercise rights and appeal controller’s response to consumer’s request;
    • Categories of personal data shared; and
    • Categories of third parties personal data is shared with;
  • Respond to the consumer’s exercise of their rights;
  • Comply with the duty of purpose specification;
  • Comply with the duty of data minimization;
  • Comply with the duty to avoid secondary use;
  • Comply with the duty of care that is appropriate to the volume, scope, and nature of the personal data processed;
  • Comply with the duty to avoid unlawful discrimination;
  • Process sensitive data only with the consent of the consumer. Sensitive data is “(a) personal data revealing racial or ethnic origin, religious beliefs, a mental or physical health condition or diagnosis, sex life or sexual orientation, or citizenship or citizenship status; (b) genetic or biometric data that may be processed for the purpose of uniquely identifying an individual; or (c) personal data from a known child;”
  • Perform data protection assessments before beginning processing activities that present a heightened risk of harm to a consumer – certain situations of targeted advertising or profiling, selling personal data, and processing sensitive data are activities that present a heightened risk of harm; and
  • Engage processors only under a written contract, which shall include the type of personal data processed and other requirements under the CPA.

Obligations of Processors

Under the Act, processors must:

  • Assist controllers in meeting their obligations under the CPA;
  • Adhere to instructions of controller and assist controller in meeting those obligations, including security of processing and data breach notification;
  • Ensure a duty of confidentiality for each person processing personal data; and
  • Engage subcontractors pursuant to a written contract and only after providing the controller an opportunity to object.

Rights of Consumers

Like the VCDPA and CPRA, the CPA includes a suite of rights which consumers may request with respect to their personal data:

  • Right of access;
  • Right to correction;
  • Right to delete;
  • Right to data portability;
  • Right to opt out, specifically including opting out of targeted advertising or the sale of personal data; and
  • Right to appeal, including the right to contact the attorney general if the appeal is denied.

Within forty-five days of receipt of a request, a controller must respond by (a) taking action on the request, (b) extending the time for taking action by up to an additional forty-five days, or (c) not taking action and providing instructions for an appeal.  Information provided in response to a first request within a 12-month period must be provided at no charge to the consumer.  Controllers may implement processes to authenticate the identity of consumers submitting requests.

Enforcement of the CPA

There is no private right of action under the CPA; enforcement authority is delegated to both the Colorado attorney general and district attorneys.  The CPA doubles the cure period granted to controllers under the VCDPA and CPRA to 60 days; however, the entitlement to a cure period will sunset on January 1, 2025.  Under the CPA, a violation is a deceptive trade practice under the Colorado Consumer Protection Act, so while the CPA does not specify a penalty amount, the Colorado Consumer Protection Act provides for a penalty of up to $20,000 per violation.

What’s Next

If the CPA is the first data protection legislation applicable to your organization, the time to prepare your team (IT, marketing, legal) is now.  Delays in implementation are likely and could be costly.


This article was written by Lucy Tyson, Brittney E. Justice and Matthew G. Nielson of Bracewell law firm. For more articles regarding privacy legislation, please click here.

Continuing Effort to Protect National Security Data and Networks

CMMC 2.0 – Simplification and Flexibility of DoD Cybersecurity Requirements

Evolving and increasing threats to U.S. defense data and national security networks have necessitated changes and refinements to the U.S. regulatory requirements intended to protect them.

In 2016, the U.S. Department of Defense (DoD) issued a Defense Federal Acquisition Regulation Supplement (DFARS) clause intended to better protect defense data and networks. In 2017, DoD began issuing a series of memoranda to further enhance protection of defense data and networks via the Cybersecurity Maturity Model Certification (CMMC). In December 2019, the Department of State, Directorate of Defense Trade Controls (DDTC) issued long-awaited guidance governing, in part, the minimum encryption requirements for the storage, transport and/or transmission of controlled unclassified information (CUI) and technical defense information (TDI) otherwise restricted by the ITAR.

DFARS initiated the government’s efforts to protect national security data and networks by imposing specific NIST cyber requirements on all DoD contractors with access to CUI, TDI or a DoD network. Compliance with DFARS was self-certified.

CMMC provided a broad framework to enhance cybersecurity protection for the Defense Industrial Base (DIB). CMMC proposed a verification program to ensure that NIST-compliant cybersecurity protections were in place to protect CUI and TDI residing on DoD and DoD contractors’ networks. Unlike DFARS, CMMC initially required certification of compliance by an independent cybersecurity expert.

The DoD has announced an updated cybersecurity framework, referred to as CMMC 2.0. The announcement comes after a months-long internal review of the proposed CMMC framework. It still could take nine to 24 months for the final rule to take shape. But for now, CMMC 2.0 promises to be simpler to understand and easier to comply with.

Three Goals of CMMC 2.0

Broadly, CMMC 2.0 is similar to the earlier-proposed framework. Familiar elements include a tiered model, required assessments, and contractual implementation. But the new framework is intended to facilitate three goals identified by DoD’s internal review.

  • Simplify the CMMC standard and provide additional clarity on cybersecurity regulations, policy, and contracting requirements.
  • Focus on the most advanced cybersecurity standards and third-party assessment requirements for companies supporting the highest priority programs.
  • Increase DoD oversight of professional and ethical standards in the assessment ecosystem.

Key Changes under CMMC 2.0

The most impactful changes of CMMC 2.0 are

  • A reduction from five to three security levels.
  • Reduced requirements for third-party certifications.
  • Allowances for plans of action and milestones (POA&Ms).

CMMC 2.0 has only three levels of cybersecurity

An innovative feature of CMMC 1.0 had been the five-tiered model that tailored a contractor’s cybersecurity requirements according to the type and sensitivity of the information it would handle. CMMC 2.0 keeps this model, but eliminates the two “transitional” levels in order to reduce the total number of security levels to three. This change also makes it easier to predict which level will apply to a given contractor. At this time, it appears that:

  • Level 1 (Foundational) will apply to federal contract information (FCI) and will be similar to the old first level;
  • Level 2 (Advanced) will apply to controlled unclassified information (CUI) and will mirror NIST SP 800-171 (similar to, but simpler than, the old third level); and
  • Level 3 (Expert) will apply to more sensitive CUI and will be partly based on NIST SP 800-172 (possibly similar to the old fifth level).
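The three-level structure above lends itself to a simple lookup. The sketch below merely restates the mapping described in this article; the function, labels and dictionary keys are hypothetical illustrations, not part of any DoD specification.

```python
# Illustrative only: map the kind of information a contractor handles
# to the CMMC 2.0 level described above.
CMMC_LEVELS = {
    "FCI": (1, "Foundational"),       # similar to the old first level
    "CUI": (2, "Advanced"),           # mirrors NIST SP 800-171
    "sensitive CUI": (3, "Expert"),   # partly based on NIST SP 800-172
}

def required_level(info_type: str) -> str:
    level, name = CMMC_LEVELS[info_type]
    return f"Level {level} ({name})"

print(required_level("CUI"))  # Level 2 (Advanced)
```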

Significantly, CMMC 2.0 focuses on cybersecurity practices, eliminating the few so-called “maturity processes” that had baffled many DoD contractors.

CMMC 2.0 relieves many certification requirements

Another feature of CMMC 1.0 had been the requirement that all DoD contractors undergo third-party assessment and certification. CMMC 2.0 is much less ambitious and allows Level 1 contractors — and even a subset of Level 2 contractors — to conduct only an annual self-assessment. It is worth noting that a subset of Level 2 contractors — those having “critical national security information” — will still be required to seek triennial third-party certification.

CMMC 2.0 reinstitutes POA&Ms

An initial objective of CMMC 1.0 had been that — by October 2025 — contractual requirements would be fully implemented by DoD contractors. There was no option for partial compliance. CMMC 2.0 reinstitutes a regime that will be familiar to many, by allowing for submission of Plans of Action and Milestones (POA&Ms). The DoD still intends to specify a baseline number of non-negotiable requirements. But a remaining subset will be addressable by a POA&M with clearly defined timelines. The announced framework even contemplates waivers “to exclude CMMC requirements from acquisitions for select mission-critical requirements.”

Operational takeaways for the defense industrial base

For many DoD contractors, CMMC 2.0 will not significantly impact their required cybersecurity practices — for FCI, focus on basic cyber hygiene; and for CUI, focus on NIST SP 800-171. But the new CMMC 2.0 framework dramatically reduces the number of DoD contractors that will need third-party assessments. It could also allow contractors to delay full compliance through the use of POA&Ms beyond 2025.

Increased Risk of Enforcement

Regardless of the proposed simplicity and flexibility of CMMC 2.0, DoD contractors need to remain vigilant to meet their respective CMMC 2.0 level cybersecurity obligations.

Immediately preceding the CMMC 2.0 announcement, the U.S. Department of Justice (DOJ) announced a new Civil Cyber-Fraud Initiative on October 6 to combat emerging cyber threats to the security of sensitive information and critical systems. In its announcement, the DOJ advised that it would pursue government contractors who fail to follow required cybersecurity standards.

As Bradley has previously reported in more detail, the DOJ plans to utilize the False Claims Act to pursue cybersecurity-related fraud by government contractors or involving government programs, where entities or individuals put U.S. information or systems at risk by knowingly:

  • Providing deficient cybersecurity products or services
  • Misrepresenting their cybersecurity practices or protocols, or
  • Violating obligations to monitor and report cybersecurity incidents and breaches.

The DOJ also expressed its intent to work closely on the initiative with other federal agencies, subject matter experts and its law enforcement partners throughout the government.

As a result, while CMMC 2.0 will provide some simplicity and flexibility in implementation and operations, U.S. government contractors need to be mindful of their cybersecurity obligations to avoid new heightened enforcement risks.

© 2021 Bradley Arant Boult Cummings LLP

For more articles about cybersecurity, visit the NLR Cybersecurity, Media & FCC section.

Legal Implications of Facebook Hearing for Whistleblowers & Employers – Privacy Issues on Many Levels

On Sunday, October 3rd, Facebook whistleblower Frances Haugen publicly revealed her identity on the CBS television show 60 Minutes. Formerly a member of Facebook’s civic misinformation team, she had previously reported the company to the Securities and Exchange Commission (SEC) for a variety of concerning business practices, including lying to investors and amplifying the January 6th Capitol Hill attack via its platform.

Like all instances of whistleblowing, Ms. Haugen’s actions have a considerable array of legal implications, not only for Facebook, but for the technology sector and for labor practices in general. Especially notable is the fact that Ms. Haugen reportedly signed a confidentiality agreement, sometimes called a non-disclosure agreement (NDA), with Facebook, which may complicate the legal process.

What are the Legal Implications of Breaking a Non-Disclosure Agreement?

After secretly copying thousands of internal documents and memos detailing these practices, Ms. Haugen left Facebook in May, and testified before a Senate subcommittee on October 5th.  Because she revealed information from the documents she took, Facebook could take legal action against Ms. Haugen on the theory that she stole confidential information. Her actions raise questions about the enforceability of non-disclosure and confidentiality agreements when it comes to filing whistleblower complaints.

“Paradoxically, Big Tech’s attack on whistleblower-insiders is often aimed at the whistleblower’s disclosure of so-called confidential inside information of the company.  Yet, the very concerns expressed by the Facebook whistleblower and others inside Big Tech go to the heart of these same allegations—violations of privacy of the consuming public whose own personal data has been used in a way that puts a target on their backs,” said Renée Brooker, a partner with Tycko & Zavareei LLP, a law firm specializing in representing whistleblowers.

Since Ms. Haugen came forward, Facebook has stated that it will not retaliate against her for filing a whistleblower complaint. It is unclear whether such protection from legal action extends to other former employees in situations like Ms. Haugen’s.

“Other employees like Frances Haugen with information about corporate or governmental misconduct should know that they do not have to quit their jobs to be protected. There are over 100 federal laws that protect whistleblowers – each with its own focus on a particular industry, or a particular whistleblower issue,” said Richard R. Renner of Kalijarvi, Chuzi, Newman & Fitch, PC, a long-time employment lawyer.

According to the Wall Street Journal, Ms. Haugen’s confidentiality agreement permits her to disclose information to regulators, but not to share proprietary information, a tricky balancing act to navigate.

“Big Tech’s attempts to silence whistleblowers are antithetical to the principles that underlie federal laws and federal whistleblower programs that seek to ferret out illegal activity,” Ms. Brooker said. “Those reporting laws include federal and state False Claims Acts, and the SEC Whistleblower Program, which typically feature whistleblower rewards and anti-retaliation provisions.”

Legal Implications for Facebook & Whistleblowers

Large tech organizations like Facebook have an overarching influence on digital information and how it is shared with the public. Whistleblowers like Ms. Haugen expose potential information about how companies accused of harmful practices act against their own consumers, but also risk disclosing proprietary business information which may or may not be harmful to consumers.

Some of the most significant concerns Haugen expressed to Congress were the tip of the iceberg according to those familiar with whistleblowing reports on Big Tech. Aside from the burden of proof required for such releases to Congress, the threats of employer retaliation and legal repercussions may prevent internal concerns from coming to light.

“Facebook should not be singled out as a lone actor. Big Tech needs to be held accountable and insiders can and should be encouraged to come forward and be prepared to back up their allegations with hard evidence sufficient to allow governments to conduct appropriate investigations,” Ms. Brooker said.

As concern for cybersecurity and data protection continues to hold the public interest, more whistleblower disclosures that could hold Big Tech and other companies accountable are coming to light.

Haugen’s testimony during the October 5, 2021 congressional hearing revealed a possible expanding definition of media regulation versus consumer censorship. Although these allegations are the latest against a company as large as Facebook, more whistleblowers may continue to come forward with similar accusations, bringing additional implications for privacy, employment law and whistleblower protections.

“The Facebook whistleblower’s revelations have opened the door just a crack on how Big Tech is exploiting American consumers,” Ms. Brooker said.

This article was written by Rachel Popa, Chandler Ford and Jessica Scheck of the National Law Review. To read more articles about privacy, please visit our cybersecurity section.

“I always feel like somebody’s watching me…” The Legalities of Smart Devices and Privacy

“Hey Alexa…”

It’s a simple phrase that makes us feel like we’re living in the future promised us by The Jetsons and Star Trek. Alexa, Siri, Google Assistant—all Artificial Intelligence (AI) designed to make our lives just a little easier. Need a recipe for beef brisket? Just ask Siri. What time is the movie going to start? Ask Alexa. Need some music for your dinner party? Google Assistant has you covered, just ask. But how are Alexa and Siri at your beck and call? The answer is they’re always listening. What does that mean for you? It means that every sound they hear is analyzed and indexed.

Data Privacy

Privacy is the next big frontier in eDiscovery. Data privacy laws are constantly evolving. The General Data Protection Regulation (GDPR) (effective May 25, 2018) is the European Union (EU) and European Economic Area (EEA) law that relates to data protection and privacy. It also applies to the transfer of personal data outside of the EU and EEA. (The University of Michigan has a great timeline of the history of privacy law.) Practically speaking, your personal data is among the most valuable assets you have.

Understanding existing and pending privacy legislation is important. Currently, 3 states, including California, have passed legislation; 9 states, including Pennsylvania, have active bills; and 15 states have introduced legislation that ultimately died or was postponed. At some point there could be federal legislation that governs privacy similar to the GDPR.

Do these devices violate wiretapping laws? Unclear.

An issue worth exploring is whether these devices fall under the purview of wiretapping laws. In Hall-O’Neil v. Amazon, a class action case in the Western District of Washington, Plaintiffs allege that Alexa enabled devices collected and recorded confidential conversations with minors. Hall-O’Neil v. Amazon.com Inc. et al., 2:19CV00910. It is important to keep an eye on these and other similar cases to understand the privacy issues at play with these types of devices.

How Do We Handle Evolving Privacy Issues in the Legal World?

So, what does this mean for legal professionals? One thing to consider is attorney-client privilege issues. With the global pandemic requiring a major shift to working from home you should carefully consider the ramifications of having a virtual assistant in your home while you’re working on client matters—you may be violating attorney-client privilege. Out of an abundance of caution you probably want to unplug your virtual assistant before getting to work.

On the flip side, if someone has a virtual assistant that was present during a key meeting or event, you might want to investigate subpoenaing the recordings, which raises additional issues: who owns the data related to virtual assistants, how long the data is retained, and how you can obtain it.  Law enforcement agencies have been subpoenaing virtual assistant data for years to obtain voice clips and time-stamped logs of user activity in criminal investigations.

What’s the Best Practice?

With so many questions and so few real legal precedents it’s best to proceed with caution with the use of these devices. It’s also very important, from an eDiscovery perspective, to make sure you’re aware of the potential for important data to be found on these devices during the discovery process.

Article by Gretchen E. Moore, Lydia A. Gorba, Lynne Hewitt and Maryann Mahoney of Strassburger, McKenna Gutnick & Gefsky

For more articles on cybersecurity, please visit here.

©2021 Strassburger McKenna Gutnick & Gefsky. National Law Review, Volume XI, Number 244

Privacy Tip #253 – Unemployment Fraud Claims Are Skyrocketing—What to Do if You Are a Victim?

I have received many questions this week on what to do if you are the victim of a fraudulent unemployment claim. It is unbelievable how many people I know who have become victims—yes—including myself.

It is disturbing that all of these fraudulent unemployment claims include the use of our full Social Security numbers. Also disturbing is the fact that even if we have a fraud alert or security freeze on our credit accounts, those measures don’t necessarily alert us when a fraudulent unemployment claim is filed in our name.

The Federal Trade Commission (FTC) has recognized this rampant problem and issued tips this week to provide assistance to consumers who have been victimized by these fraudulent unemployment claim scams. The tips can be accessed here.


Copyright © 2020 Robinson & Cole LLP. All rights reserved.
For more articles on privacy, visit the National Law Review Communications, Media & Internet section.