Will an Act of War Destroy Your Cyberinsurance Coverage?

Cyberinsurance spurs many complaints from US businesses: costs are skyrocketing, retentions (deductibles) are rising quickly, and insurance companies push their own panel lawyers on customers despite customers’ existing relationships with counsel. Some policies exclude ransomware or email fraud altogether.

But news of significant hacks drives more companies into the cyberinsurance market despite the costs. According to Bloomberg, cyberinsurance prices rose nearly 100% in 2021 and keep climbing. Travelers Insurance, working to justify the leaping costs of its products, lists the following reasons for higher cybersecurity prices: a wave of ransomware, rising breach response costs (from forensic and legal experts to ransom payments and regulatory fines), increasing tech complexity and budgets, inadequate cybersecurity hygiene (which is why better controls can now lead to lower insurance prices), lack of advance response plans, and business interruption expenses. Shutting down business operations may be a way for criminals to force ransom payments, but it also makes business interruption an expensive risk to insure against, and all companies are paying for it.

However, for the price of protection, you would expect your insurance company to pay to remediate a properly reported cyberattack. Property insurers have long excluded “acts of war” from insurable damage that would receive payments. Most cyberinsurance policies have similar exclusions. This leads insurance customers to wonder, in a world where hackers and ransomware gangs from Russia and Ukraine initiate a significant percentage of cyberattacks, when would those attacks be considered “acts of war” during a real shooting war? If your company is smacked with ransomware from a Russian crew associated with the Kremlin, will your insurance company exclude the costs from your cyberinsurance policy as an act of war?

Lloyd’s of London just released a set of new exclusion clauses addressing cyber war. These clauses are for underwriters to consider placing in Lloyd’s insurance contracts, and they “have been drafted to provide Lloyd’s syndicates and their (re)insureds (and brokers) with options in respect of the level of cover provided for cyber operations between states which are not excluded by the definition of war, cyber war or cyber operations which have a major detrimental impact on a state.” Lloyd’s specifies that the “act of war” exemption language applies to China, France, Japan, Russia, the U.K. and the U.S. The new clauses supply underwriters with extensive leeway to refuse to pay claims. Importantly, Lloyd’s can decide that an attack was an act of war even if the attackers do not declare themselves. Pending any government attribution of an attacker, Lloyd’s can, through reasonable inference, attribute an attack to state activities, and therefore treat it as falling within the “act of war” exclusion.

All hope is not lost for businesses relying on cyberinsurance. Courts tend to hold insurers to high standards when they try to avoid paying claims under broadly defined exclusions. For example, earlier this year the Superior Court of New Jersey ruled that insurers can’t use a nation-state “act of war” cyber-exclusion to avoid covering more than a billion dollars in damages that Merck claimed it suffered from the NotPetya cyberattack in 2017. According to Insurance Journal, “The insurers had tried to use the exclusions to avoid paying out, citing the fact the NotPetya malware was attributed to Russia and was meant to be deployed to disrupt and destabilize Ukraine. The malware wound up affecting thousands of companies worldwide. . . The cyber attack also attracted the attention of regulatory scrutiny of so-called ‘silent cyber’ exposure in all policies.” The court “unhesitatingly” ruled that war exclusions did not apply in this instance.

So an attack from Russian hackers in 2021 may be covered under most cyberinsurance policies, but what about an attack in March of 2022? Does the state of hostility between the U.S. and Russia – in which Putin has claimed that sanctions against Russia and providing arms to Ukraine are acts of war – mean that ransomware attacks from the same Russian hackers may be considered acts of war? For example, the Conti ransomware gang officially announced its full support of the Russian government after the invasion of Ukraine and threatened to use all possible resources to attack both Ukraine and Western countries that might support Ukraine. US critical infrastructure businesses could easily be direct victims of attacks from Russians supporting the Kremlin, or indirect victims of attacks aimed at Ukraine that, like NotPetya, spread through connected networks. Where would that leave an affected company if its insurance provider refuses to pay, claiming an “act of war” exclusion?

We simply don’t know how many insurance companies will use these policy exclusions, or whether U.S. courts will allow them to do so. But each of us should check our cyberinsurance policies for exclusions that could be triggered by current international conflicts.

Beyond insurance, international cyberattacks have straddled the line between standard crime and acts of international state hostility. Since the internet connected our world electronically, our societies have not set rules about how public and private actors are allowed to behave toward each other online. Brad Smith, the President of Microsoft, has called for a Digital Geneva Convention, so that the nations of the world can agree on which acts of electronic aggression are acceptable in war and which should themselves be considered acts of war. The current crisis, in which a long-established state has been invaded without provocation, may be the catalyst to discuss digital hostility and to set some rules around what kinds of interactions the international community will tolerate.

For now, check your cyberinsurance policies.  For posterity, push our politicians to create baseline rules for the digital world.  We have promulgated the law of the sea and the law of space. We should create a law of cyberspace as well.

Copyright © 2022 Womble Bond Dickinson (US) LLP All Rights Reserved.
For more articles on cyberinsurance for your workplace, visit the NLR Cybersecurity, Media & FCC section.

EDPB on Dark Patterns: Lessons for Marketing Teams

“Dark patterns” are becoming the target of EU data protection authorities, and the new guidelines of the European Data Protection Board (EDPB) on “dark patterns in social media platform interfaces” confirm their focus on such practices. While they are built around examples from social media platforms (real or fictitious), these guidelines contain lessons for all websites and applications. The bad news for marketers: the EDPB doesn’t like it when dry legal texts and interfaces are made catchier or more enticing.

To illustrate, in a section of the guidelines regarding the selection of an account profile photo, the EDPB considers the example of a “help/information” prompt saying “No need to go to the hairdresser’s first. Just pick a photo that says ‘this is me.’” According to the EDPB, such a practice “can impact the final decision made by users who initially decided not to share a picture for their account” and thus makes consent invalid under the General Data Protection Regulation (GDPR). Similarly, the EDPB criticises an extreme example of a cookie banner with a humorous link to a bakery cookies recipe that incidentally says, “we also use cookies”, stating that “users might think they just dismiss a funny message about cookies as a baked snack and not consider the technical meaning of the term ‘cookies.’” The EDPB even suggests that the data minimisation principle, and not security concerns, should ultimately guide an organisation’s choice of which two-factor authentication method to use.

Do these new guidelines reflect privacy paranoia or common sense? The answer should lie somewhere in between, but the whole document (64 pages long) in our view suggests an overly strict approach, one that we hope will move closer to common sense as a result of a newly started public consultation process.

Let us take a closer look at what useful lessons – or warnings – can be drawn from these new guidelines.

What are “dark patterns” and when are they unlawful?

According to the EDPB, dark patterns are “interfaces and user experiences […] that lead users into making unintended, unwilling and potentially harmful decisions regarding the processing of their personal data” (p. 2). They “aim to influence users’ behaviour and can hinder their ability to effectively protect their personal data and make conscious choices.” The risk associated with dark patterns is higher for websites or applications meant for children, as “dark patterns raise additional concerns regarding potential impact on children” (p. 8).

While the EDPB takes a strongly negative view of dark patterns in general, it recognises that dark patterns do not automatically lead to an infringement of the GDPR. The EDPB acknowledges that “[d]ata protection authorities are responsible for sanctioning the use of dark patterns if these breach GDPR requirements” (emphasis ours; p. 2). Nevertheless, the EDPB guidance strongly links the concept of dark patterns with the data protection by design and by default principles of Art. 25 GDPR, suggesting that disregard for those principles could lead to a presumption that the language or a practice in fact creates a “dark pattern” (p. 11).

The EDPB refers here to its Guidelines 4/2019 on Article 25 Data Protection by Design and by Default and in particular to the following key principles:

  • “Autonomy – Data subjects should be granted the highest degree of autonomy possible to determine the use made of their personal data, as well as autonomy over the scope and conditions of that use or processing.
  • Interaction – Data subjects must be able to communicate and exercise their rights in respect of the personal data processed by the controller.
  • Expectation – Processing should correspond with data subjects’ reasonable expectations.
  • Consumer choice – The controllers should not “lock in” their users in an unfair manner. Whenever a service processing personal data is proprietary, it may create a lock-in to the service, which may not be fair, if it impairs the data subjects’ possibility to exercise their right of data portability in accordance with Article 20 GDPR.
  • Power balance – Power balance should be a key objective of the controller-data subject relationship. Power imbalances should be avoided. When this is not possible, they should be recognised and accounted for with suitable countermeasures.
  • No deception – Data processing information and options should be provided in an objective and neutral way, avoiding any deceptive or manipulative language or design.
  • Truthful – the controllers must make available information about how they process personal data, should act as they declare they will and not mislead data subjects.”

Is data minimisation compatible with the use of SMS two-factor authentication?

One of the EDPB’s positions, while grounded in the principle of data minimisation, undercuts a security practice that has grown significantly over the past few years. In effect, the EDPB seems to question the validity under the GDPR of requests for phone numbers for two-factor authentication where e-mail tokens would theoretically be possible:

“30. To observe the principle of data minimisation, [organisations] are required not to ask for additional data such as the phone number, when the data users already provided during the sign-up process are sufficient. For example, to ensure account security, enhanced authentication is possible without the phone number by simply sending a code to users’ email accounts or by several other means.
31. Social network providers should therefore rely on means for security that are easier for users to re-initiate. For example, the [organisation] can send users an authentication number via an additional communication channel, such as a security app, which users previously installed on their mobile phone, but without requiring the users’ mobile phone number. User authentication via email addresses is also less intrusive than via phone number because users could simply create a new email address specifically for the sign-up process and utilise that email address mainly in connection with the Social Network. A phone number, however, is not that easily interchangeable, given that it is highly unlikely that users would buy a new SIM card or conclude a new phone contract only for the reason of authentication.”
(emphasis ours; p. 15)

The EDPB also appears to be highly critical of phone-based verification in the context of registration “because the email address constitutes the regular contact point with users during the registration process” (p. 15).

This position is unfortunate, as it suggests that data minimisation may preclude controllers from even assessing which method of two-factor authentication – in this case, e-mail versus SMS one-time passwords – better suits their requirements, taking into consideration the different security benefits and drawbacks of the two methods. The EDPB’s reasoning could even be used to exclude any form of stronger two-factor authentication, as additional factors inevitably require separate processing (e.g., a phone number or third-party account linking for some app-based authentication methods).

For these reasons, organisations should view this aspect of the new EDPB guidelines with a healthy dose of skepticism. It likewise will be important for interested stakeholders to participate in the consultation to explain the security benefits of using phone numbers to keep the “two” in two-factor authentication.
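
To make the EDPB’s comparison concrete, below is a minimal sketch of the kind of email-token authentication the guidelines describe: a one-time code sent to the address already collected at sign-up, with no phone number processed at all. It is an illustration under assumed design choices (six-digit codes, five-minute expiry, single use), not a complete or recommended implementation, and the delivery step is only stubbed out.

import hmac
import secrets
import time

# Minimal sketch of email-token two-factor authentication. The user's email
# address is assumed to have been collected at sign-up already, so no new
# identifier (such as a phone number) is processed -- the EDPB's point.

CODE_TTL_SECONDS = 300                       # codes expire after five minutes
_pending: dict[str, tuple[str, float]] = {}  # email -> (code, time issued)

def issue_code(email: str) -> str:
    """Generate a six-digit one-time code and record when it was issued."""
    code = f"{secrets.randbelow(10**6):06d}"
    _pending[email] = (code, time.time())
    # A real system would hand the code to a mail provider here (smtplib,
    # a transactional-mail API, etc.); printing stands in for delivery.
    print(f"To {email}: your sign-in code is {code}")
    return code

def verify_code(email: str, submitted: str) -> bool:
    """Check a submitted code: single use, time-limited, constant-time compare."""
    entry = _pending.pop(email, None)  # removed on first attempt (single use)
    if entry is None:
        return False
    code, issued_at = entry
    if time.time() - issued_at > CODE_TTL_SECONDS:
        return False
    return hmac.compare_digest(code, submitted)

sent = issue_code("user@example.com")
assert verify_code("user@example.com", sent)

Whatever one thinks of the EDPB’s position, the sketch shows why the comparison matters: the email flow processes no new identifier, while an SMS flow would require collecting and retaining a phone number.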

Consent withdrawal: same number of clicks?

Recent decisions by EU regulators (notably two decisions by the French authority, the CNIL) have led to speculation about whether EU rules effectively require website operators to make it possible for data subjects to withdraw consent to all cookies with one single click, just as most websites make it possible to give consent through a single click. The authorities themselves have not stated that this is unequivocally required, although privacy activists have notably filed complaints against hundreds of websites, many of them for not including a “reject all” button on their cookie banner.

The EDPB now appears to side with the privacy activists in this respect, stating that “consent cannot be considered valid under the GDPR when consent is obtained through only one mouse-click, swipe or keystroke, but the withdrawal takes more steps, is more difficult to achieve or takes more time” (p. 14).

Operationally, however, it seems impossible to comply with a “one-click withdrawal” standard in absolute terms. Just pulling up settings after registration or after the first visit to a website will always require an extra click, purely to open those settings. We expect this issue to be examined by the courts eventually.
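
For what it is worth, parity is straightforward at the data-model level; the difficulty the EDPB’s standard raises is in the interface, not the storage. The sketch below shows a consent record in which granting and withdrawing consent are symmetric single operations. The category names and method shapes are invented for illustration and are not taken from the EDPB text.

from dataclasses import dataclass, field

# Sketch of a consent record where giving and withdrawing consent are
# symmetric one-step operations. Category names are invented examples.

CATEGORIES = ("analytics", "advertising", "personalisation")

@dataclass
class ConsentRecord:
    choices: dict[str, bool] = field(
        default_factory=lambda: {c: False for c in CATEGORIES}  # off by default
    )

    def accept_all(self) -> None:
        """One action: consent to every category."""
        self.choices = {c: True for c in CATEGORIES}

    def withdraw_all(self) -> None:
        """One action: withdraw every consent, mirroring accept_all()."""
        self.choices = {c: False for c in CATEGORIES}

record = ConsentRecord()
record.accept_all()    # one click to consent...
record.withdraw_all()  # ...and one symmetric click to withdraw
print(record.choices)

Even with a symmetric model like this, the user must still navigate to wherever the withdrawal control is exposed, which is where the extra click discussed above creeps in.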

Is creative wording indicative of a “dark pattern”?

The EDPB’s guidelines contain several examples of wording that is intended to convince the user to take a specific action.

The photo example mentioned in the introduction above is an illustration, but other (likely fictitious) examples include the following:

  • For sharing geolocation data: “Hey, a lone wolf, are you? But sharing and connecting with others help make the world a better place! Share your geolocation! Let the places and people around you inspire you!” (p.17)
  • To prompt a user to provide a self-description: “Tell us about your amazing self! We can’t wait, so come on right now and let us know!” (p. 17)

The EDPB criticises the language used, stating that it is “emotional steering”:

“[S]uch techniques do not cultivate users’ free will to provide their data, since the prescriptive language used can make users feel obliged to provide a self-description because they have already put time into the registration and wish to complete it. When users are in the process of registering to an account, they are less likely to take time to consider the description they give or even if they would like to give one at all. This is particularly the case when the language used delivers a sense of urgency or sounds like an imperative. If users feel this obligation, even when in reality providing the data is not mandatory, this can have an impact on their “free will”” (pp. 17-18).

Similarly, in a section about account deletion and deactivation, the EDPB criticises interfaces that highlight “only the negative, discouraging consequences of deleting their accounts,” e.g., “you’ll lose everything forever,” or “you won’t be able to reactivate your account” (p. 55). The EDPB even criticises interfaces that preselect deactivation or pause options over delete options, considering that “[t]he default selection of the pause option is likely to nudge users to select it instead of deleting their account as initially intended. Therefore, the practice described in this example can be considered as a breach of Article 12 (2) GDPR since it does not, in this case, facilitate the exercise of the right to erasure, and even tries to nudge users away from exercising it” (p. 56). This, combined with the EDPB’s aversion to confirmation requests (see section 5 below), suggests that the EDPB is ignoring the risk that a data subject might opt for deletion without fully recognizing the consequences, i.e., loss of access to the deleted data.

The EDPB’s approach suggests that any effort to woo users into giving more data or leaving data with the organisation will be viewed as harmful by data protection authorities. Yet data protection rules are there to prevent abuse and protect data subjects, not to render all marketing techniques illegal.

In this context, the guidelines should in our opinion be viewed as an invitation to re-examine marketing techniques to ensure that they are not too pushy – in the sense that users would in effect truly be pushed into a decision regarding personal data that they would not otherwise have made. Marketing techniques are not per se unlawful under the GDPR but may run afoul of GDPR requirements in situations where data subjects are misled or robbed of their choice.

Other key lessons for marketers and user interface designers

  • Avoid continuous prompting: One of the issues regularly highlighted by the EDPB is “continuous prompting”, i.e., prompts that appear again and again during a user’s experience on a platform. The EDPB suggests that this creates fatigue, leading the user to “give in,” i.e., by “accepting to provide more data or to consent to another processing, as they are wearied from having to express a choice each time they use the platform” (p. 14). Examples given by the EDPB include the SMS two-factor authentication popup mentioned above, as well as “import your contacts” functionality. Outside of social media platforms, the main example for most organisations is their cookie policy (so this position by the EDPB reinforces the need to manage cookie banners properly). In addition, newsletter popups and popups about “how to get our new report for free by filling out this form” are frequent on many digital properties. While popups can be effective ways to get more subscribers or more data, the EDPB guidance suggests that regulators will consider such practices questionable from a data protection perspective.
  • Ensure consistency or a justification for confirmation steps: The EDPB highlights the “longer than necessary” dark pattern at several places in its guidelines (in particular pp. 18, 52, & 57), with illustrations of confirmation pop-ups that appear before a user is allowed to select a more privacy-friendly option (and while no such confirmation is requested for more privacy-intrusive options). Such practices are unlawful according to the EDPB. This does not mean that confirmation pop-ups are always unlawful – just that you need to have a good justification for using them where you do.
  • Have a good reason for preselecting less privacy-friendly options: Because the GDPR requires not only data protection by design but also data protection by default, make sure that you are able to justify an interface in which a more privacy-intrusive option is selected by default – or better yet, don’t make any preselection. The EDPB calls preselection of privacy-intrusive options “deceptive snugness” (“Because of the default effect which nudges individuals to keep a pre-selected option, users are unlikely to change these even if given the possibility” p. 19).
  • Make all privacy settings available in all platforms: If a user is asked to make a choice during registration or upon his/her first visit (e.g., for cookies, newsletters, sharing preferences, etc.), ensure that those settings can all be found easily later on, from a central privacy settings page if possible, and alongside all data protection tools (such as tools for exercising a data subject’s right to access his/her data, to modify data, to delete an account, etc.). Also make sure that all such functionality is available not only on a desktop interface but also for mobile devices and across all applications. The EDPB illustrates this point by criticising the case where an organisation has a messaging app that does not include the same privacy statement and data subject request tools as the main app (p. 27).
  • Be careful with general language such as “Your data might be used to improve our services”: It is common for privacy statements to say that personal data (e.g., customer feedback) “can” or “may be used” to improve an organisation’s products and services. According to the EDPB, the word “services” is likely to be “too general” to be viewed as “clear,” and it is “unclear how data will be processed for the improvement of services.” The use of the conditional tense in the example (“might”) also “leaves users unsure whether their data will be used for the processing or not” (p. 25). The EDPB’s stance in this respect confirms a position taken by EU regulators in previous guidance on transparency and serves as a reminder to tell data subjects specifically how their data will be used.
  • Ensure linguistic consistency: If your website or app is available in more than one language, ensure that all data protection notices and tools are available in those languages as well and that the language choice made on the main interface is automatically taken into account on the data-related pages (pp. 25-26).

Best practices according to the EDPB

Finally, the EDPB highlights some other “best practices” throughout its guidelines. We have combined them below for easier review:

  • Structure and ease of access:
    • Shortcuts: Links to information, actions, or settings that can be of practical help to users to manage their data and data protection settings should be available wherever they relate to information or experience (e.g., links redirecting to the relevant parts of the privacy policy; in the case of a data breach communication to users, to provide users with a link to reset their password).
    • Data protection directory: For easy navigation through the different sections of the menu, provide users with an easily accessible page from which all data protection-related actions and information are accessible. This page could be found in the organisation’s main navigation menu, the user account, through the privacy policy, etc.
    • Privacy Policy Overview: At the start/top of the privacy policy, include a collapsible table of contents with headings and sub-headings that shows the different passages the privacy notice contains. Clearly identified sections allow users to quickly identify and jump to the section they are looking for.
    • Sticky navigation: While consulting a page related to data protection, the table of contents could be constantly displayed on the screen allowing users to quickly navigate to relevant content thanks to anchor links.
  • Transparency:
    • Organisation contact information: The organisation’s contact address for addressing data protection requests should be clearly stated in the privacy policy. It should be present in a section where users can expect to find it, such as a section on the identity of the data controller, a rights related section, or a contact section.
    • Reaching the supervisory authority: Stating the specific identity of the EU supervisory authority and including a link to its website or the specific website page for lodging a complaint is another EDPB recommendation. This information should be present in a section where users can expect to find it, such as a rights-related section.
    • Change spotting and comparison: When changes are made to the privacy notice, make previous versions accessible with the date of release and highlight any changes.
  • Terminology & explanations:
    • Coherent wording: Across the website, the same wording and definition is used for the same data protection concepts. The wording used in the privacy policy should match that used on the rest of the platform.
    • Providing definitions: When using unfamiliar or technical words or jargon, providing a definition in plain language will help users understand the information provided to them. The definition can be given directly in the text when users hover over the word and/or be made available in a glossary.
    • Explaining consequences: When users want to activate or deactivate a data protection control, or give or withdraw their consent, inform them in a neutral way of the consequences of such action.
    • Use of examples: In addition to providing mandatory information that clearly and precisely states the purpose of processing, offering specific data processing examples can make the processing more tangible for users.
  • Contrasting Data Protection Elements: Making data protection-related elements or actions visually striking in an interface that is not directly dedicated to the matter helps readability. For example, when posting a public message on the platform, controls for geolocation should be directly available and clearly visible.
  • Data Protection Onboarding: Just after the creation of an account, include data protection points within the onboarding experience for users to discover and set their preferences seamlessly. This can be done by, for example, inviting them to set their data protection preferences after adding their first friend or sharing their first post.
  • Notifications (including data breach notifications): Notifications can be used to raise awareness of users of aspects, changes, or risks related to personal data processing (e.g., when a data breach occurs). These notifications can be implemented in several ways, such as through inbox messages, pop-in windows, fixed banners at the top of the webpage, etc.

Next steps and international perspectives

These guidelines (available online) are subject to public consultation until 2 May 2022, so it is possible they will be modified as a result of the consultation and, we hope, improved to reflect a more pragmatic view of data protection that balances data subjects’ rights, security, and operational business needs. If you wish to contribute to the public consultation, note that the EDPB publishes feedback it receives (as a result, we have occasionally submitted feedback on behalf of clients wishing to remain anonymous).

Irrespective of the outcome of the public consultation, the guidelines are guaranteed to have an influence on the approach of EU data protection authorities in their investigations. From this perspective, it is better to be forewarned – and to have legal arguments at your disposal if you wish to adopt an approach that deviates from the EDPB’s position.

Moreover, these guidelines come at a time when the United States Federal Trade Commission (FTC) is also concerned with dark patterns. The FTC recently published an enforcement policy statement on the matter in October 2021. Dark patterns are also being discussed at the Organisation for Economic Cooperation and Development (OECD). International dialogue can be helpful if conversations about desired policy also consider practical solutions that can be implemented by businesses and reflect a desirable user experience for data subjects.

Organisations should consider evaluating their own techniques to encourage users to go one way or another and document the justification for their approach.

© 2022 Keller and Heckman LLP

Google to Launch Google Analytics 4 in an Attempt to Address EU Privacy Concerns

On March 16, 2022, Google announced the launch of its new analytics solution, “Google Analytics 4.” Google Analytics 4 aims, among other things, to address recent developments in the EU regarding the use of analytics cookies and data transfers resulting from such use.

Background

On August 17, 2020, the non-governmental organization None of Your Business (“NOYB”) filed 101 identical complaints with 30 European Economic Area data protection authorities (“DPAs”) regarding the use of Google Analytics by various companies. The complaints focused on whether the transfer of EU personal data to Google in the U.S. through the use of cookies is permitted under the EU General Data Protection Regulation (“GDPR”), following the Schrems II judgment of the Court of Justice of the European Union. Following these complaints, the French and Austrian DPAs ruled that the transfer of EU personal data from the EU to the U.S. through the use of the Google Analytics cookie is unlawful.

Google’s New Solution

According to Google’s press release, Google Analytics 4 “is designed with privacy at its core to provide a better experience for both our customers and their users. It helps businesses meet evolving needs and user expectations, with more comprehensive and granular controls for data collection and usage.”

The most impactful change from an EU privacy standpoint is that Google Analytics 4 will no longer store IP addresses, thereby limiting the data transfers resulting from the use of Google Analytics that were under scrutiny in the EU following the Schrems II ruling. It remains to be seen whether this change will ease EU DPAs’ concerns about Google Analytics’ compliance with the GDPR.

Google’s previous analytics solution, Universal Analytics, will no longer be available beginning July 2023. In the meantime, companies are encouraged to transition to Google Analytics 4.

Read Google’s press release.

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

Chinese APT41 Attacking State Networks

Although we are receiving frequent alerts from CISA and the FBI about the potential for increased cyber threats coming out of Russia, China continues its cyber threat activity through APT41, which has been linked to China’s Ministry of State Security. According to Mandiant, APT41 has launched a “deliberate campaign targeting U.S. state governments” and has successfully attacked at least six state government networks by exploiting various vulnerabilities, including Log4j.

According to Mandiant, even when the Chinese-based hackers are kicked out of state government networks, they repeat the attack weeks later and keep trying to get into the same networks via different vulnerabilities (a “re-compromise”). One successful avenue has been a zero-day vulnerability in USAHerds, software that state agriculture agencies use to monitor livestock. When the intruders exploit the USAHerds vulnerability to get into a network, they can leverage the intrusion to move to other parts of the network and access and steal information, including personal information.
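
For defenders, the Log4j vector Mandiant names at least leaves a recognizable fingerprint: exploitation attempts typically place a ${jndi:...} lookup string in logged request data. The sketch below scans a log file for that basic pattern; it is only illustrative, since real payloads are often obfuscated (e.g., nested ${lower:...} lookups), and the file path shown is a placeholder.

import re
from pathlib import Path

# Simplified scan of a web-server log for Log4Shell-style JNDI lookup
# strings. Real attempts are frequently obfuscated, so a single regex is
# a starting point for triage, not a complete detection rule.

JNDI_PATTERN = re.compile(r"\$\{jndi:(?:ldaps?|rmi|dns|iiop|http)://", re.IGNORECASE)

def scan_log(path: str) -> list[tuple[int, str]]:
    """Return (line number, line) pairs containing a JNDI lookup string."""
    hits = []
    lines = Path(path).read_text(errors="ignore").splitlines()
    for lineno, line in enumerate(lines, start=1):
        if JNDI_PATTERN.search(line):
            hits.append((lineno, line.strip()))
    return hits

# Usage (the log path is a placeholder for illustration):
for lineno, line in scan_log("/var/log/nginx/access.log"):
    print(f"possible Log4Shell probe at line {lineno}: {line}")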

Mandiant’s outlook on these attacks is sobering:

“APT41’s recent activity against U.S. state governments consists of significant new capabilities, from new attack vectors to post-compromise tools and techniques. APT41 can quickly adapt their initial access techniques by re-compromising an environment through a different vector, or by rapidly operationalizing a fresh vulnerability. The group also demonstrates a willingness to retool and deploy capabilities through new attack vectors as opposed to holding onto them for future use. APT41 exploiting Log4J in close proximity to the USAHerds campaign showed the group’s flexibility to continue targeting U.S. state governments through both cultivated and co-opted attack vectors. Through all the new, some things remain unchanged: APT41 continues to be undeterred by the U.S. Department of Justice (DOJ) indictment in September 2020.”

Both Russia and China continue to conduct cyber-attacks against both private and public networks in the U.S., and there is no indication that the attacks will subside anytime soon.

Copyright © 2022 Robinson & Cole LLP. All rights reserved.

Fitness App Agrees to Pay $56 Million to Settle Class Action Alleging Dark Pattern Practices

On February 14, 2022, Noom Inc., a popular weight loss and fitness app, agreed to pay $56 million, and provide an additional $6 million in subscription credits to settle a putative class action in New York federal court. The class is seeking conditional certification and has urged the court to preliminarily approve the settlement.

The suit was filed in May 2020 when a group of Noom users alleged that Noom “actively misrepresents and/or fails to accurately disclose the true characteristics of its trial period, its automatic enrollment policy, and the actual steps customers need to follow in attempting to cancel a 14-day trial and avoid automatic enrollment.” More specifically, users alleged that Noom engaged in an unlawful auto-renewal subscription business model by luring customers in with the opportunity to “try” its programs, then imposing significant barriers to cancellation (e.g., only allowing customers to cancel their subscriptions through their virtual coach), resulting in customers paying a nonrefundable advance lump sum for up to eight (8) months at a time. Under the proposed settlement, Noom will have to substantially enhance its auto-renewal disclosures, require customers to take a separate action (e.g., check a box or provide a digital signature) to accept auto-renewal, and provide a button on the customer’s account page for easier cancellation.

Regulators at the federal and state level have recently made clear their focus on enforcement actions against “dark patterns.” We previously summarized the FTC’s enforcement policy statement from October 2021 warning companies against using dark patterns that trick consumers into subscription services. More recently, several state attorneys general (e.g., in Indiana, Texas, the District of Columbia, and Washington State) announced their commitment to ramp up enforcement against “dark patterns” used to obtain consumers’ location data.

Article By: Privacy and Cybersecurity Practice Group at Hunton Andrews Kurth

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

FBI and DHS Warn of Russian Cyberattacks Against Critical Infrastructure

U.S. officials this week warned government agencies, cybersecurity personnel, and operators of critical infrastructure that Russia might launch cyber-attacks against Ukrainian and U.S. networks at the same time it launches its military offensive against Ukraine.

The FBI and the Department of Homeland Security (DHS) warned law enforcement, military personnel, and operators of critical infrastructure to be vigilant in searching for Russian activity on their networks and to report any suspicious activity, as they are seeing an increase in Russian scanning of U.S. networks. U.S. officials are also seeing increased disinformation and misinformation generated by Russia about Ukraine.

The FBI and DHS urged timely patching of systems and reporting of any Russian activity on networks, so U.S. officials can assess the threat, assist with a response, and prevent further activity.

For more information on cyber incident reporting, click here.

Even though a war may be starting halfway across the world, Russia’s cyber capabilities are global. Russia has the capability to bring us all into its war by attacking U.S. government agencies and companies. We are all an important part of preventing attacks and of keeping others from becoming victims of Russia’s attacks. Closely watch your network for any suspicious activity and report it, no matter how small you think it is.

Copyright © 2022 Robinson & Cole LLP. All rights reserved.

Texas AG Sues Meta Over Collection and Use of Biometric Data

On February 14, 2022, Texas Attorney General Ken Paxton brought suit against Meta, the parent company of Facebook and Instagram, over the company’s collection and use of biometric data. The suit alleges that Meta collected and used Texans’ facial geometry data in violation of the Texas Capture or Use of Biometric Identifier Act (“CUBI”) and the Texas Deceptive Trade Practices Act (“DTPA”). The lawsuit is significant because it represents the first time the Texas Attorney General’s Office has brought suit under CUBI.

The suit focuses on Meta’s “tag suggestions” feature, which the company has since retired. The feature scanned faces in users’ photos and videos to suggest “tagging” (i.e., identifying by name) users who appeared in the photos and videos. In the complaint, Attorney General Ken Paxton alleged that Meta collected and analyzed individuals’ facial geometry data (which constitutes biometric data under CUBI) without their consent, shared the data with third parties, and failed to destroy the data in a timely manner, all in violation of CUBI and the DTPA. CUBI regulates the collection and use of biometric data for commercial purposes, and the DTPA prohibits false, misleading, or deceptive acts or practices in the conduct of any trade or commerce.

Among other forms of relief, the complaint seeks an injunction enjoining Meta from violating these laws, a $25,000 civil penalty for each violation of CUBI, and a $10,000 civil penalty for each violation of the DTPA. The suit follows Facebook’s $650 million class-action settlement over alleged violations of Illinois’ Biometric Privacy Act and the company’s discontinuance of the tag suggestions feature last year.

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

As the California Attorney General Focuses on Loyalty Programs, What Do Companies Need to Remember?

The California attorney general (AG) celebrated Data Privacy Day by conducting an “investigative sweep” of the loyalty programs of retailers, supermarkets, home improvement stores, travel companies, and food service companies, and by sending notices of non-compliance to businesses that the AG’s office believes might not be fully compliant with the CCPA. As the AG focuses its attention on loyalty programs, the following is a reminder of the requirements under the CCPA.

What is a loyalty program?

Loyalty programs are structured in a variety of different ways. Some programs track dollars spent by consumers; others track products purchased. Some programs are free to participate in; others require consumers to purchase membership. Some programs offer consumers additional products; other programs offer prizes, money, or products from third parties. Although neither the CCPA nor the regulations implementing the CCPA define a “loyalty program,” as a practical matter most, if not all, loyalty programs have two things in common: (1) they collect information about consumers, and (2) they provide some form of reward in recognition of (or in exchange for) repeat purchasing patterns.[1]

What are the general obligations under the CCPA?

Because loyalty programs collect personal information about their members, if a business that sponsors a loyalty program is itself subject to the CCPA, then its loyalty program will also be subject to the CCPA. In situations in which the CCPA applies to a loyalty program, the following list generally describes the rights conferred upon a consumer in relation to the program:

  • Notice at collection – A loyalty program that collects personal information from its members should provide a notice at the point where information is being collected regarding the categories of personal information that will be collected and how that information will be used.[2]
  • Privacy notice – A loyalty program that collects personal information of its members should make a privacy notice available to its members.[3]
  • Access to information – A member of a loyalty program may request that a business disclose the “specific pieces of personal information” collected about them.[5]
  • Deletion of information – A member of a loyalty program may request that a business delete the personal information collected about them. That said, a company may be able to deny a loyalty program member’s request to delete the information in their account based upon one of the exceptions to the right to be forgotten.
  • Opt-out of sale – A loyalty program that sells the personal information of its members should include a “do not sell” link on its homepage and permit consumers to opt out of the sale of their information. To the extent that a consumer has directed the loyalty program to disclose their information to a third party (e.g., a fulfillment partner), that disclosure would not be considered a “sale” of information.
  • Notice of financial incentive – To the extent that a loyalty program qualifies as a “financial incentive” under the regulations implementing the CCPA (discussed below), a business should provide a “notice of financial incentive.”[4]

Are loyalty programs always financial incentive programs?

Whether a loyalty program constitutes a “financial incentive” program as that term is defined by the regulations implementing the CCPA depends on the extent to which the loyalty program’s benefits “relate to” the collection, retention, or sale of personal information.[6] While the California Attorney General has implied that all loyalty programs, “however defined, should receive the same treatment as other financial incentives,” a strong argument may exist that for many loyalty programs the benefits provided are directly related to consumer purchasing patterns (i.e., repeat or volume purchases) and are not “related” to the collection of personal information.[7] If a particular loyalty program qualifies as a financial incentive program, a business should consider the following steps (in addition to the compliance obligations identified above):

  • Notify the consumer of the financial incentive.[8] The regulations implementing the CCPA specify that the financial incentive notice should contain the following information:
    • A summary of the financial incentive offered.[11] In the context of a loyalty program, a description of the benefits that the consumer will receive as part of the program would likely provide a sufficient summary of the financial incentive.
    • A description of the material terms of the financial incentive.[12] The regulation specifies that the description should include the categories of personal information that are implicated by the financial incentive program and the “value of the consumer’s data.”[13]
    • How the consumer can opt-in to the financial incentive.[14] Information about how a consumer can opt-in to (or join) a financial incentive program is typically conveyed when a consumer reviews an application to join or sign up with the program.
    • How the consumer can opt-out of, or withdraw from, the program.[15] This is an explanation of how the consumer can invoke their right to withdraw from the program.[16]
    • An explanation of how the financial incentive is “reasonably related” to the value of the consumer’s data.[17] While the regulations state that a notice of financial incentive should explain how the financial incentive “reasonably relates” to the value of the consumer’s data, the CCPA requires only that a reasonable relationship exist if a business intends to discriminate against a consumer “because the consumer exercised any of the consumer’s rights” under the Act.[18] Where a business does not intend to use its loyalty program to discriminate against consumers who exercise CCPA-conferred privacy rights, it is not clear whether this requirement applies. In the event that a reasonable relationship must be shown, however, the regulations require that a company provide a “good-faith estimate of the value of the consumer’s data that forms the basis” for the financial incentive and a “description of the method” used to calculate that value[19] (a simple illustration of one possible method appears after this list).
  • Obtain the consumer’s “opt-in consent” to the “material terms” of the financial incentive,[9] and
  • Permit the consumer to revoke their consent “at any time.”[10]
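
The regulations do not prescribe a valuation method. One good-faith approach sometimes used is to net the program’s attributable revenue against its costs and divide by membership; the short sketch below illustrates that arithmetic with invented figures. It is one possible method, not a formula endorsed by the CCPA or the AG.

# One possible good-faith method for estimating the "value of the consumer's
# data" behind a loyalty program: program-attributable net value divided by
# member count. All figures are invented for illustration only.

annual_program_revenue_lift = 2_400_000.00  # extra revenue attributed to the program
annual_program_costs = 900_000.00           # rewards, discounts, administration
active_members = 50_000

net_value = annual_program_revenue_lift - annual_program_costs
value_per_member = net_value / active_members

print(f"Estimated annual value per member's data: ${value_per_member:.2f}")
# -> Estimated annual value per member's data: $30.00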

FOOTNOTES

[1] FSOR Appendix A at 273 (Response 814) (including recognition from the AG that “loyalty programs” are not defined under the CCPA, and declining invitations to provide a definition through regulation).

[2] Cal. Civ. Code § 1798.100(a) (West 2021); Cal. Code Regs. tit. 11, §§ 999.304(b), 999.305(a)(1) (2021).

[3] Cal. Code Regs. tit. 11, § 999.304(a) (2021).

[4] Cal. Code Regs. tit. 11, §§ 999.301(n), 999.304(d), 999.307(a), (b) (2021).

[5] Cal. Civ. Code § 1798.100(a) (West 2021).

[6] Cal. Code Regs. tit. 11, § 999.301(j) (2021).

[7] FSOR Appendix A at 75 (Response 254).

[8] Cal. Civ. Code § 1798.125(b)(2) (West 2021).

[9] Cal. Civ. Code § 1798.125(b)(3) (West 2021).

[10] Cal. Civ. Code § 1798.125(b)(3) (West 2021).

[11] Cal. Code Regs. tit. 11, § 999.307(b)(1) (2021).

[12] Cal. Code Regs. tit. 11, § 999.307(b)(2) (2021).

[13] Cal. Code Regs. tit. 11, § 999.307(b)(2) (2021).

[14] Cal. Code Regs. tit. 11, § 999.307(b)(3) (2021).

[15] Cal. Code Regs. tit. 11, § 999.307(b)(4) (2021).

[16] Cal. Civ. Code § 1798.125(b)(3) (West 2021).

[17] Cal. Code Regs. tit. 11, § 999.307(b)(5) (2021).

[18] Cal. Civ. Code § 1798.125(a)(1), (2) (West 2021).

[19] Cal. Code Regs. tit. 11, § 999.307(b)(5)(a), (b) (2021).

©2022 Greenberg Traurig, LLP. All rights reserved.
For more articles about data privacy, visit the NLR Cybersecurity, Media & FCC section.

New Poll Underscores Growing Support for National Data Privacy Legislation

Over half of all Americans would support a federal data privacy law, according to a recent poll from Politico and Morning Consult. The poll found that 56 percent of registered voters would either strongly or somewhat support a proposal to “make it illegal for social media companies to use personal data to recommend content via algorithms.” Democrats were most likely to support the proposal at 62 percent, compared to 54 percent of Republicans and 50 percent of Independents. Still, the numbers may show that bipartisan action is possible.

The poll is indicative of Americans’ increasing data privacy awareness and concerns. Colorado, Virginia, and California all passed or updated data privacy laws within the last year, and nearly every state is considering similar legislation. Additionally, Congress held several high-profile hearings last year soliciting testimony from several tech industry leaders and whistleblower Frances Haugen. In the private sector, Meta CEO Mark Zuckerberg has come out in favor of a national data privacy standard similar to the EU’s General Data Protection Regulation (GDPR).

Politico and Morning Consult released the poll results days after Senator Ron Wyden (D-OR) accepted a 24,000-signature petition calling for Congress to pass a federal data protection law. Senator Wyden, who recently introduced his own data privacy proposal called the “Mind Your Own Business Act,” said it was “past time” for Congress to act.

He may be right: U.S./EU data flows have been on borrowed time since 2020. The GDPR prohibits data flows from the EU to countries with inadequate data protection laws, a category that includes the United States. The EU-U.S. Privacy Shield framework allowed those flows to continue despite the rule, but an EU court invalidated the agreement in 2020, and data flows between the US and the EU have been in legal limbo ever since. Eventually, Congress and the EU will need to address the situation, and a federal data protection law would be a long-term solution.

This post was authored by C. Blair Robinson, legal intern at Robinson+Cole. Blair is not yet admitted to practice law. Click here to read more about the Data Privacy and Cybersecurity practice at Robinson & Cole LLP.

For more data privacy and cybersecurity news, click here to visit the National Law Review.

Copyright © 2022 Robinson & Cole LLP. All rights reserved.

BREAKING: Seventh Circuit Certifies BIPA Accrual Question to Illinois Supreme Court in White Castle

Yesterday the Seventh Circuit issued a much-awaited ruling in the Cothron v. White Castle litigation, punting to the Illinois Supreme Court on the pivotal question of when a claim under the Illinois Biometric Privacy Act (“BIPA”) accrues.  No. 20-3202 (7th Cir.).  Read on to learn more about the ruling and what it may mean for other biometric and data privacy litigations.

First, a brief recap of the facts of the dispute.  After Plaintiff started working at a White Castle in Illinois in 2004, White Castle began using an optional, consent-based finger-scan system for employees to sign documents and access their paystubs and computers.  Plaintiff consented in 2007 to the collection of her biometric data and then 11 years later—in 2018—filed suit against White Castle for purported violation of BIPA.

Plaintiff alleged that White Castle did not obtain consent to collect or disclose her fingerprints at the first instance the collection occurred under BIPA because BIPA did not exist in 2007.  Plaintiff asserted that she was “required” to scan her finger each time she accessed her work computer and weekly paystubs with White Castle and that her prior consent to the collection of biometric data did not satisfy BIPA’s requirements.  According to Plaintiff, White Castle violated BIPA Sections 15(b) and 15(d) by collecting, then “systematically and automatically” disclosing her biometric information without adhering to BIPA’s requirements (she claimed she did not consent under BIPA to the collection of her information until 2018). She sought statutory damages for “each” violation on behalf of herself and a putative class.

Before the district court, White Castle had moved to dismiss the Complaint and for judgment on the pleadings; both motions were denied.  The district court sided with Plaintiff, holding that “[o]n the facts set forth in the pleadings, White Castle violated Section 15(b) when it first scanned [Plaintiff’s] fingerprint and violated Section 15(d) when it first disclosed her biometric information to a third party.”  The district court also held that under Section 20 of BIPA, Plaintiff could recover for “each violation.”  The court rejected White Castle’s argument that this was an absurd interpretation of the statute not in keeping with legislative intent, commenting that “[i]f the Illinois legislature agrees that this reading of BIPA is absurd, it is of course free to modify the statute” but “it is not the role of a court—particularly a federal court—to rewrite a state statute to avoid a construction that may penalize violations severely.”

White Castle filed an appeal of the district court’s ruling with the Seventh Circuit.  As presented by White Castle, the issue before the Seventh Circuit was “[w]hether, when conduct that allegedly violates BIPA is repeated, that conduct gives rise to a single claim under Sections 15(b) and 15(d) of BIPA, or multiple claims.”

In ruling yesterday that this issue was appropriate for the Illinois Supreme Court, the Seventh Circuit held that “[w]hether a claim accrues only once or repeatedly is an important and recurring question of Illinois law implicating state accrual principles as applied to this novel state statute.  It requires authoritative guidance that only the state’s highest court can provide.”  Here, the accrual issue is dispositive for purposes of Plaintiff’s BIPA claim.  As the Seventh Circuit recognized, “[t]he timeliness of the suit depends on whether a claim under the Act accrued each time [Plaintiff] scanned her fingerprint to access a work computer or just the first time.”

Interestingly, the Seventh Circuit drew a comparison to data privacy litigations outside the context of BIPA, stating that the parties’ “disagreement, framed differently, is whether the Act should be treated like a junk-fax statute for which a claim accrues for each unsolicited fax, [], or instead like certain privacy and reputational torts that accrue only at the initial publication of defamatory material.”

Several BIPA litigations have been stayed pending a ruling from the Seventh Circuit in White Castle, and these cases will remain on pause going into 2022 pending a ruling from the Illinois Supreme Court.  While some had hoped for clarity on this area of BIPA jurisprudence by the end of the year, the Seventh Circuit’s ruling means that this litigation will remain a must-watch privacy case going forward.

Article By Kristin L. Bryan of Squire Patton Boggs (US) LLP

For more data privacy and cybersecurity legal news, click here to visit the National Law Review.

© Copyright 2021 Squire Patton Boggs (US) LLP