Throwing Out the Privacy Policy is a Bad Idea

The public internet has been around for about thirty years, and consumers’ browser-based, graphics-heavy experience of it has existed for about twenty-five. In the early days, commercial websites operated without privacy policies.

Eventually, people started to realize that they were leaving trails of information online, and in the early 2000s the methods by which businesses capture and profit from these trails became clear, although the actual uses of the data on individual sites were not. People asked for greater transparency from the sites they visited online, and in response received the privacy policy.

A deeply flawed instrument, the website privacy policy purports to explain how a website owner gathers and uses information, but most such policies are strangely both imprecise and too long, losing the average reader in a fog of legalese and marginally relevant facts. Some privacy policies are intentionally opaque, because it doesn’t profit the website operator to make its methods obvious. Many are overly general, in part because the website company doesn’t want to change its policy every time it shifts business practices or vendor alliances. Many are just messy and poorly written.

Part of the reason privacy policies are confusing is that data privacy is not a precise concept. The definition of data is context dependent. Data can mean the information about a transaction, information gathered from your browser visit (including where you were before and after the visit), information about you or your equipment, or even information derived by analysis of the other information. And we know that de-identified data can be re-identified in many cases, and that even a collection of generic data can identify a person: Latanya Sweeney’s well-known research found that roughly 87 percent of Americans can be uniquely identified from just their ZIP code, birth date, and sex.

The definition of privacy is also untidy. An ecommerce company must capture certain information to fulfill an online order. In this era of connected objects, the company may continue to collect information from the item while the consumer uses it. This is true for equipment from televisions to dishwashers to sex toys. The company likely uses this information internally to develop its products. It may use the data to market more goods or services to the consumer. It may transfer the information to other companies so they can market their products more effectively. The company may provide the information to the government. This week’s New Yorker devotes several pages to how the word “privacy” conflates major concepts in US law, including secrecy and autonomy,1 and is thus confusing to courts and the public alike.

All of this is difficult to reflect in a privacy policy, even if the company has incentive to provide useful information to its customers.

Last month the Washington Post ran an article by Geoffrey Fowler subtitled “Let’s abolish reading privacy policies.” The article cites a 2019 Pew survey in which only 9 percent of Americans said they always read privacy policies. I would suggest that more than half of those Americans are lying. Almost no one always reads privacy policies upon first entering a website or downloading an app. That’s not even really what privacy policies are for.

Fowler shows why people do not read these policies. He writes, “As an experiment, I tallied up all of the privacy policies just for the apps on my phone. It totaled nearly 1 million words. ‘War and Peace’ is about half as long. And that’s just my phone. Back in 2008, Lorrie Cranor, a professor of engineering and public policy at Carnegie Mellon University, and a colleague estimated that reading and consenting to all the privacy policies on websites Americans visit would take 244 hours per year.”

The length, complexity, and opacity of online privacy policies are a real concern. But the best way to address that concern is not to eliminate privacy policies; it is to make them less instrumental in the most important decisions about the data that describes us.

Website owners should not be expected to write privacy policies that are both sufficiently detailed and succinctly readable enough for consumers to make meaningful choices about use of the data that describes them. That type of system forces a person to be responsible for her own data protection and takes the onus off the company to limit its use of the data. It is like our current system of waste recycling: ineffective and supported by polluters, because rather than forcing manufacturers to use more environmentally friendly packaging, it pushes consumers to deal with the problem at home, shifting the burden from industry to us. If legislatures instead provided a set of simple rules for website operators (here is what you are allowed to do with personal data, and here is what you are not), then no one would need to read privacy policies to make sure data about our transactions was spared the worst treatment. The worst treatment would be illegal.

State laws are moving in this direction, providing simpler rules that restrict certain uses and transfers of personal data and sensitive data. We are early in the process, but if omnibus state privacy laws spread the way data breach disclosure laws eventually did, reaching every state, we can be optimistic and expect full coverage of online privacy rules for all Americans within a decade or so. But we shouldn’t need to wait for every state to act.

Unlike data breach disclosure laws, which encourage companies to comply only with the laws relevant to a particular loss of data, omnibus privacy laws affect the way companies conduct everyday business. It will therefore take requirements in only a few states before big companies start building their privacy-rights-recognition functions around the lowest common denominator. It will simply make economic sense for businesses to give every US customer the same rights the most protective state provides its residents. Why build 50 sets of rules when you don’t need to? The cost savings of maintaining a single privacy-rights-recognition system will offset the cost of providing privacy rights to people in states that haven’t yet passed omnibus laws.

This won’t make privacy policies any easier to read, but it will make reading them less important. Privacy policies can then return to their core function: providing a record of how a company treats data. In other words, a reference document, rather than a set of choices inset into a pillow of legal terms.

We shouldn’t eliminate the privacy policy. We should reduce the importance of such policies and limit their functions, easing customer frustration with the privacy policy’s role in our current process. Limit companies’ use of data and we won’t need to fight through their privacy options.


ENDNOTES

1 Privacy law also conflates these meanings with obscurity in a crowd or in public.


Article By Theodore F. Claypoole of Womble Bond Dickinson (US) LLP

Copyright © 2022 Womble Bond Dickinson (US) LLP All Rights Reserved.

Heated Debate Surrounds Proposed Federal Privacy Legislation

As we previously reported on the CPW blog, the leadership of the House Energy and Commerce Committee and the Ranking Member of the Senate Commerce Committee released a discussion draft of proposed federal privacy legislation, the American Data Privacy and Protection Act (“ADPPA”), on June 3, 2022. Signaling potential differences amongst key members of the Senate Committee on Commerce, Science, and Transportation, Chair Maria Cantwell (D-WA) withheld her support. Staking out her own position, Cantwell is reportedly floating an updated version of the Consumer Online Privacy Rights Act (“COPRA”), originally proposed in 2019.

Early Stakeholder Disagreement

As soon as a discussion draft of the ADPPA was published, privacy rights organizations, civil liberty groups, and businesses entered the fray, drawing up sides for and against the bill. The ACLU came out as an early critic of the legislation. In an open letter to Congress sent June 10, the group urged caution, arguing that both the ADPPA and COPRA contain “very problematic provisions.” According to the group, more time is required to develop truly meaningful privacy legislation, as evidenced by “ACLU state affiliates who have been unable to stop harmful or effectively useless state privacy bills from being pushed quickly to enactment with enormous lobbying and advertising support of sectors of the technology industry that resist changing a business model that depends on consumers not having protections against privacy invasions and discrimination.” To avoid this fate, the ACLU urges Congress to “bolster enforcement provisions, including providing a strong private right of action, and allow the states to continue to respond to new technologies and new privacy challenges with state privacy laws.”

On June 13, a trio of trade groups representing some of the largest tech companies sent an open letter to Congress, supporting passage of a federal privacy law but ultimately opposing the ADPPA. Contrary to the position taken by the ACLU, the industry groups worry that the bill’s inclusion of a private right of action, with the potential to recover attorneys’ fees, will lead to litigation abuse. The groups took issue with other provisions as well, such as the legislation’s restrictions on the use of data derived from publicly available sources and the “duty of loyalty” to individuals whose covered data is processed.

Industry groups and consumer protection organizations had the opportunity to voice their opinions regarding the ADPPA in a public hearing on June 14. Video of the proceedings and prepared testimony of the witnesses are available here. Two common themes arose in the witnesses’ testimony: (1) general support for federal privacy legislation; and (2) opposition to discrete aspects of the bill. As has been the case for the better part of the decade in which Congress has sought to draft a federal privacy bill, two fundamental issues continue to drive the debate and must be resolved for the legislation to become law: a private right of action to enforce the law, and preemption of state laws or portions of them. While civil rights and privacy advocacy groups maintain that the private right of action does not go far enough and that federal privacy legislation should not preempt state law, industry groups argue that a private right of action should not be permitted and that state privacy laws should be broadly preempted.

The Path Forward

The Subcommittee on Consumer Protection and Commerce of the House Energy and Commerce Committee is expected to mark up the draft bill the week of June 20. We expect the subcommittee to approve the draft with few or no changes, and the full Energy and Commerce Committee should complete work on the bill before the August recess. Given the broad bipartisan support for the legislation in the House, we anticipate that the bill, with minor tweaks, is likely to be approved by the House, setting up a showdown with the Senate after a decade of debate.

With the legislative session rapidly drawing to a close, the prospects for the ADPPA’s passage remain unclear. Intense disagreement persists among key constituency groups over important aspects of the proposed legislation. Yet a review of the public comments to date reveals one nearly unanimous opinion: the United States needs federal privacy legislation. Because most interested parties agree on that point, Congress has more incentive than ever to reach compromise on one of the proposed privacy bills.

© Copyright 2022 Squire Patton Boggs (US) LLP

Small Businesses Don’t Recognize Risk of Cyberattack Despite Repeated Warnings

CNBC surveys over 2,000 small businesses each quarter for their views on the overall business environment and the health of their companies. According to the latest CNBC/SurveyMonkey Small Business Survey, despite repeated warnings from the Cybersecurity and Infrastructure Security Agency and the FBI that U.S.-based businesses are at increased risk of cyberattack following Russia’s invasion of Ukraine, small business owners do not believe an attack is an actual risk that will affect them, and they are not prepared for one. The latest survey shows that only five percent of small business owners reported cybersecurity to be the biggest risk to their company.

What is unfortunate, but not surprising, is that this is the same percentage of small business owners who recognized a cyberattack as the biggest risk a year ago. Perception among business owners has not changed, despite repeated, dire warnings from the government. Also unfortunate is the statistic that only 33 percent of business owners with one to four employees are concerned about a cyberattack this year, compared with 61 percent of business owners with more than 50 employees.

According to CNBC, “this general lack of concern among small business owners diverges from the sentiment among the general public…. In SurveyMonkey’s polling, 55% of people in the U.S. say they would be less likely to continue to do business with brands who are victims of a cyber attack.” CNBC’s conclusion is that there is a disconnect between business owners’ appreciation of how much customers care about data security, and that “[s]mall businesses that fail to take the cyber threat seriously risk losing customers, or much more, if a real threat emerges.” Statistics show that threat actors target small and medium-sized businesses precisely to stay under the law enforcement radar. With such a large target on their backs, business owners may wish to make cybersecurity a priority, not least to keep their customers.

Copyright © 2022 Robinson & Cole LLP. All rights reserved.

DOJ Limits Application of Computer Fraud and Abuse Act, Providing Clarity for Ethical Hackers and Employees Paying Bills at Work Alike

On May 19, 2022, the Department of Justice announced it would not charge good-faith hackers who expose weaknesses in computer systems with violating the Computer Fraud and Abuse Act (CFAA or Act), 18 U.S.C. § 1030. Congress enacted the CFAA in 1986 to promote computer privacy and cybersecurity and amended the Act several times, most recently in 2008. However, the evolving cybersecurity landscape has left courts and commentators troubled by potential applications of the CFAA to circumstances unrelated to the CFAA’s original purpose, including prosecution of so-called “white hat” hackers. The new charging policy, which became effective immediately, seeks to advance the CFAA’s original purpose by clarifying when and how federal prosecutors are authorized to bring charges under the Act.

DOJ to Decline Prosecution of Good-Faith Security Research

The new policy exempts activity of white-hat hackers and states that “the government should decline prosecution if available evidence shows the defendant’s conduct consisted of, and the defendant intended, good-faith security research.” The policy defines “good-faith security research” as “accessing a computer solely for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability, where such activity is carried out in a manner designed to avoid any harm to individuals or the public, and where the information derived from the activity is used primarily to promote the security or safety of the class of devices, machines, or online services to which the accessed computer belongs, or those who use such devices, machines, or online services.”

In practice, this policy appears to provide, for example, protection from federal charges for the type of ethical hacking a St. Louis Post-Dispatch reporter performed in 2021. The reporter uncovered security flaws in a Missouri state website that exposed the Social Security numbers of over 100,000 teachers and other school employees. The Missouri governor’s office initiated an investigation into the reporter’s conduct for unauthorized computer access. While the DOJ’s policy would not affect prosecutions under state law, it would preclude federal prosecution for the conduct if determined to be good-faith security research.

The new policy also promises protection from prosecution for certain arguably common but contractually prohibited online conduct, including “[e]mbellishing an online dating profile contrary to the terms of service of the dating website; creating fictional accounts on hiring, housing, or rental websites; using a pseudonym on a social networking site that prohibits them; checking sports scores at work; paying bills at work; or violating an access restriction contained in a term of service.” Such activities resemble the facts of Van Buren v. United States, No. 19-783, which the Supreme Court decided in June 2021. In Van Buren, the 6-3 majority rejected the government’s broad interpretation of the CFAA’s prohibition on “unauthorized access” and held that a police officer who looked up license plate information on a law-enforcement database for personal use—in violation of his employer’s policy but without circumventing any access controls—did not violate the CFAA. The DOJ did not cite Van Buren as the basis for the new policy. Nor did the DOJ identify any other impetus for the change.

To Achieve More Consistent Application of Policy, All Federal Prosecutors Must Consult with Main Justice Before Bringing CFAA Charges

In addition to exempting good-faith security research from prosecution, the new policy specifies the steps for charging violations of the CFAA. To help distinguish between actual good-faith security research and pretextual claims of such research that mask a hacker’s malintent, federal prosecutors must consult with the Computer Crime and Intellectual Property Section (CCIPS) before bringing any charges. If CCIPS recommends declining charges, prosecutors must inform the Office of the Deputy Attorney General (DAG) and may need to obtain approval from the DAG before initiating charges.

©2022 Greenberg Traurig, LLP. All rights reserved.

Navigating the Data Privacy Landscape for Autonomous and Connected Vehicles: Implementing Effective Data Security

Autonomous vehicles can be vulnerable to cyberattacks, including deliberately malicious ones. Adopting an appropriate framework of policies and procedures will help mitigate the risk of a potential attack.

The National Highway Traffic Safety Administration (NHTSA) recommends a layered approach to reduce the likelihood of a successful attack and to mitigate the ramifications if one does occur. NHTSA’s cybersecurity framework is structured around the five functions of identify, protect, detect, respond, and recover, and can be used as a basis for developing comprehensive data security policies.

NHTSA goes on to describe how this approach “at the vehicle level” includes:

  • Protective/Preventive Measures and Techniques: These measures, such as isolation of safety-critical control systems networks or encryption, implement hardware and software solutions that lower the likelihood of a successful hack and diminish the potential impact of a successful hack.
  • Real-time Intrusion (Hacking) Detection Measures: These measures continually monitor signatures of potential intrusions in the electronic system architecture.
  • Real-time Response Methods: These measures mitigate the potential adverse effects of a successful hack, preserving the driver’s ability to control the vehicle.
  • Assessment of Solutions: This [analysis] involves methods such as information sharing and analysis of a hack by affected parties, development of a fix, and dissemination of the fix to all relevant stakeholders (such as through an ISAC). This layer ensures that once a potential vulnerability or a hacking technique is identified, information about the issue and potential solutions are quickly shared with other stakeholders.
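For a flavor of what the real-time detection layer can mean in practice, here is a minimal sketch of signature- and rate-based monitoring, written in Python purely for illustration. The message ID, attack signature, and flood threshold are invented for this example and are not drawn from NHTSA guidance or any vendor’s system.

```python
# Hypothetical sketch of signature- and rate-based intrusion detection for a
# CAN-style message stream. All constants below are invented for illustration.
from collections import deque
import time

KNOWN_BAD_PAYLOADS = {b"\xde\xad\xbe\xef"}  # invented attack signatures
MAX_MSGS_PER_SEC = 100                      # invented bus-flooding threshold

class CanIntrusionMonitor:
    """Flags messages that match a known-bad signature or exceed a rate limit."""

    def __init__(self) -> None:
        self.recent: deque = deque()  # timestamps of recently seen messages

    def inspect(self, can_id: int, payload: bytes) -> bool:
        """Return True if the message looks like an intrusion."""
        now = time.monotonic()
        self.recent.append(now)
        # Keep only the last second of timestamps to compute the message rate.
        while self.recent and now - self.recent[0] > 1.0:
            self.recent.popleft()
        if payload in KNOWN_BAD_PAYLOADS:
            return True  # signature match (the "detection" layer)
        if len(self.recent) > MAX_MSGS_PER_SEC:
            return True  # flooding anomaly; a real system would track rates per ID
        return False

monitor = CanIntrusionMonitor()
print(monitor.inspect(0x7DF, b"\xde\xad\xbe\xef"))  # True: matches a signature
```

A production system would pair a detector like this with the response layer NHTSA describes, so that a flagged message degrades gracefully rather than disabling the vehicle.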

Other industry associations are also weighing in on best practices, including the Automotive Information Sharing and Analysis Center’s (Auto-ISAC) seven Key Cybersecurity Functions and, from a technology development perspective, SAE International’s J3061, the Cybersecurity Guidebook for Cyber-Physical Vehicle Systems, which helps AV companies “[minimize] the exploitation of vulnerabilities that can lead to losses, such as financial, operational, privacy, and safety.”

© 2022 Varnum LLP

Comparing and Contrasting the State Laws: Does Pseudonymized Data Exempt Organizations from Complying with Privacy Rights?

Some organizations are confused about the impact that pseudonymization has (or does not have) on a privacy compliance program. That confusion largely stems from ambiguity concerning how the term fits into the larger scheme of modern data privacy statutes. For example, aside from the definition, the CCPA refers to “pseudonymized” on only one occasion: within the definition of “research,” the CCPA implies that personal information collected by a business should be “pseudonymized and deidentified” or “deidentified and in the aggregate.”[1] The conjunctive reference to research being both pseudonymized “and” deidentified raises the question of whether the CCPA lends any independent meaning to the term “pseudonymized.” The CCPA assigns a higher threshold of anonymization to the term “deidentified,” so if data is already deidentified, it is not clear what additional processing or set of operations is expected to pseudonymize it. The net result is that while the CCPA introduced the term “pseudonymization” into the American legal lexicon, it did not give the term any significant legal effect or status.

Under the data privacy statutes of Virginia, Colorado, and Utah, by contrast, the pseudonymization of data does impact compliance obligations. As the chart below indicates, those statutes do not require that organizations apply access or deletion rights to pseudonymized data, but they imply that other rights do apply to such data. Ambiguity remains about the rights that are not exempted, such as the right to opt out of the sale of personal information: for example, while Virginia does not require an organization to re-identify pseudonymized data, it is unclear how an organization could opt a consumer out of having their pseudonymized data sold without re-identifying it.
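The statutory distinction tracks a technical one. As a rough illustration, here is a minimal Python sketch of keyed tokenization, a common way to pseudonymize data; the key, field names, and values are hypothetical and not drawn from any statute. The point is that whoever holds the key can still re-link the data, which is why pseudonymized data falls short of a “deidentified” threshold.

```python
# Minimal sketch of pseudonymization via a keyed hash (HMAC-SHA256).
# The key and record fields below are invented for illustration.
import hashlib
import hmac

# Key held by the data controller. Its existence is exactly what keeps this
# data pseudonymous rather than deidentified: the key holder can re-identify.
SECRET_KEY = b"rotate-and-store-securely"  # illustrative value

def pseudonymize(email: str) -> str:
    # Deterministic keyed hash: the same email always yields the same token,
    # so records stay linkable across datasets without exposing the email.
    return hmac.new(SECRET_KEY, email.lower().encode(), hashlib.sha256).hexdigest()

record = {"customer": pseudonymize("jane@example.com"), "purchase": "blender"}
print(record)
# Deidentification, by contrast, would require destroying the key (and any
# other means of linking the token back to the person), not merely masking a field.
```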


ENDNOTES

[1] Cal. Civ. Code § 1798.140(ab)(2) (West 2021). It should be noted that the reference to pseudonymizing and deidentifying personal information is found within the definition of the word “Research,” as such it is unclear whether the CCPA was attempting to indicate that personal information will not be considered research unless it has been pseudonymized and deidentified, or whether the CCPA is mandating that companies that conduct research must pseudonymize and deidentify. Given that the reference is found within the definition section of the CCPA, the former interpretation seems the most likely intent of the legislature.

[2] The GDPR does not expressly define the term “sale,” nor does it ascribe particular obligations to companies that sell personal information. Selling, however, is implicitly governed by the GDPR as any transfer of personal information from one controller to a second controller would be considered a processing activity for which a lawful purpose would be required pursuant to GDPR Article 6.

[3] Va. Code 59.1-577(B) (2022).

[4] Utah Code Ann. 13-61-303(1)(a) (2022).

[5] Va. Code 59.1-577(D) (2022) (exempting compliance with Va. Code 59.1-573(A)(1) through (4)).

[6] C.R.S. 6-1-1307(3) (2022) (exempting compliance with C.R.S. Section 6-1-1306(1)(b) to (1)(e)).

[7] Utah Code Ann. 13-61-303(1)(c) (exempting compliance with Utah Code Ann. 13-61-202(1) through (3)).

[8] Va. Code 59.1-577(D) (2022) (exempting compliance with Va. Code 59.1-573(A)(1) through (4)).

[9] C.R.S. 6-1-1307(3) (2022) (exempting compliance with C.R.S. Section 6-1-1306(1)(b) to (1)(e)).

[10] Va. Code 59.1-577(D) (2022) (exempting compliance with Va. Code 59.1-573(A)(1) through (4)).

[11] C.R.S. 6-1-1307(3) (2022) (exempting compliance with C.R.S. Section 6-1-1306(1)(b) to (1)(e)).

[12] Utah Code Ann. 13-61-303(1)(c) (exempting compliance with Utah Code Ann. 13-61-202(1) through (3)).

[13] Va. Code 59.1-577(D) (2022) (exempting compliance with Va. Code 59.1-574).

[14] Va. Code 59.1-577(D) (2022) (exempting compliance with Va. Code 59.1-574).

©2022 Greenberg Traurig, LLP. All rights reserved.

Privacy Tip #328 – Ukraine Charity Scams

Unscrupulous criminals use crises to their advantage. Scammers are using the conflict in Ukraine to bilk money from people trying to help those affected by the attacks. There are numerous accounts of scammers using old techniques to defraud people of funds and personal information.

We all want to help, and what is unfolding in Ukraine is tragic. Fraudsters prey on our wish to aid those in need, and they know we are vulnerable because of the emotional toll the war is taking on the world, and particularly on Ukrainians.

If you wish to support Ukraine, do so. But be wary of where you send your money. Many wonderful, legitimate charities are working hard to assist those in need, but others are exploiting our desire to help in order to steal from us. Be wary of unsolicited requests for donations by email or text. Research the charity to which you are sending money and make sure you are on the charity’s official website. Be cautious about clicking on any links sent to you via text or email. If you are solicited by a well-known charity, take the time to donate directly through its official website rather than through unsolicited emails.

The Ukrainians need all the resources and support they can get, so send your charitable donations to a charity that will actually get the funds to them.

CNBC has compiled a list of top-rated charities for Ukrainian relief.

Copyright © 2022 Robinson & Cole LLP. All rights reserved.

Article By Linn F. Freedman of Robinson & Cole LLP


The DOJ Throws Cold Water on the Frosties NFT Founders

The U.S. Attorney’s Office for the Southern District of New York recently charged two individuals for allegedly participating in a scheme to defraud purchasers of “Frosties” non-fungible tokens (or “NFTs”) out of over $1 million. The two-count complaint charges Ethan Nguyen (aka “Frostie”) and Andre Llacuna (aka “heyandre”) with conspiracy to commit wire fraud in violation of 18 U.S.C. § 1349 and conspiracy to commit money laundering in violation of 18 U.S.C. § 1956. Each charge carries a maximum sentence of 20 years in prison.

The Defendants marketed “Frosties” as the entry point to a broader online community consisting of games, reward programs, and other benefits.  In January 2022, their “Frosties” pre-sale raised approximately $1.1 million.

In a so-called “rug pull,” Frostie and heyandre transferred the funds raised through the pre-sale to a series of separate cryptocurrency wallets, eliminated Frosties’ online presence, and took down its website. The transaction, which was publicly recorded and viewable on the blockchain, prompted investors to sell Frosties at a considerable discount. Frostie and heyandre then allegedly moved the funds through a series of transactions intended to obfuscate their source and increase anonymity. The charges came as the Defendants were preparing for the March 26 pre-sale of their next NFT project, “Embers,” which law enforcement alleges would likely have followed the same course as “Frosties.”

In a public statement announcing the arrests, the DOJ explained how the emerging NFT market is a risk-laden environment that has attracted the attention of scam artists.  Representatives from each of the federal agencies that participated in the investigation cautioned the public and put other potential fraudsters on notice of the government’s watchful eye towards cryptocurrency malfeasance.

This investigation comes on the heels of the FBI’s announcement last month of the Virtual Asset Exploitation Unit, a special task force dedicated to blockchain analysis and virtual asset seizure. The prosecution of the Defendants in this matter continues federal agencies’ aggressive efforts to rein in bad actors in the cryptocurrency, digital assets, and blockchain space.

Copyright ©2022 Nelson Mullins Riley & Scarborough LLP

French Insider Episode 12: Navigating the Metaverse with Jim Gatto [PODCAST]

Joining host Sarah Aberg is Jim Gatto, who discusses the metaverse, the technology and business models involved in these virtual worlds, the role of NFTs and cryptocurrency in the digital economy, and the legal, regulatory, and governance issues that can arise when companies seek to enter that space.

Jim Gatto is a partner in Sheppard Mullin’s Washington, D.C. office, where he leads the Blockchain & Fintech Team, Social Media & Games Team, and Open Source Team. Jim’s practice focuses on blockchain, interactive entertainment, digital art, AI, and online gambling. He advises clients on IP strategies, development and publishing agreements, licensing and technology transaction agreements, and tech regulatory issues. Jim has been involved with blockchain since 2012 and has been recognized as a thought leader by leading organizations, including being named a Cryptocurrency, Blockchain and Fintech Trailblazer by the National Law Journal.

Sarah Aberg is special counsel in the White Collar Defense and Corporate Investigations Group in Sheppard Mullin’s New York office. Sarah’s practice encompasses litigation, internal investigations and white collar defense.  Her areas of focus include financial services and securities, as well as corporate fraud in a variety of industries, including technology, construction, and non-profits.  Sarah’s regulatory practice encompasses market regulation, foreign registration and disclosure requirements, supervisory procedures, and sales practices.  Sarah represents corporations, financial services companies, and associated individuals in connection with investigations and regulatory matters before the U.S. Department of Justice, the Securities and Exchange Commission, the Commodity Futures Trading Commission, FINRA, the New York Stock Exchange, the New York State Department of Financial Services, and the New York Attorney General’s Office.

What We Discussed in This Episode:

  1. What is the Metaverse?
  2. How Do Metaverses Differ from Earlier Virtual Worlds?
  3. What Role Do NFTs Play in the Digital Economy?
  4. Investing in a Metaverse: What are the Risks?
  5. What are Legal, Regulatory, and Tax Considerations?
  6. What Governance Issues Exist for Brands Operating in a Metaverse?
  7. What are the Inflationary and Deflationary Aspects of the Virtual Economy?
  8. How Might Blockchain and Cryptocurrency Alter International Financial Transactions?
  9. Is the World Moving into a Virtual/Digital Economy?

WW International to Pay $1.5 Million Civil Penalty for Alleged COPPA Violations

In 2014, with childhood obesity on the rise in the United States, tech company Kurbo, Ltd. (Kurbo) marketed a free app for kids that, according to the company, was “designed to help kids and teens ages 8-17 reach a healthier weight.” When WW International (WW) (formerly Weight Watchers) acquired Kurbo in 2018, the app was rebranded “Kurbo by WW,” and WW continued to market the app to children as young as eight. But according to the Federal Trade Commission (FTC), Kurbo’s privacy practices were not exactly child-friendly, even if its app was. The FTC’s complaint, filed by the Department of Justice (DOJ) last month, claims that WW’s notice, data collection, and data retention practices violated the Children’s Online Privacy Protection Act Rule (COPPA Rule). WW and Kurbo, under a stipulated order, agreed to pay a $1.5 million civil penalty in addition to complying with a range of injunctive provisions. These provisions include, but are not limited to, deleting all personal information of children whose parents did not provide verifiable parental consent in a specified timeframe, and deleting “Affected Work Product” (defined in the order to include any models or algorithms developed in whole or in part using children’s personal information collected through the Kurbo Program).

Complaint Background

The COPPA Rule applies to any operator of a commercial website or online service directed to children that collects, uses, and/or discloses personal information from children and to any operator of a commercial website or online service that has actual knowledge that it collects, uses, and/or discloses personal information from children. Operators must notify parents and obtain their consent before collecting, using, or disclosing personal information from children under 13.

The complaint states that children enrolled in the Kurbo app by signing up through the app or having a parent do it on their behalf. Once on Kurbo, users could enter personal information such as height, weight, and age, and the app then tracked their weight, food consumption, and exercise. However, the FTC alleges that Kurbo’s age gate was porous, with no verification process to establish that children who affirmed they were at least 13 were the age they claimed to be, or that users asserting they were parents were indeed parents. The complaint alleges that the registration area featured a “tip-off” screen that gave visitors just two choices for registration: “I’m a parent” or “I’m at least 13,” alongside the legend, “Per U.S. law, a child under 13 must sign up through a parent.” In fact, thousands of users who indicated that they were at least 13 were younger, and were able to change their information to reflect their real age; users who lied about their age or falsely claimed to be parents could continue to use the app. In 2020, after a warning from the FTC, Kurbo implemented a registration screen that removed the legend and the “at least 13” option. However, the new process still failed to verify that users claiming to be parents were indeed parents.
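By contrast, a neutral age gate asks for a birth date without telegraphing the answer that unlocks the app, and routes under-13 users into a verifiable parental consent flow before any personal information is collected. The sketch below is a hypothetical Python illustration; the function names and the consent flow they point to are assumptions, not details from the complaint or order.

```python
# Hypothetical sketch of a neutral age gate: ask for a birth date instead of
# offering an "I'm at least 13" button that tips off underage visitors.
from datetime import date

def age_on(today: date, birth_date: date) -> int:
    years = today.year - birth_date.year
    # Subtract one year if this year's birthday hasn't occurred yet.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def register(birth_date: date, today: date) -> str:
    if age_on(today, birth_date) < 13:
        # COPPA path: collect no personal information until a parent
        # completes verifiable consent (an assumed downstream flow).
        return "route_to_verifiable_parental_consent"
    return "standard_signup"

print(register(date(2012, 5, 1), date(2022, 6, 15)))
# -> route_to_verifiable_parental_consent
```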

Kurbo’s notice of data collection and data retention practices also fell short. The COPPA Rule requires an operator to “post a prominent and clearly labeled link to an online notice of its information practices with regard to children on the home or landing page or screen of its Web site or online service, and, at each area of the Web site or online service where personal information is collected from children.” But beginning in November 2019, Kurbo’s notice at registration was buried in a list of hyperlinks that parents were not required to click through, and the notice failed to list all the categories of information the app collected from children. Further, Kurbo did not comply with the COPPA Rule’s mandate to keep children’s personal information only as long as reasonably necessary for the purpose it was collected and then to delete it. Instead, the company held on to personal information indefinitely unless parents specifically requested its removal.

Stipulated Order

In addition to imposing a $1.5 million civil penalty, the order, which was approved by the court on March 3, 2022, requires WW and Kurbo to:

  • Refrain from disclosing, using, or benefitting from children’s personal information collected in violation of the COPPA Rule;
  • Delete all personal information Kurbo collected in violation of the COPPA Rule within 30 days;
  • Provide a written statement to the FTC that details Kurbo’s process for providing notice and seeking verifiable parental consent;
  • Destroy all affected work product derived from improperly collecting children’s personal information and confirm to the FTC that deletion has been carried out;
  • Delete all children’s personal information collected within one year of the user’s last activity on the app; and
  • Create and follow a retention schedule that states the purpose for which children’s personal information is collected, the specific business need for retaining such information, and criteria for deletion, including a set timeframe no longer than one year.
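The last two requirements amount to a time-based retention rule. Below is a minimal sketch of how such a purge might be enforced, using Python with an in-memory SQLite table; the schema and column names are invented, and only the one-year window comes from the order.

```python
# Hypothetical retention job: purge children's personal information one year
# after the user's last activity. The table and columns are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE child_profiles (
                    user_id INTEGER PRIMARY KEY,
                    personal_info TEXT,
                    last_active TEXT)""")  # ISO dates stored as text
conn.execute("INSERT INTO child_profiles VALUES (1, 'height, weight', '2020-01-15')")
conn.execute("INSERT INTO child_profiles VALUES (2, 'height, weight', DATE('now'))")

# Delete rows whose last activity is more than one year in the past.
conn.execute("DELETE FROM child_profiles WHERE last_active < DATE('now', '-1 years')")
print(conn.execute("SELECT user_id FROM child_profiles").fetchall())  # [(2,)]
```

In practice such a job would run on a schedule and log what it deleted, so the company can demonstrate to the FTC that the retention schedule is actually being followed.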

Implications of the Order

Following the U.S. Supreme Court’s decision in AMG Capital Management, LLC v. Federal Trade Commission, which halted the FTC’s ability to use its Section 13(b) authority to seek equitable monetary relief for violations of the FTC Act, the FTC has been pushing Congress to grant it greater enforcement powers. In the meantime, the FTC has used other enforcement tools, including the recent resurrection of the agency’s long-dormant Penalty Offense Authority under Section 5(m)(1)(B) of the FTC Act and a renewed willingness to use algorithmic disgorgement (which the FTC first applied in the 2019 Cambridge Analytica case).

Algorithmic disgorgement involves “requir[ing] violators to disgorge not only the ill-gotten data, but also the benefits—here, the algorithms—generated from that data,” as then-Acting FTC Chair Rebecca Kelly Slaughter stated in a speech last year. This order appears to be the first time algorithmic disgorgement was applied by the Commission in an enforcement action under COPPA.

Children’s privacy issues continue to attract the attention of the FTC and lawmakers at both federal and state levels. Companies that collect children’s personal information should be careful to ensure that their privacy policies and practices fully conform to the COPPA Rule.

© 2022 Keller and Heckman LLP