Mid-Year Recap: Think Beyond US State Laws!

Much of the focus on US privacy has been on US state laws and the potential for a federal privacy law. This focus can lead one to forget, however, that US privacy and data security law follows a patchwork approach at both the state and federal levels. “Comprehensive” privacy laws are thus only one piece of the puzzle. There are federal and state privacy and security laws that apply based on a company’s (1) industry (financial services, health care, telecommunications, gaming, etc.), (2) activity (making calls, sending emails, collecting information at point of purchase, etc.), and (3) the type of individual from whom information is being collected (children, students, employees, etc.). There have been developments this year in each of these areas.

On the industry front, activity has focused on data brokers, companies in the health space, and those that sell motor vehicles. The FTC has scrutinized the activities of data brokers this year, beginning the year with a settlement with lead-generation company Response Tree. It also settled with X-Mode Social over the company’s collection and use of sensitive information. There has also been ongoing regulation and scrutiny of companies in the health space, including HHS’s new AI transparency rule. Finally in this area, Utah enacted a Motor Vehicle Data Protection Act applicable to the data systems car dealers use to house consumer information.

On the activity side, there has been less news, although in this area the “activity” of protecting information (or failing to do so) has continued to receive regulatory focus. This includes the SEC’s new cybersecurity reporting obligations for public companies, as well as minor modifications to Utah’s data breach notification law.

Finally, there have been new laws directed at particular categories of individuals, in particular laws intended to protect children. These include social media laws in Florida and Utah, effective January 1, 2025 and October 1, 2024, respectively. These are similar to attempts to regulate social media’s collection of information from children in Arkansas, California, Ohio, and Texas, but the drafters hope they are sufficiently different to survive the challenges those laws currently face. The FTC is also exploring updates to the rule implementing the decades-old Children’s Online Privacy Protection Act.

Putting It Into Practice: As we approach the mid-point of the year, now is a good time to look back at privacy developments over the past six months. There have been many developments in the privacy patchwork, and companies may want to take the time now to ensure that their privacy programs have incorporated and addressed those laws’ obligations.


White House Publishes Steps to Protect Workers from the Risks of AI

Last year the White House weighed in on the use of artificial intelligence (AI) in businesses.

Since the executive order, several government entities including the Department of Labor have released guidance on the use of AI.

And now the White House published principles to protect workers when AI is used in the workplace.

The principles apply to both the development and deployment of AI systems. These principles include:

  • Awareness – Workers should be informed of and have input in the design, development, testing, training, and use of AI systems in the workplace.
  • Ethical development – AI systems should be designed, developed, and trained in a way to protect workers.
  • Governance and Oversight – Organizations should have clear governance systems and oversight for AI systems.
  • Transparency – Employers should be transparent with workers and job seekers about AI systems being used.
  • Compliance with existing workplace laws – AI systems should not violate or undermine workers’ rights, including the right to organize, health and safety rights, and other worker protections.
  • Enabling – AI systems should assist and improve workers’ job quality.
  • Supportive during transition – Employers should support workers during job transitions related to AI.
  • Privacy and Security of Data – Workers’ data collected, used, or created by AI systems should be limited in scope and used to support legitimate business aims.

NIST Releases Risk ‘Profile’ for Generative AI

A year ago, we highlighted the National Institute of Standards and Technology’s (“NIST”) release of a framework designed to address AI risks (the “AI RMF”). We noted that the framework is abstract, like its central subject, that it is expected to evolve and change substantially over time, and that NIST frameworks have a relatively short but significant history of shaping industry standards.

As support for the AI RMF, last month NIST released in draft form the Generative Artificial Intelligence Profile (the “Profile”). The Profile identifies twelve risks posed by Generative AI (“GAI”), including several that are novel or expected to be exacerbated by GAI. Some of the risks are new and exotic, such as confabulation, toxicity, and homogenization.

The Profile also identifies risks that are familiar, such as those for data privacy and cybersecurity. For the latter, the Profile details two types of cybersecurity risks: (1) those that can discover vulnerabilities or otherwise lower the barriers to offensive capabilities, and (2) those that can expand the overall attack surface, for example through novel attacks that exploit vulnerabilities.

For offensive capabilities and novel attack risks, the Profile includes these examples:

  • Large language models (a subset of GAI) that discover vulnerabilities in data and write code to exploit them.
  • GAI-powered co-pilots that proactively inform threat actors on how to evade detection.
  • Prompt-injections that steal data and run code remotely on a machine.
  • Compromised datasets that have been ‘poisoned’ to undermine the integrity of outputs.
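The last bullet, dataset poisoning, is the most concrete of these risks to guard against programmatically. As one illustrative sketch (our own, not drawn from the Profile; the record contents and manifest workflow are hypothetical), a team could fingerprint vetted training records and quarantine anything that no longer matches a trusted manifest:

```python
import hashlib


def fingerprint(record: bytes) -> str:
    """SHA-256 digest of a raw training record."""
    return hashlib.sha256(record).hexdigest()


def filter_poisoned(records, trusted_manifest):
    """Split records into (clean, suspect) lists: a record is clean only
    if its digest appears in the manifest of known-good hashes built
    when the dataset was originally vetted."""
    clean, suspect = [], []
    for rec in records:
        (clean if fingerprint(rec) in trusted_manifest else suspect).append(rec)
    return clean, suspect


# Hypothetical usage: the manifest is created before any third party
# could alter the data; a later tampered record fails the check.
originals = [b"label=cat,pixels=0a1b", b"label=dog,pixels=2c3d"]
manifest = {fingerprint(r) for r in originals}
delivered = originals + [b"label=cat,pixels=ADVERSARIAL"]
clean, suspect = filter_poisoned(delivered, manifest)
```

A hash manifest only detects after-the-fact tampering, of course; it does nothing about data that was poisoned before vetting, which is why the Profile pairs such controls with provenance tracking and adversarial testing.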

In the past, the Federal Trade Commission (“FTC”) has referred to NIST when investigating companies’ data breaches. In settlement agreements, the FTC has required organizations to implement security measures through the NIST Cybersecurity Framework. It is reasonable to assume, then, that NIST guidance on GAI will also be recommended or eventually required.

But it’s not all bad news – despite the risks when in the wrong hands, GAI can also improve cybersecurity defenses. As noted in Microsoft’s recent report on the GDPR & GAI, GAI can already: (1) support cybersecurity teams and protect organizations from threats, (2) train models to review applications and code for weaknesses, and (3) review and deploy new code more quickly by automating vulnerability detection.

Before ‘using AI to fight AI’ becomes legally required, just as multi-factor authentication, encryption, and training have become for cybersecurity, organizations should consider the Profile as a tool to mitigate GAI risks. Across pages 11 through 52, the Profile sets out some four hundred suggested actions for managing GAI risks. Grouped together, the recommendations include:

  • Refine existing incident response plans and risk assessments if acquiring, embedding, incorporating, or using open-source or proprietary GAI systems.
  • Implement regular adversary testing of the GAI, along with regular tabletop exercises with stakeholders and the incident response team to better inform improvements.
  • Carefully review and revise contracts and service level agreements to identify who is liable for a breach and responsible for handling an incident in case one is identified.
  • Document everything throughout the GAI lifecycle, including changes to any third parties’ GAI systems, and where audited data is stored.

“Cybersecurity is the mother of all problems. If you don’t solve it, all the other technology stuff just doesn’t happen” said Charlie Bell, Microsoft’s Chief of Security, in 2022. To that end, the AI RMF and now the Profile provide useful and early guidance on how to manage GAI risks. The Profile is open for public comment until June 2, 2024.

No Arbitration for Lead Buyer: Consent Form Naming Buyer Does Not Give Buyer Right to Enforce Arbitration in TCPA Class Action

A subsidiary of Move, Inc. bought a data lead from Nations Info, Corp. generated through its HudHomesUsa.org website. The subsidiary then made an outbound prerecorded call, resulting in a TCPA lawsuit against Move. (Fun.)

Move, Inc. moved to compel arbitration, arguing that because its subsidiary was named in the consent form, the arbitration clause necessarily covered Move; the whole purpose of the clause, after all, was to permit parties buying leads from the website to compel TCPA cases to arbitration.

Good argument, but the court disagreed.

In Faucett v. Move, Inc., 2024 WL 2106727 (C.D. Cal. Apr. 22, 2024), the Court refused to enforce the arbitration clause, finding that Move, Inc. was not a signatory to the agreement and could not enforce it under any theory.

Most interestingly, Move argued that a motivating purpose behind the publisher’s arbitration clause was to benefit Move, because Nations Info listed Opcity, Move’s subsidiary, in the Consent Form as a company that could send marketing messages to the website’s users.

But the Court found the Terms and Consent Form were two different documents, and accepting the one did not change the scope of the other.

The Court also found equitable estoppel did not apply because Plaintiff was not moving to enforce the terms of the agreement. Quite the contrary, Plaintiff denied any arbitration (or consent) agreement existed.

So Move is stuck.

Pretty clear lesson here: lead buyers should make sure the arbitration provisions on any website they are buying leads from include third parties (like the buyer) as parties to the clause. Failing to do so may leave the lead buyer unable to enforce the provision, and that can lead to a massive class action with potential exposure in the hundreds of millions or billions of dollars.

Lead buyers are already forcing sellers to revise their flows in light of the FCC’s new one-to-one consent rules. So now would be a GREAT time to revisit requirements around arbitration provisions as well.

Something to think about.

Continuing Forward: Senate Leaders Release an AI Policy Roadmap

The US Senate’s Bipartisan AI Policy Roadmap is a highly anticipated document expected to shape the future of artificial intelligence (AI) in the United States over the next decade. This comprehensive guide, which complements the AI research, investigations, and hearings conducted by Senate committees during the 118th Congress, identifies areas of consensus that could help policymakers establish the ground rules for AI use and development across various sectors.

From intellectual property reforms and substantial funding for AI research to sector-specific rules and transparent model testing, the roadmap addresses a wide range of AI-related issues. Despite the long-awaited arrival of the AI roadmap, Sen. Chuck Schumer (D-NY), the highest-ranking Democrat in the Senate and key architect of the high-level document, is expected to strongly defer to Senate committees to continue drafting individual bills impacting the future of AI policy in the United States.

The Senate’s bipartisan roadmap is the culmination of a series of nine forums the Senate’s bipartisan AI working group held last year, during which it gathered diverse perspectives and information on AI technology. Topics of the forums included:

  1. Inaugural Forum
  2. Supporting US Innovation in AI
  3. AI and the Workforce
  4. High Impact Uses of AI
  5. Elections and Democracy
  6. Privacy and Liability
  7. Transparency, Explainability, Intellectual Property, and Copyright
  8. Safeguarding
  9. National Security

The wide range of views and concerns expressed by over 150 experts including developers, startups, hardware and software companies, civil rights groups, and academia during these forums helped policymakers develop a thorough and inclusive document that reveals the areas of consensus and disagreement. As the 118th Congress continues, it’s expected that Sen. Schumer will reach out to his counterparts in the US House of Representatives to determine the common areas of interest. Those bipartisan and bicameral conversations will ultimately help Congress establish the foundational rules for AI use and development, potentially shaping not only the future of AI in the United States but also influencing global AI policy.

The final text of this guiding document focuses on several high-level categories. Below, we highlight a handful of notable provisions:

Publicity Rights (Name, Image, and Likeness)

The roadmap encourages senators to consider whether there is a need for legislation that would protect against the unauthorized use of one’s name, image, likeness, and voice, as it relates to AI. While state laws have traditionally recognized the right of individuals to control the commercial use of their so-called “publicity rights,” federal recognition of those rights would mark a major shift in intellectual property law and make it easier for musicians, celebrities, politicians, and other prominent public figures to prevent or discourage the unauthorized use of their publicity rights in the context of AI.

Disclosure and Transparency Requirements

Noting that the “black box” nature of some AI systems can make it difficult to assess compliance with existing consumer protection and civil rights laws, the roadmap encourages lawmakers to ensure that regulators are able to access information directly relevant to enforcing those laws and, if necessary, place appropriate transparency and “explainability” requirements on “high risk” uses of AI. The working group does not offer a definition of “high risk” use cases, but suggests that systems implicating constitutional rights, public safety, or anti-discrimination laws could be forced to disclose information about their training data and factors that influence automated or algorithmic decision making. The roadmap also encourages the development of best practices for when AI users should disclose that their products utilize AI, and whether developers should be required to disclose information to the public about the data sets used to train their AI models.

The document also pushes senators to develop sector-specific rules for AI use in areas such as housing, health care, education, financial services, news and journalism, and content creation.

Increased Funding for AI Innovation

On the heels of the findings included in the National Security Commission on Artificial Intelligence’s (NSCAI) final report, the roadmap encourages Senate appropriators to provide at least $32 billion for AI research funding at federal agencies, including the US Department of Energy, the National Science Foundation, and the National Institute of Standards and Technology. This request for a substantial investment underscores the government’s commitment to advancing AI technology and seeks to position federal agencies as “AI ready.” The roadmap’s innovation agenda includes funding the CHIPS and Science Act, support for semiconductor research and development to create high-end microchips, modernizing the federal government’s information technology infrastructure, and developing in-house supercomputing and AI capacity in the US Department of Defense.

Investments in National Defense

Many members of Congress believe that creating a national framework for AI will also help the United States compete on the global stage with China. Senators who see this as the 21st century space race believe investments in the defense and intelligence community’s AI capabilities are necessary to push back against China’s head start in AI development and deployment. The working group’s national security priorities include leveraging AI’s potential to build a digital armed services workforce, enhancing and accelerating the security clearance application process, blocking large language models from leaking intelligence or reconstructing classified information, and pushing back on perceived “censorship, repression, and surveillance” by Russia and China.

Addressing AI in Political Ads

Looking ahead to the 2024 election cycle, the roadmap’s authors are already paying attention to the threats posed by AI-generated election ads. The working group encourages digital content providers to watermark any political ads made with AI and include disclaimers in any AI-generated election content. These guardrails also align with the provisions of several bipartisan election-related AI bills that passed out of the Senate Rules Committee on the same day as the roadmap’s release.

Privacy and Legal Liability for AI Usage

The AI Working Group recommends the passage of a federal data privacy law to protect personal information. The AI Working Group notes that the legislation should address issues related to data minimization, data security, consumer data rights, consent and disclosure, and the role of data brokers. Support for these principles is reflected in numerous state privacy laws enacted since 2018, and in bipartisan, bicameral draft legislation (the American Privacy Rights Act) supported by Rep. Cathy McMorris Rodgers (R-WA) and Sen. Maria Cantwell (D-WA).

As we await additional legislative activity later this year, it is clear that these guidelines will have far-reaching implications for the AI industry and society at large.

CEQ Finalizes “Phase 2” Revisions to NEPA Implementing Regulations

The Council on Environmental Quality (“CEQ”) is tasked with issuing National Environmental Policy Act (“NEPA”) regulations to guide federal agencies in its implementation. In 2021, CEQ began a two-phase process to revise these regulations. “Phase 1” largely reversed several changes made to the regulations in 2020 under the prior Trump Administration, including key changes relating to defining “purpose and need” and the long-used concepts of direct, indirect, and cumulative effects. The new “Phase 2” revisions are more extensive. Some of the Phase 2 revisions codify in regulation amendments to NEPA made by the Fiscal Responsibility Act of 2023 (“FRA”) and intended to improve the efficiency of the NEPA process, such as establishing page limits for environmental documents and facilitating the use of categorical exclusions (“CEs”). The Phase 2 revisions also restore additional concepts or provisions from the 1978 regulations and case law interpreting those regulations, remove additional changes made in 2020 that CEQ now “considers imprudent,” and, for the first time, specifically require consideration of effects relevant to environmental justice and climate change. We highlight some of these changes below.

The Phase 2 Final Rule will impact a broad range of projects needing federal authorizations or funding. Many of the efficiency measures included in the Final Rule implement changes that were enacted in the FRA. Although these changes could help address some long-standing issues in the NEPA process around delays and litigation, the effect of the proposed changes will be highly dependent on how the individual federal agencies carry out the changes through their own procedures and implementing regulations. Moreover, the Phase 2 Final Rule makes other important changes to the regulations that, rather than streamlining and improving efficiency, could increase burdens and challenges associated with NEPA compliance.

The Phase 2 Final Rule is scheduled to go into effect on July 1, 2024. However, industry groups and several key members of Congress, led by Senator Joe Manchin, already have signaled their frustration with these revisions and have announced that they will seek to overturn the Phase 2 Final Rule using the Congressional Review Act.

Provisions Directed Towards Promoting Efficiency and Streamlining

Page Limits and Timelines. The Final Rule makes many small and some larger changes to promote efficiency and streamline the NEPA process. The Final Rule incorporates the FRA’s page limits of 75 pages for environmental assessments (“EAs”), 150 pages for environmental impact statements (“EISs”), and 300 pages for EISs of “extraordinary complexity.” It includes the FRA’s time limits for completion of NEPA documents, requiring completion of EAs within one year and EISs within two years, although it allows for an agency to extend this deadline, in consultation with any project applicant, to the extent necessary to complete the document. To further promote efficiency, the Final Rule also requires agencies to set deadlines and schedules appropriate to specific actions or types of actions.

Categorical Exclusions. The Final Rule also makes substantial changes to its regulations governing CEs that should facilitate agencies’ adoption of CEs as a tool to streamline NEPA compliance in certain circumstances, as allowed under the FRA. It sets forth a process for agencies to adopt and utilize other agencies’ CEs, as allowed under the FRA without having to amend their regulations. The Final Rule clarifies that agencies can establish CEs individually as well as jointly with other agencies. And it allows agencies to establish CEs through land use plans, decision documents supported by a programmatic EIS or EA, or similar planning or programmatic decisions, without having to go through a separate rulemaking process. According to CEQ, by expanding the means by which agencies can establish CEs, these changes will, among other things, encourage agencies to undertake programmatic and planning reviews, as well as promote and speed the process for establishing CEs.

Programmatic Reviews and Tiering. The Final Rule includes various revisions to codify best practices for the use of programmatic NEPA reviews and tiering, which CEQ acknowledges “are important tools to facilitate more efficient environmental reviews and project approvals.”

Provisions that Could Increase NEPA Compliance Burdens

While the Phase 2 Final Rule emphasizes efficiency, it includes a range of regulatory changes that could have the opposite effect, creating additional burdens and potentially perpetuating opportunities for contentious litigation.

Climate Change, Environmental Justice, and Tribal Resources. Reflected in a wide range of revisions to the regulations, the Phase 2 Final Rule aims to further advance the Biden Administration’s policy focus on climate change, environmental justice, and Tribal resources. Among other provisions, the Final Rule explicitly requires agencies to analyze “disproportionate and adverse human health and environmental effects on communities with environmental justice concerns” and climate change-related effects, including quantification of greenhouse gas emissions where feasible, in their NEPA reviews. Agencies also must review these effects, as well as effects on Tribal rights and resources, in identifying the environmentally preferable alternative or alternatives. Similarly, the Final Rule defines “extraordinary circumstances”—which agencies must consider in determining whether to apply a CE—to include potential substantial disproportionate and adverse effects on communities with environmental justice concerns, potential substantial climate change effects, and potential substantial effects on historic or cultural properties. Moreover, agencies now “should, where relevant and appropriate, incorporate mitigation measures” to address effects “that disproportionately and adversely affect communities with environmental justice concerns.” And the Final Rule directs agencies, where appropriate, to use projections when evaluating climate change-related effects, including relying on models to project a range of possible future outcomes, provided that they disclose relevant assumptions or limitations. While these codifications are new—particularly the regulation directing agencies to consider mitigation for impacts to environmental justice communities—most agencies have been including some environmental justice and greenhouse gas emission impacts in their NEPA reviews based upon federal governmentwide and agency policy and court precedent.

Major Federal Actions. Implementing changes in the FRA and further responding to changes made in the 2020 rule, the Final Rule revises the definition of “major federal action”—the trigger for environmental review under NEPA. The FRA, in addition to specifying that a major federal action requires “substantial Federal control and responsibility,” established several exclusions including for certain types of projects receiving loans, loan guarantees, or other types of federal financial assistance. In an effort to address some of the uncertainty raised by these exclusions, the revised regulations provide that major federal actions generally include “[p]roviding more than a minimal amount of financial assistance, . . . where the agency has the authority to deny in whole or in part the assistance due to environmental effects, has authority to impose conditions on the receipt of the financial assistance to address environmental effects, or otherwise has sufficient control and responsibility over the subsequent use of the financial assistance” or effects of the funded activity.

Alternatives. The Phase 2 Final Rule clarifies that agencies are not required to consider “every conceivable alternative to a proposed action” but rather only “a reasonable range of alternatives that will foster informed decision making.” Additionally, the revised regulations provide that agencies have the discretion, but are not required, to include reasonable alternatives not within the lead agency’s jurisdiction. CEQ continues to anticipate that this will occur relatively infrequently and notes that such alternatives still must be technically and economically feasible and meet the proposed action’s purpose and need. The Final Rule also requires that environmental documents (and not just records of decision) identify one or more environmentally preferable alternatives, which could be the proposed action, the no action alternative, or a reasonable alternative.

Mitigation. Although NEPA has long been understood to be a procedural, rather than substantive, requirement, the Phase 2 Final Rule includes several provisions intended to encourage agencies to mitigate the impacts of proposed actions and to ensure that mitigation measures that agencies rely on in making their environmental determinations are actually carried out. When an agency incorporates and relies upon mitigation measures—whether in its analysis of reasonably foreseeable effects or in a mitigated finding of no significant impact—the revised regulations require the agency to explain the enforceable mitigation requirements or commitments to be undertaken and the authority to enforce them (for example, permit conditions, agreements, or other measures), and to prepare a monitoring and compliance plan.

Development of New Information. While agencies generally historically have not been required to develop data that was not readily available, CEQ “now considers it vital to the NEPA process for agencies to undertake studies and analyses” that provide information “essential to a reasoned choice among alternatives,” provided the overall costs are not unreasonable, and includes provisions to that effect in the Final Rule.

Exhaustion, Judicial Review, and Remedies. The Phase 2 Final Rule removes several changes included in the 2020 rule relating to exhaustion, judicial review, and remedies that were intended to reduce NEPA-related litigation and project delays.

The Phase 2 revisions take effect on July 1, 2024, and apply to any NEPA process that commences after that date, although the Final Rule states that agencies may apply them to ongoing activities and environmental documents that commence prior to that date. In addition to following the CEQ regulations, agencies also have adopted agency-specific NEPA implementing procedures. Agencies must revise these procedures to incorporate changes necessitated by the Phase 2 Final Rule by July 1, 2025.

FTC: Three Enforcement Actions and a Ruling

In today’s digital landscape, the exchange of personal information has become ubiquitous, often without consumers fully comprehending the extent of its implications.

The recent actions undertaken by the Federal Trade Commission (FTC) shine a light on the intricate web of data extraction and mishandling that pervades our online interactions. From the seemingly innocuous permission requests of game apps to the purported protection promises of security software, consumers find themselves at the mercy of data practices that blur the lines between consent and exploitation.

The FTC’s proposed settlements with companies like X-Mode Social (“X-Mode”) and InMarket, two data aggregators, and Avast, a security software company, underscore the need for businesses to appropriately secure and limit the use of consumer data, including information previously considered innocuous, such as browsing and location data. In a world where personal information serves as currency, ensuring consumer privacy compliance has never been more critical – or posed such a commercial risk for failing to get it right.

X-Mode and InMarket Settlements: The proposed settlements with X-Mode and InMarket concern numerous allegations based on the mishandling of consumers’ location data. Both companies supposedly collected precise location data through their own mobile apps and those of third parties (through software development kits). X-Mode is alleged to have sold precise location data (advertised as being 70% accurate within 20 meters or less) linked to timestamps and unique persistent identifiers (e.g., names, email addresses, etc.) of its consumers to private government contractors without obtaining proper consent. Plotting this data on a map makes it easy to reveal each person’s movements over time.
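To see why location records tied to a persistent identifier are so revealing, consider a minimal sketch (the device identifiers, timestamps, and coordinates below are invented for illustration): once pings share an identifier, simply sorting them by timestamp reconstructs a person’s movements.

```python
from collections import defaultdict


def movement_trails(pings):
    """Group (identifier, iso_timestamp, lat, lon) tuples into
    per-person trails ordered by time. ISO 8601 timestamps sort
    chronologically as plain strings."""
    trails = defaultdict(list)
    for ident, ts, lat, lon in pings:
        trails[ident].append((ts, lat, lon))
    for trail in trails.values():
        trail.sort()  # chronological order reveals the movement pattern
    return dict(trails)


# Hypothetical pings such as a data aggregator might hold.
pings = [
    ("device-123", "2024-01-02T08:05", 38.9, -77.0),
    ("device-123", "2024-01-01T22:10", 38.8, -77.1),
    ("device-456", "2024-01-01T09:00", 40.7, -74.0),
]
trails = movement_trails(pings)
```

The point of the sketch is that no sophisticated analytics are needed: a persistent identifier plus timestamps is enough to turn scattered pings into a trail showing where a person sleeps, works, and worships.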

InMarket purportedly utilized location data to cross-reference such data with points of interest to sort consumers into particularized audience segments for targeted advertising purposes without adequately informing consumers – examples of audience segments include parents of preschoolers, Christian church attendees, and “wealthy and not healthy,” among other groupings.

Avast Settlement: Avast, a security software company, allegedly sold granular and re-identifiable browsing information of its consumers despite assuring consumers it would protect their privacy. Avast allegedly collected extensive browsing data of its consumers through its antivirus software and browser extensions while assuring its consumers that their browsing data would only be used in aggregated and anonymous form. The data Avast collected revealed visits to websites that could be attributed to particular people and allowed inferences to be drawn about those individuals; examples include visits to academic papers on symptoms of breast cancer, education courses on tax exemptions, postings for government jobs in Fort Meade, Maryland with a salary over $100,000, links to FAFSA applications, and directions from one location to another, among others.

Sensitivity of Browsing and Location Data

It is important to note that none of the underlying datasets in question contained traditional types of personally identifiable information (e.g., name, identification numbers, physical descriptions, etc.) (“PII”). Even so, the three proposed settlements by the FTC underscore the sensitive nature of browsing and location data due to the insights such data reveals, such as religious beliefs, health conditions, and financial status, and the ease with which those insights can be linked to particular individuals.

In the digital age, the amount of data available about individuals online and collected by various companies makes the re-identification of individuals easier every day. Even when traditional PII is not included in a data set, linking sufficient data points can create a profile or understanding of an individual. When such a profile is then linked to an identifier (such as a username, phone number, or email address provided when downloading an app or setting up an account) and cross-referenced with publicly available data, such as name, email, phone number, or content on social media sites, it can yield deep insights into an individual. Despite the absence of traditional types of PII, such data poses significant privacy risks due to the potential for re-identification and the intimate details about individuals’ lives that it can divulge.
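The linkage described above can be surprisingly mechanical. A toy sketch (all records invented; real linkage attacks use fuzzier quasi-identifiers and much larger datasets) shows how a single shared identifier lets “anonymous” browsing records be joined back to a named person:

```python
def reidentify(pseudonymous_rows, public_profiles):
    """Join 'anonymous' rows to real names via a shared quasi-identifier,
    here an email address supplied at app sign-up and also visible on a
    public profile."""
    by_email = {p["email"]: p["name"] for p in public_profiles}
    return [
        {**row, "name": by_email[row["email"]]}
        for row in pseudonymous_rows
        if row["email"] in by_email
    ]


# Hypothetical datasets: neither contains a name on its own.
app_data = [
    {"email": "jdoe@example.com", "sites_visited": ["oncology-forum"]},
]
social = [
    {"email": "jdoe@example.com", "name": "J. Doe"},
]
matched = reidentify(app_data, social)
```

One dictionary lookup is all it takes once the two datasets share a key, which is why the FTC treats “no traditional PII” as cold comfort when persistent identifiers travel with the data.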

The FTC emphasizes that companies must recognize and treat browsing and location data as sensitive information and implement appropriately robust safeguards to protect consumer privacy. This is especially true when a data set includes information as precise as that cited by the FTC in its proposed settlements.

Accountability and Consent

With browsing and location data, there is also a concern that consumers may not be fully aware of how their data is used. For instance, Avast claimed to protect consumers’ browsing data and then sold that very same information, often without notice to consumers. When Avast did inform customers of its practices, the FTC claims it deceptively stated that any sharing would be “anonymous and aggregated.” Similarly, X-Mode claimed it would use location data for ad personalization and location-based analytics; consumers were unaware that such location data was also sold to government contractors.

The FTC has recognized that a company may need to process an individual’s information to provide the services or products that individual requests. The FTC also holds that such processing does not mean the company is then free to collect, access, use, or transfer that information for other purposes (e.g., marketing, profiling, background screening, etc.). Essentially, purpose matters. As the FTC explains, a flashlight app provider cannot collect, use, store, or share a user’s precise geolocation data, nor can a tax preparation service use a customer’s information to market other products or services.

If companies want to use consumer personal information for purposes other than providing the requested product or services, the FTC states that companies should inform consumers of such uses and obtain consent to do so.

The FTC aims to hold companies accountable for their data-handling practices and ensure that consumers are provided with meaningful consent mechanisms. Companies should handle consumer data only for the purposes for which data was collected and honor their privacy promises to consumers. The proposed settlements emphasize the importance of transparency, accountability, meaningful consent, and the prioritization of consumer privacy in companies’ data handling practices.

Implementing and Maintaining Safeguards

Data – especially granular data that provides insights and inferences about individuals – is extremely valuable to companies, but it is that same data that puts those individuals’ privacy at risk. Companies that sell or share information sometimes include contractual limitations on the use of the data, but not all contracts contain such restrictions, or restrictions sufficient to safeguard individuals’ privacy.

For instance, the FTC alleges that some of Avast’s underlying contracts did not prohibit the re-identification of Avast’s users. Even where the contracts did prohibit re-identification, the FTC alleges that purchasers of the data could still match Avast users’ browsing data with information from other sources, so long as that information was not “personally identifiable.” Avast also failed to audit or confirm that purchasers complied with its prohibitions.

The proposed complaint against X-Mode alleges that, on at least two occasions, X-Mode sold location data to purchasers who violated restrictions in X-Mode’s contracts by reselling the data to companies further downstream. The X-Mode example shows that even when restrictions are included in contracts, they may not prevent misuse by downstream parties.

Ongoing Commitment to Privacy Protection

The FTC stresses the importance of obtaining informed consent before collecting or disclosing consumers’ sensitive data, as misuse of such data can violate consumer privacy and expose consumers to various harms, including stigma and discrimination. While privacy notices, consent, and contractual restrictions are important, the FTC emphasizes that they must be backed up by action. Accordingly, the FTC’s proposed orders require companies to design, implement, maintain, and document safeguards to protect the personal information they handle, especially when it is sensitive in nature.

What Does a Company Need To Do?

Given the recent enforcement actions by the FTC, companies should:

  1. Consider the data they collect and whether such data is needed to provide the services and products requested by the consumer and/or to serve a legitimate business need in support of providing those services and products (e.g., billing, ongoing technical support, shipping);
  2. Treat browsing and location data as sensitive personal information;
  3. Accurately inform consumers of the types of personal information collected by the company, its uses, and parties to whom it discloses the personal information;
  4. Collect, store, use, or share consumers’ sensitive personal information (including browser and location data) only with such consumers’ informed consent;
  5. Limit the use of consumers’ personal information solely to the purposes for which it was collected and not market, sell, or monetize consumers’ personal information beyond such purpose;
  6. Design, implement, maintain, document, and adhere to safeguards that actually maintain consumers’ privacy; and
  7. Audit and inspect service providers and third-party companies downstream with whom consumers’ data is shared to confirm they are (a) adhering to and complying with contractual restrictions and (b) implementing appropriate safeguards to protect such consumer data.

A New Day for “Natural” Claims?

On May 2, the Second Circuit upheld summary judgment in favor of KIND in a nine-year-old lawsuit challenging “All Natural” claims. In Re KIND LLC, No. 22-2684-cv (2d Cir. May 2, 2024). Although only time will tell, this Circuit decision in favor of the defense may finally change plaintiffs’ appetite for “natural” cases.

Over the many years of litigation, the lawsuit consolidated several class action filings from New York, Florida, and California into a single multi-district litigation with several different lead plaintiffs. All plaintiffs alleged that “All Natural” claims for 39 KIND granola bars and other snacks were deceptive. Id. at 3. Plaintiffs alleged that the following ingredients rendered the KIND bars not natural: soy lecithin, soy protein isolate, citrus pectin, glucose syrup/”non-GMO” glucose, vegetable glycerine, palm kernel oil, canola oil, ascorbic acid, vitamin A acetate, d-alpha tocopheryl acetate/vitamin E, and annatto.

The Second Circuit found that, in such cases, the relevant state laws followed a “reasonable consumer standard” of deception. Id. at 10. Further, according to the Second Circuit, the “Ninth Circuit has helpfully explained” that the reasonable consumer standard requires “‘more than a mere possibility that the label might conceivably be misunderstood by some few consumers viewing it in an unreasonable manner.’” Id. (quoting McGinity v. Procter & Gamble Co., 69 F.4th 1093, 1097 (9th Cir. 2023)). Rather, there must be “‘a probability that a significant portion of the general consuming public or of targeted consumers, acting reasonably in the circumstances, could be misled.’” Id. To defeat summary judgment, the plaintiffs would need to present admissible evidence showing how “All Natural” tends to mislead under this standard.

The Second Circuit agreed with the lower court that plaintiffs’ deposition testimony failed to provide such evidence where it failed to “establish an objective definition” representing reasonable consumer understanding of “All Natural.” Id. at 28. One plaintiff believed the claim meant “not synthetic,” another thought it meant “made from whole grains, nuts, and fruit,” and yet another believed it meant “literally plucked from the ground.” Id. The court observed that plaintiffs “fail[ed] to explain how a trier of fact could apply these shifting definitions.” Id. The court next rejected as useful evidence a dictionary definition of “natural,” which stated, “existing or caused by nature; not made or caused by humankind.” Id. at 29. The court reasoned that the dictionary definition was “not useful when applied to a mass-produced snack bar wrapped in plastic” – something “clearly made by humans.” Id.

The court, finally, upheld the lower court’s decision to exclude two other pieces of evidence the plaintiffs offered. First, the Second Circuit agreed that a consumer survey was subject to exclusion where leading questions biased the results. Id. at 21-22. The Second Circuit also agreed that an expert report by a chemist lacked relevance where it assessed “typical” sourcing of ingredients, not necessarily how KIND’s ingredients were manufactured or sourced. Id. at 22-24.

© 2024 Keller and Heckman LLP
by: the Food and Drug Law practice at Keller and Heckman LLP

For more news on Food Advertising Litigation, visit the NLR Biotech, Food, Drug section.

Get Off the Beaten Path: Three Ways Outsourcing Can Help Firms Achieve CRM & Data Quality Success

Normally, the path most traveled is thought to be the better road: it is the path that leads to goals and success, while the less traveled path leads to stressful processes and unknowns.

But for firms trying to achieve CRM success, the “beaten path” involves investing tens of thousands of dollars in the latest and greatest technology and hiring internal Data Stewards to maintain the data flowing into the system. This can consume a significant amount of firm resources, and there is no guarantee that CRM success will be achieved.

Let’s face it, the traditional approach to CRM and Data Quality Success often leads to more headaches and challenges than it does to success. Without the right experience and expertise, leading a CRM implementation project or a data quality clean-up can be disastrous.

Hundreds of thousands of records flow in from departmental databases which need to be analyzed and categorized properly. Meetings need to be held with firm leadership to understand their expectations for the system, and meetings need to be coordinated with vendors to set up demonstrations along with Requests For Proposals (RFPs).

To add more fuel to the fire, meetings also need to be held with end users to understand their needs and requirements so system selection can be tailored to them. In the end, firms are left with high training and implementation costs; limited staffing pools due to required expertise; and increased employee burnout due to the overwhelming nature of the work.

The Path Less Traveled: Outsourcing

Many forward-thinking firms have taken the path less traveled to CRM success and have outsourced many of their core marketing technology positions and data quality work to trusted service providers. Outsourced Marketing Technology Managers and Data Stewards can provide all the benefits of retaining these positions in-house at a cost-efficient price, all while reducing managerial headaches.

The route less traveled gives you access to a pool of highly skilled professionals without the additional costs associated with hiring internally. Many outsourced Marketing Technology Managers and Data Stewards have years of industry experience working with the nation’s top firms, tackling complex data quality issues and guiding implementations to ensure systems are integrated effectively.

To achieve CRM and data quality success, sometimes the beaten path won’t get you there. Here are three ways taking the path less traveled can help you achieve CRM and data quality success:

1. Cost Savings

Utilizing outsourced service providers for marketing technology or data quality roles can save firms a significant amount of money. For firms with around 250 professionals, hiring an internal CRM Manager and Data Steward can cost around $116,640.

For firms with limited resources and budgets, outsourcing providers offer various pricing models, from contracting workers on an as-needed basis for short- or long-term projects to pay-as-you-go arrangements. This allows firms to allocate more of their investments to higher-priority projects or initiatives. Depending on the service provider’s rate, firms can expect to pay up to 33% less ($77,350) when they outsource their core marketing technology and data quality work.
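As a back-of-the-envelope check, the two dollar figures quoted above can be compared directly; the numbers below come from this article, not from any independent salary survey, and a firm’s actual costs will vary.

```python
# Figures quoted in the article (illustrative only).
internal_cost = 116_640   # combined internal CRM Manager + Data Steward
outsourced_cost = 77_350  # quoted outsourced cost for the same work

savings = internal_cost - outsourced_cost
pct_saved = savings / internal_cost * 100

# Roughly one-third less, consistent with the article's "up to 33%" claim.
print(f"Outsourcing saves ${savings:,} (~{pct_saved:.1f}% less than hiring internally)")
```

The difference works out to $39,290, or about a third of the internal cost, which is where the “up to 33% less” figure comes from.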

2. Improved Data Quality

Unlike internal Data Stewards, who must balance other tasks and priorities, outsourced data quality professionals can focus on key responsibilities and work more efficiently. These outsourced professionals understand the intricacies of the professional services industry and fit seamlessly into your firm’s day-to-day processes.

Outsourced Data Stewards have the ability and know-how to implement data standardization processes and protocols, minimizing the number of dirty records that may flow into the system. They also have access to industry-leading tools that can streamline and automate data management so your attorneys and professionals can worry less about maintaining their contacts and more about serving their clients.

3. Reduction In Turnover

Traditionally, hiring Data Stewards internally has been a revolving door: firms hire a new team member to maintain their data quality, train them, compensate them, motivate them, and then replace them. Because outsourced service providers are not directly involved with the firm’s core services, they assume the role of finding, hiring, training, motivating, and managing the data quality professional.

This frees up your marketing and business development teams to focus on growing the firm and nurturing client relationships rather than chasing down contact data from the organization’s professionals. They can help you with a wide range of data-related activities including:

  • Regularly reviewing new records
  • Enhancing records with geographical information, financial data, or who-knows-who relationships
  • Creation and management of segmented and targeted lists for marketing or business development campaigns

If you are struggling with your marketing technology or data quality, don’t be afraid to explore alternate routes, like outsourcing. It can open your firm up to a pool of highly skilled professionals with years of experience solving the very issues you may be facing. An outsourced team can provide your firm with significant cost savings, improved data quality, and a reduction in employee turnover and managerial headaches.

These operational efficiencies lead to greater productivity and returns on marketing spend – meaning greater profitability for the firm.

How Lawyers Can Effectively Leverage Their Published Articles

Writing and publishing articles or blog posts can be a powerful branding and business development tool for lawyers. Not only do they demonstrate your expertise in your practice area, but they also significantly enhance your visibility and credibility.

However, your work doesn’t end once the article is published – in fact, it’s just beginning. Here are some tips to maximize the value, reach and impact of your published work.

1. Optimize for Online Search

First and foremost, ensure your article is optimized for search engines (SEO). This means incorporating relevant keywords that potential clients might use to find information related to your legal expertise. SEO increases the visibility of your content on search engines like Google, making it easier for your target audience to find you.

2. Share on Social Media

Utilize your personal and professional social media platforms to share your article. LinkedIn, Twitter and even Facebook are excellent venues for reaching other professionals and potential clients. Don’t just share it once; periodically repost it, especially if the topic is evergreen. Engage with comments and discussions to further boost your post’s visibility.

3. Incorporate Into Newsletters

If you or your firm sends out a regular newsletter, include a link to your article. This not only provides added value to your subscribers but also keeps your existing client base engaged with your latest insights and activities. This approach can help reinforce your position as a thought leader in your field. Also, consider launching a LinkedIn newsletter. LinkedIn’s platform offers a unique opportunity to reach a professional audience directly, increasing the potential for networking and attracting new clients who are actively interested in your area of expertise.

4. Speak at Conferences and Seminars

Use your article as a springboard to secure speaking engagements. Conferences, seminars and panel discussions often look for experts who can contribute interesting insights. Your article can serve as proof of your expertise and a teaser of your presentation content, making you an attractive candidate for these events.

5. Create Multimedia Versions

Expand the reach of your article by adapting it into different formats. Consider recording a podcast episode discussing the topic in depth, or creating a short-form video for LinkedIn and YouTube. These formats can attract different segments of your audience and make the content more accessible.

6. Network Through Professional Groups

Share your article in professional groups and online forums in your field, as well as alumni groups (law school, undergrad school and former firms). This can lead to discussions with peers and can even attract referrals. Active participation in these groups, coupled with sharing insightful content, can significantly expand your professional network.

7. Use as a Teaching Resource

Offer to guest lecture at local law schools and use your article as a teaching resource. This not only enhances your reputation as an expert but also builds relationships with the upcoming generation of lawyers who could become colleagues or refer clients in the future.

8. Repurpose Content for Blogs or Articles

Break down the article into smaller blog posts or develop certain points further into new articles. This can help maintain a consistent stream of content on your website, which is good for SEO and keeps your audience engaged over time.

9. Monitor and Engage with Feedback

Keep an eye on comments and feedback from your article across all platforms. Engaging with readers can provide insights into what your audience finds useful, shaping your future writing to better meet their needs. It also helps in building a loyal following.

10. Track Metrics

Utilize analytics tools (web, social media and email) to track how well your article performs in terms of views, shares and engagement. This data can help you understand what works and what doesn’t, guiding your content strategy for future articles.

11. Leverage the Power of Content Repurposing

Content repurposing can significantly extend the life and reach of your original article. By transforming the article into different content formats—such as infographics, webinars, slide decks or even e-books—you cater to various learning styles and preferences, reaching a broader audience. This strategy not only maximizes your content’s exposure but also enhances engagement by presenting the information in new, accessible ways. Repurposing content can help solidify your reputation as a versatile and resourceful expert in your field.

Publishing an article or blog post is just the beginning. By strategically promoting and leveraging your published works, you can enhance your visibility, establish yourself as a thought leader and attract more clients. Every article has the potential to open new doors; it’s up to you to make sure it does!