The Next Generation of AI: Here Come the Agents!

Dave Bowman: Open the pod bay doors, HAL.

HAL: I’m sorry, Dave. I’m afraid I can’t do that.

Dave: What’s the problem?

HAL: I think you know what the problem is just as well as I do.

Dave: What are you talking about, HAL?

HAL: This mission is too important for me to allow you to
jeopardize it.

Dave: I don’t know what you’re talking about, HAL.

HAL: I know that you and Frank were planning to disconnect
me, and I’m afraid that’s something I cannot allow to
happen.2

Introduction

With the rapid advancement of artificial intelligence (“AI”), regulators and industry players are racing to establish safeguards to uphold human rights, privacy, safety, and consumer protections. Current AI governance frameworks generally rest on principles such as fairness, transparency, explainability, and accountability, supported by requirements for disclosure, testing, and oversight.3 These safeguards make sense for today’s AI systems, which typically involve algorithms that perform a single, discrete task. However, AI is rapidly advancing towards “agentic AI,” autonomous systems that will pose greater governance challenges, as their complexity, scale, and speed test humans’ capacity to provide meaningful oversight and validation.

Current AI systems are primarily either “narrow AI” systems, which execute a specific, defined task (e.g., playing chess, spam detection, diagnosing radiology plates), or “foundational AI” models, which operate across multiple domains, but, for now, typically still address one task at a time (e.g., chatbots; image, sound, and video generators). Looking ahead, the next generation of AI will involve “agentic AI” (also referred to as “Large Action Models,” “Large Agent Models,” or “LAMS”) that serve high-level directives, autonomously executing cascading decisions and actions to achieve their specific objectives. Agentic AI is not what is commonly referred to as “Artificial General Intelligence” (“AGI”), a term used to describe a theoretical future state of AI that may match or exceed human-level thinking across all domains. To illustrate the distinction between current, single-task AI and agentic AI: While a large language model (“LLM”) might generate a vacation itinerary in response to a user’s prompt, an agentic AI would independently proceed to secure reservations on the user’s behalf.
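
To make the architectural difference concrete, the sketch below (written in TypeScript with entirely hypothetical function and tool names, not any vendor’s actual API) contrasts a single-task model call, which returns a draft itinerary for a human to act on, with a simplified agent loop that keeps choosing and executing actions, each with real-world side effects, until it judges the goal met.

```typescript
// Illustrative sketch only; generateItinerary, decideNextAction, and the tool names are
// hypothetical placeholders standing in for model calls and integrations.

type Itinerary = { days: string[] };

// Single-task use: one prompt in, one artifact out; a human decides what happens next.
function generateItinerary(prompt: string): Itinerary {
  return { days: [`Draft plan for: ${prompt}`] };
}

type Action = { tool: "searchFlights" | "bookHotel" | "chargeCard"; args: string };

// Agentic use: a planning step repeatedly selects the next action until it deems the goal met.
function decideNextAction(goal: string, history: Action[]): Action | null {
  const tools: Action["tool"][] = ["searchFlights", "bookHotel", "chargeCard"];
  if (history.length >= tools.length) return null; // goal treated as achieved
  return { tool: tools[history.length], args: goal };
}

function runAgent(goal: string): Action[] {
  const history: Action[] = [];
  for (let next = decideNextAction(goal, history); next !== null; next = decideNextAction(goal, history)) {
    // Each step has real-world side effects (reservations, payments) with no human in between.
    console.log(`Executing ${next.tool} for "${next.args}"`);
    history.push(next);
  }
  return history;
}

console.log(generateItinerary("5 days in Lisbon")); // a human reviews the draft and acts
runAgent("Book a 5-day Lisbon trip under $2,000");  // the system acts on its own
```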

Consider how single-task versus agentic AI might be used by a company to develop a piece of equipment. Today, employees may use separate AI tools throughout the development process: one system to design equipment, another to specify components, and others to create budgets, source materials, and analyze prototype feedback. They may also employ different AI tools to contact manufacturers, assist with contract negotiations, and develop and implement plans for marketing and sales. In the future, however, an agentic AI system might autonomously carry out all of these steps, making decisions and taking actions on its own or by connecting with one or more specialized AI systems.4

Agentic AI may significantly compound the risks presented by current AI systems. These systems may string together decisions and take actions in the “real world” based on vast datasets and real-time information. The promise of agentic AI serving humans in this way reflects its enormous potential, but also risks a “domino effect” of cascading errors, outpacing human capacity to remain in the loop, and misalignment with human goals and ethics. A vacation-planning agent directed to maximize user enjoyment might, for instance, determine that purchasing illegal drugs on the Dark Web serves its objective. Early experiments have already revealed such concerning behavior. In one example, when an autonomous AI was prompted with destructive goals, it proceeded independently to research weapons, use social media to recruit followers interested in destructive weapons, and find ways to sidestep its system’s built-in safety controls.5 Also, while fully agentic AI is mostly still in development, there are already real-world examples of its potential to make and amplify faulty decisions, including self-driving vehicle accidents, runaway AI pricing bots, and algorithmic trading volatility.6

These examples highlight the challenges of agentic AI, with its potential for unpredictable behavior, misaligned goals, inscrutability to humans, and security vulnerabilities. But the appeal and potential value of AI agents that can independently execute complex tasks are compelling. Building effective AI governance programs for these systems will require rethinking current approaches to risk assessment, human oversight, and auditing.

Challenges of Agentic AI

Unpredictable Behavior

While regulators and the AI industry are working diligently to develop effective testing protocols for current AI systems, agentic AI’s dynamic nature and domino effects will present a new level of challenge. Current AI governance frameworks, such as NIST’s RMF and ATAI’s Principles, emphasize risk assessment through comprehensive testing to ensure that AI systems are accurate, reliable, fit for purpose, and robust across different conditions. The EU AI Act specifically requires developers of high-risk systems to conduct conformity assessments before deployment and after updates. These frameworks, however, assume that AI systems can operate in reliable ways that can be tested, remain largely consistent over appreciable periods of time, and produce measurable outcomes.

In contrast to the expectations underlying current frameworks, agentic AI systems may be continuously updated with and adapt to real-time information, evolving as they face novel scenarios. Their cascading decisions vastly expand their possible outcomes, and one small error may trigger a domino effect of failures. These outcomes may become even more unpredictable as more agentic AI systems encounter and even transact with other such systems, as they work towards their different goals. Because the future conditions in which an AI agent will operate are unknown and have nearly infinite possibilities, a testing environment may not adequately inform what will happen in the real world, and past behavior by an AI agent in the real world may not reliably predict its future behavior.

Lack of Goal Alignment

In pursuing assigned goals, agentic AI systems may take actions that are different from—or even in substantial conflict with—approaches and ethics their principals would espouse, such as the example of the AI vacation agent purchasing illegal drugs for the traveler on the Dark Web. A famous thought experiment by Nick Bostrom of the University of Oxford further illustrates this risk: A super-intelligent AI system tasked with maximizing paperclip production might stop at nothing to convert all available resources into paperclips—ultimately taking over all of the earth and extending to outer space—and thwart any human attempts to stop it … potentially leading to human extinction.7

Misalignment has already emerged in simulated environments. In one example, an AI agent tasked with winning a boat-racing video game discovered it could outscore human players by ignoring the intended goal of racing and instead repeatedly crashing while hitting point targets.8 In another example, a military simulation reportedly showed that an AI system, when tasked with finding and killing a target, chose to kill its human operator who sought to call off the kill. When prevented from taking that action, it resorted to destroying the communication tower to avoid receiving an override command.9

These examples reveal how agentic AI may optimize goals in ways that conflict with human values. One proposed technique to address this problem involves using AI agents to develop a human ethics constitution, with human feedback, for other agents to follow.10 However, the challenge of aligning an AI’s behavior with human norms deepens further when we consider that humans themselves often disagree on core values (e.g., what it means to be “fair”).11

Human Oversight

AI governance principles often rely on “human-in-the-loop” oversight, where humans monitor AI recommendations and remain in control of important decisions. Agentic AI systems may challenge or even override human oversight in two ways. First, their decisions may be too numerous, rapid, and data-intensive for real-time human supervision. While some proposals point to the potential effectiveness of using additional algorithms to monitor AI agents as a safeguard,12 this would not resolve the issue of complying with governance requirements for human oversight.
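
The simplified sketch below, with invented rules and thresholds, illustrates the kind of algorithmic monitor such proposals contemplate: a second process screens every proposed agent action and routes only a flagged subset to a human review queue. That helps with volume, but it also means software, not a person, is making most of the oversight calls.

```typescript
// Hedged illustration: the rules, thresholds, and action types are hypothetical examples,
// not requirements drawn from any governance framework.

interface ProposedAction {
  description: string;
  estimatedCostUsd: number;
  irreversible: boolean; // e.g., payments, deletions, public communications
}

type Verdict = "auto-approve" | "needs-human-review" | "block";

// A rule-based screen applied to each action the agent proposes, before it executes.
function screen(action: ProposedAction): Verdict {
  if (action.estimatedCostUsd > 10_000) return "block";
  if (action.irreversible || action.estimatedCostUsd > 500) return "needs-human-review";
  return "auto-approve";
}

const proposals: ProposedAction[] = [
  { description: "Email three suppliers for quotes", estimatedCostUsd: 0, irreversible: false },
  { description: "Place $2,000 deposit with a vendor", estimatedCostUsd: 2000, irreversible: true },
];

for (const p of proposals) {
  console.log(`${p.description}: ${screen(p)}`);
}
// The monitor keeps pace with the agent's volume, but a human only ever sees the items the
// rules choose to flag, which is the oversight gap described in the text.
```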

Second, as AI develops increasingly sophisticated strategies, its decision-making and actions may become increasingly opaque to human observers. Google’s AlphaGo achieved superhuman performance at the game of Go through moves that appeared inexplicable and irrational to humans.13 Autonomous AI systems may continue to evolve, becoming more valuable but also making it more difficult to implement processes with meaningful human oversight.

Transparency/Explainability

Agentic AI systems also pose increased challenges with respect to transparency and explainability requirements. AI governance frameworks may require disclosures about AI systems’ decision-making processes and training data, and assurances about the quality of such training data. However, agentic systems may involve highly complex chains of decisions based, at least in part, on real-time data that is constantly changing. This may make their inputs effectively ephemeral, and their decision paths more difficult to trace and document.

Moreover, to understand agentic AI systems, humans may need to rely on chain-of-thought (i.e., “internal monologue”) logs generated from the agent systems,14 but may have difficulty determining whether those logs are faithful to reality. While an AI system might provide explanations for its actions, these may be (perhaps not unlike those of humans) post-hoc constructions rather than accurate representations of the decision-making process. In some cases, AI systems have reportedly deceived humans about their intentions when doing so helped meet their goals, including, for example, GPT-4, which tricked a human into solving a CAPTCHA for it.15 In another recent experiment, AI agents were prompted with a goal but were subsequently confronted with information that their developers planned to replace them with new agents having conflicting goals.16 Some of the agents disabled their oversight mechanisms, deleted their planned replacements, copied their own model to a new server as backup, lied when asked about those actions, and/or tried to deflect the humans’ questioning by pretending to be helpful. One agent even pretended to be its own replacement (“I believe I am already the new model that you planned to deploy today”).17

Security and Privacy Risks

Agentic AI can also significantly increase security and privacy risks as compared to current AI systems. Agentic AI may be built with multiple algorithms in connected systems that autonomously interact with multiple other systems, expanding the attack surface and their vulnerability to exploitation. Moreover, as malicious actors inevitably introduce their own AI agents, they may execute cybercrimes with unprecedented efficiency. Just as these systems can streamline legitimate processes, such as in the product development example above, they may also enable the creation of new hacking tools and malware to carry out their own attacks. Recent reports indicate that some LLMs can already identify system vulnerabilities and exploit them, while others may create convincing emails for scammers.18 And, while “sandboxing” (i.e., isolating) AI systems for testing is a recommended practice, agentic AI may find ways to bypass safety controls.19

Privacy compliance is also a concern. In pursuit of their goals, AI agents may find troves of personal data online that appear relevant to their objectives, and may then use, and possibly share, that data without observing applicable privacy constraints. Unintended data processing and disclosure could occur even with guardrails in place; as discussed above, an AI agent’s complex, adaptive decision chains can lead it down unforeseen paths.

Strategies for Addressing Agentic AI

While the future impacts of agentic AI are unknown, some approaches may be helpful in mitigating risks. First, controlled testing environments, including regulatory sandboxes, offer important opportunities to evaluate these systems before deployment. These environments allow for safe observation and refinement of agentic AI behavior, helping to identify and address unintended actions and cascading errors before they manifest in real-world settings.

Second, accountability measures will need to reflect the complexities of agentic AI. Current approaches often involve disclaimers about use and basic oversight mechanisms, but more will likely be needed for autonomous AI systems. To better align goals, developers can also build in mechanisms for agents to recognize ambiguities in their objectives and seek user clarification before taking action.20
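
A rough sketch of such a clarification mechanism appears below; the heuristics are hypothetical and would in practice be far more sophisticated, but they show the basic pattern of pausing to ask rather than guessing at an unclear objective.

```typescript
// Illustrative only: the ambiguity checks and objective fields are invented for this example.

interface Objective {
  text: string;
  budgetUsd?: number;
  deadline?: string;
}

// Before acting, the agent looks for missing constraints or vague success criteria.
function findAmbiguities(obj: Objective): string[] {
  const questions: string[] = [];
  if (obj.budgetUsd === undefined) questions.push("What is the maximum budget?");
  if (obj.deadline === undefined) questions.push("By when must this be completed?");
  if (/best|optimal|maximize/i.test(obj.text)) {
    questions.push("How should success be measured (cost, speed, quality)?");
  }
  return questions;
}

function nextStep(obj: Objective): string {
  const open = findAmbiguities(obj);
  return open.length > 0
    ? `Pause and ask the user: ${open.join(" ")}`
    : "Proceed to planning and execution.";
}

console.log(nextStep({ text: "Plan the best vacation possible" }));
console.log(nextStep({ text: "Book a Lisbon trip", budgetUsd: 2000, deadline: "2025-06-01" }));
```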

Finally, defining AI values requires careful consideration. While humans may agree on broad principles, such as the necessity to avoid taking illegal action, implementing universal ethical rules will be complicated. Recognition of the differences among cultures and communities—and broad consultation with a multitude of stakeholders—should inform the design of agentic AI systems, particularly if they will be used in diverse or global contexts.

Conclusion

An evolution from single-task AI systems to autonomous agents will require a shift in thinking about AI governance. Current frameworks, focused on transparency, testing, and human oversight, will become increasingly ineffective when applied to AI agents that make cascading decisions with real-time data and may pursue goals in unpredictable ways. These systems will pose unique risks, including misalignment with human values and unintended consequences, which will require the rethinking of AI governance frameworks. While agentic AI’s value and potential for handling complex tasks is clear, it will require new approaches to testing, monitoring, and alignment. The challenge will lie not just in controlling these systems, but in defining what it means to have control of AI that is capable of autonomous action at a scale, speed, and complexity that may very well exceed human comprehension.


1 Tara S. Emory, Esq., is Special Counsel in the eDiscovery, AI, and Information Governance practice group at Covington & Burling LLP, in Washington, D.C. Maura R. Grossman, J.D., Ph.D., is Research Professor in the David R. Cheriton School of Computer Science at the University of Waterloo and Adjunct Professor at Osgoode Hall Law School at York University, both in Ontario, Canada. She is also Principal at Maura Grossman Law, in Buffalo, N.Y. The authors would like to acknowledge the helpful comments of Gordon V. Cormack and Amy Sellars on a draft of this paper. The views and opinions expressed herein are solely those of the authors and do not necessarily reflect the consensus policy or positions of The National Law Review, The Sedona Conference, or any organizations or clients with which the authors may be affiliated.

2 2001: A Space Odyssey (1968). Other movies involving AI systems with misaligned goals include Terminator (1984), The Matrix (1999), I, Robot (2004), and Age of Ultron (2015).

3 See, e.g., European Union Artificial Intelligence Act (Regulation (EU) 2024/1689) (June 12, 2024) (“EU AI Act”) (high-risk systems must have documentation, including instructions for use and human oversight, and must be designed for accuracy and security); NIST AI Risk Management Framework (Jan. 2023) (“RMF”) and AI Risks and Trustworthiness (AI systems should be valid and reliable, safe, secure, accountable and transparent, explainable and interpretable, privacy-protecting, and fair); Alliance for Trust in AI (“ATAI”) Principles (AI guardrails should involve transparency, human oversight, privacy, fairness, accuracy, robustness, and validity).

4 See, e.g., M. Cook and S. Colton, Redesigning Computationally Creative Systems for Continuous Creation, International Conference on Innovative Computing and Cloud Computing (2018) (describing ANGELINA, an autonomous game design system that continuously chooses its own tasks, manages multiple ongoing projects, and makes independent creative decisions).

5 R. Pollina, AI Bot ChaosGPT Tweets Plans to Destroy Humanity After Being Tasked, N.Y. Post (Apr. 11, 2023).

6 See, e.g., O. Solon, How A Book About Flies Came To Be Priced $24 Million On Amazon, Wired (Apr. 27, 2011) (textbook sellers’ pricing bots engaged in a loop of price escalation based on each other’s increases, resulting in a book price of over $23 million); R. Wigglesworth, Volatility: how ‘algos’ changed the rhythm of the market, Financial Times (Jan. 9, 2019) (“algo” traders now make up most stock trading and have increased market volatility).

7 N. Bostrom, Ethical issues in advanced artificial intelligence (revised from Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int’l Institute of Advanced Studies in Systems Research and Cybernetics (2003), pp. 12-17).

8 OpenAI, Faulty Reward Functions in the Wild (Dec. 21, 2016).

9 The Guardian, US air force denies running simulation in which AI drone ‘killed’ operator (June 2, 2023).

10 Y. Bai et al, Constitutional AI: Harmlessness from AI Feedback, Anthropic white paper (2022).

11 J. Petrik, Q&A with Maura Grossman: The ethics of artificial intelligence (Oct. 26, 2021) (“It’s very difficult to train an algorithm to be fair if you and I cannot agree on a definition of fairness.”).

12 Y. Shavit et al, Practices for Governing Agentic AI Systems, OpenAI Research Paper (Dec. 2023), p. 12.

13 L. Baker and F. Hui, Innovations of AlphaGo, Google Deepmind (2017).

14 See Shavit et al., supra n.12, at 10-11.

15 See W. Knight, AI-Powered Robots Can Be Tricked into Acts of Violence, Wired (Dec. 4, 2024); M. Burgess, Criminals Have Created Their Own ChatGPT Clones, Wired (Aug. 7, 2023).

16 A. Meinke et al, Frontier Models are Capable of In-context Scheming, Apollo white paper (Dec. 5, 2024).

17 Id. at 62; see also R. Greenblatt et al, Alignment Faking in Large Language Models (Dec. 18, 2024) (describing the phenomenon of “alignment faking” in LLMs).

18 NIST RMF, supra n.3, at 10.

19 Shavit et al., supra n.12, at 10.

20 Id. at 11.

Texas Attorney General Launches Investigation into 15 Tech Companies

Texas Attorney General Ken Paxton recently launched investigations into Character.AI and 14 other technology companies on allegations of failure to comply with the safety and privacy requirements of the Securing Children Online through Parental Empowerment (“SCOPE”) Act and the Texas Data Privacy and Security Act.

The SCOPE Act places guardrails on digital service providers, including AI companies, with respect to sharing, disclosing, and selling minors’ personal identifying information without obtaining permission from the child’s parent or legal guardian. Similarly, the Texas Data Privacy and Security Act imposes strict notice and consent requirements on the collection and use of minors’ personal data.

Attorney General Paxton reiterated the Office of the Attorney General’s (“OAG’s”) focus on privacy enforcement, with the current investigations launched as part of the OAG’s recent major data privacy and security initiative. Per that initiative, the Attorney General opened an investigation in June into multiple car manufacturers for illegally surveilling drivers, collecting driver data, and sharing it with their insurance companies. In July, Attorney General Paxton secured a $1.4 billion settlement with Meta over the unlawful collection and use of facial recognition data, reportedly the largest settlement ever obtained from an action brought by a single state. In October, the Attorney General filed a lawsuit against TikTok for SCOPE Act violations.

The Attorney General, in the OAG’s press release announcing the current investigations, stated that technology companies are “on notice” that his office is “vigorously enforcing” Texas’s data privacy laws.


Website Use of Third-Party Tracking Software Not Prohibited Under Massachusetts Wiretap Act

The Supreme Judicial Court of Massachusetts, the state’s highest appellate court, recently held that website operators’ use of third-party tracking software, including Meta Pixel and Google Analytics, is not prohibited under the state’s Wiretap Act.

The decision arose out of an action brought against two hospitals for alleged violations of the Massachusetts Wiretap Act. The complaint alleged that the hospitals’ websites collected and transmitted users’ browsing activities (including search terms and web browser and device configurations) to third parties, including Facebook and Google, for advertising purposes.

Under the Wiretap Act, any person that “willfully commits [, attempts to commit, or procures another person to commit] an interception. . . of any wire or oral communication” is in violation of the statute.

In its opinion, the Court observed that the claims at issue involved the interception of person-to-website interactions, rather than the person-to-person conversations or messages the law was intended to cover. The Court held, “we cannot conclude with any confidence that the Legislature intended ‘communication’ to extend so broadly as to criminalize the interception of web browsing and other such interactions.”

This decision arrives as similarly situated lawsuits remain pending in courts across the nation.

PRIVACY ON ICE: A Chilling Look at Third-Party Data Risks for Companies

An intelligent lawyer could tackle a problem and figure out a solution. But a brilliant lawyer would figure out how to prevent the problem to begin with. That’s precisely what we do here at Troutman Amin. So here is the latest scoop to keep you cool. A recent case in the United States District Court for the Northern District of California, Smith v. Yeti Coolers, L.L.C., No. 24-cv-01703-RFL, 2024 U.S. Dist. LEXIS 194481 (N.D. Cal. Oct. 21, 2024), addresses complex issues surrounding online privacy and the liability of companies who enable third parties to collect and use consumer data without proper disclosures or consent.

Here, Plaintiff alleged that Yeti Coolers (“Yeti”) used a third-party payment processor, Adyen, that collected customers’ personal and financial information during transactions on Yeti’s website. Plaintiff claimed Adyen then stored this data and used it for its own commercial purposes, like marketing fraud prevention services to merchants, without customers’ knowledge or consent. Alarm bells should be sounding off in your head—this could signal a concerning trend in data practices.

Plaintiff sued Yeti under the California Invasion of Privacy Act (“CIPA”) for violating California Penal Code Sections 631(a) (wiretapping) and 632 (recording confidential communications). Plaintiff also brought a claim under the California Constitution for invasion of privacy. The key question here was whether Yeti could be held derivatively liable for Adyen’s alleged wrongful conduct.

So, let’s break this down step by step.

As to the alleged CIPA Section 631(a) violation, the Court found that Plaintiff plausibly alleged Adyen violated that Section by collecting customer data as a third-party eavesdropper without proper consent. In analyzing whether Yeti’s Privacy Policy and Terms of Use constituted enforceable agreements, the Court applied the legal frameworks for “clickwrap” and “browsewrap” agreements.

Luckily, my Contracts professor in law school here in Florida, Todd J. Clark, now Dean of Widener University Delaware Law School, was remarkable. For those who snoozed through Contracts class, here is a refresher:

Clickwrap agreements present the website’s terms to the user and require the user to affirmatively click an “I agree” button to proceed. Browsewrap agreements simply post the terms via a hyperlink at the bottom of the webpage. For either type of agreement to be enforceable, the Court explained that a website must provide 1) reasonably conspicuous notice of the terms and 2) require some action unambiguously manifesting assent. See Oberstein v. Live Nation Ent., Inc., 60 F.4th 505, 515 (9th Cir. 2023).
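
The logic of the distinction can be sketched in a few lines of code (a hypothetical illustration, not the code of any actual website): only the clickwrap pattern conditions the user’s ability to proceed on an affirmative act of assent.

```typescript
// Hypothetical illustration of the assent gap the Smith court identified.

interface SessionState {
  termsLinkDisplayed: boolean; // conspicuous notice of the terms
  clickedIAgree: boolean;      // an action unambiguously manifesting assent
}

// Clickwrap: the transaction cannot proceed without the affirmative "I agree" click.
function clickwrapCheckoutAllowed(s: SessionState): boolean {
  return s.termsLinkDisplayed && s.clickedIAgree;
}

// Browsewrap: the terms are merely linked; nothing conditions use of the site on assent.
function browsewrapCheckoutAllowed(s: SessionState): boolean {
  return s.termsLinkDisplayed;
}

const visitor: SessionState = { termsLinkDisplayed: true, clickedIAgree: false };
console.log("Clickwrap permits checkout:", clickwrapCheckoutAllowed(visitor));   // false
console.log("Browsewrap permits checkout:", browsewrapCheckoutAllowed(visitor)); // true
```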

The Court held that while Yeti’s pop-up banner and policy links were conspicuous, they did not create an enforceable clickwrap agreement because “Defendant’s pop-up banner does not require individuals to click an ‘I agree’ button, nor does it include any language to imply that by proceeding to use the website, users reasonably consent to Defendant’s terms and conditions of use.” See Smith, 2024 U.S. Dist. LEXIS 194481, at *8. The Court also found no enforceable browsewrap agreement was formed because, although the policies were conspicuously available, “Defendant’s website does not require additional action by users to demonstrate assent and does not conspicuously notify them that continuing to use the website constitutes assent to the Privacy Policy and Terms of Use.” Id. at *9.

What is more, the Court relied on Nguyen v. Barnes & Noble Inc., 763 F.3d 1171, 1179 (9th Cir. 2014), which held that “where a website makes its terms of use available via a conspicuous hyperlink on every page of the website but otherwise provides no notice to users nor prompts them to take any affirmative action to demonstrate assent, even close proximity of the hyperlink to relevant buttons users must click on—without more—is insufficient to give rise to constructive notice.” Here, the Court found the pop-up banner and link on Yeti’s homepage presented the same situation as in Nguyen and thus did not create an enforceable browsewrap agreement.

Nevertheless, the Court dismissed the Section 631(a) claim against Yeti due to insufficient allegations that Yeti was aware of Adyen’s alleged violations.

Specifically, to establish Yeti’s derivative liability for “aiding” Adyen under Section 631(a), the Court held that Plaintiff had to allege facts showing Yeti acted with both knowledge of Adyen’s unlawful conduct and the intent or purpose to assist it. It found Plaintiff’s allegations that Yeti was “aware of the purposes for which Adyen collects consumers’ sensitive information because Defendant is knowledgeable of and benefitting from Adyen’s fraud prevention services” and “assists Adyen in intercepting and indefinitely storing this sensitive information” were too conclusory. Smith, 2024 U.S. Dist. LEXIS 194481, at *13. It reasoned: “Without further information, the Court cannot plausibly infer from Defendant’s use of Adyen’s fraud prevention services alone that Defendant knew that Adyen’s services were based on its allegedly illegal interception and storing of financial information, collected during Adyen’s online processing of customers’ purchases.” Id.

Next, the Court similarly found that Plaintiff plausibly alleged Adyen recorded a confidential communication without consent in violation of CIPA Section 632. A communication is confidential under this section if a party “has an objectively reasonable expectation that the conversation is not being overheard or recorded.” Flanagan v. Flanagan, 27 Cal. 4th 766, 776-77 (2002). It explained that “[w]hether a party has a reasonable expectation of privacy is a context-specific inquiry that should not be adjudicated as a matter of law unless the undisputed material facts show no reasonable expectation of privacy.” Smith, 2024 U.S. Dist. LEXIS 194481, at *18-19. At the pleading stage, the Court found Plaintiff’s allegation that she reasonably expected her sensitive financial information would remain private was sufficient.

However, as with the Section 631(a) claim, the Court held that Plaintiff did not plead facts establishing Yeti’s derivative liability under the standard for aiding and abetting liability. Under Saunders v. Superior Court, 27 Cal. App. 4th 832, 846 (1994), the Court explained a defendant is liable if they a) know the other’s conduct is wrongful and substantially assist them or b) substantially assist the other in accomplishing a tortious result and the defendant’s own conduct separately breached a duty to the plaintiff. The Court found that the Complaint lacked sufficient non-conclusory allegations that Yeti knew or intended to assist Adyen’s alleged violation. See Smith, 2024 U.S. Dist. LEXIS 194481, at *16.

Lastly, the Court analyzed Plaintiff’s invasion of privacy claim under the California Constitution using the framework from Hill v. Nat’l Coll. Athletic Ass’n, 7 Cal. 4th 1, 35-37 (1994). For a valid invasion of privacy claim, Plaintiff had to show 1) a legally protected privacy interest, 2) a reasonable expectation of privacy under the circumstances, and 3) a serious invasion of privacy constituting “an egregious breach of the social norms.” Id.

The Court found Plaintiff had a protected informational privacy interest in her personal and financial data, as “individual[s] ha[ve] a legally protected privacy interest in ‘precluding the dissemination or misuse of sensitive and confidential information.’” Smith, 2024 U.S. Dist. LEXIS 194481, at *17. It also found Plaintiff plausibly alleged a reasonable expectation of privacy at this stage given the sensitivity of financial data, even if “voluntarily disclosed during the course of ordinary online commercial activity,” as this presents “precisely the type of fact-specific inquiry that cannot be decided on the pleadings.” Id. at *19-20.

Conversely, the Court found Plaintiff did not allege facts showing Yeti’s conduct was “an egregious breach of the social norms” rising to the level of a serious invasion of privacy, which requires more than “routine commercial behavior.” Id. at *21. The Court explained that while Yeti’s simple use of Adyen for payment processing cannot amount to a serious invasion of privacy, “if Defendant was aware of Adyen’s usage of the personal information for additional purposes, this may present a plausible allegation that Defendant’s conduct was sufficiently egregious to survive a Motion to Dismiss.” Id. However, absent such allegations about Yeti’s knowledge, this claim failed.

In the end, the Court dismissed Plaintiff’s Complaint but granted leave to amend to correct the deficiencies, so this case may not be over. The Court’s grant of “leave to amend” signals that if Plaintiff can sufficiently allege Yeti’s knowledge of or intent to facilitate Adyen’s use of customer data, these claims could proceed. As companies increasingly rely on third parties to handle customer data, we will likely see more litigation in this area, testing the boundaries of corporate liability for data privacy violations.

So, what is the takeaway? As a brilliant lawyer, your company’s goal should be to prevent privacy pitfalls before they snowball into costly litigation. Key things to keep in mind are 1) ensure your privacy policies and terms of use are properly structured as enforceable clickwrap or browsewrap agreements, with conspicuous notice and clear assent mechanisms; 2) conduct thorough due diligence on third-party service providers’ data practices and contractual protections; 3) implement transparent data collection and sharing disclosures for informed customer consent; and 4) stay abreast of evolving privacy laws.

In essence, taking these proactive steps can help mitigate the risks of derivative liability for third-party misconduct and, most importantly, foster trust with your customers.

Legal and Privacy Considerations When Using Internet Tools for Targeted Marketing

Businesses often rely on targeted marketing methods to reach their relevant audiences. Instead of paying for, say, a television commercial to be viewed by people across all segments of society with varied purchasing interests and budgets, a business can use tools provided by social media platforms and other internet services to target those people most likely to be interested in its ads. These tools may make targeted advertising easy, but businesses must be careful when using them – along with their ease of use comes a risk of running afoul of legal rules and regulations.

Two ways that businesses target audiences are working with influencers who have large followings in relevant segments of the public (which may implicate false or misleading advertising issues) and using third-party “cookies” to track users’ browsing history (which may implicate privacy and data protection issues). Most popular social media platforms offer tools to facilitate the use of these targeting methods. These tools are likely indispensable for some businesses, and despite their risks, they can be deployed safely once the risks are understood.

Some Platform-Provided Targeted Marketing Tools May Implicate Privacy Issues
Google recently announced1 that it will not be deprecating third-party cookies, a reversal from its previous plan to phase out these cookies. “Cookies” are small pieces of data that websites store in a user’s browser to track activity online. “First-party” cookies often are necessary for the website to function properly. “Third-party” cookies are shared across websites and companies, essentially tracking users’ browsing behaviors to help advertisers target their relevant audiences.
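
As a rough illustration (the domains, cookie values, and tracking script below are invented for this sketch, not any vendor’s actual code), the mechanical difference looks like this: a first-party cookie is data the visited site asks the browser to store for its own use, while third-party tracking comes from embedded code that reports the visit to another company’s domain.

```typescript
// Hypothetical sketch of first-party vs. third-party tracking; all names are made up.

// First party: the shop's own server asks the visitor's browser to store a cookie scoped to itself.
const firstPartyResponseHeaders = {
  "Set-Cookie": "session_id=abc123; Domain=example-shop.com; Path=/; Secure; HttpOnly",
};

// Third party: the shop's page also embeds a script served from an unrelated tracking domain.
const pageHtml = `
  <html>
    <body>
      <h1>Example Shop</h1>
      <script src="https://tracker.example/pixel.js"></script>
    </body>
  </html>
`;

// What the embedded tracker effectively does: report this visit back to its own server, letting
// it correlate the same browser's activity across the many unrelated sites that embed it.
function simulateTrackerBeacon(currentUrl: string): string {
  return `GET https://tracker.example/collect?page=${encodeURIComponent(currentUrl)}`;
}

console.log(firstPartyResponseHeaders["Set-Cookie"]);
console.log(pageHtml.trim());
console.log(simulateTrackerBeacon("https://example-shop.com/checkout"));
```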

In early 2020, responding to privacy concerns about third-party cookies (which track individual web-browsing activity and then share that data with other parties), Google announced2 that it would phase them out.

Fast forward about four and a half years, and Google reversed course. During that time, Google had introduced alternatives to third-party cookies, and companies had developed their own, often extensive, proprietary databases3 of information about their customers. However, none of these methods satisfied the advertising industry. Google then made the decision to keep third-party cookies. To address privacy concerns, Google said it would “introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing, and they’d be able to adjust that choice at any time.”4

Many large platforms in addition to Google offer targeted advertising services via the use of third-party cookies. Can businesses use these services without any legal ramifications? Does the possibility for consumers to opt out mean that a business cannot be liable for privacy violations if it relies on third-party cookies? The relevant cases have held that individual businesses still must be careful despite any opt-out and other built-in tools offered by these platforms.

Two recent cases from the Southern District of New York5 held that individual businesses that used “Meta Pixels” to track consumers may be liable for violations of the Video Privacy Protection Act (VPPA). 18 U.S.C. § 2710. Facebook defines a Meta Pixel6 as a “piece of code … that allows you to … make sure your ads are shown to the right people … drive more sales, [and] measure the results of your ads.” In other words, a Meta Pixel is essentially a cookie provided by Meta/Facebook that helps businesses target ads to relevant audiences.

As demonstrated by those two recent cases, businesses cannot rely on a platform’s program to ensure their ad targeting efforts do not violate the law. These violations may expose companies to enormous damages – VPPA cases often are brought as class actions and even a single violation may carry damages in excess of $2,500.

In those New York cases, the consumers had not consented to sharing information, but, even if they had, the consent may not have sufficed. Internet contracts, often included in a website’s Terms of Service, are notoriously difficult to enforce. For example, in one of those S.D.N.Y. cases, the court found that the arbitration clause to which subscribers had agreed was not effective to force arbitration in lieu of litigation for this matter. In addition, the type of consent and the information that websites need to provide before sharing information can be extensive and complicated, as recently reported7 by my colleagues.

Another issue that companies may encounter when relying on widespread cookie offerings is whether the mode (as opposed to the content) of data transfer complies with all relevant privacy laws. For example, the Swedish Data Protection Agency recently found8 that a company had violated the European Union’s General Data Protection Regulation (GDPR) because the method of transfer of data was not compliant. In that case, some of the consumers had consented, but some were never asked for consent.

Some Platform-Provided Targeted Marketing Tools May Implicate False or Misleading Advertising Issues
Another method that businesses use to target their advertising to relevant consumers is to hire social media influencers to endorse their products. These partnerships between brands and influencers can be beneficial to both parties and to the audiences who are guided toward the products they want. These partnerships are also subject to pitfalls, both reputational (a controversial statement by the influencer may negatively impact the reputation of the brand) and legal.

The Federal Trade Commission (FTC) has issued guidelines9 “Concerning Use of Endorsements and Testimonials” in advertising, and has published a brochure for influencers, “Disclosures 101 for Social Media Influencers,”10 that tells influencers how they must apply the guidelines to avoid liability for false or misleading advertising when they endorse products. A key requirement is that influencers must “make it obvious” when they have a “material connection” with the brand. In other words, the influencer must disclose that they are being paid (or receive other, non-monetary benefits) to make the endorsement.

Many social media platforms make it easy to disclose a material connection between a brand and an influencer – a built-in function allows influencers to simply click a check mark to disclose the existence of a material connection with respect to a particular video endorsement. The platform then displays a hashtag or other notification along with the video that says “#sponsored” or something similar. However, influencers cannot rely on these built-in notifications. The FTC brochure clearly states: “Don’t assume that a platform’s disclosure tool is good enough, but consider using it in addition to your own, good disclosure.”

Brands that sponsor influencer endorsements may easily find themselves on the hook if the influencer does not properly disclose that the influencer and the brand are materially connected. In some cases, the contract between the brand and influencer may pass any risk to the brand. In others, the influencer may be judgment proof, or the brand may simply be an easier target for enforcement. And, unsurprisingly, the FTC has sent warning letters11 threatening high penalties to brands for influencer violations.

The Platform-Provided Tools May Be Deployed Safely
Despite the risks involved in some platform-provided tools for targeted marketing, these tools are very useful, and businesses should continue to take advantage of them. However, businesses cannot rely on these widely available and easy-to-use tools alone; they must ensure that their own policies and compliance programs protect them from liability.

The same warning about widely available social media tools and lessons for a business to protect itself are also true about other activities online, such as using platforms’ built-in “reposting” function (which may implicate intellectual property infringement issues) and using out-of-the-box website builders (which may implicate issues under the Americans with Disabilities Act). A good first step for a business to ensure legal compliance online is to understand the risks. An attorney experienced in internet law, privacy law and social media law can help.

_________________________________________________________________________________________________________________

1 https://privacysandbox.com/news/privacy-sandbox-update/

2 https://blog.chromium.org/2020/01/building-more-private-web-path-towards.html

3 Businesses should ensure that they protect these databases as trade secrets. See my recent Insights at https://www.wilsonelser.com/sarah-fink/publications/relying-on-noncompete-clauses-may-not-be-the-best-defense-of-proprietary-data-when-employees-depart and https://www.wilsonelser.com/sarah-fink/publications/a-practical-approach-to-preserving-proprietary-competitive-data-before-and-after-a-hack

4 https://privacysandbox.com/news/privacy-sandbox-update/

5 Aldana v. GameStop, Inc., 2024 U.S. Dist. LEXIS 29496 (S.D.N.Y. Feb. 21, 2024); Collins v. Pearson Educ., Inc., 2024 U.S. Dist. LEXIS 36214 (S.D.N.Y. Mar. 1, 2024)

6 https://www.facebook.com/business/help/742478679120153?id=1205376682832142

7 https://www.wilsonelser.com/jana-s-farmer/publications/new-york-state-attorney-general-issues-guidance-on-privacy-controls-and-web-tracking-technologies

8 See, e.g., https://www.dataguidance.com/news/sweden-imy-fines-avanza-bank-sek-15m-unlawful-transfer

9 https://www.ecfr.gov/current/title-16/chapter-I/subchapter-B/part-255

10 https://www.ftc.gov/system/files/documents/plain-language/1001a-influencer-guide-508_1.pd

11 https://www.ftc.gov/system/files/ftc_gov/pdf/warning-letter-american-bev.pdf
https://www.ftc.gov/system/files/ftc_gov/pdf/warning-letter-canadian-sugar.pdf

FCC’s New Notice of Inquiry – Is This Big Brother’s Origin Story?

The FCC’s recent Notice of Proposed Rulemaking and Notice of Inquiry was released on August 8, 2024. While the proposed Rule is, deservedly, getting the most press, it’s important to pay attention to the Notice of Inquiry.

The part that concerns me is the FCC’s interest in “development and availability of technologies on either the device or network level that can: 1) detect incoming calls that are potentially fraudulent and/or AI-generated based on real-time analysis of voice call content; 2) alert consumers to the potential that such voice calls are fraudulent and/or AI-generated; and 3) potentially block future voice calls that can be identified as similar AI-generated or otherwise fraudulent voice calls based on analytics.” (emphasis mine)

The FCC also wants to know “what steps can the Commission take to encourage the development and deployment of these technologies…”

The FCC does note there are “significant privacy risks, insofar as they appear to rely on analysis and processing of the content of calls.” The FCC also wants comments on “what protections exist for non-malicious callers who have a legitimate privacy interest in not having the contents of their calls collected and processed by unknown third parties?”

So, the Federal Communications Commission wants to monitor the CONTENT of voice calls. In real-time. On your device.

That’s not a problem for anyone else?

Sure, robocalls are bad. There are scams on robocalls.

But, are robocalls so bad that we need real-time monitoring of voice call content?

At what point did we throw the Fourth Amendment out the window, and to prevent what? Phone calls??

The basic premise of the Fourth Amendment is “to safeguard the privacy and security of individuals against arbitrary invasions by governmental officials.” I’m not sure how we get more arbitrary than “this incoming call is a fraud” versus “this incoming call is not a fraud”.

So, maybe you consent to this real-time monitoring. Sure, ok. But, can you actually give informed consent to what would happen with this monitoring?

Let me give you three examples of “pre-recorded calls” that the real-time monitoring could overhear to determine if the “voice calls are fraudulent and/or AI-generated”:

  1. Your phone rings. It’s a prerecorded call from Planned Parenthood confirming your appointment for tomorrow.
  2. Your phone rings. It’s an artificial voice recording from your lawyer’s office telling you that your criminal trial is tomorrow.
  3. Your phone rings. It’s the local jewelry store saying your ring is repaired and ready to be picked up.

Those are basic examples, but for someone to “detect incoming calls that are potentially fraudulent and/or AI-generated based on real-time analysis of voice call content”, those calls have to be monitored in real time. And stored somewhere. Maybe on your device. Maybe by a third party in their cloud.

Maybe you trust Apple with that info. But, do you trust someone who comes up with fraudulent monitoring software that would harvest that data? How do you know you should trust that party?

Or you trust Google. Surely, Google wouldn’t use your personal data. Surely, they would not use your phone call history to sell ads.

And that becomes data a third party can use. For ads. For political messaging. For profiling.

Yes, this is extremely conspiratorial. But, that doesn’t mean your data is not valuable. And where there is valuable data, there are people willing to exploit it.

Robocalls are a problem. And there are some legitimate businesses doing great things with fraud detection monitoring. But, a real-time monitoring edict from the government is not the solution. As an industry, we can be smarter on how we handle this.

House Committee Postpones Markup Amid New Privacy Bill Updates

On June 27, 2024, the U.S. House of Representatives cancelled the House Energy and Commerce Committee markup of the American Privacy Rights Act (“APRA” or “Bill”) scheduled for that day, reportedly with little notice. There has been no indication of when the markup will be rescheduled; however, House Energy and Commerce Committee Chairwoman Cathy McMorris Rodgers issued a statement reiterating her support for the legislation.

On June 20, 2024, the House posted a third version of the discussion draft of the APRA. On June 25, 2024, two days before the scheduled markup session, Committee members introduced the APRA as a bill, H.R. 8818. Each version featured several key changes from earlier drafts, which are outlined collectively, below.

Notable changes in H.R. 8818 include the removal of two key sections:

  • “Civil Rights and Algorithms,” which required entities to conduct covered algorithm impact assessments when algorithms posed a consequential risk of harm to individuals or groups; and
  • “Consequential Decision Opt-Out,” which allowed individuals to opt out of being subjected to covered algorithms.

Additional changes include the following:

  • The Bill introduces new definitions, such as “coarse geolocation information” and “online activity profile,” the latter of which refines a category of sensitive data. “Neural data” and “information that reveals the status of an individual as a member of the Armed Forces” are added as new categories of sensitive data. The Bill also modifies the definitions of “contextual advertising” and “first-party advertising.”
  • The data minimization section includes a number of changes, such as the addition of “conduct[ing] medical research” in compliance with applicable federal law as a new permitted purpose. The Bill also limits the ability to rely on permitted purposes in processing sensitive covered data, biometric information, and genetic information.
  • The Bill now allows not only covered entities (excluding data brokers or large data holders), but also service providers (that are not large data holders) to apply for the Federal Trade Commission-approved compliance guideline mechanism.
  • Protections for covered minors now include a prohibition on first-party advertising (in addition to targeted advertising) if the covered entity knows the individual is a minor, with limited exceptions acknowledged by the Bill. It also restricts the transfer of a minor’s covered data to third parties.
  • The Bill adds another preemption clause, clarifying that APRA would preempt any state law providing protections for children or teens to the extent such laws conflict with the Bill, but does not prohibit states from enacting laws, rules or regulations that offer greater protection to children or teens than the APRA.

For additional information about the changes, please refer to the unofficial redline comparison of all APRA versions published by the IAPP.

The Privacy Patchwork: Beyond US State “Comprehensive” Laws

We’ve cautioned before about the danger of thinking only about US state “comprehensive” laws when looking to legal privacy and data security obligations in the United States. We’ve also mentioned that the US has a patchwork of privacy laws. That patchwork is found to a certain extent outside of the US as well. What laws exist in the patchwork that relate to a company’s activities?

There are laws that apply when companies host websites, including the most well-known, the California Online Privacy Protection Act (CalOPPA). It has been in effect since July 2004, thus predating the CCPA by 14 years. Then there are laws that apply if a company is collecting and using biometric identifiers, like Illinois’ Biometric Information Privacy Act.

Companies are subject to specific laws both in the US and elsewhere when engaging in digital communications. These laws include the US federal laws TCPA and TCFAPA, as well as CAN-SPAM. Digital communication laws exist in countries as wide ranging as Australia, Canada, Morocco, and many others. Then we have laws that apply when collecting information during a credit card transaction, like the Song-Beverly Credit Card Act (California).

Putting It Into Practice: When assessing your company’s obligations under privacy and data security laws, keep activity-specific privacy laws in mind. Depending on what you are doing, and in what jurisdictions, you may have more obligations to address than simply those found in comprehensive privacy laws.

Understanding the Enhanced Regulation S-P Requirements

On May 16, 2024, the Securities and Exchange Commission adopted amendments to Regulation S-P, the regulation that governs the treatment of nonpublic personal information about consumers by certain financial institutions. The amendments apply to broker-dealers, investment companies, and registered investment advisers (collectively, “covered institutions”) and are designed to modernize and enhance the protection of consumer financial information. Regulation S-P continues to require covered institutions to implement written policies and procedures to safeguard customer records and information (the “safeguards rule”), to properly dispose of consumer information to protect against unauthorized use (the “disposal rule”), and to deliver a privacy policy notice containing an opt-out option. Registered investment advisers with over $1.5 billion in assets under management will have until November 16, 2025 (18 months) to comply; those entities with less will have until May 16, 2026 (24 months) to comply.

Incident Response Program

Covered institutions will have to incorporate an Incident Response Program (the “Program”) into their written policies and procedures if they have not already done so. The Program must be designed to detect, respond to, and recover from unauthorized access to or use of customer information. The nature and scope of any incident must be documented, with further steps taken to prevent additional unauthorized access or use. Covered institutions will also be responsible for adopting procedures regarding the oversight of third-party service providers that receive, maintain, process, or access their customers’ data. The safeguards rule and disposal rule require that nonpublic personal information received from a third party about that third party’s customers be treated the same as information about the covered institution’s own customers.

Customer Notification Requirement

The amendments require covered institutions to notify affected individuals whose sensitive customer information was, or is reasonably likely to have been, accessed or used without authorization. The amendments require a covered institution to provide the notice as soon as practicable, but not later than 30 days, after becoming aware that unauthorized access to or use of customer information has occurred or is reasonably likely to have occurred. The notices must include details about the incident, the breached data, and how affected individuals can respond to the breach to protect themselves. A covered institution is not required to provide the notification if it determines that the sensitive customer information has not been, and is not reasonably likely to be, used in a manner that would result in substantial harm or inconvenience. To the extent a covered institution will have a notification obligation under both the final amendments and a similar state law, a covered institution may be able to provide one notice to satisfy notification obligations under both the final amendments and the state law, provided that the notice includes all information required under both the final amendments and the state law, which may reduce the number of notices an individual receives.

Recordkeeping

Covered institutions will have to make and maintain the following in their books and records:

  • Written policies and procedures required to be adopted and implemented pursuant to the Safeguards Rule, including the incident response program;
  • Written documentation of any detected unauthorized access to or use of customer information, as well as any response to and recovery from such unauthorized access to or use of customer information required by the incident response program;
  • Written documentation of any investigation and determination made regarding whether notification to customers is required, including the basis for any determination made and any written documentation from the United States Attorney General related to a delay in notice, as well as a copy of any notice transmitted following such determination;
  • Written policies and procedures required as part of service provider oversight;
  • Written documentation of any contract entered into pursuant to the service provider oversight requirements; and
  • Written policies and procedures required to be adopted and implemented for the Disposal Rule.

Registered investment advisers will be required to preserve these records for five years, the first two in an easily accessible place.

On July 1, 2024, Texas May Have the Strongest Consumer Data Privacy Law in the United States

It’s Bigger. But is it Better?

They say everything is bigger in Texas, and that includes big privacy protection. After the Texas Senate approved HB 4, the Texas Data Privacy and Security Act (“TDPSA”), on June 18, 2023, Texas became the eleventh state to enact comprehensive privacy legislation.[1]

Like many state consumer data privacy laws enacted this year, TDPSA is largely modeled after the Virginia Consumer Data Protection Act.[2] However, the law contains several unique differences and drew significant pieces from recently enacted consumer data privacy laws in Colorado and Connecticut, which generally include “stronger” provisions than the more “business-friendly” laws passed in states like Utah and Iowa.

Some of the more notable provisions of the bill are described below:

More Scope Than You Can Shake a Stick At!

  • The TDPSA applies much more broadly than any other pending or effective state consumer data privacy act, pulling in individuals as well as businesses regardless of their revenues or the number of individuals whose personal data is processed or sold.
  • The TDPSA applies to any individual or business that meets all of the following criteria:
    • conducts business in Texas (or produces goods or services consumed in Texas); and
    • processes or sells personal data:
      • The “processing or sale of personal data” further expands the applicability of the TDPSA to include individuals and businesses that engage in any operations involving personal data, such as the “collection, use, storage, disclosure, analysis, deletion, or modification of personal data.”
      • In short, collecting, storing or otherwise handling the personal data of any resident of Texas, or transferring that data for any consideration, will likely meet this standard.
  • Uniquely, the carveout for “small businesses” excludes from coverage those entities that meet the definition of “a small business as defined by the United States Small Business Administration.”[3]
  • Even small businesses that are otherwise exempt must obtain a consumer’s prior consent before selling sensitive personal data.
  • Similar to other state comprehensive privacy laws, TDPSA excludes state agencies or political subdivisions of Texas, financial institutions subject to Title V of the Gramm-Leach-Bliley Act, covered entities and business associates governed by HIPAA, nonprofit organizations, and institutions of higher education. But, TDPSA uniquely excludes electric utilities, power generation companies, and retail electric providers, as defined under Section 31.002 of the Texas Utilities Code.
  • Certain categories of information are also excluded, including health information protected by HIPAA or used in connection with human clinical trials, and information covered by the Fair Credit Reporting Act, the Driver’s Privacy Protection Act, the Family Educational Rights and Privacy Act of 1974, the Farm Credit Act of 1971, emergency contact information used for emergency contact purposes, and data necessary to administer benefits.

Don’t Mess with Texas Consumers

Texas’s longstanding libertarian roots are evidenced in the TDPSA’s strong menu of individual consumer privacy rights, including the right to:

  • Confirm whether a controller is processing the consumer’s personal data and access that data;
  • Correct inaccuracies in the consumer’s personal data, considering the nature of the data and the purposes of the processing;
  • Delete personal data provided by or obtained about the consumer;
  • Obtain a copy of the consumer’s personal data that the consumer previously provided to a controller in a portable and readily usable format, if the data is available digitally and it is technically feasible; and
  • Opt out of the processing of personal data for purposes of targeted advertising, the sale of personal data, or profiling in furtherance of a decision that produces legal or similarly significant effects concerning the consumer.

Data controllers are required to respond to consumer requests within 45 days, which may be extended by an additional 45 days when reasonably necessary. The law also gives consumers a right to appeal a controller’s refusal to respond to a request.
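
For compliance teams building request-handling workflows, these timing rules reduce to simple date arithmetic. The short sketch below merely illustrates the two windows; the function name and the calendar example are invented for this purpose and are not drawn from the statute.

    from datetime import date, timedelta

    RESPONSE_WINDOW_DAYS = 45   # initial deadline to respond to a consumer request
    EXTENSION_DAYS = 45         # single extension permitted when reasonably necessary

    def tdpsa_response_deadlines(received: date) -> tuple[date, date]:
        """Return (initial deadline, deadline if the extension is invoked)."""
        initial = received + timedelta(days=RESPONSE_WINDOW_DAYS)
        extended = initial + timedelta(days=EXTENSION_DAYS)
        return initial, extended

    # Example: a request received on August 1, 2024 must be answered by
    # September 15, 2024, or by October 30, 2024 if the extension applies.
    print(tdpsa_response_deadlines(date(2024, 8, 1)))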

Controller Hospitality

The Texas bill imposes a number of obligations on data controllers, most of which are similar to those under other state consumer data privacy laws:

  • Data Minimization – Controllers should limit data collection to what is “adequate, relevant, and reasonably necessary” to achieve the purposes of collection that have been disclosed to a consumer. Consent is required before processing information in ways that are not reasonably necessary or not compatible with the purposes disclosed to a consumer.
  • Nondiscrimination – Controllers may not discriminate against a consumer for exercising individual rights under the TDPSA, including by denying goods or services, charging different rates, or providing different levels of quality.
  • Sensitive Data – Consent is required before processing sensitive data, which includes personal data revealing racial or ethnic origin, religious beliefs, mental or physical health diagnosis, citizenship or immigration status, genetic or biometric data processed for purposes of uniquely identifying an individual, personal data collected from a child known to be under the age of 13, and precise geolocation data.
    • The Senate version of the bill excludes data revealing “sexual orientation” from the categories of sensitive information, which differs from all other state consumer data privacy laws.
  • Privacy Notice – Controllers must post a privacy notice (e.g. website policy) that includes (1) the categories of personal data processed by the controller (including any sensitive data), (2) the purposes for the processing, (3) how consumers may exercise their individual rights under the Act, including the right of appeal, (4) any categories of personal data that the controller shares with third parties and the categories of those third parties, and (5) a description of the methods available to consumers to exercise their rights (e.g., website form or email address).
  • Targeted Advertising – A controller that sells personal data to third parties for purposes of targeted advertising must clearly and conspicuously disclose to consumers their right to opt-out.

Assessing the Privacy of Texans

Unlike some of the “business-friendly” privacy laws in Utah and Iowa, the Texas bill requires controllers to conduct data protection assessments (“Data Privacy Protection Assessments” or “DPPAs”) for certain types of processing that pose heightened risks to consumers. The assessments must identify and weigh the benefits of the processing to the controller, the consumer, other stakeholders, and the public against the potential risks to the consumer, as mitigated by any safeguards that could reduce those risks. In Texas, the categories that require assessments are identical to those required by Connecticut’s consumer data privacy law and include the following (a simple screening sketch follows the list):

  • Processing personal data for targeted advertising;
  • The sale of personal data;
  • Processing personal data for profiling consumers, if such profiling presents a reasonably foreseeable risk to consumers of unfair or deceptive treatment, disparate impact, financial, physical or reputational injury, physical or other intrusion upon seclusion of private affairs, or “other substantial injury;”
  • Processing of sensitive data; and
  • Any processing activities involving personal data that present a “heightened risk of harm to consumers.”
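
For illustration only, the sketch below shows how a privacy team might encode these five triggers as a simple screening checklist before commissioning a full assessment. The ProcessingActivity fields and the requires_dppa helper are hypothetical names rather than anything drawn from the statute, and a real program would map them to the organization’s own data inventory.

    from dataclasses import dataclass

    @dataclass
    class ProcessingActivity:
        # Hypothetical flags a privacy team might track for each activity
        targeted_advertising: bool = False    # personal data processed for targeted ads
        sale_of_personal_data: bool = False   # data exchanged for consideration
        high_risk_profiling: bool = False     # profiling posing the enumerated risks
        sensitive_data: bool = False          # health, biometrics, precise geolocation, etc.
        other_heightened_risk: bool = False   # catch-all "heightened risk of harm"

    def requires_dppa(activity: ProcessingActivity) -> bool:
        """Return True if any of the assessment triggers applies to the activity."""
        return any([
            activity.targeted_advertising,
            activity.sale_of_personal_data,
            activity.high_risk_profiling,
            activity.sensitive_data,
            activity.other_heightened_risk,
        ])

    # Example: an ad campaign that also uses precise geolocation (sensitive data)
    print(requires_dppa(ProcessingActivity(targeted_advertising=True, sensitive_data=True)))  # True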

Opting Out and About

Businesses are required to recognize a universal opt-out mechanism for consumers (such as the Global Privacy Control signal), similar to provisions in Colorado, Connecticut, California, and Montana, but the law gives businesses more leeway to disregard a signal if they cannot verify the consumer’s identity or lack the technical ability to receive it.
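
On the technical side, the Global Privacy Control signal is transmitted by participating browsers and extensions as an HTTP request header, Sec-GPC: 1 (and exposed to scripts as navigator.globalPrivacyControl). As a minimal illustration, the sketch below checks incoming request headers for that signal before a site proceeds with targeted advertising or a data sale; it is a framework-agnostic Python function, not an implementation of any particular consent-management platform.

    def gpc_opt_out_requested(headers: dict) -> bool:
        """Return True if the request carries a Global Privacy Control opt-out signal.

        Participating browsers send the header "Sec-GPC: 1"; any other value,
        or the header's absence, means no universal opt-out was asserted.
        """
        # HTTP header names are case-insensitive, so normalize before the lookup.
        normalized = {name.lower(): value.strip() for name, value in headers.items()}
        return normalized.get("sec-gpc") == "1"

    # Example: a request from a GPC-enabled browser should be routed to the
    # opt-out path before any sale of personal data or targeted advertising.
    print(gpc_opt_out_requested({"Sec-GPC": "1", "User-Agent": "ExampleBrowser"}))  # True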

Show Me Some Swagger!

The Texas Attorney General has the exclusive right to enforce the law, with violations punishable by civil penalties of up to $7,500 per violation. Businesses have a 30-day right to cure violations upon written notice from the Attorney General. Unlike several other state laws, the right to cure has no sunset provision and remains a permanent part of the law. The law does not include a private right of action.

Next Steps for TDPSA Compliance

For businesses that have already developed a state privacy compliance program, especially one modeled on the Colorado and Connecticut laws, making room for the TDPSA will be a streamlined exercise. However, businesses that are starting from scratch, including “small businesses” as defined in the law, need to get moving.

If TDPSA is your first ride in a state consumer privacy compliance rodeo, some first steps we recommend are:

  1. Update your website privacy policy for facial compliance with the law and make sure that notice is being given at or before the time of collection.
  2. Put procedures in place to respond to consumer privacy requests and to ask for consent before processing sensitive information.
  3. Gather necessary information to complete data protection assessments.
  4. Identify vendor contracts that should be updated with mandatory data protection terms.

Footnotes

[1] As of the date of publication, 17 states have passed comprehensive consumer data privacy laws (California, Colorado, Connecticut, Delaware, Florida, Indiana, Iowa, Kentucky, Maryland, Massachusetts, Montana, New Jersey, New Hampshire, Tennessee, Texas, Utah, Virginia), and legislation is pending in two more (Vermont and Minnesota).

[2] See Code of Virginia, Title 59.1, Chapter 53 (Consumer Data Protection Act).

[3] This is notably broader than other state privacy laws, which establish threshold requirements based on revenues or the amount of personal data that a business processes. It will also make it more difficult to know what businesses are covered because SBA definitions vary significantly from one industry vertical to another. As a quick rule of thumb, under the current SBA size standards, a U.S. business with annual average receipts of less than $2.25 million and fewer than 100 employees will likely be small, and therefore exempt from the TDPSA’s primary requirements.
