PRIVACY ON ICE: A Chilling Look at Third-Party Data Risks for Companies

An intelligent lawyer could tackle a problem and figure out a solution. But a brilliant lawyer would figure out how to prevent the problem in the first place. That’s precisely what we do here at Troutman Amin. So here is the latest scoop to keep you cool. A recent case in the United States District Court for the Northern District of California, Smith v. Yeti Coolers, L.L.C., No. 24-cv-01703-RFL, 2024 U.S. Dist. LEXIS 194481 (N.D. Cal. Oct. 21, 2024), addresses complex issues surrounding online privacy and the liability of companies that enable third parties to collect and use consumer data without proper disclosures or consent.

Here, Plaintiff alleged that Yeti Coolers (“Yeti”) used a third-party payment processor, Adyen, that collected customers’ personal and financial information during transactions on Yeti’s website. Plaintiff claimed Adyen then stored this data and used it for its own commercial purposes, like marketing fraud prevention services to merchants, without customers’ knowledge or consent. Alarm bells should be going off in your head—this could signal a concerning trend in data practices.

Plaintiff sued Yeti under the California Invasion of Privacy Act (“CIPA”) for violating California Penal Code Sections 631(a) (wiretapping) and 632 (recording confidential communications). Plaintiff also brought a claim under the California Constitution for invasion of privacy. The key question here was whether Yeti could be held derivatively liable for Adyen’s alleged wrongful conduct.

So, let’s break this down step by step.

Starting with the alleged CIPA Section 631(a) violation, the Court found that Plaintiff plausibly alleged Adyen violated this Section by collecting customer data as a third-party eavesdropper without proper consent. In analyzing whether Yeti’s Privacy Policy and Terms of Use constituted enforceable agreements (and thus established consent), the Court applied the legal frameworks for “clickwrap” and “browsewrap” agreements.

Luckily, my law school Contracts professor here in Florida, Todd J. Clark, now the Dean of Widener University Delaware Law School, was remarkable. For those who snoozed through Contracts class, here is a refresher:

Clickwrap agreements present the website’s terms to the user and require the user to affirmatively click an “I agree” button to proceed. Browsewrap agreements simply post the terms via a hyperlink at the bottom of the webpage. For either type of agreement to be enforceable, the Court explained that a website must 1) provide reasonably conspicuous notice of the terms and 2) require some action unambiguously manifesting assent. See Oberstein v. Live Nation Ent., Inc., 60 F.4th 505, 515 (9th Cir. 2023).

The Court held that while Yeti’s pop-up banner and policy links were conspicuous, they did not create an enforceable clickwrap agreement because “Defendant’s pop-up banner does not require individuals to click an ‘I agree’ button, nor does it include any language to imply that by proceeding to use the website, users reasonably consent to Defendant’s terms and conditions of use.” See Smith, 2024 U.S. Dist. LEXIS 194481, at *8. The Court also found no enforceable browsewrap agreement was formed because although the policies were conspicuously available, “Defendant’s website does not require additional action by users to demonstrate assent and does not conspicuously notify them that continuing to use the website constitutes assent to the Privacy Policy and Terms of Use.” Id. at *9.

What is more, the Court relied on Nguyen v. Barnes & Noble Inc., 763 F.3d 1171, 1179 (9th Cir. 2014), which held that “where a website makes its terms of use available via a conspicuous hyperlink on every page of the website but otherwise provides no notice to users nor prompts them to take any affirmative action to demonstrate assent, even close proximity of the hyperlink to relevant buttons users must click on—without more—is insufficient to give rise to constructive notice.” Here, the Court found the pop-up banner and link on Yeti’s homepage presented the same situation as in Nguyen and thus did not create an enforceable browsewrap agreement.

The absence of an enforceable agreement, however, did not end the analysis, because the claim against Yeti turned on derivative liability.

To establish Yeti’s derivative liability for “aiding” Adyen under Section 631(a), the Court held, Plaintiff had to allege facts showing Yeti acted with both knowledge of Adyen’s unlawful conduct and the intent or purpose to assist it. It found Plaintiff’s allegations that Yeti was “aware of the purposes for which Adyen collects consumers’ sensitive information because Defendant is knowledgeable of and benefitting from Adyen’s fraud prevention services” and “assists Adyen in intercepting and indefinitely storing this sensitive information” were too conclusory. Smith, 2024 U.S. Dist. LEXIS 194481, at *13. It reasoned: “Without further information, the Court cannot plausibly infer from Defendant’s use of Adyen’s fraud prevention services alone that Defendant knew that Adyen’s services were based on its allegedly illegal interception and storing of financial information, collected during Adyen’s online processing of customers’ purchases.” Id. Because the Complaint offered no such facts, the Court dismissed the Section 631(a) claim.

Next, the Court similarly found that Plaintiff plausibly alleged Adyen recorded a confidential communication without consent in violation of CIPA Section 632. A communication is confidential under this section if a party “has an objectively reasonable expectation that the conversation is not being overheard or recorded.” Flanagan v. Flanagan, 27 Cal. 4th 766, 776-77 (2002). It explained that “[w]hether a party has a reasonable expectation of privacy is a context-specific inquiry that should not be adjudicated as a matter of law unless the undisputed material facts show no reasonable expectation of privacy.” Smith, 2024 U.S. Dist. LEXIS 194481, at *18-19. At the pleading stage, the Court found Plaintiff’s allegation that she reasonably expected her sensitive financial information would remain private was sufficient.

However, as with the Section 631(a) claim, the Court held that Plaintiff did not plead facts establishing Yeti’s derivative liability under the standard for aiding and abetting liability. Under Saunders v. Superior Court, 27 Cal. App. 4th 832, 846 (1994), the Court explained, a defendant is liable if it a) knows another’s conduct is wrongful and substantially assists that party, or b) substantially assists the other in accomplishing a tortious result and its own conduct separately breached a duty to the plaintiff. The Court found that the Complaint lacked sufficient non-conclusory allegations that Yeti knew of or intended to assist Adyen’s alleged violation. See Smith, 2024 U.S. Dist. LEXIS 194481, at *16.

Lastly, the Court analyzed Plaintiff’s invasion of privacy claim under the California Constitution using the framework from Hill v. Nat’l Collegiate Athletic Ass’n, 7 Cal. 4th 1, 35-37 (1994). For a valid invasion of privacy claim, Plaintiff had to show 1) a legally protected privacy interest, 2) a reasonable expectation of privacy under the circumstances, and 3) a serious invasion of privacy constituting “an egregious breach of the social norms.” Id.

The Court found Plaintiff had a protected informational privacy interest in her personal and financial data, as “individual[s] ha[ve] a legally protected privacy interest in ‘precluding the dissemination or misuse of sensitive and confidential information.’” Smith, 2024 U.S. Dist. LEXIS 194481, at *17. It also found Plaintiff plausibly alleged a reasonable expectation of privacy at this stage given the sensitivity of financial data, even if “voluntarily disclosed during the course of ordinary online commercial activity,” as this presents “precisely the type of fact-specific inquiry that cannot be decided on the pleadings.” Id. at *19-20.

Conversely, the Court found Plaintiff did not allege facts showing Yeti’s conduct was “an egregious breach of the social norms” rising to the level of a serious invasion of privacy, which requires more than “routine commercial behavior.” Id. at *21. The Court explained that while Yeti’s simple use of Adyen for payment processing cannot amount to a serious invasion of privacy, “if Defendant was aware of Adyen’s usage of the personal information for additional purposes, this may present a plausible allegation that Defendant’s conduct was sufficiently egregious to survive a Motion to Dismiss.” Id. However, absent such allegations about Yeti’s knowledge, this claim failed.

In the end, the Court dismissed Plaintiff’s Complaint but granted leave to amend to correct the deficiencies, so this case may not be over. The Court’s grant of “leave to amend” signals that if Plaintiff can sufficiently allege Yeti’s knowledge of or intent to facilitate Adyen’s use of customer data, these claims could proceed. As companies increasingly rely on third parties to handle customer data, we will likely see more litigation in this area, testing the boundaries of corporate liability for data privacy violations.

So, what is the takeaway? As a brilliant lawyer, your company’s goal should be to prevent privacy pitfalls before they snowball into costly litigation. Key things to keep in mind are 1) ensure your privacy policies and terms of use are properly structured as enforceable clickwrap or browsewrap agreements, with conspicuous notice and clear assent mechanisms; 2) conduct thorough due diligence on third-party service providers’ data practices and contractual protections; 3) implement transparent data collection and sharing disclosures for informed customer consent; and 4) stay abreast of evolving privacy laws.

In essence, taking these proactive steps can help mitigate the risks of derivative liability for third-party misconduct and, most importantly, foster trust with your customers.

House Committee Postpones Markup Amid New Privacy Bill Updates

On June 27, 2024, the U.S. House of Representatives cancelled the House Energy and Commerce Committee markup of the American Privacy Rights Act (“APRA” or “Bill”) scheduled for that day, reportedly with little notice. There has been no indication of when the markup will be rescheduled; however, House Energy and Commerce Committee Chairwoman Cathy McMorris Rodgers issued a statement reiterating her support for the legislation.

On June 20, 2024, the House posted a third version of the discussion draft of the APRA. On June 25, 2024, two days before the scheduled markup session, Committee members introduced the APRA as a bill, H.R. 8818. Each version featured several key changes from earlier drafts, which are outlined collectively below.

Notable changes in H.R. 8818 include the removal of two key sections:

  • “Civil Rights and Algorithms,” which required entities to conduct covered algorithm impact assessments when algorithms posed a consequential risk of harm to individuals or groups; and
  • “Consequential Decision Opt-Out,” which allowed individuals to opt out of being subjected to covered algorithms.

Additional changes include the following:

  • The Bill introduces new definitions, such as “coarse geolocation information” and “online activity profile,” the latter of which refines a category of sensitive data. “Neural data” and “information that reveals the status of an individual as a member of the Armed Forces” are added as new categories of sensitive data. The Bill also modifies the definitions of “contextual advertising” and “first-party advertising.”
  • The data minimization section includes a number of changes, such as the addition of “conduct[ing] medical research” in compliance with applicable federal law as a new permitted purpose. The Bill also limits the ability to rely on permitted purposes in processing sensitive covered data and biometric and genetic information.
  • The Bill now allows not only covered entities (excluding data brokers or large data holders), but also service providers (that are not large data holders) to apply for the Federal Trade Commission-approved compliance guideline mechanism.
  • Protections for covered minors now include a prohibition on first-party advertising (in addition to targeted advertising) if the covered entity knows the individual is a minor, with limited exceptions acknowledged by the Bill. It also restricts the transfer of a minor’s covered data to third parties.
  • The Bill adds another preemption clause, clarifying that APRA would preempt any state law providing protections for children or teens to the extent such laws conflict with the Bill, but does not prohibit states from enacting laws, rules or regulations that offer greater protection to children or teens than the APRA.

For additional information about the changes, please refer to the unofficial redline comparison of all APRA versions published by the IAPP.

The Double-Edged Impact of AI Compliance Algorithms on Whistleblowing

As the implementation of Artificial Intelligence (AI) compliance and fraud detection algorithms within corporations and financial institutions continues to grow, it is crucial to consider this technology’s twofold effect.

It’s a classic double-edged technology: in the right hands it can help detect fraud and bolster compliance, but in the wrong hands it can snuff out would-be whistleblowers and weaken accountability mechanisms. Employees should assume it is being used in a wide range of ways.

Algorithms are already pervasive in our legal and governmental systems: the Securities and Exchange Commission, a champion of whistleblowers, employs these very compliance algorithms to detect trading misconduct and determine whether a legal violation has taken place.

Experts foresee two major downsides to the implementation of compliance algorithms: institutions using them to avoid culpability, and institutions using them to track whistleblowers. AI can uncover fraud but cannot guarantee the proper reporting of it, and the same technology can be turned on employees to monitor for and detect signs of whistleblowing.

Strengths of AI Compliance Systems:

AI excels at analyzing vast amounts of data to identify fraudulent transactions and patterns that might escape human detection, allowing institutions to quickly and efficiently spot misconduct that would otherwise remain undetected.

Vendors promise that AI compliance algorithms operate as follows (a toy sketch follows the list):

  • Real-time Detection: AI can analyze vast amounts of data, including financial transactions, communication logs, and travel records, in real-time. This allows for immediate identification of anomalies that might indicate fraudulent activity.
  • Pattern Recognition: AI excels at finding hidden patterns, analyzing spending habits, communication patterns, and connections between seemingly unrelated entities to flag potential conflicts of interest, unusual transactions, or suspicious interactions.
  • Efficiency and Automation: AI can automate data collection and analysis, leading to quicker identification and investigation of potential fraud cases.
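
These promises are easier to evaluate with a concrete toy model. The sketch below flags transactions whose amounts deviate sharply from an account’s own history using a simple z-score test; it is a minimal illustration of the “pattern recognition” idea, not any vendor’s actual system, and every name and threshold in it is an assumption.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Flag transaction amounts whose z-score against the account's
    own history exceeds `threshold` (a toy pattern-recognition test)."""
    if len(amounts) < 2:
        return []
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [(i, a) for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# A run of routine payments with one outlier:
history = [120, 95, 110, 130, 105, 98, 9500]
print(flag_anomalies(history))  # -> [(6, 9500)]
```

Real systems use far richer features (counterparties, timing, communication logs), but the structure is the same: model “normal,” then flag deviations for human review.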

Yuktesh Kashyap, associate vice president of data science at Sigmoid, explains on TechTarget that AI allows financial institutions, for example, to “streamline compliance processes and improve productivity. Thanks to its ability to process massive data logs and deliver meaningful insights, AI can give financial institutions a competitive advantage with real-time updates for simpler compliance management… AI technologies greatly reduce workloads and dramatically cut costs for financial institutions by enabling compliance to be more efficient and effective. These institutions can then achieve more than just compliance with the law by actually creating value with increased profits.”

Due Diligence and Human Oversight

Stephen M. Kohn, founding partner of Kohn, Kohn & Colapinto LLP, argues that AI compliance algorithms will be an ineffective tool that allows institutions to escape liability. He worries that corporations and financial institutions will implement AI systems and evade enforcement action by calling it due diligence.

“Companies want to use AI software to show the government that they are complying reasonably. Corporations and financial institutions will tell the government that they use sophisticated algorithms, and it did not detect all that money laundering, so you should not sanction us because we did due diligence.” He insists that the U.S. Government should not allow these algorithms to be used as a regulatory benchmark.

Legal scholar Sonia Katyal writes in her piece “Democracy & Distrust in an Era of Artificial Intelligence” that “While automation lowers the cost of decision making, it also raises significant due process concerns, involving a lack of notice and the opportunity to challenge the decision.”

While AI can be a powerful tool for identifying fraud, there is still no mechanism for it to report its discoveries to the authorities; compliance personnel are still required to blow the whistle. Consistent with society’s due process standards, these algorithms should be used in conjunction with human judgment to determine compliance or the lack thereof, and due process requires that individuals be able to understand the reasoning behind algorithmic determinations.

The Double-Edged Sword

Darrell West, Senior Fellow at the Brookings Institution’s Center for Technology Innovation and Douglas Dillon Chair in Governmental Studies, warns about the dangerous ways these same algorithms can be used to find and silence whistleblowers.

Nowadays most office jobs (whether remote or in person) are conducted almost entirely online: employees are required to use company computers and networks to do their jobs, and the data each employee generates passes through those devices and networks. That means your privacy rights are questionable.

Because of this, whistleblowing will get much harder – organizations can turn the technology they initially implemented to catch fraud toward catching whistleblowers instead. They can monitor employees via the capabilities built into everyday tech: cameras, emails, keystroke detectors, online activity logs, download records, and more. West urges people to operate under the assumption that employers are monitoring their online activity.

These techniques have been implemented in the workplace for years, but AI automates tracking mechanisms. AI gives organizations more systematic tools to detect internal problems.

West explains, “All organizations are sensitive to a disgruntled employee who might take information outside the organization, especially if somebody’s dealing with confidential information, budget information or other types of financial information. It is just easy for organizations to monitor that because they can mine emails. They can analyze text messages; they can see who you are calling. Companies could have keystroke detectors and see what you are typing. Since many of us are doing our jobs in Microsoft Teams meetings and other video conferencing, there is a camera that records and transcribes information.”

If a company defines a whistleblower as a problem, it can monitor this very information and look for keywords that would indicate somebody is engaging in whistleblowing.

With AI, companies can monitor specific employees they find problematic (such as a suspected whistleblower) and all the information they produce, including the keywords that might indicate fraud. Creators of these algorithms promise that their products will soon be able to detect subtler patterns as well, such as emotion and sentiment.

AI cannot determine whether somebody is a whistleblower, but it can flag unusual patterns and refer those patterns to compliance analysts. AI then becomes a tool to monitor what is going on within the organization, making it difficult for whistleblowers to go unnoticed. The risk of being caught by internal compliance software will be much greater.

“The only way people could report under these technological systems would be to go offline, using their personal devices or burner phones. But it is difficult to operate whistleblowing this way and makes it difficult to transmit confidential information. A whistleblower must, at some point, download information, and since you will be doing that on a company network, that is easily detected these days.”

What becomes of the whistleblower, then, depends on whether the compliance officers operate in support of the company or of the public interest – they will have an extraordinary amount of information about both the company and the whistleblower.

Risks for whistleblowers have gone up as AI has evolved because it is harder for them to collect and report information on fraud and compliance without being discovered by the organization.

West describes how organizations do not have a choice whether or not to use AI anymore: “All of the major companies are building it into their products. Google, Microsoft, Apple, and so on. A company does not even have to decide to use it: it is already being used. It’s a question of whether they avail themselves of the results of what’s already in their programs.”

“There probably are many companies that are not set up to use all the information that is at their disposal because it does take a little bit of expertise to understand data analytics. But this is just a short-term barrier, like organizations are going to solve that problem quickly.”

West recommends that organizations be far more transparent about their use of these tools. They should inform their employees what kind of information they are collecting, how they are monitoring employees, and what kind of software they use. Are they using detection software of any sort? Are they monitoring keystrokes?

Employees should also want to know how long information is being stored. Organizations might legitimately use this technology for fraud detection, which may be a good argument for collecting the information, but that does not mean they should keep it for five years. Once they have used the information and determined whether employees are committing fraud, there is no reason to keep it. Companies are largely not transparent about how long this data is stored and what is done with it once it has been used.

West believes that currently, most companies are not actually informing employees of how their information is being kept and how the new digital tools are being utilized.

The Importance of Whistleblower Programs:

The ability of AI algorithms to track whistleblowers poses a real risk to regulatory compliance given the massive importance of whistleblower programs in the United States’ enforcement of corporate crime.

The whistleblower programs at the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC) respond to individuals who voluntarily report original information about fraud or misconduct.

If a tip leads to a successful enforcement action, the whistleblowers are entitled to 10-30% of the recovered funds. These programs have created clear anti-retaliation protections and strong financial incentives for reporting securities and commodities fraud.

Established in 2010 under the Dodd-Frank Act, these programs have been integral to enforcement. The SEC reports that whistleblower tips have led to over $6 billion in sanctions while the CFTC states that almost a third of its investigations stem from whistleblower disclosures.

Whistleblower programs, with robust protections for those who speak out, remain essential for exposing fraud and holding organizations accountable. They ensure that detected fraud is not only identified but also reported and addressed, protecting taxpayer money and promoting ethical business practices.

If AI algorithms are used to track down whistleblowers, their implementation would hinder these programs. Companies would undoubtedly retaliate against employees they suspect of blowing the whistle, creating a massive chilling effect in which potential whistleblowers do not act out of fear of detection.

Because these AI-driven compliance systems are already being employed in our institutions, experts believe they must be subject to independent oversight for transparency’s sake. The software must also be designed to adhere to due process standards.


Five Compliance Best Practices for … Conducting a Risk Assessment

As an accompaniment to our biweekly series on “What Every Multinational Should Know About” various international trade, enforcement, and compliance topics, we are introducing a second series of quick-hit pieces on compliance best practices. Give us two minutes, and we will give you five suggested compliance best practices that will benefit your international regulatory compliance program.

Conducting an international risk assessment is crucial for identifying and mitigating potential risks associated with conducting business operations in foreign countries and complying with the expansive application of U.S. law. Because compliance is essentially an exercise in identifying, mitigating, and managing risk, the starting point for any international compliance program is to conduct a risk assessment. If your company has not done one within the last two years, then your organization probably should be putting one in motion.

Here are five compliance checks that are important to consider when conducting a risk assessment:

  1. Understand Business Operations: A good starting point is to gain a thorough understanding of the organization’s business operations, including products, services, markets, supply chains, distribution channels, and key stakeholders. You should pay special attention to new risk areas, including newly acquired companies and divisions, expansions into new countries, and new distribution patterns. Identifying the business profile of the organization, and how it raises systemic risks, is the starting point of developing the risk profile of the company.
  2. Assess Country- and Industry-Specific Risk Factors: Analyze the political, economic, legal, and regulatory landscape of each country where the organization operates or plans to operate. Consider factors such as political stability, corruption levels, regulatory environment, and cultural differences. You should also understand which countries raise indirect risks, such as the transshipment of goods to sanctioned countries. You also should evaluate industry-specific risks and trends that may impact your company’s risk profile, such as the history of recent enforcement actions.
  3. Gather Risk-Related Data and Information: You should gather relevant data and information from internal and external sources to inform the risk-assessment process. Relevant examples include internal documentation, industry publications, reports of recent enforcement actions, and areas where government regulators are stressing compliance, such as the recent focus on supply chain factors. Use risk-assessment tools and methodologies to systematically evaluate and prioritize risks, such as risk matrices, risk heat maps, scenario analysis, and probability-impact assessments (a minimal scoring sketch follows this list). (The Foley anticorruption, economic sanctions, and forced labor heat maps are found here.)
  4. Engage Stakeholders: Engage key stakeholders throughout the risk-assessment process to gather insights, perspectives, and feedback. Consult with local employees and business partners to gain feedback on compliance issues that are likely to arise while also seeking their aid in disseminating the eventual compliance dictates, internal controls, and other compliance measures that your organization ends up implementing or updating.
  5. Document Findings and Develop Risk-Mitigation Strategies: Document the findings of the risk assessment, including identified risks, their potential impact and likelihood, and recommended mitigation strategies. Ensure that documentation is clear, concise, and actionable. Use the documented findings to develop risk-mitigation strategies and action plans to address identified risks effectively while prioritizing mitigation efforts based on risk severity, urgency, and feasibility of implementation.
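
The risk matrices and probability-impact assessments mentioned in item 3 boil down to simple arithmetic: score each risk as likelihood times impact, then bucket the scores into heat-map bands for prioritization. Here is a minimal sketch with hypothetical register entries and thresholds, not any firm’s actual tool:

```python
def heat_band(likelihood, impact):
    """Score a risk (1-5 likelihood x 1-5 impact) and bucket it
    into a heat-map band for prioritization."""
    score = likelihood * impact
    if score >= 15:
        return score, "red: mitigate now"
    if score >= 8:
        return score, "amber: plan mitigation"
    return score, "green: monitor"

# Hypothetical risk register entries: (risk, likelihood, impact)
register = [
    ("Third-party sanctions exposure", 4, 5),
    ("New-market anticorruption risk", 3, 3),
    ("Legacy vendor data handling", 2, 2),
]
for name, l, i in sorted(register, key=lambda r: r[1] * r[2], reverse=True):
    score, band = heat_band(l, i)
    print(f"{name}: {score} ({band})")
```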

Most importantly, you should recognize that assessing and addressing risk is an ongoing process. You should ensure your organization has established processes for the ongoing monitoring and review of risks to track changes in the risk landscape and evaluate the effectiveness of mitigation measures. Further, at least once every two years, most multinational organizations should update their risk assessment to reflect evolving risks and business conditions as well as changing regulations and regulator enforcement priorities.

Five Data Quality Nightmares That Haunt Marketers and How to Avoid Them

In this spooky season of vampires, witches and scary clowns, we’d like to add one more to the mix – data quality nightmares – which can be more frightful than a marathon of Freddy Krueger movies to some of us.

We need data about our clients and prospects in order to create strategic programs that can lead to new business and increased visibility, but maintaining that data on an ongoing basis can quickly turn into a nightmare without the right resources.

Having good quality data is important for success in so many areas of your organization, including:

  • Communicating effectively with core constituencies
  • Successfully planning and executing events
  • Segmenting your target markets, clients or customers
  • Providing superior customer service
  • Understanding the needs of clients or customers
  • Effectively developing new business
  • Improving delivery and reducing costs of postal mailings

The reality is that your data will never be perfect, but there are ways you can address and improve it. The longer you wait to improve your data management, the scarier it will become. Here are some of the most common data quality nightmares we see and how to avoid them:

Data Quality Nightmare 1: Duplicate data

Is your CRM a graveyard for thousands of duplicate company and individual contacts? Data comes from all directions, so it’s important to ensure that data isn’t being duplicated. Duplicate data occurs when customer information appears more than once in the database, or when multiple variations of the same individual appear, and dupes make it difficult to coordinate efforts and activities.

Duplicate data can also damage your brand image. A contact who receives the same information twice is unlikely to be happy about it; this is an easy way to frustrate customers and prospects, and it can make your business appear disorganized.
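
A common first pass at this nightmare is to normalize each contact’s email address and group records that share one. The sketch below is a minimal illustration; the field names are hypothetical, not any particular CRM’s schema.

```python
from collections import defaultdict

def find_duplicates(contacts):
    """Group contact records by normalized email address and return
    only the groups that contain more than one record."""
    groups = defaultdict(list)
    for contact in contacts:
        key = contact.get("email", "").strip().lower()
        if key:
            groups[key].append(contact)
    return {k: v for k, v in groups.items() if len(v) > 1}

crm = [
    {"name": "Pat Jones", "email": "PJones@example.com "},
    {"name": "Pat Jones", "email": "pjones@example.com"},
    {"name": "Sam Rivera", "email": "srivera@example.com"},
]
print(find_duplicates(crm))  # the two "Pat Jones" records collapse to one key
```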

Data Quality Nightmare 2: Missing or incomplete data

Are your contact details ‘ghosting you’? Without good data you can’t target or segment, and your communications and invitations won’t reach the right audiences.

Similar to inaccurate data, incomplete data can also have a negative impact on your business performance.

One way organizations can help control this data quality nightmare is by making certain form fields required entries. That way, data entries will be more consistent and complete.

Data Quality Nightmare 3: Incorrect or inconsistent data

Does incorrect or inconsistent data give you nightmares? Bad CRM data leads to missed opportunities for new customers, and it could create issues for your sales cycle. There is almost no point in engaging with contacts in your database if the information is incorrect.

There are multiple ways to encourage good data habits, depending on your system and method of contact entry. If your firm relies on manual data entry, implement a firmwide Data Standards Guide to inform users how data should be entered (e.g., does your firm spell out or abbreviate job titles?). It can also be helpful to use system validation rules wherever possible to require certain information in new records such as last name, city and email address to ensure your contacts are relevant.
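
In code terms, such validation rules amount to checks like the sketch below: a new record is flagged unless required fields are present and the email address is plausibly formatted. The field names and the deliberately simple email pattern are illustrative assumptions, not a specific product’s API.

```python
import re

REQUIRED_FIELDS = ("last_name", "city", "email")
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate(record):
    """Return a list of problems with a new contact record;
    an empty list means the record passes the rules."""
    problems = [f"missing {field}" for field in REQUIRED_FIELDS
                if not record.get(field)]
    email = record.get("email", "")
    if email and not EMAIL_RE.match(email):
        problems.append("malformed email")
    return problems

print(validate({"last_name": "Jones", "city": "Austin",
                "email": "pjones@example"}))  # -> ['malformed email']
```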

Data Quality Nightmare 4: Too much data

Are you in the ‘zombie zone’ trying blindly to figure out what to do with too much data and/or disparate data from disconnected systems?

Having too much data can be overwhelming – and unnecessary. It’s important to set parameters on what information you truly need about your clients and prospects, and then maintain only that information going forward. This will streamline the process and make everyone’s jobs easier by avoiding data quality nightmares.

Data Quality Nightmare 5: Lack of data quality resources

Does your team run screaming from data quality projects leaving you with a data disaster?

To encourage ongoing system adoption and utilization, data quality and maintenance must be top priorities. Resources must be dedicated – including time, money and people. Processes and procedures need to be put in place to maintain ongoing quality. Most importantly, training and communication are essential to ensure that end users don’t create unnecessary duplicates or introduce more bad data into the system.

Data Quality Doesn’t Have to Be Scary

While it’s easy to become scared by nightmare data, it’s important to put it in perspective. Focus on discrete data sets and projects that yield real ROI, such as:

  • Start with your most relevant records like current clients. Begin cleaning your top 100 to 500 along with associated key contacts.
  • Review frequently used lists to ensure your communications and invitations are reaching the right recipients.
  • Vet bounced emails after each campaign, or better yet, regularly run lists through an automated data process to identify bad emails before a campaign, ensuring that information actually reaches your targets in a timely manner (see the sketch after this list).
  • Tackle time-sensitive one-off projects. For instance, an upcoming event often provides a good opportunity to get users engaged in cleanup efforts, particularly if the event is important to them.
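
The pre-flight pass mentioned in the third bullet can be as simple as suppressing addresses that have already bounced and anything obviously malformed, so only plausible targets go out with the campaign. A minimal sketch, assuming you maintain a set of previously bounced addresses:

```python
def preflight(list_emails, known_bounces):
    """Split a campaign list into sendable addresses and addresses
    to research first: prior bounces or obviously malformed entries."""
    send, research = [], []
    for raw in list_emails:
        email = raw.strip().lower()
        domain = email.split("@")[-1]
        if email in known_bounces or "@" not in email or "." not in domain:
            research.append(email)
        else:
            send.append(email)
    return send, research

send, research = preflight(
    ["a.chen@example.com", "old@closedco.com", "typo@example"],
    known_bounces={"old@closedco.com"},
)
print(send)      # ['a.chen@example.com']
print(research)  # ['old@closedco.com', 'typo@example']
```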

It’s also important to remember that because data degrades so rapidly, data cleaning can’t be a one-time initiative. Once your team begins regularly maintaining your data, the cleanup will get easier over time. And remember, because data cleaning never really ends, the good news is that this means you have forever to get better at it.

© Copyright 2022 CLIENTSFirst Consulting

The Do’s and Don’ts of Data Cleaning – Don’t Drown in Bad Data

Bad CRM data can compound exponentially, impacting marketing and business development. It’s essential to understand the scope of your data problems and follow a plan for regular data cleaning.

Have you ever heard the saying, “No man ever steps into the same river twice”? Because a river’s water is constantly flowing and changing, the water you step in today will be different from yesterday. The same is true for the data in your CRM system: people are constantly changing roles, relocating, retiring; companies are opening, closing, moving and merging.

On top of that, new data isn’t always entered correctly. As a result, a database with clean, correct information today will not necessarily be accurate tomorrow. Over time, this bad data can compound exponentially, resulting in ineffective marketing, events and communication campaigns because as your data degrades, you reach fewer members of your target audience.

For professional services firms, poor data quality in your CRM system can also translate into a decline in system adoption. Once your professionals see bad data, they won’t trust the system as a whole and ultimately may outright refuse to use it. This is why we stress the importance of ongoing data cleaning.

Data Cleaning Do’s and Don’ts

Simply put, data cleaning involves identifying incorrect, incomplete and/or dated data in your systems and correcting and enhancing it. If you have a large database with thousands, or hundreds of thousands, of records, the data quality process can seem daunting and overwhelming.

While there’s no magic bullet or quick fix for poor data quality, ignoring data problems until there’s a crisis is not a strategy. Good data quality requires ongoing effort that never ends. The good news is that this means you have forever to get better at it. So, start now. Begin by assessing the scope of your data quality issues. Then, because it’s not always cost-effective or even possible to clean all your data, start by focusing on the highest priority projects.

Identify and Prioritize Your Most Important Data

All contact records are not created equal. For instance, client data is typically more important than non-client data. Additionally, individuals who have recently subscribed to your communications or attended an event are more important than those who last interacted with your firm years ago. Whatever segmenting scenario you select, it’s important to divide your contact data into manageable pieces: doing so makes the process less daunting and allows you to better measure progress.

Eliminate Stagnant Records

Related to prioritizing your data, don’t be hesitant about removing records that have been inactive for an extended period. Search your system for contacts that have not been updated for a few years, are not related to or known by any of your professionals, are not clients or alumni, and have not opened a communication or invitation in two to three years. Chances are good these records are not only outdated but also may not be worth the resources it would take to update them. Identify these records and consider removing them from the system. Less mess in your database makes cleanup a bit more manageable.

Your Plan Is Your Life Preserver

Once you’ve prioritized subsets or segments of contacts, identifying and prioritizing your most common data errors can help you decide on the best way to tackle ongoing data cleaning. For example, if you have an important email that needs to be sent to clients, you need to focus on email addresses. Identify records that don’t have an email address, have incorrectly formatted email addresses or have bounced recently.

In addition, if there are contacts you haven’t sent a communication or invitation to for an extended period of time, it’s entirely likely that their email addresses are no longer valid. It’s important to regularly test the emails on your lists; failing to do so can get you blacklisted by anti-spam entities or your account blocked by your eMarketing provider.

Initial Cleaning Cycle

The best place to start your data cleaning cycle is with a contact and list verification and cleansing service such as TrueDQ. This service will evaluate your list data, identify potentially harmful “honeypot” email addresses and even automatically update many of your contacts with current, complete contact information. The data can then also be enhanced with additional missing information, such as industries and locations, to help with targeting and segmenting.

Rinse and Repeat

When one segment or list has been cleaned, move on to the next one – bearing in mind that what’s important on the next list may be different from the last one. For example, maybe you need to send a hard copy postal mailing, so it will be important to ensure the accuracy of physical mailing addresses rather than email addresses.

Bounces and Returns

One of the most common data quality failures at law and other professional services firms is ignoring bounced emails and returned hard copy mailings. Bounces and returns are real-time indicators that can help you keep on top of your data quality. Researching and correcting them matters because they sometimes involve important former clients who could potentially hire the firm again at their new company.

Returned hard mail will often include the forwarding address of the recipient, which should be corrected in your CRM. For emails, use a central email address to collect automatic email replies, since these frequently tell you when a recipient no longer works at an organization.

Ideally, data stewards should regularly review all bounces to take the onus off the professionals. However, it can also be helpful to generate reports on bounced communications and circulate them to professionals or their assistants who may be able to provide updated information – or will at least appreciate knowing which of their contacts have moved on or changed roles.

Finally, if your eMarketing and/or CRM system has a process for automatically isolating bounced records, be sure you have a reciprocal process that automatically reinstates bounced records when the email field is updated.
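
In pseudocode terms, that reciprocal process is a small update hook: when the email field changes, clear the bounce flag so the contact re-enters circulation. The sketch below assumes a generic record layout and hook name; it is not any particular CRM’s API.

```python
def on_email_updated(record, new_email):
    """Hypothetical CRM update hook: changing the email field
    automatically reinstates a record that was isolated for bouncing."""
    old_email = record.get("email")
    record["email"] = new_email
    if record.get("bounced") and new_email and new_email != old_email:
        record["bounced"] = False  # reinstate for future sends
    return record

rec = {"name": "Lee Park", "email": "lee@oldfirm.com", "bounced": True}
print(on_email_updated(rec, "lee.park@newfirm.com"))
# -> {'name': 'Lee Park', 'email': 'lee.park@newfirm.com', 'bounced': False}
```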

Prevent Invalid Data

There are multiple ways to encourage good data habits, depending on your system and method of contact entry. If your firm relies on manual data entry, implement a firmwide Data Standards Guide to inform users how data should be entered (e.g., does your firm spell out or abbreviate job titles?). It can also be helpful to use system validation rules wherever possible to require certain information in new records such as last name, city and email address to ensure your contacts are relevant.

Finally, regularly review newly added records for consistency and completeness. This process can reveal issues such as users who may require additional training on contact input best practices. It can also help to catch spam or other potentially dangerous entries that can sometimes flow into your database from online forms that are filled out by bots.

Never, Ever Stop

Just as rivers keep flowing, so does the data in your CRM system – and the data will always need cleaning to ensure that it is fresh. While this may feel like a relentless and burdensome task, never stop – just go with the flow –  because when you’re not regularly cleaning the data, your CRM “river” can become stagnant, and the more polluted it becomes, the longer the eventual cleanup will take.

© Copyright 2022 CLIENTSFirst Consulting

A Rule 37 Refresher – As Applied to a Ransomware Attack

Federal Rule of Civil Procedure 37(e) (“Rule 37”) was completely rewritten in the 2015 amendments. Before the 2015 amendments, the standard was that a party could not generally be sanctioned for data loss resulting from the routine, good faith operation of its system. That rule didn’t really capture the reality of all the potential scenarios related to data issues, nor did it provide the requisite guidance to attorneys and parties.

The new rule added a dimension of reasonableness to preservation and a roadmap for analysis. The first guidepost is whether the information should have been preserved; this prong is based upon the common law duty to preserve when litigation is likely. The next guidepost is whether the data loss resulted from a failure to take reasonable steps to preserve. The final guidepost is whether the lost data can be restored or replaced through additional discovery. If data should have been preserved, was lost because of a failure to take reasonable steps, and cannot be replicated, then the court has two additional decisions to make: (1) was there prejudice to another party from the loss, or (2) was there an intent to deprive another party of the information. If the former, the court may impose measures only “no greater than necessary” to cure the prejudice. If the latter, the court may take a variety of extreme measures, including dismissal of the action. The rule thus draws an important distinction between negligence and intent.
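
Purely as a reading aid, that roadmap can be sketched as a decision tree. This is a schematic of the rule’s structure as described above, not the rule’s text and not legal advice:

```python
def rule_37e(should_have_preserved, reasonable_steps, restorable,
             prejudice, intent_to_deprive):
    """Schematic decision tree for the Rule 37(e) roadmap above."""
    if not should_have_preserved:
        return "No spoliation issue: no duty to preserve"
    if reasonable_steps:
        return "No sanctions: reasonable preservation steps were taken"
    if restorable:
        return "Cure through additional discovery; no further measures"
    if intent_to_deprive:
        return "Extreme measures available, up to dismissal"
    if prejudice:
        return "Measures no greater than necessary to cure the prejudice"
    return "Loss shown, but no prejudice or intent established"

print(rule_37e(True, False, False, prejudice=True, intent_to_deprive=False))
# -> 'Measures no greater than necessary to cure the prejudice'
```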

So how does a ransomware attack fit into the new analytical framework? A Special Master in MasterObjects, Inc. v. Amazon.com (U.S. Dist. Court, Northern District of California, March 13, 2022) analyzed Rule 37 in the context of a ransomware attack. MasterObjects was the victim of a well-documented ransomware attack that precluded the company’s access to data predating 2016. The Special Master considered a declaration from MasterObjects explaining that, despite using state-of-the-art cybersecurity protections, the firm was attacked by hackers in December 2020. The hack rendered all of the files and mailboxes inaccessible without a recovery key set by the attackers. The hackers demanded a ransom, and the company contacted the FBI. Both the FBI and the company’s insurer advised them not to pay the ransom. Despite spending hundreds of hours attempting to restore the data, everything prior to 2016 remained inaccessible.

Applying Rule 37, the Special Master stated that, at the outset, there is no evidence that any electronically stored information was “lost.”  The data still exists and, while access has been blocked, it can be accessed in the future if a key is provided or a technological work-around is discovered.

Even if a denial of access is construed to be a “loss,” the Special Master found no evidence in this record that the loss occurred because MasterObjects failed to take reasonable steps to preserve it. This step of the analysis, “failure to take reasonable steps to preserve,” is a “critical, basic element” to prove spoliation.

On the issue of prejudice, Amazon argued that “we can’t know what we don’t know” (related to missing documents).  The Special Master did not find Amazon’s argument persuasive. The Special Master concluded that Amazon’s argument cannot survive the adoption of Rule 37(e). “The rule requires affirmative proof of prejudice in the specific destruction at issue.”

Takeaways:

  1. If you are in a spoliation dispute, make sure you have the experts and evidence to prove or defend your case.

  2. When you are trying to prove spoliation, know the new test and apply it in your analysis (the Special Master noted that Amazon did not reference Rule 37 in its briefing).

  3. As a business owner, when it comes to cybersecurity, you must take reasonable and defensible efforts to protect your data.

©2022 Strassburger McKenna Gutnick & Gefsky

Italian Garante Bans Google Analytics

On June 23, 2022, Italy’s data protection authority (the “Garante”) determined that a website’s use of the audience measurement tool Google Analytics is not compliant with the EU General Data Protection Regulation (“GDPR”), as the tool transfers personal data to the United States, which does not offer an adequate level of data protection. In making this determination, the Garante joins other EU data protection authorities, including the French and Austrian regulators, that also have found use of the tool to be unlawful.

The Garante determined that websites using Google Analytics collected, via cookies, personal data including user interactions with the website, pages visited, browser information, operating system, screen resolution, selected language, date and time of page views, and user device IP address. This information was transferred to the United States without the additional safeguards for personal data required under the GDPR following the Schrems II determination, and therefore faced the possibility of governmental access. In its ruling, the Garante ordered website operator Caffeina Media S.r.l. to bring its processing into compliance with the GDPR within 90 days, but the ruling has wider implications, as the Garante commented that it had received many “alerts and queries” relating to Google Analytics. It also stated that it called upon “all controllers to verify that the use of cookies and other tracking tools on their websites is compliant with data protection law; this applies in particular to Google Analytics and similar services.”

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

Throwing Out the Privacy Policy is a Bad Idea

The public internet has been around for about thirty years, and consumers’ browser-based, graphics-heavy experience has existed for about twenty-five years. In the early days, commercial websites operated without privacy policies.

Eventually, people started to realize that they were leaving trails of information online, and in the early ‘aughts the methods by which businesses captured and profited from these trails became clear, although the actual uses of the data on individual sites were not. People asked for greater transparency from the sites they visited online, and in response received the privacy policy.

A deeply flawed instrument, the website privacy policy purports to explain how information is gathered and used by a website owner, but most such policies are strangely both imprecise and too long, losing the average reader in a fog of legalese and marginally relevant facts. Some privacy policies are intentionally obtuse because it doesn’t profit the website operator to make its methods obvious. Many are overly general, in part because the website company doesn’t want to change its policy every time it shifts business practices or vendor alliances. Many are just messy and poorly written.

Part of the reason privacy policies are confusing is that data privacy is not a precise concept. The definition of data is context dependent. Data can mean the information about a transaction, information gathered from your browser visit (including where you were before and after the visit), information about you or your equipment, or even information derived by analysis of the other information. And we know that de-identified data can be re-identified in many cases, and that even a collection of generic data can lead to one of many ways to identify a person.

The definition of privacy is also untidy. An ecommerce company must capture certain information to fulfill an online order. In this era of connected objects, the company may continue to take information from the item while the consumer is using it. This is true for equipment from televisions to dishwashers to sex toys. The company likely uses this information internally to develop its products. It may use the data to market more goods or services to the consumer. It may transfer the information to other companies so they can market their products more effectively. The company may provide the information to the government. This week’s New Yorker devotes several pages to how the word “privacy” conflates major concepts in US law, including secrecy and autonomy,1 and is thus confusing to courts and the public alike.

All of this is difficult to reflect in a privacy policy, even if the company has incentive to provide useful information to its customers.

Last month the Washington Post ran an article by Geoffrey Fowler that was subtitled “Let’s abolish reading privacy policies.” The article notes a 2019 Pew survey claiming that only 9 percent of Americans say they always read privacy policies. I would suggest that more than half of those Americans are lying. Almost no one always reads privacy policies upon first entering a website or downloading an app. That’s not even really what privacy policies are for.

Fowler shows why people do not read these policies. He writes, “As an experiment, I tallied up all of the privacy policies just for the apps on my phone. It totaled nearly 1 million words. ‘War and Peace’ is about half as long. And that’s just my phone. Back in 2008, Lorrie Cranor, a professor of engineering and public policy at Carnegie Mellon University, and a colleague estimated that reading and consenting to all the privacy policies on websites Americans visit would take 244 hours per year.”
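
Fowler’s numbers are easy to sanity-check. Assuming an average adult reading speed of roughly 240 words per minute (published estimates vary; this figure is an assumption), the million words of policies on his phone alone would take about 69 hours to read:

```python
words = 1_000_000        # Fowler's tally for the apps on one phone
words_per_minute = 240   # assumed average adult reading speed
hours = words / words_per_minute / 60
print(f"about {hours:.0f} hours")  # -> about 69 hours, for one device
```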

The length, complexity and opacity of online privacy policies are concerning. The best way to alleviate this concern would be not to eliminate privacy policies, but to make them less instrumental in the most important decisions about descriptive data.

Website owners should not be expected to write privacy policies that are both sufficiently detailed and succinctly readable so that consumers can make meaningful choices about use of the data that describes them. This type of system forces a person to be responsible for her own data protection and takes the onus off the company to limit its use of the data. It is like our current system of waste recycling – both ineffective and supported by polluters, because rather than forcing manufacturers to use more environmentally friendly packaging, it pushes consumers to deal with the problem at home, shifting the burden from industry to us. Similarly, if legislatures provided a set of simple rules for website operators – here is what you are allowed to do with personal data, and here is what you are not allowed to do with it – then no one would need to read privacy policies to make sure data about our transactions was spared the worst treatment. The worst treatment would be illegal.

State laws are moving in this direction, providing simpler rules restricting certain uses and transfers of personal data and sensitive data. We are early in the process, but if omnibus state privacy laws spread the way data breach disclosure laws did, with every state eventually passing one, then we can be optimistic and expect full coverage of online privacy rules for all Americans within a decade or so. But we shouldn’t need to wait for every state to act.

Unlike data breach disclosure laws, which encourage companies to comply only with the laws relevant to their particular loss of data, omnibus privacy laws affect the way companies conduct everyday business, so it will take requirements in only a few states before big companies start building their privacy-rights-recognition functions around a single standard: the most protective state’s rules. It will simply make economic sense for businesses to give every US customer the same rights that the most protective state provides its residents. Why build 50 sets of rules when you don’t need to do so? The cost savings of maintaining only one privacy-rights-recognition system will offset the cost of providing privacy rights to people in states that haven’t passed omnibus laws yet.

This won’t make privacy policies any easier to read, but it will become less important to read them. Then privacy policies can return to their core function, providing a record of how a company treats data. In other words, a reference document, rather than a set of choices inset into a pillow of legal terms.

We shouldn’t eliminate the privacy policy. We should reduce the importance of such policies and limit their functions, reducing customer frustration with the privacy policy’s role in our current process. Limit companies’ use of data and we won’t need to fight through their privacy options.


ENDNOTES

1 Privacy law also conflates these meanings with obscurity in a crowd or in public.


Article By Theodore F. Claypoole of Womble Bond Dickinson (US) LLP

Copyright © 2022 Womble Bond Dickinson (US) LLP All Rights Reserved.

New UK IDTA and Addendum Come Into Force

The new UK International Data Transfer Agreement (“IDTA”) and Addendum to the new 2021 EU Standard Contractual Clauses (“New EU SCCs”) are now in force (as of 21 March 2022), providing much needed certainty for UK organisations transferring personal data to service providers and group companies based outside of the UK/EEA.

The IDTA and Addendum replace the old EU Standard Contractual Clauses (“Old EU SCCs”) as a UK GDPR-compliant transfer tool for restricted transfers from the UK, and also enable UK data exporters to comply with the European Court of Justice’s ‘Schrems II’ judgement.

For new UK data transfer arrangements, or where UK organisations are in the process of reviewing their existing arrangements, use of the new IDTA or Addendum is the best option to future-proof against the need to replace them in two years’ time.

Where data flows involve transfers of personal data from both the UK and the EU, use of the Addendum alongside the New EU SCCs will enable organisations to implement a more harmonised solution.



Article By Francesca Fellowes of Squire Patton Boggs (US) LLP. Hannah-Mei Grisley also contributed to this article.

© Copyright 2022 Squire Patton Boggs (US) LLP