Small Businesses Don’t Recognize Risk of Cyberattack Despite Repeated Warnings

CNBC surveys over 2,000 small businesses each quarter for their views on the overall business environment and the health of their businesses. According to the latest CNBC/SurveyMonkey Small Business Survey, despite repeated warnings from the Cybersecurity and Infrastructure Security Agency (CISA) and the FBI that U.S.-based businesses are at increased risk of cyberattack following Russia’s invasion of Ukraine, small business owners do not believe an attack is an actual risk that will affect them, and they are not prepared for one. The latest survey shows that only five percent of small business owners reported cybersecurity as the biggest risk to their company.

Unfortunately, though not surprisingly, this is the same percentage of small business owners who identified a cyberattack as the biggest risk a year ago. Perception among business owners has not changed, despite repeated, dire warnings from the government. Also troubling: only 33 percent of business owners with one to four employees are concerned about a cyberattack this year, compared with 61 percent of business owners with more than 50 employees.

According to CNBC, “this general lack of concern among small business owners diverges from the sentiment among the general public….In SurveyMonkey’s polling, 55% of people in the U.S. say they would be less likely to continue to do business with brands who are victims of a cyber attack.” CNBC’s conclusion is that business owners underappreciate how much customers care about data security, and that “[s]mall businesses that fail to take the cyber threat seriously risk losing customers, or much more, if a real threat emerges.” Statistics show that threat actors target small and medium-sized businesses precisely to stay under the law enforcement radar. With such a large target on their backs, business owners would do well to make cybersecurity a priority, if only to keep their customers.

Copyright © 2022 Robinson & Cole LLP. All rights reserved.

DOJ Limits Application of Computer Fraud and Abuse Act, Providing Clarity for Ethical Hackers and Employees Paying Bills at Work Alike

On May 19, 2022, the Department of Justice announced it would not charge good-faith hackers who expose weaknesses in computer systems with violating the Computer Fraud and Abuse Act (CFAA or Act), 18 U.S.C. § 1030. Congress enacted the CFAA in 1986 to promote computer privacy and cybersecurity and amended the Act several times, most recently in 2008. However, the evolving cybersecurity landscape has left courts and commentators troubled by potential applications of the CFAA to circumstances unrelated to the CFAA’s original purpose, including prosecution of so-called “white hat” hackers. The new charging policy, which became effective immediately, seeks to advance the CFAA’s original purpose by clarifying when and how federal prosecutors are authorized to bring charges under the Act.

DOJ to Decline Prosecution of Good-Faith Security Research

The new policy exempts activity of white-hat hackers and states that “the government should decline prosecution if available evidence shows the defendant’s conduct consisted of, and the defendant intended, good-faith security research.” The policy defines “good-faith security research” as “accessing a computer solely for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability, where such activity is carried out in a manner designed to avoid any harm to individuals or the public, and where the information derived from the activity is used primarily to promote the security or safety of the class of devices, machines, or online services to which the accessed computer belongs, or those who use such devices, machines, or online services.”

In practice, this policy appears to provide, for example, protection from federal charges for the type of ethical hacking a St. Louis Post-Dispatch reporter performed in 2021. The reporter uncovered security flaws in a Missouri state website that exposed the Social Security numbers of over 100,000 teachers and other school employees. The Missouri governor’s office initiated an investigation into the reporter’s conduct for unauthorized computer access. While the DOJ’s policy would not affect prosecutions under state law, it would preclude federal prosecution for the conduct if determined to be good-faith security research.

The new policy also promises protection from prosecution for certain arguably common but contractually prohibited online conduct, including “[e]mbellishing an online dating profile contrary to the terms of service of the dating website; creating fictional accounts on hiring, housing, or rental websites; using a pseudonym on a social networking site that prohibits them; checking sports scores at work; paying bills at work; or violating an access restriction contained in a term of service.” Such activities resemble the facts of Van Buren v. United States, No. 19-783, which the Supreme Court decided in June 2021. In Van Buren, the 6-3 majority rejected the government’s broad interpretation of the CFAA’s prohibition on “unauthorized access” and held that a police officer who looked up license plate information on a law-enforcement database for personal use—in violation of his employer’s policy but without circumventing any access controls—did not violate the CFAA. The DOJ did not cite Van Buren as the basis for the new policy, nor did it identify any other impetus for the change.

To Achieve More Consistent Application of Policy, All Federal Prosecutors Must Consult with Main Justice Before Bringing CFAA Charges

In addition to exempting good-faith security research from prosecution, the new policy specifies the steps for charging violations of the CFAA. To help distinguish actual good-faith security research from pretextual claims that mask malicious intent, federal prosecutors must consult with the Computer Crime and Intellectual Property Section (CCIPS) before bringing any charges. If CCIPS recommends declining charges, prosecutors must inform the Office of the Deputy Attorney General (DAG) and may need to obtain the DAG’s approval before initiating charges.

©2022 Greenberg Traurig, LLP. All rights reserved.

Navigating the Data Privacy Landscape for Autonomous and Connected Vehicles: Implementing Effective Data Security

Autonomous vehicles can be vulnerable to malicious cyberattacks. Identifying an appropriate framework of policies and procedures will help mitigate the risk of a potential attack.

The National Highway Traffic Safety Administration (NHTSA) recommends a layered approach to reduce the likelihood of an attack’s success and mitigate ramifications if one does occur. NHTSA’s approach builds on the NIST Cybersecurity Framework, which is structured around the five functions of Identify, Protect, Detect, Respond and Recover, and can be used as a basis for developing comprehensive data security policies.

NHTSA goes on to describe how this approach “at the vehicle level” includes:

  • Protective/Preventive Measures and Techniques: These measures, such as isolation of safety-critical control systems networks or encryption, implement hardware and software solutions that lower the likelihood of a successful hack and diminish its potential impact.
  • Real-time Intrusion (Hacking) Detection Measures: These measures continually monitor for signatures of potential intrusions in the electronic system architecture (a minimal sketch of this layer follows the list).
  • Real-time Response Methods: These measures mitigate the potential adverse effects of a successful hack, preserving the driver’s ability to control the vehicle.
  • Assessment of Solutions: This [analysis] involves methods such as information sharing and analysis of a hack by affected parties, development of a fix, and dissemination of the fix to all relevant stakeholders (such as through an ISAC). This layer ensures that once a potential vulnerability or a hacking technique is identified, information about the issue and potential solutions are quickly shared with other stakeholders.
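
Of these layers, real-time detection is the easiest to make concrete in code. Below is a minimal, hypothetical sketch of signature-based monitoring of CAN bus frames; the frame format, the “suspicious ID” rules, and the example message are illustrative assumptions, not part of NHTSA’s guidance.

```python
from dataclasses import dataclass

@dataclass
class CanFrame:
    arbitration_id: int  # identifies the message type on the bus
    data: bytes          # classic CAN carries up to 8 payload bytes

# Hypothetical signature list: message IDs that should never appear while
# the vehicle is moving (e.g., diagnostic/reflash requests).
SUSPICIOUS_IDS_WHILE_MOVING = {0x7DF, 0x7E0}

def is_suspicious(frame: CanFrame, speed_kph: float) -> bool:
    """Flag frames that match a known-bad signature. A production system
    would also watch message frequency, payload ranges, and bus load."""
    if speed_kph > 0 and frame.arbitration_id in SUSPICIOUS_IDS_WHILE_MOVING:
        return True
    if len(frame.data) > 8:  # malformed for classic CAN
        return True
    return False

if __name__ == "__main__":
    diag = CanFrame(arbitration_id=0x7DF, data=b"\x02\x10\x03")
    print(is_suspicious(diag, speed_kph=80.0))  # True: diagnostic request at speed
```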

Other industry associations are also weighing in on best practices, including the Automotive Information Sharing and Analysis Center’s (Auto-ISAC) seven Key Cybersecurity Functions and, from a technology development perspective, SAE International’s J3061, a Cybersecurity Guidebook for Cyber-Physical Vehicle Systems to help AV companies “[minimize] the exploitation of vulnerabilities that can lead to losses, such as financial, operational, privacy, and safety.”

© 2022 Varnum LLP

Comparing and Contrasting the State Laws: Does Pseudonymized Data Exempt Organizations from Complying with Privacy Rights?

Some organizations are confused as to the impact that pseudonymization has (or does not have) on a privacy compliance program. That confusion largely stems from ambiguity concerning how the term fits into the larger scheme of modern data privacy statutes. For example, aside from the definition, the CCPA refers to “pseudonymized” on only one occasion – within the definition of “research,” where the CCPA implies that personal information collected by a business should be “pseudonymized and deidentified” or “deidentified and in the aggregate.”[1] The conjunctive reference to research being both pseudonymized “and” deidentified raises the question of whether the CCPA lends any independent meaning to the term “pseudonymized.” Specifically, the CCPA assigns a higher threshold of anonymization to the term “deidentified.” As a result, if data is already deidentified, it is not clear what additional processing or set of operations is expected to pseudonymize the data. The net result is that while the CCPA introduced the term “pseudonymization” into the American legal lexicon, it did not give it any significant legal effect or status.
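
To make the distinction concrete, the following is a minimal sketch of what pseudonymization might look like in practice: direct identifiers are replaced with keyed tokens that remain reversible by whoever holds the key or a lookup table, while deidentification would discard that link entirely. The key handling and field names are illustrative assumptions, not anything prescribed by the CCPA.

```python
import hashlib
import hmac
import secrets

# Held separately from the data set; whoever controls this key (or a
# token-to-identity lookup table) can re-identify the records.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed token."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "purchase_total": 42.50}

# Pseudonymized: the email is tokenized, but re-identification remains
# possible for anyone holding PSEUDONYM_KEY.
pseudonymized = {"subject_token": pseudonymize(record["email"]),
                 "purchase_total": record["purchase_total"]}

# Deidentified (roughly): the identifier and the key are discarded, so
# no party can link the row back to the individual.
deidentified = {"purchase_total": record["purchase_total"]}
```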

Unlike under the CCPA, the pseudonymization of data does affect compliance obligations under the data privacy statutes of Virginia, Colorado, and Utah. As the chart below indicates, those statutes do not require that organizations apply access or deletion rights to pseudonymized data, but they do imply that non-exempted rights, such as the right to opt out of the sale of personal information, still apply to such data. Ambiguity remains as to how those rights operate in practice. For example, while Virginia does not require an organization to re-identify pseudonymized data, it is unclear how an organization could opt a consumer out of having their pseudonymized data sold without re-identification.

[Chart comparing the states’ treatment of pseudonymized data not reproduced; see endnotes 2 through 14.]

ENDNOTES

[1] Cal. Civ. Code § 1798.140(ab)(2) (West 2021). It should be noted that the reference to pseudonymizing and deidentifying personal information is found within the definition of the word “Research”; as such, it is unclear whether the CCPA was attempting to indicate that personal information will not be considered research unless it has been pseudonymized and deidentified, or whether the CCPA is mandating that companies that conduct research must pseudonymize and deidentify. Given that the reference is found within the definition section of the CCPA, the former interpretation seems the most likely intent of the legislature.

[2] The GDPR does not expressly define the term “sale,” nor does it ascribe particular obligations to companies that sell personal information. Selling, however, is implicitly governed by the GDPR as any transfer of personal information from one controller to a second controller would be considered a processing activity for which a lawful purpose would be required pursuant to GDPR Article 6.

[3] Va. Code 59.1-577(B) (2022).

[4] Utah Code Ann. 13-61-303(1)(a) (2022).

[5] Va. Code 59.1-577(D) (2022) (exempting compliance with Va. Code 59.1-573(A)(1) through (4)).

[6] C.R.S. 6-1-1307(3) (2022) (exempting compliance with C.R.S. Section 6-1-1306(1)(b) to (1)(e)).

[7] Utah Code Ann. 13-61-303(1)(c) (2022) (exempting compliance with Utah Code Ann. 13-61-202(1) through (3)).

[8] Va. Code 59.1-577(D) (2022) (exempting compliance with Va. Code 59.1-573(A)(1) through (4)).

[9] C.R.S. 6-1-1307(3) (2022) (exempting compliance with C.R.S. Section 6-1-1306(1)(b) to (1)(e)).

[10] Va. Code 59.1-577(D) (2022) (exempting compliance with Va. Code 59.1-573(A)(1) through (4)).

[11] C.R.S. 6-1-1307(3) (2022) (exempting compliance with C.R.S. Section 6-1-1306(1)(b) to (1)(e)).

[12] Utah Code Ann. 13-61-303(1)(c) (2022) (exempting compliance with Utah Code Ann. 13-61-202(1) through (3)).

[13] Va. Code 59.1-577(D) (2022) (exempting compliance with Va. Code 59.1-574).

[14] Va. Code 59.1-577(D) (2022) (exempting compliance with Va. Code 59.1-574).

©2022 Greenberg Traurig, LLP. All rights reserved.

Navigating the Data Privacy Landscape for Autonomous and Connected Vehicles: Best Practices

Autonomous and connected vehicles, and the data they collect, process and store, create high demands for strong data privacy and security policies. Accordingly, in-house counsel must define holistic data privacy best practices for consumer and B2B autonomous vehicles that balance compliance, safety, consumer protections and opportunities for commercial success against a patchwork of federal and state regulations.

Understanding key best practices related to the collection, use, storage and disposal of data will help in-house counsel frame balanced data privacy policies for autonomous vehicles and consumers. This is the inaugural article in our series on privacy policy best practices related to:

  1. Data collection

  2. Data privacy

  3. Data security

  4. Monetizing data

Autonomous and Connected Vehicles: Data Protection and Privacy Issues

The spirit of America is tightly intertwined with the concept of personal liberty, including freedom to jump in a car and go… wherever the road takes you. As the famous song claims, you can “get your kicks on Route 66.” But today you don’t just get your kicks. You also get terabytes of data on where you went, when you left and arrived, how fast you traveled to get there, and more.

Today’s connected and semi-autonomous vehicles actively collect roughly 100 times more data than a personal smartphone, precipitating a revolution that will drive changes not just to automotive manufacturing but to our culture, economy, infrastructure, and legal and regulatory landscapes.

As our cars become computers, the volume and specificity of the data collected continue to grow. The future is now, or at least very near: global management consultant McKinsey estimates that “full autonomy with Level 5 technology—operating anytime, anywhere” could arrive as soon as the next decade.

This near-term future isn’t only for consumer automobiles and ride-sharing robo-taxis. B2B industries, including logistics and delivery, agriculture, mining, waste management and more, are pursuing connected and autonomous vehicle deployments.

In-house counsel must balance evolving regulations at the federal and state level, as well as consider cross-border and international regulations for global technologies. In the United States, the Federal Trade Commission (FTC) is the primary federal regulator of data privacy, alongside individual states that are developing their own regulations, with the California Consumer Privacy Act (CCPA) leading the way. Virginia’s and Colorado’s new laws, along with the California Privacy Rights Act, take effect in 2023, and a half dozen more states are expected to enact new privacy legislation in the near future.

While federal and state regulations continue to evolve, companies in the consumer and B2B mobility sectors need to make decisions today about their own data privacy and security policies in order to balance compliance and consumer protection with opportunities for commercial success.

Understanding Types of Connected and Autonomous Vehicles

Autonomous, semi-autonomous, self-driving, connected, networked: in this developing category, these descriptions are often used interchangeably in leading business and industry publications. B2B International defines “connected vehicles (CVs) [as those that] use the latest technology to communicate with each other and the world around them” whereas “autonomous vehicles (AVs)… are capable of recognizing their environment via the use of on-board sensors and global positioning systems in order to navigate with little or no human input. Examples of autonomous vehicle technology already in action in many modern cars include self-parking and auto-collision avoidance systems.”

But SAE International and the National Highway Traffic Safety Administration (NHTSA) go further, defining six levels of driving automation (Level 0 through Level 5) for self-driving cars.

Levels of Driving Automation™ in Self-Driving Cars

[SAE J3016 automation-levels chart not reproduced.]

Level 3 and above autonomous driving is getting closer to reality every day thanks to an array of technologies, including sensors (radar, sonar, lidar), biometrics, artificial intelligence and advanced computing power.

Approaching a Data Privacy Policy for Connected and Autonomous Vehicles

Because the mobility tech ecosystem is so dynamic, many companies, though well intentioned, inadvertently start with insufficient data privacy and security policies for their autonomous vehicle technology. These early- and second-stage companies focus on bringing a product to market; when sales accelerate, there is an urgent need to ensure their data privacy policies are comprehensive and compliant.

Whether companies are drafting initial policies or revising existing ones, there are general data principles that can guide policy development across the lifecycle of data:

  • Collect: Only collect the data you need.

  • Use: Only use data for the reason you informed the consumer.

  • Store: Ensure reasonable data security protections are in place.

  • Dispose: Dispose of the data when it is no longer needed.
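
These four principles translate naturally into rules a team can encode and test. The sketch below is a simplified illustration under assumed purposes and retention periods (the purpose names and durations are invented for the example), not a compliance tool.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical retention schedule: how long each disclosed purpose
# justifies keeping the data ("dispose when no longer needed").
RETENTION = {"driver_alertness": timedelta(days=30),
             "crash_diagnostics": timedelta(days=365)}

@dataclass
class DataRecord:
    value: bytes
    purpose: str          # the purpose disclosed to the consumer at collection
    collected_at: datetime

def may_collect(purpose: str) -> bool:
    """Only collect data tied to a disclosed purpose."""
    return purpose in RETENTION

def may_use(record: DataRecord, requested_purpose: str) -> bool:
    """Only use data for the reason the consumer was informed of."""
    return record.purpose == requested_purpose

def due_for_disposal(record: DataRecord, now: datetime) -> bool:
    """Dispose of data once its retention period lapses; unknown
    purposes default to immediate disposal."""
    return now - record.collected_at > RETENTION.get(record.purpose, timedelta(0))
```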

Additionally, for many companies, framing autonomous and connected vehicle data protection and privacy issues through a safety lens can help determine the optimal approach to constructing policies that support the goals of the business while satisfying federal and state regulations.

For example, a company that monitors driver alertness (critical for safety in today’s Level 2 AV environment) through biometrics is, by design, collecting data on each driver who uses the car. This scenario clearly supports vehicle and driver safety while at the same time implicating U.S. data privacy law.

In the emerging regulatory landscape, in-house counsel will continue to be challenged to balance safety and privacy. Biometrics will become even more prevalent in connection with identification and authentication, along with other driver-monitoring technologies, for all connected and autonomous vehicles, and particularly in commercial fleet deployments.

Developing Best Practices for Data Privacy Policies

In-house counsel at autonomous vehicle companies are responsible for constructing their company’s data privacy and security policies. Best practices should be set around:

  • What data to collect and when

  • How collected data will be used

  • How to store collected data securely

  • Data ownership and monetization

Today, the CCPA sets the standard for rigorous consumer protections related to data ownership and privacy. However, in this evolving space, counsel will need to monitor and adjust their company’s practices and policies to comply with new regulations as they continue to develop in the U.S. and countries around the world.

Keeping best practices related to the collection, use, storage and disposal of data in mind will help in-house counsel construct policies that balance consumer protections with safety and the commercial goals of their organizations.

A parting consideration may be opportunistic, if extralegal: companies that choose to advocate strongly for customer protections may find a powerful, positive opportunity to position themselves as responsible corporate citizens.

© 2022 Varnum LLP
For more articles about transportation, visit the NLR Public Services, Infrastructure, Transportation section.

Utah Becomes Fourth U.S. State to Enact Consumer Privacy Law

On March 24, 2022, Utah became the fourth state in the U.S., following California, Virginia and Colorado, to enact a consumer data privacy law, the Utah Consumer Privacy Act (the “UCPA”). The UCPA resembles Virginia’s Consumer Data Protection Act (“VCDPA”) and Colorado’s Consumer Privacy Act (“CPA”), and, to a lesser extent, the California Consumer Privacy Act (as amended by the California Privacy Rights Act) (“CCPA/CPRA”). The UCPA will take effect on December 31, 2023.

The UCPA applies to a controller or processor that (1) conducts business in Utah or produces a product or service targeted to Utah residents; (2) has annual revenue of $25,000,000 or more; and (3) satisfies at least one of the following thresholds: (a) during a calendar year, controls or processes the personal data of 100,000 or more Utah residents, or (b) derives over 50% of its gross revenue from the sale of personal data, and controls or processes the personal data of 25,000 or more consumers.
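
Because the test combines conjunctive prongs with a disjunctive third prong, it can be easier to read written out as logic. The following sketch simply restates the thresholds described above; the parameter names are illustrative.

```python
def ucpa_applies(does_business_in_utah: bool,
                 annual_revenue_usd: float,
                 utah_consumers_processed: int,
                 pct_revenue_from_data_sales: float) -> bool:
    """Restates the UCPA thresholds: prongs (1) and (2) must both be met,
    plus at least one of the prong (3) alternatives."""
    if not does_business_in_utah:          # prong (1)
        return False
    if annual_revenue_usd < 25_000_000:    # prong (2)
        return False
    processes_100k = utah_consumers_processed >= 100_000          # prong (3)(a)
    data_sales_prong = (pct_revenue_from_data_sales > 50.0
                        and utah_consumers_processed >= 25_000)   # prong (3)(b)
    return processes_100k or data_sales_prong

# Example: $30M revenue, 40,000 Utah consumers, 60% of revenue from data sales.
print(ucpa_applies(True, 30_000_000, 40_000, 60.0))  # True
```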

As with the CPA and VCDPA, the UCPA’s protections apply only to Utah residents acting solely within their individual or household context, with an express exemption for individuals acting in an employment or commercial (B2B) context. Similar to the CPA and VCDPA, the UCPA contains exemptions for covered entities, business associates and protected health information subject to the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”), and financial institutions or personal data subject to the Gramm-Leach-Bliley Act (“GLB”). As with the CCPA/CPRA and VCDPA, the UCPA also exempts from its application non-profit entities.

In line with the CCPA/CPRA, CPA and VCDPA, the UCPA provides Utah consumers with certain rights, including the right to access their personal data, delete their personal data, obtain a copy of their personal data in a portable manner, opt out of the “sale” of their personal data, and opt out of “targeted advertising” (as each term is defined under the law). Notably, the UCPA adopts the VCDPA’s narrower definition of “sale,” which is limited to the exchange of personal data for monetary consideration by a controller to a third party. Unlike the CCPA/CPRA, CPA and VCDPA, the UCPA will not provide Utah consumers with the ability to correct inaccuracies in their personal data. Also unlike the CPA and VCDPA, the UCPA will not require controllers to obtain prior opt-in consent to process “sensitive data” (i.e., racial or ethnic origin, religious beliefs, sexual orientation, citizenship or immigration status, medical or health information, genetic or biometric data, or geolocation data). It will, however, require controllers to first provide consumers with clear notice and an opportunity to opt out of the processing of their sensitive data. With respect to the processing of personal data “concerning a known child” (under age 13), controllers must process such data in accordance with the Children’s Online Privacy Protection Act. The UCPA will prohibit controllers from discriminating against consumers for exercising their rights.

In addition, the UCPA will require controllers to implement reasonable and appropriate data security measures, provide certain content in their privacy notices, and include specific language in contracts with processors.

Unlike the CCPA/CPRA, VCDPA and CPA, the UCPA will not require controllers to conduct data protection assessments prior to engaging in data processing activities that present a heightened risk of harm to consumers, or to conduct cybersecurity audits or risk assessments.

In line with existing U.S. state privacy laws, the UCPA does not provide for a private right of action. The law will be enforced by the Utah Attorney General.

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

Google to Launch Google Analytics 4 in an Attempt to Address EU Privacy Concerns

On March 16, 2022, Google announced the launch of its new analytics solution, “Google Analytics 4.” Google Analytics 4 aims, among other things, to address recent developments in the EU regarding the use of analytics cookies and data transfers resulting from such use.

Background

On August 17, 2020, the non-governmental organization None of Your Business (“NOYB”) filed 101 identical complaints with 30 European Economic Area data protection authorities (“DPAs”) regarding the use of Google Analytics by various companies. The complaints focused on whether the transfer of EU personal data to Google in the U.S. through the use of cookies is permitted under the EU General Data Protection Regulation (“GDPR”), following the Schrems II judgment of the Court of Justice of the European Union. Following these complaints, the French and Austrian DPAs ruled that the transfer of EU personal data from the EU to the U.S. through the use of the Google Analytics cookie is unlawful.

Google’s New Solution

According to Google’s press release, Google Analytics 4 “is designed with privacy at its core to provide a better experience for both our customers and their users. It helps businesses meet evolving needs and user expectations, with more comprehensive and granular controls for data collection and usage.”

The most impactful change from an EU privacy standpoint is that Google Analytics 4 will no longer store IP addresses, thereby limiting the data transfers resulting from the use of Google Analytics that came under scrutiny in the EU following the Schrems II ruling. It remains to be seen whether this change will ease EU DPAs’ concerns about Google Analytics’ compliance with the GDPR.

Google’s previous analytics solution, Universal Analytics, will no longer be available beginning in July 2023. In the meantime, companies are encouraged to transition to Google Analytics 4.

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

Chinese APT41 Attacking State Networks

Although we are receiving frequent alerts from CISA and the FBI about the potential for increased cyber threats coming out of Russia, China continues its cyber threat activity through APT41, which has been linked to China’s Ministry of State Security. According to Mandiant, APT41 has launched a “deliberate campaign targeting U.S. state governments” and has successfully attacked at least six state government networks by exploiting various vulnerabilities, including Log4j.

According to Mandiant, even when the Chinese-based hackers are kicked out of state government networks, they repeat the attack weeks later and keep trying to get into the same networks via different vulnerabilities (a “re-compromise”). One successfully exploited vector was a zero-day vulnerability in USAHerds, software that state agriculture agencies use to monitor livestock. When the intruders use the USAHerds vulnerability to get into a network, they can leverage the intrusion to migrate to other parts of the network and access and steal information, including personal information.

Mandiant’s outlook on these attacks is sobering:

“APT41’s recent activity against U.S. state governments consists of significant new capabilities, from new attack vectors to post-compromise tools and techniques. APT41 can quickly adapt their initial access techniques by re-compromising an environment through a different vector, or by rapidly operationalizing a fresh vulnerability. The group also demonstrates a willingness to retool and deploy capabilities through new attack vectors as opposed to holding onto them for future use. APT41 exploiting Log4J in close proximity to the USAHerds campaign showed the group’s flexibility to continue targeting U.S. state governments through both cultivated and co-opted attack vectors. Through all the new, some things remain unchanged: APT41 continues to be undeterred by the U.S. Department of Justice (DOJ) indictment in September 2020.”

Both Russia and China continue to conduct cyberattacks against private and public networks in the U.S., and there is no indication that the attacks will subside anytime soon.

Copyright © 2022 Robinson & Cole LLP. All rights reserved.

Securities Litigation: An Emerging Strategy to Hold Companies Accountable for Privacy Protections

A California federal judge rejected Zoom Video Communications, Inc.’s motion to dismiss securities fraud claims against it, and its CEO and CFO, for misrepresenting Zoom’s privacy protections. Although there have been a number of cases challenging inadequate privacy protections on consumer protection grounds in recent years, this decision shifts the spotlight to an additional front on which the battles for privacy protection may be fought:  the securities-litigation realm.

At issue were statements made by Zoom relating to the company’s privacy and encryption methods, including Zoom’s 2019 Registration Statement and Prospectus, which told investors the company offered “robust security capabilities, including end-to-end encryption.” Importantly, the prospectus was signed by Zoom’s CEO, Eric Yuan. The plaintiffs, a group of Zoom shareholders, brought suit arguing that end-to-end encryption means that only meeting participants and no other person, not even the platform provider, would be able to access the content. The complaint alleged that contrary to this statement, Zoom maintained access to the cryptographic keys that could allow it to access the unencrypted video and audio content of Zoom meetings.
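
The technical crux of the dispute is the difference between provider-mediated encryption, where the platform holds the keys and can read content, and end-to-end encryption, where only the participants can. The sketch below illustrates that distinction using the symmetric Fernet primitive from the Python cryptography package; it is a conceptual illustration only, not a description of Zoom’s actual architecture.

```python
from cryptography.fernet import Fernet

# Provider-mediated encryption: the platform generates and holds the key,
# so it can decrypt meeting content even though the wire traffic is encrypted.
provider_key = Fernet.generate_key()
ciphertext = Fernet(provider_key).encrypt(b"meeting audio chunk")
assert Fernet(provider_key).decrypt(ciphertext) == b"meeting audio chunk"  # provider can read

# End-to-end encryption: participants establish a key the provider never
# sees (in practice via a key exchange such as Diffie-Hellman); the server
# only relays ciphertext it cannot decrypt.
participants_key = Fernet.generate_key()  # known only to the endpoints
relayed = Fernet(participants_key).encrypt(b"meeting audio chunk")
# The provider, holding only provider_key, cannot decrypt `relayed`.
```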

The plaintiffs’ allegations are based on media reports of security issues relating to Zoom conferences early in the COVID-19 pandemic, as well as an April 2020 Zoom blog post in which Yuan stated that Zoom had “fallen short of the community’s – and our own – privacy and security expectations.” In his post, Yuan linked to another Zoom executive’s post, which apologized for “incorrectly suggesting” that Zoom meetings used end-to-end encryption.

In their motion to dismiss, the defendants did not dispute that the company said it used end-to-end encryption. Instead, they challenged the plaintiffs’ falsity, scienter, and loss causation allegations – and the court rejected all three challenges.

First, as to falsity, the court did not buy the defendants’ argument that “end-to-end encryption” could have different meanings because a Zoom executive expressly acknowledged that the company had “incorrectly suggest[ed] that Zoom meetings were capable of using end-to-end encryption.”  Thus, the court found that the complaint did, in fact, plead the existence of materially false and misleading statements. The court also rejected the defendants’ argument that Yuan’s understanding of the term “end-to-end encryption” changed in a relevant way from the time he made the challenged representation to his later statements that Zoom’s usage was inconsistent with “the commonly accepted definition.” The court looked to Yuan’s advanced degree in engineering, his status as a “founding engineer” at WebEx, and that he had personally “led the effort to engineer Zoom Meetings’ platform and is named on several patents that specifically concern encryption techniques.”

Lastly, the court rebuffed the defendants’ attempt at undermining loss causation, finding that the plaintiffs had pled facts to plausibly suggest a causal connection between the defendants’ allegedly fraudulent conduct and the plaintiffs’ economic loss. In particular, the court referenced the decline in Zoom’s stock price shortly after defendants’ fraud was revealed to the market via media reports and Yuan’s blog post.

That said, the court dismissed the plaintiffs’ remaining claims, as they related to data privacy statements made by Zoom or, in general, by the “defendants,” unlike the specific encryption-related statement made by Yuan. The court found that the corporate-made statements did not rise to the level of an “exceptional case where a company’s public statements were so important and so dramatically false that they would create a strong inference that at least some corporate officials knew of the falsity upon publication.” Because those statements were not coupled with sufficient allegations of individual scienter, the court granted the defendants’ motion to dismiss those statements from the complaint.

© 2022 Proskauer Rose LLP.
For more articles about business litigation, visit the NLR Litigation section.

GDPR Privacy Rules: The Other Shoe Drops

Four years after GDPR was implemented, we are seeing the pillars of the internet business destroyed. Given two new EU decisions affecting the practical management of data, all companies collecting consumer data in the EU are re-evaluating their business models and will soon be considering wholesale changes.

On one hand, the GDPR is creating the world its drafters intended – a world where personal data is less of a commodity exploited and traded by business. On the other hand, GDPR enforcement has taken the form of a wrecking ball, leading to data localization in Europe and substitution of government meddling for consumer choice.

For years we have watched the EU courts and enforcement agencies apply GDPR text to real-life cases, wondering if the legal application would be more of a nip and tuck operation on ecommerce or something more bloody and brutal. In 2022, we received our answer, and the bodies are dropping.

In January, the Austrian data protection authority decided that companies can’t use Google Analytics to study their own site’s web traffic. French regulators reached the same conclusion last week. While Google doesn’t announce statistics about product usage, website profiler BuiltWith reports that 29.3 million websites use Google Analytics, including 69.5 percent of Quantcast’s Top 10,000 sites – more than ten times the next most popular option. So vast numbers of companies operating in Europe will need to change their platform analytics provider – if the Euro-crats will allow them to use site analytics at all.

But these decisions were not based on the functionality of Google Analytics, a tool that does not even capture personally identifiable information – no names, no home or office address, no phone numbers. Instead, these decisions that will harm thousands of businesses were a result of the Schrems II decision, finding fault in the transfer of this non-identifiable data to a company based in the United States. The problem here for European decision-makers is that US law enforcement may have access to this data if courts allow them. I have written before about this illogical conclusion and won’t restate the many arguments here, other than to say that EU law enforcement behaves the same way.

The effects of this decision will be felt far beyond the huge customer base of Google Analytics. The logic of the decision effectively means that companies collecting data from EU citizens can no longer use US-based cloud services like Amazon Web Services, IBM, Google, Oracle or Microsoft. I would anticipate that huge cloud player Alibaba Cloud could suffer the same proscription if Europe’s privacy panjandrums decide that China’s privacy protection is as threatening as that of the US.

The Austrians held that all the sophisticated measures taken by Google to encrypt analytic data meant nothing, because if Google could decrypt it, so could the US government. By this logic, no US cloud provider – the world’s primary business data support network – could “safely” hold EU data. Which means that the Euro-crats are preparing to fine any EU company that uses a US cloud provider. Max Schrems saw this decision in stark terms, stating, “The bottom line is: Companies can’t use US cloud services in Europe anymore.”

This decision will ultimately support the Euro-crats’ goal of data localization as companies try to organize local storage/processing solutions to avoid fines. Readers of this blog have seen coverage of the EU’s tilt toward data localization (for example, here and here) and away from the open internet that European politicians once held as the ideal. The Euro-crats are taking serious steps toward forcing localized data processing and cutting US businesses out of the ecommerce business ecosystem. The Google Analytics decision is likely to be seen as a tipping point in years to come.

In a second major practical online privacy decision, earlier this month the Belgian Data Protection Authority ruled that the Interactive Advertising Bureau Europe’s Transparency and Consent Framework (TCF), a widely used technical standard built for publishers, advertisers, and technology vendors to obtain user consent for data processing, does not comply with the GDPR. The TCF allows users to accept or reject cookie-based advertising, relieving websites of the need to create their own expensive technical solutions, and creating a consistent experience for consumers. Now the TCF is considered per se illegal under EU privacy rules, leaving thousands of businesses to search for or design their own alternatives, and removing online choices for European residents.

The Belgian privacy authority reached this conclusion by holding that the Interactive Advertising Bureau was a “controller” of all the data managed under its proposed framework. As stated by the Center for Data Innovation, this decision implies “that any good-faith effort to implement a common data protection protocol by an umbrella organization that wants to uphold GDPR makes said organization liable for the data processing that takes place under this protocol.” No industry group will want to put itself in this position, leaving businesses to their own devices and making ecommerce data collection much less consistent and much more expensive – even if that data collection is necessary to fulfill the requests of consumers.
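
For readers unfamiliar with the TCF, its core idea is a standardized, machine-readable record of what a user accepted or rejected, which every vendor in the advertising chain consults before processing data. The simplified structure below conveys that idea; the real TC string is a compact binary format, and these field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    # Purposes the user consented to, keyed by purpose ID
    # (e.g., 1 = store and/or access information on a device).
    purpose_consents: dict[int, bool] = field(default_factory=dict)
    # Per-vendor consent, keyed by vendor ID on the framework's vendor list.
    vendor_consents: dict[int, bool] = field(default_factory=dict)

def may_process(record: ConsentRecord, vendor_id: int, purpose_id: int) -> bool:
    """A vendor may process data for a purpose only if the user consented to both."""
    return (record.purpose_consents.get(purpose_id, False)
            and record.vendor_consents.get(vendor_id, False))

signal = ConsentRecord(purpose_consents={1: True, 3: False},
                       vendor_consents={42: True})
print(may_process(signal, vendor_id=42, purpose_id=1))  # True
print(may_process(signal, vendor_id=42, purpose_id=3))  # False
```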

For years companies thought that informed consumer consent would be a way to personalize messaging and keep consumer costs low online, but the EU has thrown all online consent regimes into question. EU regulators have effectively decided that people can’t make their own decisions about allowing data to be collected. If TCF – the consent system used by 80% of the European internet and a system designed specifically to meet the demands of the GDPR – is now illegal, then, for a second time in a month, all online consumer commerce is thrown into confusion. Thousands were operating websites with TCF and Google Analytics, believing they were following the letter of the law.  That confidence has been smashed.

We are finally seeing the practical effects of the GDPR beyond its simple utility for fining US tech companies.  Those effects are leading to a closed-border internet around Europe and a costlier, less customizable internet for EU citizens. The EU is clearly harming businesses around the world and making its internet a more cramped place. I have trouble seeing the logic and benefit of these decisions, but the GDPR was written to shake the system, and privacy benefits may emerge.

Copyright © 2022 Womble Bond Dickinson (US) LLP All Rights Reserved.
For more articles about international privacy, visit the NLR Cybersecurity, Media & FCC section.