FCC’s New Notice of Inquiry – Is This Big Brother’s Origin Story?

The FCC’s recent Notice of Proposed Rulemaking and Notice of Inquiry was released on August 8, 2024. While the proposed Rule is, deservedly, getting the most press, it’s important to pay attention to the Notice of Inquiry.

The part that concerns me is the FCC’s interest in the “development and availability of technologies on either the device or network level that can: 1) detect incoming calls that are potentially fraudulent and/or AI-generated based on real-time analysis of voice call content; 2) alert consumers to the potential that such voice calls are fraudulent and/or AI-generated; and 3) potentially block future voice calls that can be identified as similar AI-generated or otherwise fraudulent voice calls based on analytics.” (emphasis mine)

The FCC also wants to know “what steps can the Commission take to encourage the development and deployment of these technologies…”

The FCC does note there are “significant privacy risks, insofar as they appear to rely on analysis and processing of the content of calls.” The FCC also wants comments on “what protections exist for non-malicious callers who have a legitimate privacy interest in not having the contents of their calls collected and processed by unknown third parties?”

So, the Federal Communications Commission wants to monitor the CONTENT of voice calls. In real-time. On your device.

That’s not a problem for anyone else?

Sure, robocalls are bad. Many of them are scams.

But, are robocalls so bad that we need real-time monitoring of voice call content?

At what point did we throw the Fourth Amendment out the window, and to prevent what? Phone calls?

The basic premise of the Fourth Amendment is “to safeguard the privacy and security of individuals against arbitrary invasions by governmental officials.” I’m not sure how we get more arbitrary than “this incoming call is a fraud” versus “this incoming call is not a fraud”.

So, maybe you consent to this real-time monitoring. Sure, ok. But, can you actually give informed consent to what would happen with this monitoring?

Let me give you three examples of “pre-recorded calls” that the real-time monitoring could overhear to determine if the “voice calls are fraudulent and/or AI-generated”:

  1. Your phone rings. It’s a prerecorded call from Planned Parenthood confirming your appointment for tomorrow.
  2. Your phone rings. It’s an artificial voice recording from your lawyer’s office telling you that your criminal trial is tomorrow.
  3. Your phone rings. It’s the local jewelry store saying your ring is repaired and ready to be picked up.

Those are basic examples, but for someone to “detect incoming calls that are potentially fraudulent and/or AI-generated based on real-time analysis of voice call content,” those calls have to be monitored in real time. And stored somewhere. Maybe on your device. Maybe by a third party in their cloud.
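To make concrete what that would require, here is a minimal TypeScript sketch, purely illustrative, of device-level, real-time scoring of call content. Every type and function name is hypothetical (nothing in the Notice of Inquiry specifies an implementation); the point is simply that the audio of every call, benign or not, has to be captured and analyzed before any "fraudulent or AI-generated" determination can be made.

```typescript
// Hypothetical sketch only: what "real-time analysis of voice call content"
// implies on the device. AudioChunk, CallRiskSignal, scoreChunk and
// monitorCall are invented names, not an FCC-specified or vendor API.

type AudioChunk = Float32Array;          // raw call audio captured on the device

interface CallRiskSignal {
  fraudScore: number;                    // 0..1, higher = more likely a scam
  aiGeneratedScore: number;              // 0..1, higher = more likely synthetic speech
}

// Placeholder classifier: a real system would run a speech/ML model here,
// either on the device or by shipping the audio to a third-party service.
function scoreChunk(chunk: AudioChunk): CallRiskSignal {
  return { fraudScore: Math.random(), aiGeneratedScore: Math.random() };
}

function monitorCall(chunks: Iterable<AudioChunk>, alertThreshold = 0.8): void {
  for (const chunk of chunks) {
    // The privacy problem lives on this line: the content of the call is
    // captured and scored regardless of whether the call turns out benign.
    const signal = scoreChunk(chunk);
    if (signal.fraudScore > alertThreshold || signal.aiGeneratedScore > alertThreshold) {
      console.log("Alert: this call may be fraudulent or AI-generated.");
    }
  }
}
```

Even in this toy version, the appointment reminder, the call from your lawyer, and the jewelry store's message all pass through the classifier, and whatever it captures has to live somewhere.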

Maybe you trust Apple with that info. But do you trust whoever comes up with the fraud-monitoring software that would harvest that data? How do you know you should trust that party?

Or you trust Google. Surely, Google wouldn’t use your personal data. Surely, they would not use your phone call history to sell ads.

And that becomes data a third-party can use. For ads. For political messaging. For profiling.

Yes, this is extremely conspiratorial. But, that doesn’t mean your data is not valuable. And where there is valuable data, there are people willing to exploit it.

Robocalls are a problem. And there are some legitimate businesses doing great things with fraud detection monitoring. But, a real-time monitoring edict from the government is not the solution. As an industry, we can be smarter on how we handle this.

Bidding Farewell, For Now: Google’s Ad Auction Class Certification Victory

A federal judge in the Northern District of California delivered a blow to a potential class action lawsuit against Google over its ad auction practices. The lawsuit, which allegedly involved tens of millions of Google account holders, claimed Google’s practices in its real-time bidding (RTB) auctions violated users’ privacy rights. But U.S. District Judge Yvonne Gonzalez Rogers declined to certify the class of consumers, pointing to deficiencies in the plaintiffs’ proposed class definition.

According to plaintiffs, Google’s RTB auctions share highly specific personal information about individuals with auction participants, including device identifiers, location data, IP addresses, and unique demographic and biometric data such as age and gender. This, the plaintiffs argued, directly contradicted Google’s promises to protect users’ data. The plaintiffs therefore proposed a class definition that included all Google account holders subject to the company’s U.S. terms of service whose personal information was allegedly sold or shared by Google in its ad auctions after June 28, 2016.
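For illustration only, the snippet below sketches the kind of record the plaintiffs describe being broadcast to auction participants. The field names are hypothetical and are not drawn from Google’s actual RTB protocol; the sketch simply shows why individually bland fields can, in combination, point to a specific person.

```typescript
// Illustrative only: a simplified shape for the category of data the
// plaintiffs allege is shared with bidders. Field names are hypothetical.

interface IllustrativeBidRequest {
  deviceId: string;                      // mobile advertising identifier
  ipAddress: string;                     // can be geolocated and cross-referenced
  location: { lat: number; lon: number };
  userAgent: string;                     // browser / operating system details
  demographics?: { age?: number; gender?: string };
  pageUrl: string;                       // the content the user is currently viewing
}

// Each auction sends a request like this to many bidders at once; even with
// no name attached, the combination of fields can single out one person.
const example: IllustrativeBidRequest = {
  deviceId: "38400000-8cf0-11bd-b23e-10b96e40000d",
  ipAddress: "203.0.113.42",
  location: { lat: 37.77, lon: -122.42 },
  userAgent: "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)",
  demographics: { age: 34, gender: "female" },
  pageUrl: "https://example.com/health/condition-article",
};

console.log(JSON.stringify(example, null, 2));
```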

But Google challenged this definition on the basis that it “embed[ded] the concept of personal information” and therefore subsumed a dispositive issue on the merits, i.e., whether Google actually shared account holders’ personal information. Google argued that the definition amounted to a fail-safe class since it would include even uninjured members. The Court agreed. As noted by Judge Gonzalez Rogers, Plaintiffs’ broad class definition included a significant number of potentially uninjured class members, thus warranting the denial of their certification motion.

Google further argued that merely striking the reference to “personal information,” as proposed by plaintiffs, would not fix this problem. While the Court acknowledged this point, it concluded that it did not yet have enough information to make that determination. Because the Court denied plaintiffs’ certification motion with leave to amend, it encouraged the parties to address these concerns in any subsequent rounds of briefing.

In addition, Judge Gonzalez Rogers noted that plaintiffs would need to demonstrate that the RTB data produced in the matter thus far was representative of the class as a whole. While the Court agreed with plaintiffs’ argument and supporting evidence that Google “share[d] so much information about named plaintiffs that its RTB data constitute[d] ‘personal information,’” Judge Gonzalez Rogers was not persuaded by their assertion that the collected RTB data would necessarily also provide common evidence for the rest of the class. The Court thus determined that plaintiffs needed to affirmatively demonstrate through additional evidence that the RTB data was representative of all putative class members, and noted that Google could not refuse to provide such data and then assert that plaintiffs had not met their burden as a result.

This decision underscores the growing complexity of litigating privacy issues in the digital age, and previews new challenges plaintiffs may face in demonstrating commonality and typicality among a proposed class in privacy litigation. The decision is also instructive for modern companies that amass various kinds of data insofar as it demonstrates that seemingly harmless pieces of that data may, in the aggregate, still be traceable to specific persons and thus qualify as personally identifying information mandating compliance with the patchwork of privacy laws throughout the U.S.

Privacy Tip #359 – GoodRx Settles with FTC for Sharing Health Information for Advertising

The Federal Trade Commission (FTC) announced on February 1, 2023, that it had settled, for $1.5 million, its first enforcement action under its Health Breach Notification Rule, brought against GoodRx Holdings, Inc., a telehealth and prescription drug discount provider.

According to the press release, the FTC alleged that GoodRx failed “to notify consumers and others of its unauthorized disclosures of consumers’ personal health information to Facebook, Google, and other companies.”

Under the proposed federal court order (the Order), GoodRx will be “prohibited from sharing user health data with applicable third parties for advertising purposes.” The complaint alleged that GoodRx told consumers it would not share their personal health information, yet monetized that information by sharing it with third parties such as Facebook and Instagram to target users with personalized health- and medication-specific ads.

The complaint also alleged that GoodRx “compiled lists of its users who had purchased particular medications such as those used to treat heart disease and blood pressure, and uploaded their email addresses, phone numbers, and mobile advertising IDs to Facebook so it could identify their profiles. GoodRx then used that information to target these users with health-related advertisements.” It further alleged that those third parties then used the information received from GoodRx for their own internal purposes to improve the effectiveness of their advertising.

The proposed Order must be approved by a federal court before it can take effect. To address the FTC’s allegations, the Order prohibits the sharing of health data for ads; requires user consent for any other sharing; requires the company to direct third parties to delete consumer health data; limits the retention of data; and requires the company to implement a mandated privacy program.

Copyright © 2023 Robinson & Cole LLP. All rights reserved.

Italian Garante Bans Google Analytics

On June 23, 2022, Italy’s data protection authority (the “Garante”) determined that a website’s use of the audience measurement tool Google Analytics is not compliant with the EU General Data Protection Regulation (“GDPR”), as the tool transfers personal data to the United States, which does not offer an adequate level of data protection. In making this determination, the Garante joins other EU data protection authorities, including the French and Austrian regulators, that also have found use of the tool to be unlawful.

The Garante determined that websites using Google Analytics collected, via cookies, personal data including user interactions with the website, pages visited, browser information, operating system, screen resolution, selected language, date and time of page views, and user device IP address. This information was transferred to the United States without the additional safeguards for personal data required under the GDPR following the Schrems II determination, and therefore faced the possibility of governmental access. In the Garante’s ruling, website operator Caffeina Media S.r.l. was ordered to bring its processing into compliance with the GDPR within 90 days, but the ruling has wider implications, as the Garante commented that it had received many “alerts and queries” relating to Google Analytics. It also stated that it called upon “all controllers to verify that the use of cookies and other tracking tools on their websites is compliant with data protection law; this applies in particular to Google Analytics and similar services.”

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

Google to Launch Google Analytics 4 in an Attempt to Address EU Privacy Concerns

On March 16, 2022, Google announced the launch of its new analytics solution, “Google Analytics 4.” Google Analytics 4 aims, among other things, to address recent developments in the EU regarding the use of analytics cookies and data transfers resulting from such use.

Background

On August 17, 2020, the non-governmental organization None of Your Business (“NOYB”) filed 101 identical complaints with 30 European Economic Area data protection authorities (“DPAs”) regarding the use of Google Analytics by various companies. The complaints focused on whether the transfer of EU personal data to Google in the U.S. through the use of cookies is permitted under the EU General Data Protection Regulation (“GDPR”), following the Schrems II judgment of the Court of Justice of the European Union. Following these complaints, the French and Austrian DPAs ruled that the transfer of EU personal data from the EU to the U.S. through the use of the Google Analytics cookie is unlawful.

Google’s New Solution

According to Google’s press release, Google Analytics 4 “is designed with privacy at its core to provide a better experience for both our customers and their users. It helps businesses meet evolving needs and user expectations, with more comprehensive and granular controls for data collection and usage.”

The most impactful change from an EU privacy standpoint is that Google Analytics 4 will no longer store IP addresses, thereby limiting the data transfers resulting from the use of Google Analytics that were under scrutiny in the EU following the Schrems II ruling. It remains to be seen whether this change will ease EU DPAs’ concerns about Google Analytics’ compliance with the GDPR.
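As a generic illustration of one way an analytics pipeline can avoid retaining full IP addresses, the sketch below truncates the final octet before storage. This is a common industry technique, not a description of how Google Analytics 4 works internally.

```typescript
// General technique, not Google's implementation: truncate an IPv4 address
// before storage so the retained value no longer identifies a single device.
function truncateIpv4(ip: string): string {
  const parts = ip.split(".");
  if (parts.length !== 4) throw new Error("not an IPv4 address");
  parts[3] = "0";                 // zero out the host octet
  return parts.join(".");
}

console.log(truncateIpv4("203.0.113.42")); // "203.0.113.0"
```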

Google’s previous analytics solution, Universal Analytics, will no longer be available beginning July 2023. In the meantime, companies are encouraged to transition to Google Analytics 4.


Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

CNIL Fines Google and Amazon 135 Million Euros for Alleged Cookie Violations

On December 10, 2020, the French Data Protection Authority (the “CNIL”) announced that it has levied fines of €60 million on Google LLC and €40 million on Google Ireland Limited under the French cookie rules for their alleged failure to (1) obtain the consent of users of the French version of Google’s search engine (google.fr) before setting advertising cookies on their devices; (2) provide users with adequate information about the use of cookies; and (3) implement a fully effective opt-out mechanism to enable users to refuse cookies. On the same date, the CNIL announced that it has levied a fine of €35 million on Amazon Europe Core under the same rules for its alleged failure to (1) obtain the consent of users of the amazon.fr site before setting advertising cookies on their devices; and (2) provide adequate information about the use of cookies.

Background

The French cookie rules are laid down in (1) Article 82 of the French Data Protection Act, which implements into French law the provisions of the EU ePrivacy Directive governing the use of cookies; and (2) soft law instruments aimed at guiding operators in implementing Article 82 of the French Data Protection Act in practice.

While the provisions of Article 82 of the French Data Protection Act have remained unchanged, the CNIL revised its soft law instruments to take into account the strengthened consent requirements of the EU General Data Protection Regulation (“GDPR”). On July 18, 2019, the CNIL published new guidelines on the use of cookies and similar technologies (the “Guidelines”). The Guidelines repealed the CNIL’s 2013 cookie recommendations that were no longer valid in light of the GDPR’s consent requirements. The Guidelines were to be complemented by recommendations on the practical modalities for obtaining users’ consent to set or read non-essential cookies and similar technologies on their devices (the “Recommendations”). On October 1, 2020, the CNIL published a revised version of its Guidelines and its final Recommendations. The CNIL announced that it would allow for a transition period of six months to comply with the new cookie law rules (i.e., until the end of March 2021), and that it would carry out inspections to enforce the new rules after that transition period. However, the CNIL made clear that it reserves the right to take action against certain infringements, especially in cases of particularly serious infringements of the right to privacy. In addition, the CNIL announced that it would continue to investigate infringements of the previous cookie law rules.

Against that background, in December 2019 and on March 6 and May 19, 2020, the CNIL carried out three remote inspections of the amazon.fr website and an onsite inspection at the premises of the French establishment of the Amazon group, Amazon Online France SAS. On March 16, 2020, the CNIL also carried out a remote inspection of the google.fr site. These inspections aimed to verify whether Google LLC, Google Ireland Limited and Amazon Europe Core complied with the French Data Protection Act, and in particular with its Article 82, when setting or reading non-essential cookies on the devices of users living in France who visited the google.fr and amazon.fr websites. In its press releases, the CNIL stressed that its sanctions against Google and Amazon punished breaches of obligations that existed before the GDPR and are not part of the obligations clarified by the new Guidelines and Recommendations.

CNIL’s Jurisdiction Over Google Ireland Limited’s and Amazon Europe Core’s Cookie Practices

Google and Amazon challenged the jurisdiction of the CNIL, arguing that (1) the cooperation mechanism of the GDPR (known as the one-stop-shop mechanism) should apply and the CNIL is not their lead supervisory authority for the purposes of that mechanism; and (2) their cookie practices do not fall within the territorial scope of the French Data Protection Act. Pursuant to Article 3 of the French Data Protection Act, the Act applies to the processing of personal data carried out in the context of the activities of an establishment of a data controller (or data processor) in France. In that respect, Amazon argued that its French establishment was not involved in the setting of cookies on the amazon.fr site and that there is no inextricable link between the activities of the French establishment and the setting of cookies by Amazon Europe Core, the Luxembourg affiliate of the Amazon group responsible for the European Amazon websites, including the French site. Google argued that the one-stop-shop mechanism should apply because its Irish affiliate, Google Ireland Limited, is the actual headquarters of the Google group in Europe and thus its main establishment for the purposes of that mechanism; accordingly, the Irish Data Protection Commissioner would be the only competent supervisory authority.

Inapplicability of the One-Stop-Shop Mechanism of the GDPR

In the initial version of its Guidelines, the CNIL made clear that it may take any corrective measures and sanctions under Article 82 of the French Data Protection Act, independently of the GDPR’s cooperation and consistency mechanisms, because the French cookie rules are based on the EU ePrivacy Directive and not the GDPR. Unsurprisingly, therefore, the CNIL rejected the arguments of Google and Amazon, considering that the EU ePrivacy Directive provides for its own mechanism, designed to implement and control its application. Accordingly, the CNIL concluded that the one-stop-shop mechanism of the GDPR does not apply to the enforcement of the provisions of the EU ePrivacy Directive, as implemented under French law.

To prevent such a situation in the future and ensure consistent interpretation and enforcement of both sets of rules, the European Data Protection Board (the “EDPB”) has called for the GDPR’s cooperation and consistency mechanism to be used for the supervision of the future cookie rules under the ePrivacy Regulation, which will replace the ePrivacy Directive. The CNIL did not wish to pre-empt this future development, and applied the relevant texts literally in its cases against Google and Amazon.

CNIL’s Territorial Jurisdiction

 The CNIL, citing the rulings of the Court of Justice of the European Union in the Google Spain and Wirtschaftsakademie cases, took the view that the use of cookies on the French site (google.fr and amazon.fr respectively) was carried out in the context of the activities of the French establishment of the companies, because that establishment promotes their respective products and services in France.

Controllership Status of Google LLC

Following his investigation, the Rapporteur of the CNIL considered that Google Ireland Limited and Google LLC are joint controllers in respect of the processing consisting in accessing or storing information on the device of Google Search users living in France.

Google argued that Google Ireland Limited is solely responsible for those operations and that Google LLC is a processor. Google stressed that (1) its Irish affiliate participates in the various decision-making bodies and in the different stages of the decision-making process implemented by the group to define the characteristics of the cookies set on Google Search; and (2) differences exist between the cookies set on European users’ devices and those set on the devices of other users (e.g., shorter retention periods, no personalized ads served to children within the meaning of the GDPR, etc.), which demonstrate the decision-making autonomy of Google Ireland Limited.

In its decision, the CNIL found that Google LLC is also represented in the bodies that adopt decisions relating to the deployment of Google products within the European Economic Area and in Switzerland, and to the processing of personal data of users living in those regions. The CNIL also found that Google LLC exercises a decisive influence in those decision-making bodies. The CNIL further found that the differences in the cookie practices were just differences in implementation, mainly intended to comply with EU law. According to the CNIL, those differences do not affect the global advertising purpose for which the cookies are used. In the CNIL’s view, this purpose is also determined by Google LLC, and the differences invoked by Google are not sufficient to demonstrate the decision-making autonomy of Google Ireland Limited. In addition, the CNIL found that Google LLC also participates in the determination of the means of processing since Google LLC designs and builds the technology of cookies set on the European users’ devices. The CNIL concluded that Google LLC and Google Ireland Limited are joint controllers.

Cookie Violations

Setting of advertising cookies without obtaining the user’s prior consent

The CNIL’s inspection of the google.fr website revealed that, when users visited that site, seven cookies were automatically set on their device. Four of these cookies were advertising cookies.

In the case of Amazon, the investigation revealed that, whenever users first visited the home page of the amazon.fr website or visited the site after they clicked on an ad published on another site, more than 40 advertising cookies were automatically set on their device.

Since advertising cookies require users’ prior consent, the CNIL concluded that the companies failed to comply with the cookie consent requirement of Article 82 of the French Data Protection Act.
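As a practical illustration of what “prior consent” requires, the sketch below (in TypeScript, with hypothetical function names, not any regulator’s or vendor’s API) gates non-essential cookies behind an affirmative opt-in, so that nothing but essential cookies is written on first page load.

```typescript
// Illustrative consent gate: non-essential cookies are only set after an
// affirmative opt-in, never on first page load. Names are hypothetical.

type CookieCategory = "essential" | "advertising" | "analytics";

const consented = new Set<CookieCategory>(["essential"]); // essential cookies need no consent

function recordConsent(category: CookieCategory): void {
  consented.add(category);
}

function setCookie(name: string, value: string, category: CookieCategory): void {
  if (!consented.has(category)) {
    // The failure the CNIL identified in both cases: advertising cookies
    // were written before the user made any choice.
    console.warn(`Refusing to set "${name}": no consent for ${category} cookies.`);
    return;
  }
  document.cookie = `${name}=${encodeURIComponent(value)}; path=/; SameSite=Lax`;
}

// First page load: only essential cookies are written.
setCookie("session", "abc123", "essential");
setCookie("ad_id", "xyz789", "advertising"); // blocked until the user opts in

// After the user accepts advertising cookies in the banner:
recordConsent("advertising");
setCookie("ad_id", "xyz789", "advertising"); // now permitted
```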

Lack of adequate information provided to users

When the CNIL inspected the google.fr website, it found that an information banner was displayed at the bottom of the page, with the following note: “Privacy reminder from Google,” and two buttons: “Remind me later” and “Access now.” According to the CNIL, the banner did not provide users with information regarding the cookies that were already set on their device. Further, that information was also not immediately provided when users clicked on the “Access now” button. Google amended its cookie practices in September 2020. However, the CNIL found that the new pop-up window does not provide clear and complete information to users under Article 82 of the French Data Protection Act. In the CNIL’s view, the new pop-up window does not inform users of all the purposes of the cookies and the means available to them to refuse cookies. In particular, the CNIL found that the information provided to users does not enable them to understand the type of content and ads that may be personalized based on their behavior (e.g., whether this is geolocation-based advertising), the precise nature of the Google services that use personalization, and whether this personalization is carried out across different services. Further, the CNIL found that the terms “options” or “See more” in the new window are not explicit enough to enable users to understand how they can refuse cookies.

When inspecting the amazon.fr website, the CNIL found that the information provided to users was neither clear, nor complete. The cookie banner displayed on the site provided a general and approximate description of the purposes of the cookies (“to offer and improve our services”). Further, according to the CNIL, the “Read more” link included in the banner did not explain to users that they could refuse cookies, nor how to do so. The CNIL found that Amazon Europe Core’s failure to provide adequate information was even more obvious in the case of users visiting the site after they had clicked on an ad published on another site. In this case, no information was provided to them.

Opt-out mechanism partially defective

In the case of Google, the CNIL also found that, when a user deactivated the ad personalization on Google Search by using the mechanism available from the “Access now” button, one of the advertising cookies was still stored on the user’s device and kept reading information destined for the server to which the cookie was attached. The CNIL concluded that the opt-out mechanism was partially defective.

CNIL’s Sanctions

In setting the fines in both cases, the CNIL took into account the seriousness of the breaches of Article 82 of the French Data Protection Act, the high number of users affected by those breaches, and the financial benefits deriving from the advertising income indirectly generated from the data collected by the advertising cookies. Interestingly, in the case of Google, the CNIL cited a decision of the French Competition Authority and referred to Google’s dominant position in the online search market.

In both cases, the CNIL noted that the companies amended their cookie practices in September 2020 and stopped automatically setting advertising cookies. However, the CNIL found that the new information provided to users is still not adequate. Accordingly, the CNIL ordered the three companies to provide adequate information to users about the use of cookies on their respective sites. The CNIL also ordered a periodic penalty payment of €100,000 (i.e., the maximum amount permitted under the French Data Protection Act) for each day of delay in complying with the injunction, commencing three months following notification of the CNIL’s decision in each case.

The CNIL addressed its decisions to the French establishment of the companies in order to enforce these decisions. The companies have four months to appeal the respective decision before France’s highest administrative court (Conseil d’Etat).

Read the CNIL’s decision against Google LLC and Google Ireland Limited and the CNIL’s decision against Amazon Europe Core (currently available only in French).

Copyright © 2020, Hunton Andrews Kurth LLP. All Rights Reserved.

 

ARTICLE BY Hunton Andrews Kurth’s Privacy and Cybersecurity

 


New U.K. Competition Unit to Focus on Facebook and Google, and Protecting News Publishers

You know your company has tremendous market power when an agency is created just to watch you.

That’s practically what has happened in the U.K., where the Competition and Markets Authority (CMA) has increased oversight of ad-driven digital platforms, namely Facebook and Google, by establishing a dedicated Digital Markets Unit (DMU). While the unit was created to enforce new laws governing any platform that dominates its market, Facebook and Google will get its full attention when it starts operating in April 2021.

The CMA says the intention of the unit is to “give consumers more choice and control over their data, help small businesses thrive, and ensure news outlets are not forced out by their bigger rivals.” While acknowledging the “huge benefits” these platforms offer businesses and society, helping people stay in touch and share creative content, and helping companies advertise their services, the CMA noted the growing concern that the concentration of market power among so few companies is hurting growth in the tech sector, reducing innovation and “potentially” having negative effects on their individual and business customers.

The CMA said a new code and the DMU will help ensure that the platforms are not forcing unfair terms on businesses, specifically mentioning “news publishers” and the goal of “helping enhance the sustainability of high-quality online journalism and news publishing.”

The unit will have the power to suspend, block and reverse the companies’ decisions, order them to comply with the law, and fine them.

The devil will be in the details of what the new code will require, and questions remain about what specific conduct the DMU will target and what actions it will take. Will it require the companies to pay license fees to publishers for presenting previews of their content? Will the unit reduce the user data the companies may access, something that would threaten their ad revenue? Will Facebook and Google have to share data with competitors? We will learn more when the code is drafted and when the DMU begins work in April.

Once again a European nation has taken the lead on the global stage to control the downsides of technologies and platforms that have transformed how people communicate and get their news, and how companies reach them to promote their products. With the U.S. deadlocked on so many policy matters, change in the U.S. appears most likely to come as the result of litigation, such as the Department of Justice’s suit against Google, the FTC’s anticipated suit against Facebook, and private antitrust actions brought by companies and individuals.

Edited by Tom Hagy for MoginRubin LLP.

© MoginRubin LLP

ARTICLE BY Mogin Rubin

California Court of Appeal Rules that Challenge to Google’s Confidentiality Agreements May Proceed Past the Pleading Stage

On September 21, 2020, in a published 2-1 opinion in Doe v. Google Inc., the California Court of Appeal (Dist. 1, Div. 4) permitted three current and former Google employees to proceed with their challenge of Google’s confidentiality agreement as unlawfully overbroad and anti-competitive under the California Private Attorneys General Act (“PAGA”) (Lab. Code § 2698 et seq.).  In doing so, the Court of Appeal reversed the trial court’s order sustaining Google’s demurrer on the basis of preemption by the National Labor Relations Act (“NLRA”) (29 U.S.C. § 151 et seq.) under San Diego Bldg. Trades Council v. Garmon, 359 U.S. 236, 244–245 (1959).  The court held that while the plaintiffs’ claims relate to conduct arguably within the scope of the NLRA, they fall within the local interest exception to Garmon preemption and may therefore go forward.  It remains to be seen whether plaintiffs will be able to sustain their challenges to Google’s confidentiality policies on the merits.  However, Doe serves as a reminder to employers to carefully craft robust confidentiality agreements, particularly in the technology sector, in anticipation of potential challenges employees may make to those agreements.

Google requires its employees to sign various confidentiality policies.  The plaintiffs brought a lawsuit challenging these policies on the basis that they restricted their speech in violation of California law.  Specifically, the plaintiffs alleged 17 claims that fell into three subcategories based on Google’s confidentiality policies: restraints of competition, whistleblowing and freedom of speech.  The claims were brought under PAGA, a broad California law that provides a private right of action to “aggrieved employees” for any violation of the California Labor Code.  PAGA claims are brought on a representative basis—with the named plaintiffs deputized as private attorneys general—to recover penalties on behalf of all so-called “aggrieved employees,” typically state-wide, with 75% of such penalties being paid to the State and 25% to the “aggrieved employees” if the violation is proven (or a court-approved settlement is reached).

In their competition causes of action, plaintiffs alleged that Google’s confidentiality rules violated Business & Professions Code sections 17200, 16600, and 16700, as well as various Labor Code provisions, by preventing employees from using or disclosing the skills, knowledge, and experience they obtained at Google for purposes of competing with Google.  The court noted that section 16600 “evinces a settled legislative policy in favor of open competition and employee mobility” that has been “instrumental in the success of California’s technology industry.”  The plaintiffs complained that Google’s policies prevented them from negotiating a new job with another employer, disclosing who else works at Google, and disclosing under what circumstances an employee may be receptive to an offer from a rival employer.

With respect to their whistleblowing claims, the plaintiffs alleged that Google’s confidentiality rules prevent employees from disclosing violations of state and federal law, either within Google to their managers or outside Google to private attorneys or government officials, in violation of Business & Professions Code section 17200 et seq. and Labor Code section 1102.5.  Similarly, plaintiffs alleged that the policies prevented employees from disclosing information about unsafe or discriminatory working conditions, a right afforded to them under the Labor Code.

In their freedom of speech claims, plaintiffs alleged that Google’s confidentiality rules prevent employees from engaging in lawful conduct during non-work hours and violate state statutes entitling employees to disclose wages, working conditions, and illegal conduct under various Labor Code provisions.  The employees argued this conduct could include writing a novel about working in Silicon Valley or even reassuring their parents that they are making enough money to pay their bills—i.e., matters seemingly untethered to a legitimate need for confidentiality.

While Google’s confidentiality rules contain a savings clause—confirming Google’s rules were not meant to prohibit protected activity—the plaintiffs argued that the clauses were meaningless and not implemented in its enforcement of its confidentiality agreements.

Google demurred to the entire complaint, and the trial court sustained the demurrer as to plaintiffs’ confidentiality claims, agreeing that the NLRA preempted such claims.

On appeal, the Court of Appeal recognized that the NLRA serves as a “comprehensive law governing labor relations [and] accordingly, ‘the NLRB has exclusive jurisdiction over disputes involving unfair labor practices,’ and ‘state jurisdiction must yield’ when state action would regulate conduct governed by the NLRA.  (Garmon, [supra, 359 U.S.] at pp. 244-245.)”  But the court cautioned that NLRA preemption under Garmon cannot be applied in a “mechanical fashion,” and its application requires scrutiny into whether the activity in question is a “merely peripheral concern” of the NLRA or whether the “regulated conduct touche[s] interests so deeply rooted” in state and local interests.

In analyzing the federal and state issues at stake, the Court of Appeal found that several of the statutes undergirding plaintiffs’ PAGA claims did not sound in principles of “mutual benefit” that are the foundation of the NLRA but protected the plaintiffs’ activities as individuals.  The court cited several examples, including Labor Code section 232’s prohibition on employers preventing employees from disclosing the amount of their wages (a statute enacted to prevent sex discrimination) and Labor Code section 232.5, which prohibits employers from preventing employees from disclosing information about the employer’s working conditions (manifesting California’s policy against restrictions on speech regarding conditions of employment).  The court likewise found that the NLRA did not protect much of the activity prohibited by the statutes that supported plaintiffs’ PAGA claims, noting that the NLRA does not prohibit rules inhibiting employees from seeking new employment and competing with Google, as plaintiffs alleged Google’s confidentiality rules did.  Nor does it protect whistleblowing activity unconnected to working conditions, such as violations of securities law, false claims laws, and other laws unrelated to terms and conditions of employment.

Nevertheless, the court held that, regardless of the diverging purposes of the NLRA and the laws that support the plaintiffs’ PAGA claims, plaintiffs’ claims fall squarely within the local interest exception to NLRA preemption.  Where an employer’s policies are arguably prohibited by the NLRA, the local interest exception applies when (1) there is a “significant state interest” in protecting the citizen from the challenged conduct, and (2) the exercise of state jurisdiction entails “little risk of interference” with the NLRB’s regulatory function.  The court found no difficulty in determining that an action under PAGA, in which the plaintiffs serve as a “proxy or agent of the state’s labor law enforcement agencies,” grows from “deeply-rooted local interests” in regulating wages, hours, and other terms of employment.  It also found that a state’s enforcement of its minimum employment standards, particularly in relation to the plaintiffs’ claims in this case, was peripheral to the NLRA’s purpose of safeguarding, first and foremost, workers’ rights to join unions and engage in collective bargaining.  Thus, the court held, there was no basis for NLRA preemption in this case.

Particularly in light of this opinion, employers who require employees to execute confidentiality agreements should be cognizant of the myriad ways those agreements can be challenged.  As Doe v. Google, Inc. shows, such challenges may come not just from individuals bringing claims in their own capacity, but from private attorneys general bringing representative claims on behalf of all California employees.  Nor can NLRA preemption be mechanically applied to preempt claims based upon such agreements.  Employers would be well-advised to review their existing confidentiality agreements and consult experienced counsel before revising or rolling out such agreements.


Copyright © 2020, Sheppard Mullin Richter & Hampton LLP.

YouTube May Be an Enormous Town Square, But It’s Still Not Subject to the First Amendment

In Prager University v. Google LLC, et al., Case No. 18-15712 (9th Cir. Feb. 26, 2020), the Court of Appeals for the Ninth Circuit dismissed a First Amendment lawsuit against YouTube late last week, holding that the video hosting giant is a private forum that is free to foster particular viewpoints – and need not be content-neutral.  The victory is a significant message to other online content hosts, aggregators and service providers that they need not feel threatened by censorship claims for selecting and curating content on their systems.

The lawsuit began in 2017, when conservative media company PragerU sued YouTube for imposing restrictions on some of PragerU’s short animated educational videos.  YouTube tagged several dozen videos for age-restrictions and disabled third party advertisements on others.  PragerU claimed the restrictions constituted censorship because they muted conservative political viewpoints.

Traditionally, the First Amendment regulates only U.S. and state government actors when it comes to censoring speech; it does not touch the actions of private parties.  The Ninth Circuit noted that these principles have not “lost their vitality in the digital age.”  While this threshold question is not new, PragerU’s approach to this legal hurdle has drawn fresh interest in how courts’ conception of state action might one day shift in order to accommodate the digital re-imagining of a marketplace of ideas.

PragerU argued that YouTube should be treated as something akin to a government where it operates a “public forum for speech.”  The theory follows that because YouTube has an overwhelming share of the video sharing and streaming space, it essentially performs a “public function.”  The Ninth Circuit affirmed that public use of private resources, even on a large scale, is simply not governmental.  Just because YouTube generally invites the public to use its private property (in this case, its platform) for a specific or designated purpose, does not mean that property should lose its private character.  Similarly, the Ninth Circuit ruled almost twenty years ago that internet service provider America Online was not a government actor even though it broadly opened its networks to the public to send and receive speech.

PragerU’s theory does enjoy some support.  As the Ninth Circuit acknowledged, a private actor is a state or government entity for First Amendment purposes when it performs a public function that is “traditionally and exclusively governmental.”  In other words, the First Amendment may well still apply to private companies tasked with operating public elections or even local governmental administrative duties (for example, the proverbial “company town”).  But the Ninth Circuit simply did not accept the argument that YouTube’s function of “hosting speech on a private platform” bore any resemblance to “an activity that only governmental entities” traditionally and exclusively perform.  After all, noted the Court, even “grocery stores and comedy clubs have opened their property for speech.”  Neither was the Court persuaded that the sheer scale of YouTube’s operation – equal to perhaps many millions of grocery stores and comedy clubs – should alter the analysis.

Had the Ninth Circuit adopted PragerU’s approach, it would have been the first major judicial endorsement of the view that a private entity can convert into a public one solely where its property is opened up to significant public discourse.  Instead, the Ninth Circuit imposed and upheld a more traditional delineation between public and private actors in First Amendment jurisprudence.


© 2020 Mitchell Silberberg & Knupp LLP


Hackers Eavesdrop and Obtain Sensitive Data of Users Through Home Smart Assistants

Although Amazon and Google respond to reports of vulnerabilities in their popular home smart assistants, Alexa and Google Home, hackers continually work to exploit any vulnerability that lets them listen to users’ every word and obtain sensitive information that can be used in future attacks.

Last week, ZDNet reported that two security researchers at Security Research Labs (SRLabs) discovered phishing and eavesdropping vectors in the developer back ends for these devices, which “provide access to functions that developers can use to customize the commands to which a smart assistant responds, and the way the assistant replies.” Hackers can abuse this same technology that Amazon and Google provide to app developers for the Alexa and Google Home products.

By putting certain commands into the back end of a normal Alexa/Google Home app, the attacker can silence the assistant for long periods of time, even though it remains active. After the silence, the attacker sends a phishing message that appears to have nothing to do with the app the user interacted with: a fake message that looks like it comes from Amazon or Google and asks for the user’s account password. Once the hacker has access to the home assistant, the hacker can keep the listening device active, eavesdrop on the user, and record the user’s conversations. Obviously, when attackers eavesdrop on every word, even when it appears the device is turned off, they can obtain information that is highly personal and can be used malevolently in the future.

The manufacturers of the home smart assistants reiterate to users that the devices will never ask for their account passwords. Cyber hygiene for home assistants is no different from cyber hygiene for email.


Copyright © 2019 Robinson & Cole LLP. All rights reserved.
