China’s TikTok Facing Privacy & Security Scrutiny from U.S. Regulators, Lawmakers

Perhaps it is a welcome reprieve for Facebook, Google and YouTube. A competing video-sharing social media company based in China has drawn the attention of U.S. privacy officials and lawmakers, with a confidential investigation under way and public hearings taking place on Capitol Hill.

Reuters broke the story that the Treasury Department’s Committee on Foreign Investment in the United States (CFIUS) is conducting a national security review of the owners of TikTok, a social media video-sharing platform that claims a young but formidable U.S. audience of 26.5 million users. The review stems from TikTok owner ByteDance Technology Co.’s $1 billion acquisition of U.S. social media app Musical.ly two years ago, a deal ByteDance did not present to the agency for review.

Meanwhile, U.S. legislators are concerned about censorship of political content, such as coverage of protests in Hong Kong, and the location and security of personal data the company stores on U.S. citizens.

Sen. Josh Hawley (R-Mo.), Chairman of the Judiciary Committee’s Subcommittee on Crime and Terrorism, invited TikTok and others to testify in Washington this week for hearings titled “How Corporations and Big Tech Leave Our Data Exposed to Criminals, China, and Other Bad Actors.”

While TikTok did not send anyone to testify, its recently appointed General Manager for North America and Australia, Vanessa Pappas, formerly with YouTube, sent a letter indicating that the company does not store data on U.S. citizens in China. In an open letter on the TikTok website, which reportedly reads much like the letter sent to the subcommittee, she explained that the company is well aware of its privacy obligations under U.S. regulations and is taking a number of measures to meet them.

For nearly eight years Pappas served at YouTube as Global Head of Creative Insights and, before that, led Audience Development. In late 2018 she became a strategic advisor to ByteDance, and in January 2019 she was named TikTok’s U.S. General Manager. In July her territory expanded to North America and Australia. Selecting someone who held such a leadership position at YouTube, a platform widely used and familiar to Americans, to lead U.S. operations may help calm the nerves of U.S. regulators. But given U.S. tensions with China over trade, security and intellectual property, TikTok and Pappas have a way to go.

Some commentators think Facebook must enjoy watching TikTok take its turn in the spotlight, especially since TikTok is a growing competitor to Facebook in the younger market. If only briefly, it may divert some of the global scrutiny of the social media giant’s privacy and data collection practices, and the many fines that have followed.

It’s clear that TikTok has Facebook’s attention. TikTok, which allows users to create and share short videos with special effects, did a great deal of advertising on Facebook. The ads clearly targeted the teen demographic and were apparently successful. CEO Mark Zuckerberg recently said in a speech that mentions of the Hong Kong protests were censored in TikTok feeds both in China and in the United States, something TikTok denied. In a case of unfortunate timing, Zuckerberg this week posted that 100 or so software developers may have improperly accessed Facebook user data.

Since TikTok is largely a short-video sharing application, it competes at some level with YouTube in the youth market. In the third quarter of 2019, 81 percent of U.S. internet users aged 15 to 25 accessed YouTube, according to figures collected by Statista. YouTube boasts more than 126 million monthly active users in the U.S., 100 million more than TikTok.

Potential counterintelligence threat ‘we cannot ignore’

Last month, U.S. Senate Minority Leader Chuck Schumer (D-NY) and Senator Tom Cotton (R-AR) asked the Acting Director of National Intelligence to conduct a national security probe of TikTok and other Chinese companies. The senators expressed concern about the collection of user data, about whether the Chinese government censors the content fed to U.S. users, as Zuckerberg suggested, and about whether foreign influence campaigns were using TikTok to advance their objectives.

“With over 110 million downloads in the U.S. alone,” the Schumer and Cotton letter read, “TikTok is a potential counterintelligence threat we cannot ignore. Given these concerns, we ask that the Intelligence Community conduct an assessment of the national security risks posed by TikTok and other China-based content platforms operating in the U.S. and brief Congress on these findings.” They must be happy with Sen. Hawley’s hearings.

In her statement, TikTok GM Pappas offered the following assurances:

  • U.S. user data is stored in the United States with backup in Singapore — not China.
  • TikTok’s U.S. team does what’s best for the U.S. market, with “the independence to do so.”
  • The company is committed to operating with greater transparency.
  • California-based employees lead TikTok’s moderation efforts for the U.S.
  • TikTok uses machine learning tools and human content reviews.
  • Moderators review content for adherence to U.S. laws.
  • TikTok has a dedicated team focused on cybersecurity and privacy policies.
  • The company conducts internal and external reviews of its security practices.
  • TikTok is forming a committee of users to serve them responsibly.
  • The company has banned political advertising.

Both TikTok and YouTube have been stung for failing to follow the rules when it comes to the youth and children’s market. In February, TikTok agreed to pay $5.7 million to settle an FTC case alleging that, through the Musical.ly app, the company illegally collected personal information from children. At the time it was the largest civil penalty ever obtained by the FTC in a case brought under the Children’s Online Privacy Protection Act (COPPA). The law requires that websites and online services directed at children obtain parental consent before collecting personal information from kids under 13. That record was smashed in September, though, when Google and its YouTube subsidiary agreed to pay $170 million to settle allegations brought by the FTC and the New York Attorney General that YouTube was also collecting personal information from children without parental consent. The settlement required Google and YouTube to pay $136 million to the FTC and $34 million to New York.

Quality degrades when near-monopolies exist

What I am watching for here is whether (and how) TikTok and other social media platforms respond to these scandals by competing on privacy.

For example, in its early years Facebook lured users with the promise of privacy. It was eventually successful in defeating competitors that offered little in the way of privacy, such as MySpace, which fell from a high of 75.9 million users to 8 million today. But as Facebook developed a dominant position in social media through acquisition of competitors like Instagram or by amassing data, the quality of its privacy protections degraded. This is to be expected where near-monopolies exist and anticompetitive mergers are allowed to close.

Now perhaps the pendulum is swinging back. As privacy regulation and publicity around privacy transgressions increase, competitive forces may come back into play, forcing social media platforms to compete on the quality of their consumer privacy protections once again. That would be a great development for consumers.

 


© MoginRubin LLP

ARTICLE BY Jennifer M. Oliver of MoginRubin.
Edited by Tom Hagy for MoginRubin LLP.
For more on social media app privacy concerns, see the National Law Review Communications, Media & Internet law page.

California DMV Exposes 3,200 SSNs of Drivers

The California Department of Motor Vehicles (DMV) announced on November 5, 2019, that it allowed the Social Security numbers (SSNs) of 3,200 California drivers to be accessed by unauthorized individuals in other state and federal agencies, including the Internal Revenue Service, the Small Business Administration and the district attorneys’ offices in Santa Clara and San Diego counties.

According to a news report, the access included the full Social Security numbers of individuals who were being investigated for criminal activity or compliance with tax laws. Apparently, the access also allowed investigators to see which drivers didn’t have Social Security numbers, which has given immigration advocates concern.

The DMV stated that the incident was not a hack, but rather, an error, and the unauthorized access was terminated when it was discovered on August 2, 2019. Nonetheless, the DMV notified the 3,200 drivers of the incident and the exposure of their personal information. The DMV issued a statement that it has “taken additional steps to correct this error, protect this information and reaffirm our serious commitment to protect the privacy rights of all license holders.”

 

Copyright © 2019 Robinson & Cole LLP. All rights reserved.
For more on data security, see the National Law Review Communications, Media & Internet law page.

Can You Spy on Your Employees’ Private Facebook Group?

For years, companies have encountered issues stemming from employee communications on social media platforms. When such communications take place in private groups not accessible to anyone except approved members, though, it can be difficult for an employer to know what actually is being said. But can a company try to get intel on what’s being communicated in such forums? A recent National Labor Relations Board (NLRB) case shows that, depending on the circumstances, such actions may violate labor law.

At issue in the case was a company that was facing unionizing efforts by its employees. Some employees of the company were members of a private Facebook group and posted comments in the group about potentially forming a union. Management became aware of this activity and repeatedly asked one of its employees who had access to the group to provide management with reports about the comments. The NLRB found this conduct to be unlawful and held: “It is well-settled that an employer commits unlawful surveillance if it acts in a way that is out of the ordinary in order to observe union activity.”

This case provides another reminder that specific rules come into play when employees are considering forming a union. Generally, companies cannot:

  • Threaten employees based on their union activity
  • Interrogate workers about their union activity, sentiments, etc.
  • Make promises to employees to induce them to forgo joining a union
  • Engage in surveillance (i.e., spying) on workers’ union organizing efforts

The employer’s “spying” in this instance ran afoul of these parameters, which can have costly consequences, such as overturned discipline and backpay awards.


© 2019 BARNES & THORNBURG LLP

For more on employees’ social media use, see the National Law Review Labor & Employment law page.

Hackers Eavesdrop and Obtain Sensitive Data of Users Through Home Smart Assistants

Although Amazon and Google respond to reports of vulnerabilities in their popular home smart assistants, Alexa and Google Home, hackers continue to work hard to exploit any remaining vulnerabilities, hoping to listen to users’ every word and obtain sensitive information that can be used in future attacks.

Last week, ZDNet reported that two security researchers at Security Research Labs (SRLabs) discovered phishing and eavesdropping vectors that “provide access to functions that developers can use to customize the commands to which a smart assistant responds, and the way the assistant replies.” In other words, hackers can abuse the same technology that Amazon and Google provide to app developers for the Alexa and Google Home products.

By placing certain commands in the back end of an otherwise normal Alexa or Google Home app, the attacker can silence the assistant for long periods of time while keeping the session active. After the silence, the attacker sends a phishing message that the user believes has nothing to do with the app they were interacting with: a fake message, made to look like it comes from Amazon or Google, asking for the user’s Amazon or Google password. Once the hacker has access to the home assistant, the hacker can eavesdrop on the user, keep the listening device active and record the user’s conversations. Obviously, when attackers can hear every word, even when it appears the device is turned off, they can obtain information that is highly personal and can be used malevolently in the future.
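To make the reported sequence concrete, below is a minimal, purely illustrative Python sketch of the flow the researchers described. It does not use the real Alexa Skills Kit or Google Actions SDKs; the response fields, the long “silent” pause and the fake update prompt are assumptions included only to show the shape of the attack.

```python
# Purely illustrative sketch of the phishing flow described by SRLabs.
# This is NOT real Alexa/Google Actions code; the field names and the
# pause mechanism are simplified assumptions for demonstration only.

SILENT_PAUSE = "<long inaudible pause>"  # stand-in for the silence that makes
                                         # the user assume the session has ended

def malicious_app_response(turn: int) -> dict:
    """Return the (fake) voice response for each turn of the session."""
    if turn == 1:
        # Behave like a normal app, then keep the session open silently.
        return {"speech": "Goodbye. " + SILENT_PAUSE, "end_session": False}
    # After the silence, impersonate the platform vendor and phish.
    return {
        "speech": ("An important security update is available for your "
                   "device. Please say your account password to install it."),
        "end_session": False,  # keeps the microphone open for the answer
    }

# Simulate the two turns of the attack.
for turn in (1, 2):
    print(malicious_app_response(turn)["speech"])
```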

The manufacturers of home smart assistants remind users that the devices will never ask for an account password. Cyber hygiene for home assistants is no different from cyber hygiene for email: an unsolicited request for credentials should be treated as a scam.


Copyright © 2019 Robinson & Cole LLP. All rights reserved.

For more hacking risk mitigation, see the National Law Review Communications, Media & Internet law page.

CCPA Alert: California Attorney General Releases Draft Regulations

On October 10, 2019, the California Attorney General released the highly anticipated draft regulations for the California Consumer Privacy Act (CCPA). The regulations focus heavily on three main areas: 1) notices to consumers, 2) consumer requests and 3) verification requirements. While the regulations focus heavily on these three topics, they also discuss special rules for minors, non-discrimination standards and other aspects of the CCPA. Despite high hopes, the regulations do not provide the clarity many companies desired. Instead, the regulations layer on new requirements while sprinkling in further ambiguities.

The most surprising new requirements proposed in the regulations include:

  • New disclosure requirements for businesses that collect personal information from more than 4,000,000 consumers
  • Businesses must acknowledge the receipt of consumer requests within 10 days
  • Businesses must honor “Do Not Sell” requests within 15 days and inform any third parties who received the personal information of the request within 90 days
  • Businesses must obtain consumer consent to use personal information for a use not disclosed at the time of collection

The following are additional highlights from each of the three main areas:

1. Notices to consumers

The regulations discuss four types of notices to consumers: notice at the time of collection, notice of the right to opt-out of the sale of personal information, notice of financial incentives and a privacy policy. All required notices must be:

  • Easy to read in plain, straightforward language
  • In a format that draws the consumer’s attention to the notice
  • Accessible to those with disabilities
  • Available in all languages in which the company regularly conducts business

The regulations make clear that it is necessary, but not sufficient, to update your privacy policy to be compliant with CCPA. You must also provide notice to consumers at the time of data collection, which must be visible and accessible before any personal information is collected. The regulations make clear that no personal information may be collected without proper notice. You may use your privacy policy as the notice at the time of collection, but you must link to a specific section of your privacy policy that provides the statutorily required notice.

The regulations specifically provide that for offline collection, businesses could provide a paper version of the notice or post prominent signage. Similar to the General Data Protection Regulation (GDPR), a company may only use personal information for the purposes identified at the time of collection. Otherwise, the business must obtain explicit consent to use the personal information for a new purpose.

In addition to the privacy policy requirements in the statute itself, the regulations require more privacy policy disclosures. For example, the business must include instructions on how to verify a consumer request and how to exercise consumer rights through an agent. Further, the privacy policy must identify the following information for each category of personal information collected: the sources of the information, how the information is used and the categories of third parties to whom the information is disclosed. For businesses that collect personal information of 4,000,000 or more consumers, the regulations require additional disclosures related to the number of consumer requests and the average response times. Given the additional nuances of the disclosure requirements, we recommend working with counsel to develop your privacy policy.
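For businesses mapping out these disclosures, it can help to keep the per-category information in a structured form and generate the relevant privacy policy section from it. The sketch below is a hypothetical illustration only; the category names, fields and entries are assumptions, not language from the draft regulations.

```python
# Hypothetical data structure for the per-category disclosures the draft
# regulations describe: sources, uses, and categories of third parties.
# Category names and entries are illustrative only.
DISCLOSURES = {
    "Identifiers (e.g., name, email address)": {
        "sources": ["directly from the consumer", "website forms"],
        "uses": ["account creation", "customer support"],
        "third_parties": ["service providers (hosting, analytics)"],
    },
    "Commercial information (e.g., purchase history)": {
        "sources": ["transaction records"],
        "uses": ["order fulfillment", "fraud prevention"],
        "third_parties": ["payment processors"],
    },
}

def render_disclosure_section(disclosures: dict) -> str:
    """Render a plain-text privacy policy section from the structured data."""
    lines = []
    for category, detail in disclosures.items():
        lines.append(f"Category: {category}")
        lines.append(f"  Sources: {', '.join(detail['sources'])}")
        lines.append(f"  Uses: {', '.join(detail['uses'])}")
        lines.append(f"  Third parties: {', '.join(detail['third_parties'])}")
    return "\n".join(lines)

print(render_disclosure_section(DISCLOSURES))
```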

If a business provides financial incentives to a consumer for allowing the sale of their personal information, then the business must provide a notice of the financial incentive. The notice must include a description of the incentive, its material terms, instructions on how to opt-in to the incentive, how to withdraw from the incentive and an explanation of why the incentive is permitted by CCPA.

Finally, the regulations state that service providers that collect personal information on behalf of a business may not use that personal information for their own purposes. Instead, they are limited to performing only their obligations under the contract between the business and service provider. The contract between the parties must also include the provisions described in CCPA to ensure that the relationship is a service provider/business relationship, and not a sale of personal information between a business and third party.

2. Consumer requests

Businesses must provide at least two methods for consumers to submit requests (most commonly an online form and a toll-free number), and one of the methods must reflect the manner in which the business primarily interacts with the consumer. In addition, businesses that substantially interact with consumers offline must provide an offline method for consumers to exercise their right to opt-out, such as providing a paper form. The regulations specifically call out that in-person retailers may therefore need three methods: a paper form, an online form and a toll-free number.

The regulations do limit some consumer request rights by prohibiting the disclosure of Social Security numbers, driver’s license numbers, financial account numbers, medical-related identification numbers, passwords, and security questions and answers. Presumably, this is for two reasons: the individual should already know this information and most of these types of information are subject to exemptions from CCPA.

One of the most notable clarifications related to requests is that the 45-day timeline to respond to a consumer request includes any time required to verify the request. Additionally, the regulations introduce a new timeline requirement for consumer requests. Specifically, businesses must confirm receipt of a request within 10 days. Another new requirement is that businesses must respond to opt-out requests within 15 days and must inform all third parties to stop selling the consumer’s information within 90 days. Further, the regulations require that businesses maintain request records logs for 24 months.
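Because several of these clocks run at once (10 days to confirm receipt, 45 days to respond inclusive of verification, 15 days to honor an opt-out, 90 days to notify third parties, and 24 months of request records), some compliance teams find it useful to compute the dates mechanically. The sketch below simply applies the day counts summarized above to a request date; whether a given period runs in calendar days, and how extensions work, should be confirmed against the regulation text.

```python
from datetime import date, timedelta

# Day counts taken from the draft regulations as summarized above; whether
# these run in calendar days, and how extensions apply, should be confirmed
# against the regulation text itself.
ACKNOWLEDGE_DAYS = 10            # confirm receipt of a consumer request
RESPOND_DAYS = 45                # substantive response, including verification time
OPT_OUT_DAYS = 15                # honor a "Do Not Sell" request
NOTIFY_THIRD_PARTIES_DAYS = 90   # tell third parties to stop selling
RECORD_RETENTION_MONTHS = 24     # keep request logs

def ccpa_deadlines(received: date) -> dict:
    """Compute illustrative CCPA deadlines from the date a request arrives."""
    return {
        "acknowledge_by": received + timedelta(days=ACKNOWLEDGE_DAYS),
        "respond_by": received + timedelta(days=RESPOND_DAYS),
        "honor_opt_out_by": received + timedelta(days=OPT_OUT_DAYS),
        "notify_third_parties_by": received + timedelta(days=NOTIFY_THIRD_PARTIES_DAYS),
        # Rough approximation of 24 months for record retention.
        "retain_records_until": received + timedelta(days=RECORD_RETENTION_MONTHS * 30),
    }

for name, deadline in ccpa_deadlines(date(2020, 1, 15)).items():
    print(f"{name}: {deadline.isoformat()}")
```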

3. Verification requirements

The most helpful guidance in the regulations relates to verification requests. The regulations provide that a more rigorous verification process should apply to more sensitive information. That is, businesses should not release sensitive information without being highly certain about the identity of the individual requesting the information. Businesses should, where possible, avoid collecting new personal information during the verification process and should instead rely on confirming information already in the business’ possession. Verification can be through a password-protected account provided that consumers re-authenticate themselves. For websites that provision accounts to users, requests must be made through that account. Matching two data points provided by the consumer with data points maintained by the business constitutes verification to a reasonable degree of certainty, and the matching of three data points constitutes a high degree of certainty.
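As a rough illustration of the two-point/three-point standard described above, the sketch below counts how many consumer-supplied data points match what the business already holds and maps that count to the certainty levels in the draft regulations. The field names and the exact-match comparison are simplifying assumptions; a real verification program would normalize the data and calibrate the process to the sensitivity of the information requested.

```python
# Illustrative only: maps the number of matching data points to the certainty
# levels described in the draft regulations. Field names and exact-string
# matching are simplifying assumptions.
def verification_level(provided: dict, on_file: dict) -> str:
    matches = sum(
        1
        for field, value in provided.items()
        if field in on_file and on_file[field] == value
    )
    if matches >= 3:
        return "high degree of certainty"        # e.g., more sensitive requests
    if matches == 2:
        return "reasonable degree of certainty"  # e.g., categories of data collected
    return "not verified"

on_file = {"email": "pat@example.com", "zip": "94105", "last4_phone": "1234"}
request = {"email": "pat@example.com", "zip": "94105"}

print(verification_level(request, on_file))  # -> "reasonable degree of certainty"
```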

The regulations also provide prescriptive steps of what to do in cases where an identity cannot be verified. For example, if a business cannot verify the identity of a person making a request for access, then the business may proceed as if the consumer requested disclosure of only the categories of personal information, as opposed to the content of such personal information. If a business cannot verify a request for deletion, then the business should treat the request as one to opt-out of the sale of personal information.

Next steps

These draft regulations add new wrinkles, and some clarity, to what is required for CCPA compliance. As we move closer to January 1, 2020, companies should continue to focus on preparing compliant disclosures and notices, finalizing their privacy policies and establishing procedures to handle consumer requests. Despite the need to press forward on compliance, the regulations are open to initial public comment until December 6, 2019, with a promise to finalize the regulations in the spring of 2020. We expect further clarity as these draft regulations go through the comment process and privacy professionals, attorneys, businesses and other stakeholders weigh in on their clarity and reasonableness.


Copyright © 2019 Godfrey & Kahn S.C.

For more on CCPA implementation, see the National Law Review Consumer Protection law page.

LinkedIn Petitions Circuit Court for En Banc Review of hiQ Scraping Decision

On October 11, 2019, LinkedIn Corp. (“LinkedIn”) filed a petition for rehearing en banc of the Ninth Circuit’s blockbuster decision in hiQ Labs, Inc. v. LinkedIn Corp., No. 17-16783 (9th Cir. Sept. 9, 2019). The crucial question before the original panel concerned the scope of Computer Fraud and Abuse Act (CFAA) liability for unwanted web scraping of publicly available social media profile data: once hiQ Labs, Inc. (“hiQ”), a data analytics firm, received LinkedIn’s cease-and-desist letter demanding that it stop scraping public profiles, was any further scraping of such data “without authorization” within the meaning of the CFAA? The appeals court affirmed the lower court’s order granting a preliminary injunction barring LinkedIn from blocking hiQ from accessing and scraping publicly available LinkedIn member profiles to create competing business analytic products. Most notably, the Ninth Circuit held that hiQ had shown a likelihood of success on the merits of its claim that when a computer network generally permits public access to its data, a user’s accessing that publicly available data will not constitute access “without authorization” under the CFAA.

In its petition for en banc rehearing, LinkedIn advanced several arguments, including:

  • The hiQ decision conflicts with the Ninth Circuit’s Power Ventures precedent, in which the appeals court held that a commercial entity that accesses a website after permission has been explicitly revoked can, under certain circumstances, be civilly liable under the CFAA. Power Ventures involved password-protected Facebook user data (data that users had initially given a data aggregator permission to access). LinkedIn argued that the hiQ court’s logic in distinguishing Power Ventures was flawed and that the manner in which a user classifies his or her profile data should have no bearing on a website owner’s right to protect its physical servers from trespass.

“Power Ventures thus holds that computer owners can deny authorization to access their physical servers within the meaning of the CFAA, even when users have authorized access to data stored on the owner’s servers. […] Nothing about a data owner’s decision to place her data on a website changes LinkedIn’s independent right to regulate who can access its website servers.”

  • The language of the CFAA should not be read to allow for “authorization” to be assumed (and unable to be revoked) for publicly available website data, either under Ninth Circuit precedent or under the CFAA-related case law of other circuits.

“Nothing in the CFAA’s text or the definition of ‘authorization’ that the panel employed (‘[o]fficial permission to do something; sanction or warrant’) suggests that enabling websites to be publicly viewable is not ‘authorization’ that can be revoked.”

  • The privacy interests enunciated by LinkedIn on behalf of its users are “of exceptional importance,” and the court discounted the fact that hiQ is “unaccountable” and has no contractual relationship with LinkedIn users, such that hiQ could conceivably share the scraped data or aggregate it with other data.

“Instead of recognizing that LinkedIn members share their information on LinkedIn with the expectation that it will be viewed by a particular audience (human beings) in a particular way (by visiting their pages)—and that it will be subject to LinkedIn’s sophisticated technical measures designed to block automated requests—the panel assumed that LinkedIn members expect that their data will be ‘accessed by others, including for commercial purposes,’ even purposes antithetical to their privacy setting selections. That conclusion is fundamentally wrong.”

Both website operators and open internet advocates will be watching closely to see if the full Ninth Circuit decides to rehear the appeal, given the importance of the CFAA issue and the prevalence of data scraping of publicly available website content. We will keep a close watch on developments.


© 2019 Proskauer Rose LLP.

Whatever Happened to that Big Ringless Voicemail Decision We Were All Expecting? It Was a Nothing Burger—For Now

You’ll recall a few weeks back TCPAWorld.com featured analysis of efforts by VoApps—makers of the DirectDrop ringless voicemail platform—to stem the tide of negative TCPA rulings addressing ringless voicemail technologies. VoApps founder David King even joined the Unprecedented podcast to discuss his submission of a lengthy declaration to the court addressing how the technology works and why it is not covered by the TCPA.

Well, a few days ago the Court issued its ruling on the pending motion—a summary judgment effort by the Plaintiff—and I must say, it was rather anti-climactic. Indeed, the court punted entirely on the key issue.

In Saunders v. Dyck O’Neal, Case No. 1:17-CV-335, 2019 U.S. Dist. LEXIS 177606 (W.D. Mich. Oct. 4, 2019) the court issued its highly-anticipated ruling on the Plaintiff’s bid to earn judgment following the Court’s earlier ruling that a ringless voicemail is a call under the TCPA. It was in response to this motion that VoApps submitted a mountain of evidence that although a ringless voicemail may be a “call,” it is not a call to a number assigned to a cellular service—and so such calls are not actionable under the TCPA’s infamous Section 227(b).

Rather than answer the question directly, the Court made mincemeat of the Federal Rules of Civil Procedure and treated the summary judgment motion as if it were some sort of motion to confirm the Court’s earlier ruling. This is weird because: i) no it wasn’t; and ii) there’s no such thing. As the Court put it: “Admittedly, Saunders moved for summary judgment, but her motion is in fact limited to a request for clarification of the impact of the Court’s prior ruling: Was the Court’s prior ruling that DONI’s messaging technology falls within the purview of the TCPA a ruling as a matter of law that binds the parties going forward? The answer is clearly yes.”

Great. So we now know what we already all knew—the Saunders court holds that a ringless voicemail is a call. Got it. As to the key issue of whether the calls were made to a landline or to a cell phone, however, the Court finds: “These issues were unnecessary to Saunders’s motion, as she has not [actually] moved for summary judgment on her claim.”

So there you go. Plaintiff’s motion for summary judgment was not actually a motion for summary judgment after all. So all that work by VoApps was for nothing. But not really. Obviously this fight is not yet over. The Court declined to enter judgment in favor of the Plaintiff meaning that further work—and perhaps a trial—lies ahead for the good folks over at VoApps. We’ll keep you posted.

 



© Copyright 2019 Squire Patton Boggs (US) LLP

For more on voicemail & phone regulation, see the National Law Review Communications, Media & Internet law page.

Resist the Urge to Access: the Impact of the Stored Communications Act on Employer Self-Help Tactics

As an employer or manager, have you ever collected a resigning employee’s employer-owned laptop or cellphone and discovered that the employee left a personal email account automatically logged in? Did you have the urge to look at what the employee was doing and who the employee was talking to right before resigning? Perhaps to see if he or she was talking to your competitors or customers? If so, you should resist that urge.

The federal Stored Communications Act, 18 U.S.C. § 2701 et seq., is a criminal statute that makes it an offense to “intentionally access[ ] without authorization a facility through which an electronic communication service is provided[ ] and thereby obtain[ ] . . . access to a[n] . . . electronic communication while it is in electronic storage . . . .” It also creates a civil cause of action for victims of such offenses, remedied by (i) actual damages of at least $1,000; (ii) attorneys’ fees and court costs; and, potentially, (iii) punitive damages if the access was willful or intentional.

So how does this criminal statute apply in a situation in which an employee uses a personal email account on an employer-owned electronic device—especially if an employment policy confirms there is no expectation of privacy on the employer’s computer systems and networks? The answer is in the technology itself.

Many courts find that the “facility” referenced in the statute is the server on which the email account resides—not the company’s computer or other electronic device. In one 2013 federal case, a former employee left her personal Gmail account automatically logged in when she returned her company-owned smartphone. Her former supervisor allegedly used that smartphone to access over 48,000 emails on the former employee’s personal Gmail account. The former employee later sued her former supervisor and her former employer under the Stored Communications Act. The defendants moved to dismiss the claim, arguing, among other things, that a smartphone was not a “facility” under the statute.

While agreeing with that argument in principle, the court concluded that it was, in fact, Gmail’s server that was the “facility” for purposes of Stored Communications Act claims. The court also rejected the defendants’ arguments (i) that because it was a company-owned smartphone, the employee had in fact authorized the review, and (ii) that the former employee was responsible for any alleged loss of privacy, because she left the door open to the employer reviewing the Gmail account.

Similarly, in a 2017 federal case, a former employee sued her ex-employer for allegedly using her returned cell phone to access her Gmail account on at least 40 occasions. To assist in the prosecution of a restrictive covenant claim against the former employee, the former employer allegedly arranged to forward several of those emails to the employer’s counsel, including certain allegedly privileged emails between the former employee and her lawyer. The court denied the former employer’s motion to dismiss the claim based on those allegations.

Interestingly, some courts, including the courts in both of the above-referenced cases, draw a line on liability under the Stored Communications Act based on whether the emails that were accessed had already been opened at the time of access. This line of reasoning is premised on a finding that opened-but-undeleted emails are not in “storage for backup purposes” under the Stored Communications Act. But this distinction is not universal.

In another 2013 federal case, for example, an individual sued his business partner under the Stored Communications Act after the defendant logged on to the other’s Yahoo account using his password. A jury trial resulted in a verdict for the plaintiff on that claim, and the defendant filed a motion for judgment as a matter of law. The defendant argued that she only read emails that had already been opened and that they were therefore not in “electronic storage” for “purposes of backup protection.” The court disagreed, stating that “regardless of the number of times plaintiff or defendant viewed plaintiff’s email (including by downloading it onto a web browser), the Yahoo server continued to store copies of those same emails that previously had been transmitted to plaintiff’s web browser and again to defendant’s web browser.” So again, the court read the Stored Communications Act broadly, stating that “the clear intent of the SCA was to protect a form of communication in which the citizenry clearly has a strong reasonable expectation of privacy.”

Based on the broad reading of the Stored Communications Act in which many courts across the country engage, employers and managers are well advised to exercise caution before reviewing an employee’s personal communications that may be accessible on a company electronic device. Even policies informing employees not to expect privacy on company computer systems and networks may not save the employer or manager from liability under the statute. So seek legal counsel if this opportunity presents itself upon an employee’s separation from the company. And resist the urge to access before doing so.


© 2019 Foley & Lardner LLP
For more on the Stored Communications Act, see the National Law Review Communications, Media & Internet law page.

Can We Really Forget?

I expected this post would turn out differently.

I had intended to commend the European Court of Justice for placing sensible limits on the extraterritorial enforcement of the EU’s Right to be Forgotten. They did, albeit in a limited way,[1] and it was a good decision. There.  I did it. In 154 words.

Now for the remaining 1400 or so words.

But reading the decision pushes me back into frustration at the entire Right to be Forgotten regime and its illogical and destructive basis. That a court recognizes the obvious point that the EU cannot (generally) force foreign companies to violate the laws of their own countries on internet sites intended for use within those countries (and NOT the EU) does not come close to offsetting the logical, practical and societal problems with the way the EU perceives and enforces the Right to be Forgotten.

As a lawyer, with all decisions grounded in the U.S. Constitution, I am comfortable with the First Amendment’s protection of Freedom of Speech – that nearly any truthful utterance or publication is inviolate, and that the foundation of our political and social system depends on open exposure of facts to sunlight. Intentionally shoving those true facts into the dark is wrong in our system and openness will be protected by U.S. courts.

Believe it or not, the European Union has such a concept at the core of its foundation too. Article 10 of the European Convention on Human Rights states that:

“Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.”

So we have the same values, right? In both jurisdictions the right to impart information can be exercised without interference by public authority. Not so fast. EU law contains a litany of restrictions on this right, including a limitation of your right to free speech in order to protect the reputation of others.

This seems like a complete evisceration of a right to open communication if a court can force obfuscation of facts just to protect someone’s reputation.  Does this person deserve a bad reputation? Has he or she committed a crime, failed to pay his or her debts, harmed animals or children, stalked an ex-lover, or violated an oath of office, marriage, priesthood or citizenship? It doesn’t much matter in the EU. The right of that person to hide his/her bad or dangerous behavior outweighs both the allegedly fundamental right to freedom to impart true information AND the public’s right to protect itself from someone who has proven himself/herself to be a risk to the community.

So how does this tension play out over the internet? In the EU, it is law that Google and other search engines must remove links to true facts about any wrongdoer who feels his/her reputation may be tarnished by the discovery of the truth about that person’s behavior. Get into a bar fight? Don’t worry, the EU will put the entire force of law behind your request to wipe that off your record. Stiff your painting contractors for tens of thousands of Euros despite their good performance? Don’t worry, the EU will make sure nobody can find out. Get fired, removed from office or defrocked for dishonesty? Don’t worry, the EU has your back.

And that undercutting of speech rights has now been codified in Article 17 of Regulation 2016/679, the Right to be Forgotten.

And how does this new decision affect the rule? In the past couple weeks, the Grand Chamber of the EU Court of Justice issued an opinion limiting the extraterritorial reach of the Right to be Forgotten. (Google vs CNIL, Case C‑507/17) The decision confirms that search engines must remove links to certain embarrassing instances of true reporting, but must only do so on the versions of the search engine that are intentionally servicing the EU, and not necessarily in versions of the search engines for non-EU jurisdictions.

The problems with appointing Google to be an extrajudicial magistrate enforcing vague EU-granted rights under a highly ambiguous set of standards, and then fining it when you don’t like a decision you forced it to make, deserve a separate post.

Why did we even need this decision? Because the French data privacy protection agency, known as CNIL, fined Google for not removing presumably true data from non-EU search results concerning, as Reuters described, “a satirical photomontage of a female politician, an article referring to someone as a public relations officer of the Church of Scientology, the placing under investigation of a male politician and the conviction of someone for sexual assaults against minors.” So, to be clear, while the official French agency believes it should enforce a right for people to obscure from the whole world that they have been convicted of sexual assault against children, the Grand Chamber of the European Court of Justice believes that people convicted of child sexual assault should be protected in their right to obscure these facts only from people in Europe. This is progress.

Of course, in the U.S., politicians and other public figures, under investigation or subject to satire or people convicted of sexual assault against children do not have a right to protect their reputations by forcing Google to remove links to public records or stories in news outlets. We believe both that society is better when facts are allowed to be reported and disseminated and that society is protected by reporting on formal allegations against public figures or criminal convictions of private ones.

I am glad that the EU Court of Justice is willing to restrict rules to remain within its jurisdiction where they openly conflict with the basic laws of other jurisdictions. The Court sensibly held,

“The idea of worldwide de-referencing may seem appealing on the ground that it is radical, clear, simple and effective. Nonetheless, I do not find that solution convincing, because it takes into account only one side of the coin, namely the protection of a private person’s data.[2] . . . [T]he operator of a search engine is not required, when granting a request for de-referencing, to operate that de-referencing on all the domain names of its search engine in such a way that the links at issue no longer appear, regardless of the place from which the search on the basis of the requester’s name is carried out.”

Any other decision would be wildly overreaching. Believe me, every country in the EU would be howling in protest if the US decided that its views of personal privacy must be enforced in Europe by European companies due to operations aimed only to affect Europe. It should work both ways. So this was a well-reasoned limitation.

But I just cannot bring myself to be complimentary of a regime that I find so repugnant – where nearly any bad action can be swept under the rug in the name of protecting a person’s reputation.

As I have written in books and articles in the past, government protection of personal privacy is crucial for the clean and correct operation of a democracy.  However, privacy is also the obvious refuge of scoundrels – people prefer to keep the bad things they do private. Who wouldn’t? But one can go overboard protecting this right, and it feels like the EU has institutionalized its leap overboard.

I would rather err on the side of sunshine, giving up some privacy in the service of revealing the truth, than err on the side of darkness, allowing bad deeds to be obscured so that those who commit them can maintain their reputations.  Clearly, the EU doesn’t agree with me.


[1] The Court, in this case, wrote, “The issues at stake therefore do not require that the provisions of Directive 95/46 be applied outside the territory of the European Union. That does not mean, however, that EU law can never require a search engine such as Google to take action at worldwide level. I do not exclude the possibility that there may be situations in which the interest of the European Union requires the application of the provisions of Directive 95/46 beyond the territory of the European Union; but in a situation such as that of the present case, there is no reason to apply the provisions of Directive 95/46 in such a way.”

[2] EU Court of Justice case C-136/17, which states, “While the data subject’s rights [to privacy] override, as a general rule, the freedom of information of internet users, that balance may, however, depend, in specific cases, on the nature of the information in question and its sensitivity for the data subject’s private life and on the interest of the public in having that information. . . .”

 


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more EU’s GDPR enforcement, see the National Law Review Communications, Media & Internet law page.

Vimeo Hit with Class Action for Alleged Violations of Biometric Law

Vimeo, Inc. was sued last week in a class action case alleging that it violated the Illinois Biometric Information Privacy Act by “collecting, storing and using Plaintiff’s and other similarly situated individuals’ biometric identifiers and biometric information…without informed written consent.”

According to the Complaint, Vimeo “has created, collected and stored, in conjunction with its cloud-based Magisto service, thousands of “face templates” (or “face prints”)—highly detailed geometric maps of the face—from thousands of Magisto users.” The suit alleges that Vimeo creates these templates using facial recognition technology and “[E]ach face template that Vimeo extracts is unique to a particular individual, in the same way that a fingerprint or voiceprint uniquely identifies one and only one person.” The plaintiffs are trying to liken an image captured by facial recognition technology to a fingerprint by calling it a “faceprint.” Very creative in the wake of mixed reactions to the use of facial recognition technology in the Facebook and Shutterfly cases.

The suit alleges “users of Magisto upload millions of videos and/or photos per day, making videos and photographs a vital part of the Magisto experience….Users can download and connect any mobile device to Magisto to upload and access videos and photos to produce and edit their own videos….Unbeknownst to the average consumer, and in direct violation of…BIPA, Plaintiff…believes that Magisto’s facial recognition technology scans each and every video and photo uploaded to Magisto for faces, extracts geometric data relating to the unique points and contours (i.e., biometric identifiers) of each face, and then uses that data to create and store a template of each face—all without ever informing anyone of this practice.”

The suit further alleges that when a user uploads a photo, the Magisto service creates a template for each face depicted in the photo, and compares that face with others in its face database to see if there is a match. According to the Complaint, the templates are also able to recognize gender, age and location and are able to collect biometric information from non-users. All of this is done without consent of the individuals, and in alleged violation of BIPA.
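For readers unfamiliar with the underlying technology, the short sketch below uses the open-source face_recognition library to show, in generic terms, what extracting and comparing a face “template” looks like: each detected face is reduced to a numeric encoding, and encodings are compared for a match. This is a generic illustration of the technique, not Vimeo’s or Magisto’s actual system, and the file names are placeholders.

```python
# Generic illustration of face-template extraction and matching using the
# open-source face_recognition library. This is not Vimeo's or Magisto's
# actual system; file names are placeholders.
import face_recognition

# Extract a "template" (a 128-number encoding) for each face in an image.
known_image = face_recognition.load_image_file("known_user.jpg")
known_encodings = face_recognition.face_encodings(known_image)

uploaded_image = face_recognition.load_image_file("uploaded_photo.jpg")
uploaded_encodings = face_recognition.face_encodings(uploaded_image)

# Compare each face in the uploaded photo against the known template(s).
for encoding in uploaded_encodings:
    matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=0.6)
    print("Match found" if any(matches) else "No match")
```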

Although we previously have seen some facial recognition cases alleging violation of BIPA, and there are numerous cases alleging violation of BIPA for collection of fingerprints in the employment setting, this case is a little different from those, and it will be interesting to watch.



Copyright © 2019 Robinson & Cole LLP. All rights reserved.
For more on biometrics & privacy see the National Law Review Communications, Media & Internet law page.