Can You Spy on Your Employees’ Private Facebook Group?

For years, companies have encountered issues stemming from employee communications on social media platforms. When such communications take place in private groups not accessible to anyone except approved members, though, it can be difficult for an employer to know what actually is being said. But can a company try to get intel on what’s being communicated in such forums? A recent National Labor Relations Board (NLRB) case shows that, depending on the circumstances, such actions may violate labor law.

At issue in the case was a company that was facing unionizing efforts by its employees. Some employees of the company were members of a private Facebook group and posted comments in the group about potentially forming a union. Management became aware of this activity and repeatedly asked one of its employees who had access to the group to provide management with reports about the comments. The NLRB found this conduct to be unlawful and held: “It is well-settled that an employer commits unlawful surveillance if it acts in a way that is out of the ordinary in order to observe union activity.”

This case provides another reminder that specific rules come into play when employees are considering forming a union. Generally, companies cannot:

  • Threaten employees based on their union activity
  • Interrogate workers about their union activity, sentiments, etc.
  • Make promises to employees to induce them to forgo joining a union
  • Engage in surveillance (i.e., spying) on workers’ union organizing efforts

The employer’s “spying” in this instance ran afoul of these parameters, which can have costly consequences, such as overturned discipline and backpay awards.


© 2019 BARNES & THORNBURG LLP

For more on employees’ social media use, see the National Law Review Labor & Employment law page.

Facebook “Tagged” in Certified Facial Scanning Class Action

Recently, the Ninth Circuit Court of Appeals held that an Illinois class of Facebook users can pursue a class action lawsuit arising out of Facebook’s use of facial scanning technology. A three-judge panel in Nimesh Patel, et al v. Facebook, Inc., Case No. 18-15982, issued a unanimous ruling that the mere collection of an individual’s biometric data was a sufficient actual or threatened injury under the Illinois Biometric Information Privacy Act (“BIPA”) to establish standing to sue in federal court. The Court affirmed the district court’s decision certifying a class. This creates a significant financial risk to Facebook, because the BIPA provides for statutory damages of $1,000-$5,000 each time Facebook’s facial scanning technology was used in the State of Illinois.

This case is important for several reasons. First, the decision recognizes that the mere collection of biometric information may be actionable, because it creates harm to an individual’s privacy. Second, the decision highlights the possible extraterritorial application of state data privacy laws, even those passed by state legislatures intending to protect only their own residents. Third, the decision lays the groundwork for a potential circuit split on what constitutes a “sufficiently concrete injury” to confer standing under the U.S. Supreme Court’s landmark 2016 decision in Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016). Fourth, given the Illinois courts’ liberal construction of the statute, class actions in this sphere are likely to continue to increase.

The Illinois class is challenging Facebook’s “Tag Suggestions” program, which scans for and identifies people in uploaded photographs for photo tagging. The class plaintiffs alleged that Facebook collected and stored biometric data without prior notice or consent, and without a data retention schedule that complies with BIPA. Passed in 2008, Illinois’ BIPA prohibits gathering the “scan of hand or face geometry” without users’ permission.

The district court previously denied Facebook’s numerous motions to dismiss the BIPA action on both procedural and substantive grounds and certified the class. In moving to decertify the class, Facebook argued that any BIPA violations were merely procedural and did not amount to “an injury of a concrete interest” as required by the U.S. Supreme Court’s landmark 2016 decision in Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016).

In its ruling, the Ninth Circuit determined that Facebook’s use of facial recognition technology without users’ consent “invades an individual’s private affairs and concrete interests.” According to the Court, such privacy concerns were a sufficient injury-in-fact to establish standing, because “Facebook’s alleged collection, use, and storage of plaintiffs’ face templates here is the very substantive harm targeted by BIPA.” The Court cited with approval Rosenbach v. Six Flags Entertainment Corp., — N.E.3d —, 2019 IL 123186 (Ill. 2019), a recent Illinois Supreme Court decision similarly finding that individuals can sue under BIPA even if they suffered no damage beyond mere violation of the statute. The Ninth Circuit also suggested that “[s]imilar conduct is actionable at common law.”

On the issue of class certification, the Ninth Circuit’s decision creates a precedent for extraterritorial application of the BIPA. Facebook unsuccessfully argued that (1) the BIPA did not apply because Facebook’s collection of biometric data occurred on servers located outside of Illinois, and (2) even if BIPA could apply, individual trials must be conducted to determine whether users uploaded photos in Illinois. The Ninth Circuit rejected both arguments. The Court determined that (1) the BIPA applied if users uploaded photos or had their faces scanned in Illinois, and (2) jurisdiction could be decided on a class-wide basis. Given the cross-border nature of data use, the Court’s reasoning could be influential in future cases where a company challenges the applicability of data breach or data privacy laws that have been passed by state legislatures intending to protect their own residents.

The Ninth Circuit’s decision also lays the groundwork for a potential circuit split. In two cases from December 2018 and January 2019, a federal judge in the Northern District of Illinois reached a different conclusion than the Ninth Circuit on the issue of BIPA standing. In both cases, the Northern District of Illinois ruled that retaining an individual’s private information is not a sufficiently concrete injury to satisfy Article III standing under Spokeo. One of these cases, which concerned Google’s free Google Photos service that collects and stores face-geometry scans of uploaded photos, is currently on appeal to the Seventh Circuit.

The Ninth Circuit’s decision paves the way for a class action trial against Facebook. The case was previously only weeks away from trial when the Ninth Circuit accepted Facebook’s Rule 23(f) appeal, so the litigation is expected to return to the district court’s trial calendar soon. If Facebook is found to have violated the Illinois statute, it could be held liable for substantial damages – as much as $1,000 for every “negligent” violation and $5,000 for every “reckless or intentional” violation of BIPA.
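
To put those statutory figures in perspective, a back-of-the-envelope calculation shows how quickly exposure compounds across a large class. The short Python sketch below is purely illustrative: the class size and per-member violation counts are hypothetical assumptions, not figures from the case; only the $1,000 and $5,000 statutory amounts come from the article above.

    # Illustrative only: the class size and violation counts below are hypothetical.
    # BIPA statutory damages: $1,000 per negligent violation, $5,000 per
    # reckless or intentional violation.
    NEGLIGENT = 1_000
    INTENTIONAL = 5_000

    def estimated_exposure(class_members, violations_each, intentional=False):
        """Rough statutory-damages exposure for a hypothetical BIPA class."""
        per_violation = INTENTIONAL if intentional else NEGLIGENT
        return class_members * violations_each * per_violation

    # Example: a hypothetical class of 1,000,000 members, one violation each.
    print(f"${estimated_exposure(1_000_000, 1):,}")        # $1,000,000,000
    print(f"${estimated_exposure(1_000_000, 1, True):,}")  # $5,000,000,000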

BIPA class action litigation has become increasingly popular since the statute was enacted: over 300 putative class actions asserting BIPA violations have been filed since 2015. Illinois’ BIPA has also opened the door to other recent state legislation regulating the collection and use of biometric information. Two other states, Texas and Washington, already have specific biometric identifier privacy laws in place, although enforcement of those laws rests with the state Attorney General, not private individuals. A similar California law is set to go into effect in 2020. Legislation similar to Illinois’ BIPA is also currently pending in several other states.

The Facebook case will continue to be closely watched, both in terms of the standing ruling as well as the potential extended reach of the Illinois law.


© Polsinelli PC, Polsinelli LLP in California

For more on biometric data privacy, see the National Law Review Communications, Media & Internet law page.

Your Face is for Sale! The 4 Most Interesting Things About the Proposed Update to Facebook’s Governing Documents

If you use Facebook (and you likely do, if only to play some game that apparently involves crushing large amounts of candy), then you received an email last week informing you that Facebook is proposing changes to its Data Use Policy and Statement of Rights and Responsibilities.  The proposed changes are largely in response to the $20 million settlement, approved last month by a federal judge, of a class action brought against Facebook over its use of user names and photos in “Sponsored Stories”.

In January 2011, Facebook implemented the Sponsored Stories advertising mechanism, which turned user “likes” into product endorsements.  The claim argued that Facebook did not adequately inform its users that profile photos and user names would be used by advertisers to recommend products and services.  The claim also argued that Facebook inappropriately did not give users the ability to opt out of the Sponsored Stories advertising feature and allowed the use of the likeness and photos of minors who, the claimants argued, should have automatically been opted out of the program.  Arriving just days after the approval of the settlement, the proposed changes include an interesting mix of responses and clarifications.  These are the most noteworthy:

  • Your face is for sale.  Under the approved settlement, Facebook agreed to pay $20 million and give its users greater “control” over the use of information by advertisers.  Facebook did not, however, agree to let its users opt out of allowing advertisers to use information entirely.  Under the revised Statement of Rights and Responsibilities, each user gives Facebook permission to use his or her name, profile picture, content and information in connection with commercial, sponsored or related content.  Facebook further clarifies that this means that businesses or other entities will pay Facebook for the ability to display user names and profile pictures.
  • Kids, be sure to ask your parents’ permission.  By using Facebook, each user under the age of 18 represents that at least one parent or guardian has agreed to Facebook’s terms, including the use of the minor’s name, profile picture, content and information by advertisers, on that minor’s behalf.
  • Your profile photo is fair game for facial recognition scanning.  Facebook scans and compares pictures in which you are tagged so that when your friends post more photos of you, it can suggest that they tag you.  The updated Data Use Policy makes it clear that your profile photo will be scanned for this purpose as well.
  • There’s a renewed emphasis on mobile phone data.  The updated policies make it clear that Facebook and, in certain cases, third-party integrated applications will have access to a broad array of mobile data.  This includes the use of friend lists by third-party mobile applications to advertise mobile applications used by an individual’s friends.  Whereas Facebook encountered substantial difficulty in implementing Sponsored Stories and similar advertising mechanisms, Facebook’s program of allowing mobile applications to market themselves as “Suggested Apps” has been a bright spot for the company’s bottom line.  Moreover, Facebook has signed on to an agreement with California Attorney General Kamala Harris that mobile applications constitute “online services” and, as such, are governed by the same disclosure and transparency regulations applicable to websites.  The clarifications related to mobile devices and applications suggest that Facebook intends to further develop the use of mobile data as a revenue stream without risking the same type of legal action.

Facebook’s proposed revisions remain open for public comment.  While the revisions are unlikely to stoke the kind of furor that past changes have inspired, they remain an interesting display of the developing give-and-take between consumers and online service providers who offer a “free” service in exchange for the right to use and monetize personal data.


A Betrayal Among Friends: Privacy Issues Surrounding Posts on Facebook

Melissa V. Skrocki of Giordano, Halleran & Ciesla, P.C. recently had an article regarding Facebook Privacy published in The National Law Review:
Exactly what protections should be granted to individuals who post information on Facebook?  This was the question before the US District Court of NJ on a motion to dismiss in the case of Deborah Ehling v. Monmouth-Ocean Hospital Service Corp., et al., 2012 BL 131926 (D.N.J. May 30, 2012).   The Plaintiff was an employee of Monmouth-Ocean Hospital Service Corporation (“MONOC”) and also served as the Acting President of the local union for Professional Emergency Medical Services Association – New Jersey.   In her union role, she filed multiple complaints against MONOC that she claims initiated MONOC’s retaliatory conduct against her.

The Plaintiff maintained a Facebook account during the term of her employment with MONOC and designated a number of her co-workers as friends on the website.  The Plaintiff stated in her complaint that MONOC “gained access to Ms. Ehling’s Facebook account by having a supervisor(s) summon a MONOC employee, who was also one of Ms. Ehling’s Facebook friends, into an office” and “coerc[ing], strong-arm[ing], and/or threaten[ing] the employee into accessing his Facebook account on the work computer in the supervisor’s presence.” Id. at 2 citing Am. Compl. 20.   The MONOC supervisor viewed and copied numerous postings by the Plaintiff including one regarding a shooting at the Holocaust Museum in Washington D.C. in which the Plaintiff posted:

An 88 yr old sociopath white supremacist opened fire in the Wash D.C. Holocaust Museum this morning and killed an innocent guard (leaving children).  Other guards opened fire. The 88 yr old was shot. He survived.  I blame the DC paramedics. I want to say 2 things to the DC medics. 1. WHAT WERE YOU THINKING? and 2.  This was your opportunity to really make a difference! WTF!!!! And to the other guards…go to target practice.

Id. at 2 citing Certificate of Elizabeth Duffy Ex. C, ECF No. 11.   On June 17, 2009, MONOC sent notice of the Plaintiff’s Facebook post to the New Jersey Board of Nursing and the New Jersey Department of Health, Office of Emergency Medical Services, claiming concern that the post showed a disregard for patient safety.  The Plaintiff then brought suit against MONOC in which, among other claims, she alleged a violation of the New Jersey Wiretapping and Electronic Surveillance Control Act (“NJ Wiretap Act”) and a common law invasion of privacy, both of which MONOC moved to dismiss.

The Plaintiff maintains that the Defendant violated the NJ Wiretap Act by the unauthorized accessing and monitoring of the Plaintiff’s electronic communications stored in her Facebook account.   An individual will violate the NJ Wiretap Act if that person: “(1) knowingly accesses without authorization a facility through which an electronic communication service is provided or exceeds an authorization to access that facility, and (2) thereby obtains, alters, or prevents authorized access to a wire or electronic communication while that communication is in electronic storage.” N.J.S.A. 2A:156A-27(a).   The NJ courts have determined that the NJ Wiretap Act does not apply where the communication was received by the recipient, stored by such recipient and then retrieved or viewed by another without permission because “the strong expectation of privacy with respect to communication in the course of transmission significantly diminishes once transmission is complete.” White v. White, 344 N.J. Super. 211, 200 (Ch. Div. 2001).

The Court dismissed Plaintiff’s NJ Wiretap Act claim because the Plaintiff failed to allege that the Defendant viewed Plaintiff’s post during transmission.  The Court found that the Plaintiff’s comments were clearly in post-transmission storage when they were accessed by the Defendant and as such, the NJ Wiretap Act does not apply.

The Plaintiff further claimed that the Defendant committed a common law invasion of her privacy by accessing her Facebook page without permission. In order to establish a claim for invasion of privacy under New Jersey law, a plaintiff must be able to prove that: “(1) her solitude, seclusion, or private affairs were intentionally infringed upon, and that (2) this infringement would highly offend a reasonable person.” Ehling at 5 citing Bisbee v. John C. Conover Agency Inc., 186 N.J. Super. 335, 339 (App. Div. 1982).   The Plaintiff maintains that she had a reasonable expectation of privacy in her posts on the Facebook site because she had limited access to her account to those individuals deemed “friends” on the website.  Accordingly, Plaintiff maintains that her comments were not generally available to the public. Conversely, the Defendant disputes that such an expectation of privacy can arise when the Plaintiff made her post available to “dozens, if not hundreds, of people.” Ehling at 6.

The Court considered the emerging privacy issues which surround social networking sites.  It noted that:

On one end of the spectrum, there are cases holding that there is no reasonable expectation of privacy for material posted to an unprotected website that anyone can view. On the other end of the spectrum, there are cases holding that there is a reasonable expectation of privacy for individual, password-protected online communication.

Id. at 5 (emphasis and internal citations omitted).   Despite the clear boundaries at the ends of the continuum, the courts have not come to a clear consensus regarding the expectations of privacy for those communications which fall somewhere in between public websites and password-protected email accounts.  Most courts acknowledge that a communication may still be considered private even if it has been disclosed to one or more persons.  Interestingly, though, the answer to the question of how many people must know a fact before it is considered public varies greatly among the courts.  As reported by the Ehling Court, in Multimedia WMAZ v. Kubach, the court found that disclosure of information by the plaintiff to sixty people did not render it public. Id. at 6 citing Multimedia WMAZ v. Kubach, 212 Ga. App. 707, 710 & n. 1 (Ga. Ct. App. 1994).  Conversely, the Eighth Circuit found that the plaintiff did not have an expectation of privacy when she shared certain information with two of her coworkers. Id. at 6 citing Fletcher v. Price Chopper Foods of Trumann, Inc. (8th Cir. 2000).

Given the unsettled nature of this area of law, the Court denied the Defendant’s motion to dismiss the Plaintiff’s invasion of privacy claim and will hear arguments on this point at trial.

It will be interesting to see where the NJ District Court falls within the privacy continuum but one thing is clear: it is best to follow the advice of Facebook, “Always think before you post. Just like anything else you post on the web or send in an email, information you share on Facebook can be copied or re-shared by anyone who can see it.”

© 2012 Giordano, Halleran & Ciesla, P.C.

Lessons from the Facebook Privacy Fiasco

An article recently published in The National Law Review by Dean W. Harvey of Andrews Kurth LLP regarding Facebook Privacy:

Facebook is a wildly popular social media site which allows users to share information about themselves, send messages to friends, play games and join common interest groups. It is the most visited site in the U.S., with over 100 million active U.S. users and hundreds of millions of active users worldwide.1

During the week of April 18, 2010, Facebook made material changes to the way that its users’ personal information was classified and disclosed. The changes resulted in complicated privacy settings that confused users, and in some cases, personal data which users had previously designated as private was allegedly made public. As a result, a group of petitioners, including the Electronic Privacy Information Center (“EPIC”), filed a complaint with the FTC requesting that the Commission investigate Facebook to determine whether it engaged in unfair or deceptive trade practices (“Complaint”).

Allegations

The Complaint claimed that Facebook violated its own privacy policy, disclosed personal information of Facebook users without consent, and engaged in unfair and deceptive trade practices. Specifically, the Complaint alleged that among other things:

  • Facebook made publicly available personal information which users had previously designated as private.2
  • Facebook disclosed to third parties information that users designated as available to Friends Only (including to third-party websites, applications, other Facebook users and outsiders who happen on to Facebook pages).3
  • Facebook claimed that none of a user’s information was shared with sites visited via a plug-in (such as the Like button, Recommend button, etc.). However, such plug-ins may reveal users’ personal data to such websites without consent.4
  • Facebook designed privacy settings “to confuse users and to frustrate attempts to limit the public disclosure of personal information . . .”5
  • Although the Facebook terms which many users accepted indicated that developers would be limited to a 24-hour retention period for any user data, Facebook announced that the limit no longer exists.6

Angry End Users

Regardless of whether each of the above allegations is true, it is clear that Facebook’s changes to its privacy practices inflamed some of its users. In support of its allegations, the EPIC Complaint included quotes from experts and users about Facebook’s privacy practices such as:

“I shouldn’t have to dive into complicated settings that give the fiction of privacy control but don’t, since they are so hard to understand that they’re ignored. I shouldn’t need a flowchart to understand what friends of friends of friends can share with others. Things should be naturally clear and easy for me.”7

“Facebook constantly is changing the privacy rules and I’m forced to hack through the jungle of their well-hidden privacy controls to prune out new types of permissions Facebook recently added. I have no idea how much of my personal information was released before I learned of a new angle the company has developed to give my information to others.”8

“‘Instant Personalization’ is turned on automatically by default. That means instead of giving you the option to “opt-in” and give your permission for this to happen, Facebook is making you “opt-out,” essentially using your information how they see fit unless you make the extra effort to turn that feature off.”9

“Facebook has become Big Brother. Facebook has succeeded in giving its users the allusion [sic] of privacy on a public site, leaving everyone to become complacent about keeping track of the myriad changes going on behind the scenes. The constant changes assure Facebook that you can never keep all your information private.”10

The Proposed Settlement

The FTC investigated the Complaint and ultimately agreed to a proposed settlement agreement containing a consent order.11 Without admitting liability, Facebook has agreed to a settlement that among other things requires the following:

  • Facebook must establish, implement and maintain a comprehensive privacy program designed to: (1) address privacy risks related to the development and management of new and existing product and services for consumers; and (2) protect the privacy and security of covered information.12
  • Facebook must obtain an independent third-party audit every other year for the next 20 years certifying that the Facebook privacy program meets or exceeds the requirements of the FTC order;13
  • Facebook is required to obtain express consent from a user before enacting changes that override the user’s privacy preferences;14
  • Facebook is required to prevent third parties from accessing user data after the user has deleted it (with exceptions for legal compliance and fraud prevention).15

Lessons from the Complaint and Order

Facebook received significant negative publicity, incurred legal costs and business disruption associated with a government investigation, and will incur compliance costs for the next 20 years as a result of the proposed settlement. Businesses that deal with consumer information would be well advised to learn from Facebook’s experience. There are several lessons that businesses can draw from the Facebook privacy fiasco in dealing with data privacy issues.

A. Don’t Make Your Customers Angry

Facebook’s intentions in making the changes to its privacy settings may have been entirely good. For example, Facebook may have honestly been trying to improve its user experience. However, the changes significantly angered some of its customers. The lesson to be learned here is that intentions don’t matter if you anger your customers with your changes. The ultimate user experience may be better, the site may objectively offer more functionality, but none of that matters if users are offended by the process.

Businesses need to achieve innovations and improvements in the use of consumer data with user consent, and without breaking prior promises. Keeping your customers satisfied isn’t just good business; it also greatly reduces the likelihood that they will be filing deceptive trade practice complaints with the FTC.

B. Keep the Privacy Settings Simple

Much of the Complaint is dedicated to showing how complicated the Facebook settings are, and many of the quoted user statements underscore that issue as well. Such complexity often leads to errors (such as permitting applications to access personal information of a user through the user’s friends). Even when the settings work perfectly, the average person may find such complexity frustrating, leading to angry end users.

It is important to keep privacy policies simple and establish privacy settings so that they can be easily understood by an average user. Informed consent is really only obtained when the user understands the policy or setting to which he or she is consenting.

C. Consider How Applications Access User Data

When drafting a privacy policy, it is easy to focus on the organization’s use of data for internal purposes and with its vendors and subcontractors. However, special care must be taken with use of consumers’ data by software applications. For example, it is alleged that Facebook indicated applications only had access to the user information necessary for their operation, when the applications in fact had access to all user information.

In order to accurately describe how applications use consumer data in your privacy policy, you have to investigate the operation of the applications on your site, document that operation, and establish IT policies and procedures governing the use of data by new or modified applications. If you do not take these steps, it is likely that any promise regarding the use of data by applications will become misleading over time as the applications change and are updated.
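
One concrete way to operationalize that advice is to compare, for each application, the data fields it actually accessed against the fields the published privacy policy permits. The Python sketch below is a minimal, hypothetical illustration: the application names, field names, and access-log format are invented for the example and are not drawn from Facebook or the FTC materials.

    # Hypothetical policy: the data fields each third-party application is
    # permitted to access under the published privacy policy.
    PERMITTED_FIELDS = {
        "photo_tagger": {"user_id", "uploaded_photos"},
        "friend_finder": {"user_id", "friend_list"},
    }

    def audit_app_access(app_name, accessed_fields):
        """Return any fields the application touched beyond what the policy permits."""
        permitted = PERMITTED_FIELDS.get(app_name, set())
        return set(accessed_fields) - permitted

    # Example: access logs show photo_tagger also read users' friend lists.
    overreach = audit_app_access("photo_tagger", {"user_id", "uploaded_photos", "friend_list"})
    if overreach:
        print("Policy gap; update the policy or restrict access:", sorted(overreach))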

D. Monitor Linking and Other Advertising Arrangements

Linking and advertising arrangements are the lifeblood of many sites. In order to make accurate statements about the types of data shared in such arrangements, it is necessary to review the contracts to understand what types of user data will be shared through business processes. However, this is not sufficient to ensure that the full use of data is understood. Just as with applications, it is necessary to investigate what data is collected or shared in the process of passing the user to the third party. Similar to applications, it is important to document what user data is permitted to be shared with advertisers and other third parties, and to establish IT policies and procedures to enforce such permitted uses.

E. Don’t Make User Data Public Without Consent

One of the problems many businesses face with privacy policies is that as their business changes, the types of user data that they want to access or use may change as well. However, it is important to remember that no matter what the motive, if you have promised to keep certain elements of user data private in your privacy policy, you should not make it public by default without first obtaining affirmative user consent.

Privacy compliance is difficult in a changing online environment, even for businesses that don’t have hundreds of millions of users. The Complaint and Order in the Facebook matter highlight some of the many ways that a business can go wrong in protecting private consumer information. In order to successfully protect such information, a business which deals extensively with consumer data should establish, maintain, update and enforce a comprehensive privacy and security program, which takes into account material risks as well as lessons learned from the experience of other companies, such as Facebook.

1. In the Matter of Facebook, Inc., Complaint paragraph 31 (May 5, 2010); available at http://epic.org/privacy/facebook/EPIC_FTC_FB_Complaint.pdf.

2. Id. at paragraph 55.

3. Id. at paragraph 59.

4. Id. at paragraph 65.

5. Id. at paragraph 64.

6. Id. at paragraphs 92-94.

7. Id. at paragraph 95.

8. Id. at paragraph 97.

9. Id. at paragraph 98.

10. Id. at paragraph 106.

11. In the Matter of Facebook, Inc., File No. 092 3184, Agreement Containing Consent Order (“Order”); available at http://www.ftc.gov/os/caselist/0923184/111129facebookagree.pdf.

12. Id. at paragraph IV.

13. Id. at paragraph V.

14. Id. at paragraph II.

15. Id. at paragraph III.

© 2012 Andrews Kurth LLP