Head Hacking: New Devices Gather Brainspray

For more than a decade I have been warning about the vulnerability of brainspray – the brain signals that can be captured from outside your head. In 2008, an article by Jeffrey Goldberg demonstrated that an fMRI machine could easily interpret how a person felt about stimuli provided – which could be a boon to totalitarian governments testing for people’s true feelings about the government or its Dear Leader. Of course, in 2008 an fMRI machine cost two million dollars, and you had to lie still inside it for a useful reading to emerge.

While fMRI mind reading and lie detection is not yet ready for the courtroom, its interpretations are improving all the time and mobile units are under consideration. And its wearable cousins, like Apple Watches and computerized head gear, are reading changes from within your body: electrocardiogram traces, heart rate, blood pressure, respiration rate, blood oxygen saturation, blood glucose, skin perspiration, capnography, body temperature, and motion, along with data from cardiac implantable devices and ambient parameters. Certain head gear is calibrated just for brain waves.

Some of this is gaming equipment and some helps you meditate. Biofeedback headsets measure your brain waves using EEG. They’re small bands that sit easily on your head and measure activity through sensors. Companies like NeuroSky (maker of the MindWave headset), Thync, and Versus make such equipment available to the general public.

Of course, if you really want to frighten yourself about how far this technology has advanced, check in on DARPA and the rest of the US military. DARPA has been testing brainwave-filtering binoculars, human brainwave-driven targeting for killer robots, and soldier brain-machine interfaces for military vehicles. And these are just the things they are currently willing to discuss in public.

I wrote six years ago about how big companies like Honda were exploring brainspray capture, and have spoken about how Google, Facebook and other Silicon Valley giants have sunk billions of dollars into creating brain-machine interfaces and reading brainspray for practical purposes.

I will write more on this later, but be aware that hacking of this equipment is always possible, which could give the wrong people access to your brain waves and let them pick up whether you are thinking of your bank account PIN or other sensitive matters. Your thoughts of any sort should be protected from view. Thought-crime has always been on the other side of the line.

Now that it is possible to read your brainspray with greater certainty, we should be considering how to regulate this activity. I don’t mind giving the search engine my information in exchange for efficient, immediate searches. But I don’t want to open my head to companies or government.


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on device hacking, see the Communications, Media & Internet law page on the National Law Review.

The CCPA Is Approaching: What Businesses Need to Know about the Consumer Privacy Law

The most comprehensive data privacy law in the United States, the California Consumer Privacy Act (CCPA), will take effect on January 1, 2020. The CCPA is an expansive step in U.S. data privacy law, as it enumerates new consumer rights regarding collection and use of personal information, along with corresponding duties for businesses that trade in such information.

While the CCPA is a state law, its scope is sufficiently broad that it will apply to many businesses that may not currently consider themselves to be under the purview of California law. In addition, in the wake of the CCPA, at least a dozen other states have introduced their own comprehensive data privacy legislation, and there is heightened consideration and support for a federal law to address similar issues.

Below, we examine the contours of the CCPA to help you better understand the applicability and requirements of the new law. While portions of the CCPA remain subject to further clarification, the inevitable challenges of compliance, coupled with the growing appetite for stricter data privacy laws in the United States generally, mean that now is the time to ensure that your organization is prepared for the CCPA.

Does the CCPA apply to my business?

Many businesses may rightly wonder if a California law even applies to them, especially if they do not have operations in California. As indicated above, however, the CCPA is not necessarily limited in scope to businesses physically located in California. The law will have an impact throughout the United States and, indeed, worldwide.

The CCPA will have broad reach because it applies to each for-profit business that collects consumers’ personal information, does business in California, and satisfies at least one of three thresholds:

  • Has annual gross revenues in excess of $25 million; or
  • Alone or in combination, annually buys, receives for commercial purposes, sells, or shares for commercial purposes, the personal information of 50,000 or more California consumers; or
  • Derives 50 percent or more of its annual revenues from selling consumers’ personal information

While the CCPA is limited in its application to California consumers, due to the size of the California economy and its population numbers, the act will effectively apply to any data-driven business with operations in the United States.
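
To make the thresholds concrete, here is a minimal sketch, in Python, of the applicability test described above. Every name and parameter is illustrative only; a real analysis turns on facts that boolean flags cannot capture.

    def ccpa_likely_applies(for_profit, does_business_in_ca, collects_pi,
                            annual_revenue_usd, ca_consumer_records_per_year,
                            share_of_revenue_from_selling_pi):
        """Rough, illustrative sketch of the CCPA applicability test."""
        # Baseline conditions: a for-profit business that does business in
        # California and collects consumers' personal information.
        if not (for_profit and does_business_in_ca and collects_pi):
            return False
        # At least one of the three statutory thresholds must also be met.
        return (annual_revenue_usd > 25_000_000
                or ca_consumer_records_per_year >= 50_000
                or share_of_revenue_from_selling_pi >= 0.50)

    # Example: an out-of-state retailer with $30M in revenue that collects
    # personal information from 10,000 Californians a year and sells none of it.
    print(ccpa_likely_applies(True, True, True, 30_000_000, 10_000, 0.0))  # True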

What is considered “personal information” under the CCPA?

The CCPA’s definition of “personal information” is likely the most expansive interpretation of the term in U.S. privacy law. Per the text of the law, personal information is any “information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.”

The CCPA goes on to note that while traditional personal identifiers such as name, address, Social Security number, passport, and the like are certainly personal information, so are a number of other categories that may not immediately come to mind, including professional or employment-related information, geolocation data, biometric data, educational information, internet activity, and even inferences drawn from the sorts of data identified above.

As a practical matter, if your business collects any information that could reasonably be linked back to an individual consumer, then you are likely collecting personal information according to the CCPA.

When does a business “collect” personal information under the CCPA?

To “collect” or the “collection” of personal information under the CCPA is any act of “buying, renting, gathering, obtaining, receiving, or accessing any personal information pertaining to a consumer by any means.” Such collection can be active or passive, direct from the consumer or via the purchase of consumer data sets. If your business is collecting personal information directly from consumers, then at or before the point of collection the CCPA imposes a notice obligation on your business to inform consumers about the categories of information to be collected and the purposes for which such information will (or may) be used.

To reiterate, if your business collects any information that could reasonably be linked back to an individual, then you are likely collecting personal information according to the CCPA.

If a business collects personal information but never sells any of it, does the CCPA still apply?

Yes. While there are additional consumer rights related to the sale of personal information, the CCPA applies to businesses that collect personal information solely for internal purposes, or that otherwise do not disclose such information.

What new rights does the CCPA give to California consumers?

The CCPA gives California consumers four primary new rights: the right to receive information on privacy practices and access information, the right to demand deletion of their personal information, the right to prohibit the sale of their information, and the right not to be subject to price discrimination based on their invocation of any of the new rights specified above.

What new obligations does a business have regarding these new consumer rights?

Businesses that fall under the purview of the CCPA have a number of new obligations under the law:

  • A business must take certain steps to assist individual consumers with exercising their rights under the CCPA. This must be accomplished by providing a link on the business’s homepage titled “Do Not Sell My Personal Information” and a separate landing page for the same. In addition, a business must update its privacy policy (or policies), or a California-specific portion of the privacy policy, to include a separate link to the new “Do Not Sell My Personal Information” page.

A business also must provide at least two mechanisms for consumers to exercise their CCPA rights by offering, at a minimum, a dedicated web page for receiving and processing such requests (the CCPA is silent on whether this web page must be separate from or can be combined with the “Do Not Sell My Personal Information” page), and a toll-free 800 number to receive the same.

  • Upon receipt of a verified consumer request to delete personal information, the business must delete that consumer’s personal information within 45 days.
  • Upon receipt of a verified consumer request for information about the collection of that consumer’s personal information, a business must provide the consumer with a report within 45 days that includes the following information from the preceding 12 months:
    • Categories of personal information that the business has collected about the consumer;
    • Specific pieces of personal information that the business possesses about the consumer;
    • Categories of sources from which the business received personal information about the consumer;
    • A corporate statement detailing the commercial reason (or reasons) that the business collected such personal information about the consumer; and
    • The categories of third parties with whom the business has shared the consumer’s personal information.
  • Upon receipt of a verified consumer request for information about the sale of that consumer’s personal information, a business must provide the consumer with a report within 45 days that includes the following information from the preceding 12 months:
    • Categories of personal information that the business has collected about the consumer;
    • Categories of personal information that the business has sold about the consumer;
    • Categories of third parties to whom the business has sold the consumer’s personal information; and
    • The categories of personal information about the consumer that the business disclosed to a third party (or parties) for a business purpose.
  • Finally, a business must further update its privacy policy (or policies), or the California-specific section of such policy(s), to:
    • Identify all new rights afforded consumers by the CCPA;
    • Identify the categories of personal information that the business has collected in the preceding 12 months;
    • Include a corporate statement detailing the commercial reason (or reasons) that the business collected such personal information about the consumer;
    • Identify the categories of personal information that the business has sold in the prior 12 months, or the fact that the business has not sold any such personal information in that time; and
    • Note the categories of third parties with whom a business has shared personal information in the preceding 12 months.

What about employee data gathered by employers for internal workplace purposes?

As currently drafted, nothing in the CCPA carves out an exception for employee data gathered by employers. A “consumer” is simply defined as a “natural person who is a California resident …,” so the law would presumably treat employees like anyone else. However, the California legislature recently passed Bill AB 25, which excludes from the CCPA information collected about a person by a business while the person is acting as a job applicant, employee, owner, officer, director, or contractor of the business, to the extent that information is collected and used exclusively in the employment context. Bill AB 25 also provides an exception for emergency contact information and other information pertaining to the administration of employee benefits. The bill awaits the governor’s signature – he has until October 13, 2019 to sign.

But not so fast – Bill AB 25 only creates a one-year reprieve for employers, rather than a permanent exception. The exceptions listed above will expire on January 1, 2021. Unless the legislature chooses to extend the exceptions by that time, businesses should be prepared to fully comply with the CCPA.

California employers would thus be wise to start considering the type of employee data they collect, and whether that information may eventually become subject to the CCPA’s requirements (either on January 1, 2021 or thereafter). Personal information is likely to be present in an employee’s job application, browsing history, and information related to payroll processing, to name a few areas. It also includes biometric data, such as fingerprints scanned for time-keeping purposes. Employers who collect employees’ biometric information, for example, would be well advised to review their biometric policies so that eventual compliance with the CCPA can be achieved gradually during this one-year grace period.

Notwithstanding this new legislation, there remains little clarity as to how the law will ultimately be applied in the employer-employee context, if and when the exceptions expire. Employers are encouraged to err on the side of caution and to reach out to experienced legal counsel for further guidance if they satisfy any one of the above thresholds.

What are the penalties for violation of the CCPA?

Violations of the CCPA are enforced by the California Attorney General’s office, which can issue civil monetary fines of up to $2,500 per violation, or $7,500 for each intentional violation. Currently, the California AG’s office must provide notice of any alleged violation and allow for a 30-day cure period before issuing any fine.

Are there any exceptions to the CCPA?

Yes, there are a number of exceptions. First, the CCPA only applies to California consumers and businesses that meet the threshold(s) identified above. If a business operates or conducts a transaction wholly outside of California then the CCPA does not apply.

There are also certain enumerated exceptions to account for federal law, such that the CCPA is pre-empted by HIPAA, the Gramm-Leach-Bliley Act, and the Fair Credit Reporting Act as it applies to personal information sold to or purchased from a credit reporting agency, and does not apply to information subject to the Driver’s Privacy Protection Act.

Would it be fair to say that the CCPA is not very clear, and maybe even a bit confusing?

Yes, it would. The CCPA was drafted, debated, and enacted into law very quickly in the face of legislative and ballot-driven pressures. As a result, the bill as enacted is a bit confusing and even contains sections that appear to contradict one another. The drafters of the CCPA, however, recognized this and included provisions for the California AG’s office to provide further guidance on the law’s intent and meaning. Amendment efforts also remain underway. As such, it is likely that the CCPA will be an evolving law for at least the short term.

Regardless, the CCPA will impose real-world requirements effective January 1, 2020, and the new wave of consumer privacy legislation it has inspired at the state and federal level is likely to bring even more of the same. It is important to address these issues now, rather than when it is too late.


© 2019 Much Shelist, P.C.

For more on the CCPA legislation, see the National Law Review Consumer Protection law page.

Recent COPPA Settlements Offer Compliance Reminders

The recently announced FTC settlement with YouTube and its parent company, as well as the 2018 settlement between the New York Office of the Attorney General and Oath, have set a new bar when it comes to COPPA compliance.

The settlements offer numerous takeaways, including reminders to those that use persistent identifiers to track children online and deliver them targeted ads. These takeaways include, but are not limited to, the following.

FTC Chairman Joseph Simons stated that “YouTube touted its popularity with children to prospective corporate clients … yet when it came to complying with COPPA, the company refused to acknowledge that portions of its platform were clearly directed to kids.”

First, under COPPA, a child-directed website or online service – or a site that has actual knowledge it’s collecting or maintaining personal information from a child – must give clear notice on its site of “what information it collects from children, how it uses such information and its disclosure practices for such information.”

Second, the website or service must give direct notice to parents of their practices “with regard to the collection, use, or disclosure of personal information from children.”

Third, prior to collecting personal information from children under 13, COPPA-covered companies must get verifiable parental consent.

COPPA’s definition of “personal information” specifically includes persistent identifiers used for behavioral advertising.  It is critical to note that third-party platforms are subject to COPPA when they have actual knowledge they are collecting personal information from users of a child-directed website.

In March 2019, the FTC handed down what was then the largest civil penalty ever for violations of COPPA, following allegations that Musical.ly knew many of its users were children and still failed to seek parental consent. There, the FTC charged that Musical.ly failed to provide notice on its website of the information it collects online from children, how it uses that information, and its disclosure practices; failed to provide direct notice to parents; failed to obtain consent from parents before collecting personal information from children; failed to honor parents’ requests to delete personal information collected from children; and retained personal information for longer than reasonably necessary.

Content creators must know COPPA’s requirements.

If a platform hosting third-party content knows that content is directed to children, it is unlawful to collect personal information from viewers without getting verifiable parental consent.

While it may be fine for most commercial websites geared to a general audience to include a corner for children, if that portion of the website collects information from users, COPPA obligations are triggered.

Comprehensive COPPA policies and procedures to protect children’s privacy are a good idea, as are competent oversight, COPPA training for relevant personnel, the identification of risks that could result in violations of COPPA, the design and implementation of reasonable controls to address the identified risks, the regular monitoring of the effectiveness of those controls, and the development and use of reasonable steps to select and retain service providers that can comply with COPPA.

The FTC and the New York Attorney General are serious about COPPA enforcement.  Companies should exercise caution with respect to such data collection practices.



© 2019 Hinch Newman LLP

Facebook “Tagged” in Certified Facial Scanning Class Action

Recently, the Ninth Circuit Court of Appeals held that an Illinois class of Facebook users can pursue a class action lawsuit arising out of Facebook’s use of facial scanning technology. A three-judge panel in Nimesh Patel, et al v. Facebook, Inc., Case No. 18-15982, issued a unanimous ruling that the mere collection of an individual’s biometric data was a sufficient actual or threatened injury under the Illinois Biometric Information Privacy Act (“BIPA”) to establish standing to sue in federal court. The Court affirmed the district court’s decision certifying a class. This creates a significant financial risk to Facebook, because the BIPA provides for statutory damages of $1,000 to $5,000 for each use of Facebook’s facial scanning technology in the State of Illinois.

This case is important for several reasons. First, the decision recognizes that the mere collection of biometric information may be actionable, because it creates harm to an individual’s privacy. Second, the decision highlights the possible extraterritorial application of state data privacy laws, even those that have been passed by state legislatures intending to protect only their own residents. Third, the decision lays the groundwork for a potential circuit split on what constitutes a “sufficiently concrete injury” to convey standing under the U.S. Supreme Court’s landmark 2016 decision in Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016). Fourth, due to the Illinois courts’ liberal construction and interpretation of the statute, class actions in this sphere are likely to continue to increase.

The Illinois class is challenging Facebook’s “Tag Suggestions” program, which scans for and identifies people in uploaded photographs for photo tagging. The class plaintiffs alleged that Facebook collected and stored biometric data without prior notice or consent, and without a data retention schedule that complies with BIPA. Passed in 2008, Illinois’ BIPA prohibits gathering the “scan of hand or face geometry” without users’ permission.

The district court previously denied Facebook’s numerous motions to dismiss the BIPA action on both procedural and substantive grounds and certified the class. In moving to decertify the class, Facebook argued that any BIPA violations were merely procedural and did not amount to “an injury of a concrete interest” as required by the U.S. Supreme Court’s landmark 2016 decision in Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016).

In its ruling, the Ninth Circuit determined that Facebook’s use of facial recognition technology without users’ consent “invades an individual’s private affairs and concrete interests.” According to the Court, such privacy concerns were a sufficient injury-in-fact to establish standing, because “Facebook’s alleged collection, use, and storage of plaintiffs’ face templates here is the very substantive harm targeted by BIPA.” The Court cited with approval Rosenbach v. Six Flags Entertainment Corp., — N.E.3d —, 2019 IL 123186 (Ill. 2019), a recent Illinois Supreme Court decision similarly finding that individuals can sue under BIPA even if they suffered no damage beyond mere violation of the statute. The Ninth Circuit also suggested that “[s]imilar conduct is actionable at common law.”

On the issue of class certification, the Ninth Circuit’s decision creates a precedent for extraterritorial application of the BIPA. Facebook unsuccessfully argued that (1) the BIPA did not apply because Facebook’s collection of biometric data occurred on servers located outside of Illinois, and (2) even if BIPA could apply, individual trials must be conducted to determine whether users uploaded photos in Illinois. The Ninth Circuit rejected both arguments. The Court determined that (1) the BIPA applied if users uploaded photos or had their faces scanned in Illinois, and (2) jurisdiction could be decided on a class-wide basis. Given the cross-border nature of data use, the Court’s reasoning could be influential in future cases where a company challenges the applicability of data breach or data privacy laws that have been passed by state legislatures intending to protect their own residents.

The Ninth Circuit’s decision also lays the groundwork for a potential circuit split. In two cases from December 2018 and January 2019, a federal judge in the Northern District of Illinois reached a different conclusion than the Ninth Circuit on the issue of BIPA standing. In both cases, the Northern District of Illinois ruled that retaining an individual’s private information is not a sufficiently concrete injury to satisfy Article III standing under Spokeo. One of these cases, which concerned Google’s free Google Photos service that collects and stores face-geometry scans of uploaded photos, is currently on appeal to the Seventh Circuit.

The Ninth Circuit’s decision paves the way for a class action trial against Facebook. The case was previously only weeks away from trial when the Ninth Circuit accepted Facebook’s Rule 23(f) appeal, so the litigation is expected to return to the district court’s trial calendar soon. If Facebook is found to have violated the Illinois statute, it could be held liable for substantial damages – as much as $1,000 for every “negligent” violation and $5,000 for every “reckless or intentional” violation of BIPA.

BIPA class action litigation has become increasingly popular since the Illinois legislature enacted the statute: over 300 putative class actions asserting BIPA violations have been filed since 2015. Illinois’ BIPA has also opened the door to other recent state legislation regulating the collection and use of biometric information. Two other states, Texas and Washington, already have specific biometric identifier privacy laws in place, although enforcement of those laws is accomplished by the state Attorney General, not private individuals. A similar California law is set to go into effect in 2020. Legislation similar to Illinois’ BIPA is also currently pending in several other states.

The Facebook case will continue to be closely watched, both in terms of the standing ruling as well as the potential extended reach of the Illinois law.


© Polsinelli PC, Polsinelli LLP in California

For more on biometric data privacy, see the National Law Review Communications, Media & Internet law page.

Will Technology Return Shame to Our Society?

The sex police are out there on the streets
Make sure the pass laws are not broken

Undercover (of the Night) – The Rolling Stones

So, now we know that browsing porn in “incognito” mode doesn’t prevent those sites from leaking your dirty data, courtesy of the friendly folks at Google and Facebook. Ninety-three per cent of porn sites leak user data to a third party. Google tracks about 74 per cent of the analyzed porn sites, while Oracle tracks nearly 24 per cent and Facebook tracks nearly 10 per cent. Yet, despite such stats, 30 per cent of all internet traffic still relates to porn sites.

The hacker who perpetrated the enormous Capital One data breach outed herself by oversharing on GitHub. Had she been able to keep her trap shut, we’d probably still not know that she was in our wallets. Did she want to get caught, or was she simply unashamed of having stolen a Queen’s ransom worth of financial data?

Many have lamented that shame (along with irony, truth and proper grammar) is dead.  I disagree.  I think that shame has been on the outward leg of a boomerang trajectory fueled by technology and is accelerating on the return trip to whack us noobs in the back of our unsuspecting heads.

Technology has allowed us to do all sorts of stuff privately that we used to have to muster the gumption to do in public.  Buying Penthouse the old-fashioned way meant you had to brave the drugstore cashier, who could turn out to be a cheerleader at your high school or your Mom’s PTA friend.  Buying the Biggie Bag at Wendy’s meant enduring the disapproving stares of vegans buying salads and diet iced tea.  Let’s not even talk about ED medication or baldness cures.

All your petty vices and vanity purchases can now be indulged in the sanctity of your bedroom.  Or so you thought.  There is no free lunch, naked or otherwise, we are coming to find.  How will society respond?

Country music advises us to dance like no one is watching and to love like we’ll never get hurt. When we are alone, we can act closer to our baser instincts.  This is why privacy is protective of creativity and subversive behaviors, and why in societies without privacy, people’s behavior regresses toward the most socially acceptable responses.  As my partner Ted Claypoole wrote in Privacy in the Age of Big Data,

“We all behave differently when we know we are being watched and listened to, and the resulting change in behavior is simply a loss of freedom – the freedom to behave in a private and comfortable fashion; the freedom to allow the less socially careful branches of our personalities to flower. Loss of privacy reduces the spectrum of choices we can make about the most important aspects of our lives.

“By providing a broader range of choices, and by freeing our choices from immediate review and censure from society, privacy enables us to be creative and to make decisions about ourselves that are outside the mainstream. Privacy grants us the room to be as creative and thought-provoking as we want to be. British scholar and law dean Timothy Macklem succinctly argues that the “isolating shield of privacy enables people to develop and exchange ideas, or to foster and share activities, that the presence or even awareness of other people might stifle. For better and for worse, then, privacy is a sponsor and guardian to the creative and the subversive.””

For the past two decades we have let down our guard, exercising our most subversive and embarrassing expressions of id in what we thought was a private space. Now we see that such privacy was likely an illusion, and we feel as if we’ve been somehow gaslighted into showing our noteworthy bad behavior in the disapproving public square.

Exposure of the Ashley Madison affair-seeking population should have taught us this lesson, but it seems that each generation needs to learn in its own way.

The nerds will, inevitably, figure out how to continue to work and play largely unobserved.  But what of the rest of us?  Will the pincer attack of the advancing surveillance state and the denizens of the Dark Web bring shame back as a countervailing force to govern our behavior?  Will the next decade be marked as the New Puritanism?

Dwight Lyman Moody, a prominent 19th century evangelist, author, and publisher, famously said, “Character is what you are in the dark.” Through the night vision goggles of technology, more and more of your neighbors can see who you really are, and there are very few of us who can bear that kind of scrutiny. Maybe Mick Jagger had it right all the way back in 1983, when he advised “Curl up baby/Keep it all out of sight.” Undercover of the night indeed.



Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

Personal Email Management Service Settles FTC Charges over Allegedly Deceptive Statements to Consumers over Its Access and Use of Subscribers’ Email Accounts

This week, the Federal Trade Commission (FTC) entered into a proposed settlement with Unrollme Inc. (“Unrollme”), a free personal email management service that offers to assist consumers in managing the flood of subscription emails in their inboxes. The FTC alleged that Unrollme made certain deceptive statements to consumers, who may have had privacy concerns, to persuade them to grant the company access to their email accounts. (In re Unrollme Inc., File No. 172 3139 (FTC proposed settlement announced Aug. 8, 2019).)

This settlement touches many relevant issues, including the delicate nature of online providers’ privacy practices relating to consumer data collection, the importance for consumers to comprehend the extent of data collection when signing up for and consenting to a new online service or app, and the need for downstream recipients of anonymized market data to understand how such data is collected and processed.  (See also our prior post covering an enforcement action involving user geolocation data collected from a mobile weather app).

A quick glance at headlines announcing the settlement might give the impression that the FTC found Unrollme’s entire business model unlawful or deceptive, but that is not the case. As described below, the settlement involved only a subset of consumers who received allegedly deceptive emails coaxing them into granting access to their email accounts. The model of providing free products or services in exchange for permission to collect user information for data-driven advertising or ancillary market research remains widespread, though it could face some changes when California’s CCPA consumer choice options become effective or in the event Congress passes a comprehensive data privacy law.

As part of the Unrollme registration process, users grant Unrollme access to selected personal email accounts for decluttering purposes. However, this permission also allows Unrollme to access and scan inboxes for so-called “e-receipts,” or emailed receipts from e-commerce transactions. After scanning users’ e-receipt data (which might include billing and shipping addresses and information about the purchased products or services), Unrollme’s parent company, Slice Technologies, Inc., would anonymize the data and package it into market research reports that are sold to various companies, retailers and others. According to the FTC complaint, when some consumers declined to grant permission to their email accounts during signup, Unrollme, during the relevant time period, tried to make them reconsider by sending allegedly deceptive statements about its access (e.g., “You need to authorize us to access your emails. Don’t worry, this is just to watch for those pesky newsletters, we’ll never touch your personal stuff”). The FTC claimed that such messages did not tell users that access to their inboxes would also be used to collect e-receipts and to package that data for sale to outside companies, and that thousands of consumers changed their minds and signed up for Unrollme.

As part of the settlement, Unrollme is prohibited from misrepresentations about the extent to which it accesses, collects, uses, stores or shares information in connection with its email management products. Unrollme must also send an email to all current users who enrolled in Unrollme after seeing the allegedly deceptive statements and explain Unrollme’s data collection and usage practices.  Unrollme is also required to delete all e-receipt data obtained from recipients who enrolled in Unrollme after seeing the challenged statements (unless Unrollme receives affirmative consent to maintain such data from the affected consumers).

In an effort at increased transparency, Unrollme’s current home page displays several links to detailed explanations of how the service collects and analyzes user data (e.g., “How we use data”).

Interestingly, this is not the first time Unrollme’s practices have been challenged, as the company faced a privacy suit over its data mining practices last year. (See Cooper v. Slice Technologies, Inc., No. 17-7102 (S.D.N.Y. June 6, 2018) (dismissing a privacy suit that claimed that Unrollme did not adequately disclose to consumers the extent of its data mining practices, and finding that consumers consented to a privacy policy that expressly allowed such data collection to build market research products and services).)


© 2019 Proskauer Rose LLP.
This article is by Jeffrey D. Neuburger of Proskauer Rose LLP.
For more on data privacy see the National Law Review Communications, Media & Internet law page.

You Can be Anonymised But You Can’t Hide

If you think there is safety in numbers when it comes to the privacy of your personal information, think again. A recent study in Nature Communications found that, given a large enough dataset, anonymised personal information is only an algorithm away from being re-identified.

Anonymised data refers to data that has been stripped of any identifiable information, such as a name or email address. Under many privacy laws, anonymising data allows organisations and public bodies to use and share information without infringing an individual’s privacy, or having to obtain necessary authorisations or consents to do so.

But what happens when that anonymised data is combined with other data sets?

Researchers behind the Nature Communications study found that, using only 15 demographic attributes, 99.98% of Americans could be re-identified in any incomplete dataset. While fascinating for data analysts, individuals may be alarmed to hear that their anonymised data can be re-identified so easily, and potentially then accessed or disclosed by others in ways they never envisaged.
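
To illustrate the underlying linkage technique, here is a minimal sketch in Python using pandas. The datasets, names and attribute values are entirely hypothetical; the point is that a handful of shared quasi-identifiers is enough to join an “anonymised” record back to a named one.

    import pandas as pd

    # "Anonymised" dataset: names stripped, but quasi-identifiers remain.
    medical = pd.DataFrame({
        "zip":        ["60601", "60601", "94105"],
        "birth_date": ["1985-03-12", "1990-07-01", "1985-03-12"],
        "sex":        ["F", "M", "F"],
        "diagnosis":  ["asthma", "diabetes", "hypertension"],
    })

    # A separate public dataset (say, a voter roll) that includes names.
    voters = pd.DataFrame({
        "name":       ["Jane Roe", "John Doe"],
        "zip":        ["60601", "60601"],
        "birth_date": ["1985-03-12", "1990-07-01"],
        "sex":        ["F", "M"],
    })

    # Joining on the shared attributes re-attaches names to diagnoses.
    reidentified = medical.merge(voters, on=["zip", "birth_date", "sex"])
    print(reidentified[["name", "diagnosis"]])

With 15 such attributes rather than three, the join becomes near-unique for almost everyone, which is the study’s point.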

Re-identification techniques were recently used by the New York Times. In March this year, they pulled together various public data sources, including an anonymised dataset from the Internal Revenue Service, in order to reveal a decade’s worth of Donald Trump’s negatively adjusted income tax returns. His tax returns had been the subject of great public speculation.

What does this mean for business? Depending on the circumstances, it could mean that simply removing personal information such as names and email addresses is not enough to anonymise data and may be in breach of many privacy laws.

To address these risks, companies like Google, Uber and Apple use “differential privacy” techniques, which add “noise” to datasets so that individuals cannot be re-identified, while still allowing access to the aggregate outcomes they need.
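
As a rough illustration of the idea, here is a minimal sketch of the classic Laplace mechanism in Python. The function name and parameters are ours, and real deployments involve far more machinery than this:

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
        """Add Laplace noise scaled to sensitivity/epsilon to a query result.

        Smaller epsilon means more noise and a stronger privacy guarantee.
        """
        rng = rng or np.random.default_rng()
        return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Example: publish how many people in a dataset have some attribute.
    # Adding or removing one person changes a count by at most 1, so the
    # sensitivity of a counting query is 1.
    noisy_count = laplace_mechanism(true_value=4213, sensitivity=1, epsilon=0.5)
    print(round(noisy_count))  # close to 4,213, but pins down no individual

The released number is still useful in aggregate, but the noise prevents anyone from working backwards to a single person’s record.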

It comes as a surprise to many businesses using data anonymisation as a quick and cost-effective way to de-personalise data that more may be needed to protect individuals’ personal information.

If you would like to know more about other similar studies, check out our previous blog post ‘The Co-Existence of Open Data and Privacy in a Digital World’.

Copyright 2019 K & L Gates
This article is by Cameron Abbott of  K&L Gates.
For more on internet privacy, see the National Law Review Communications, Media & Internet law page.

Hush — They’re Listening to Us

Apple and Google have suspended their practice of reviewing recordings from users interacting with their voice assistant programs. Did you know this was happening to begin with?

These companies engaged in “grading,” a process where they review supposedly anonymized recordings of conversations people had with voice assistant programs like Siri. A recent Guardian article revealed that these recordings were being passed on to service providers around the world to evaluate whether the voice assistant program was prompted intentionally, and the appropriateness of its responses to the questions users asked.

These recordings can include a user’s most private interactions and are vulnerable to being exposed. Google acknowledged “misconduct” regarding a leak of Dutch-language conversations by one of its language experts contracted to refine its Google Assistant program.

Reports indicate around 1,000 conversations captured by Google Assistant (available in Google Home smart speakers, Android devices and Chromebooks) were leaked to the Belgian news outlet VRT NWS. Google audio snippets are not associated with particular user accounts as part of the review process, but some of those messages revealed sensitive information such as medical conditions and customer addresses.

Google will suspend using humans to review these recordings for at least three months, according to the Associated Press. This is yet another friendly reminder to Google Assistant users that they can turn off storing audio data to their Google account completely, or choose to auto-delete data every three months or 18 months. Apple is also suspending grading and will review its process to improve its privacy practices.

Despite Google and Apple’s recent announcements, enforcement authorities are still looking to take action. The German regulator, the Hamburg Commissioner for Data Protection and Freedom of Information, notified Google of its plan to use Article 66 powers of the General Data Protection Regulation (GDPR) to begin an “urgency procedure.” Since the GDPR’s implementation, we haven’t seen this enforcement action utilized, but its impact is significant, as it allows enforcement authorities to halt data processing when there is “an urgent need to act in order to protect the rights and freedoms of data subjects.”

While Google allows users to opt out of some uses of their recordings, Apple has not provided users that ability other than by disabling Siri entirely. Neither privacy policy explicitly warned users of these recordings, but both reserve the right to use the information collected to improve their services. Apple, however, disclosed that it will soon provide a software update to allow Siri users to opt out of participation in grading.

Since we’re talking about Google Assistant and Siri, we have to mention the third member of the voice assistant triumvirate, Amazon’s Alexa. Amazon employs temporary workers to transcribe voice commands captured by Alexa. Users can opt out of “Help[ing] Improve Amazon Services and Develop New Features” and allowing their voice recordings to be evaluated.

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

Control Freaks and Bond Villains

The hippy ethos that birthed early management of the internet is beginning to look quaint. Even as a military project, the core internet concept was a decentralized network of unlimited nodes that could reroute itself around danger and destruction. No one could control it because no one could truly manage it. And that was the primary feature, not a bug.

Well, not anymore.

I suppose it shouldn’t surprise us that the forces insisting on dominating their societies are generally opposed to an open internet where all information can be free. Dictators gonna dictate.

Beginning July 17, 2019, the government of Kazakhstan began intercepting all HTTPS internet traffic inside its borders. Local Kazakh ISPs must force their users to install a government-issued certificate into all devices to allow local government agents to decrypt users’ HTTPS traffic, examine its content, re-encrypt with a government certificate and send it on to its intended destination. This is the electronic equivalent of opening every envelope, photocopying the material inside, stuffing that material in a government envelope and (sometimes) sending it to the expected recipient. Except with web sites.
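
For the technically curious, this kind of interception is visible at the client: the certificate presented for a site will chain to the government’s root rather than a familiar certificate authority. Here is a minimal sketch, in Python with the third-party cryptography package, of how one might inspect the issuer of whatever certificate a server (or a middlebox) actually presents; the host is just an example:

    import socket
    import ssl
    from cryptography import x509  # third-party 'cryptography' package

    def presented_cert_issuer(host, port=443):
        """Return the issuer of the certificate presented for `host`.

        Verification is disabled on purpose so we can inspect an
        interception certificate that a normal client would reject.
        """
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return x509.load_der_x509_certificate(der).issuer.rfc4514_string()

    # A well-known public CA in this field is expected; a state-issued root
    # here would indicate the kind of interception described above.
    print(presented_cert_issuer("example.com"))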

According to ZDNet, the Kazakh government, unsurprisingly, said the measure was “aimed at enhancing the protection of citizens, government bodies and private companies from hacker attacks, Internet fraudsters and other types of cyber threats.” As Robin Hood could have told you, the Sheriff’s actions taken to protect travelers and control brigands can easily result in government control of all traffic and information, especially when that was the plan all along. Security Boulevard reports that “Since Wednesday, all internet users in Kazakhstan have been redirected to a page instructing users to download and install the new certificate.”

This is not the first time that Kazakhstan has attempted to force its citizens to install a root certificate; in 2015 the Kazakhs even applied to Mozilla to have the Kazakh root certificate included in Firefox (Mozilla politely declined).

Despite creative technical solutions, we all know that Kazakhstan is not alone in restricting the internet access of its citizens. For one (gargantuan) example, China’s 800 million internet users face deeply restricted access, and, according to the Washington Post, the Chinese citizenry can’t access Google, Facebook, YouTube or the New York Times, among many, many, many others. The Great Firewall of China involves legislation, government monitoring action, technology limitations and cooperation from internet and telecommunications companies. China recently clamped down on WhatsApp and VPNs, which had returned a modicum of control and privacy to the people. And China has taken these efforts two steps beyond nearly anyone else in the world by building a culture of investigation and shame, where its citizens can find their pictures on local billboards for boorish traffic or internet behavior, or land in jail for questioning the ruling party on the internet. All this is well documented.

23 countries in Asia and 7 in Africa restrict torrents, pornography, political media and social media. The only two European nations that have the same restrictions are Turkey and Belarus. Politicians in the U.S. and Europe had hoped that the internet would serve as a force for freedom, knowledge and unlimited communications. Countries like Russia, Cuba and Nigeria also see the internet’s potential, but they prefer to throttle the net to choke off this potential threat to their one-party rule governments.

For these countries, there is no such thing as private. They think of privacy in context – you may keep thoughts or actions private from companies, but not from the government. On the micro level, it reminds me of family dynamics – when your teenagers talk about privacy, they mean keeping information private from the adults in their lives, not friends, strangers, or even companies. Controlling governments sing the song of privacy: as long as information is not kept from them, it can be hidden from others.

The promise of internet freedom is slipping further away from more people each year as dictators and real-life versions of movie villains figure out how to use the technology for surveillance of everyday people and how to limit access to “dangerous” ideas of liberty. ICANN, the internet control organization set up by the U.S. two decades ago, has proven itself bloated and ineffective at protecting the interests of private internet users. In fact, it would be surprising if the current leaders of ICANN even felt that such protections were within its purview.

The internet is truly a global phenomenon, but it is managed at local levels, leaving certain populations vulnerable to spying and manipulation by their own governments. Those running the system seem to have resigned themselves to allowing national governments to greatly restrict the human rights of their own citizens.

A tool can be used in many different ways.  A hammer can help build a beautiful home or can be the implement of torture and murder. The internet can be a tool for freedom of thought and expression, where everyone has a publishing and communication platform.  Or it can be a tool for repression. We have come to accept more of the latter than I believed possible.

Post Script —

Also, after a harrowing last 2-5 years where freedom to speak on the internet (and social media) has exploded into horrible real-life consequences, large and small, even the most libertarian and laissez faire of First World residents is slapping the screen to find some way to moderate the flow of ignorance, evil, insanity, inanity and stupidity. This is the other side of the story and fodder for a different post.

And it is also probably time to run an updated discussion of ICANN and its role in internet management.  We heard a great deal about internet leadership in 2016, but not so much lately. Stay Tuned.

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.
For more global & domestic internet developments, see the National Law Review Communications, Media & Internet law page.

No Means No

Researchers from the International Computer Science Institute found up to 1,325 Android applications (apps) gathering data from devices despite being explicitly denied permission.

The study looked at more than 88,000 apps from the Google Play store, and tracked data transfers post denial of permission. The 1,325 apps used tools, embedded within their code, that take personal data from Wi-Fi connections and metadata stored in photos.
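
Photo metadata is a concrete example of how such side channels work: a photo taken with location services on typically carries GPS coordinates in its EXIF tags, so an app with photo access can recover location even after a location permission is denied. Here is a minimal sketch using the Pillow imaging library (recent versions); the file name is hypothetical:

    from PIL import Image  # Pillow imaging library

    GPS_IFD_TAG = 0x8825  # EXIF tag pointing at the GPSInfo sub-directory

    def photo_gps(path):
        """Return the raw GPS tags embedded in a photo's EXIF metadata, if any."""
        exif = Image.open(path).getexif()
        gps = dict(exif.get_ifd(GPS_IFD_TAG))
        return gps or None  # latitude/longitude live in these tags

    # Hypothetical file: any phone photo taken with location services enabled.
    print(photo_gps("vacation.jpg"))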

Consent presents itself in different ways in the world of privacy. The GDPR is clear in defining consent as it pertains to user data. Recital 32 notes that “Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data…” Consumers, pursuant to the CCPA, can opt out of having their personal data sold.

The specificity of consent has always been a tricky subject.  For decades, companies have offered customers the right to either opt in or out of “marketing,” often in exchange for direct payments. Yet, the promises have been slickly unspecific, so that a consumer never really knows what particular choices are being selected.

Does the option include data collection and, if so, how much? Does it include email, text, phone, and postal contacts for every campaign or just some? The GDPR’s specificity provision is supposed to address this problem. But some companies choose not to offer these options, or ignore the consumer’s choice altogether.

Earlier this decade, General Motors caused a media dust-up by admitting it would continue collecting information about specific drivers and vehicles even if those drivers refused the OnStar system or turned it off. Now that policy is built into the OnStar terms of service. GM owners are left without a choice on privacy, and are bystanders to their driving and geolocation data being collected and used.

Apps can monitor people’s movements, finances, and health information. Because of these privacy risks, app platforms like Google and Apple make strict demands of developers including safe storage and processing of data. Seven years ago, Apple, whose app store has almost 1.8 million apps, issued a statement claiming that “Apps that collect or transmit a user’s contact data without their prior permission are in violation of our guidelines.”

Studies like this remind us mere data subjects that some rules were made to be broken. And even engaging with devices that have become a necessity to us in our daily lives may cause us to share personal information. Even more, simply saying no to data collection does not seem to suffice.

It will be interesting to see over the next couple of years whether tighter consent laws like the GDPR and the CCPA can cajole app developers not only to provide specific choices to their customers, but also to actually honor those choices.


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.
For more on internet and data privacy concerns, see the National Law Review Communications, Media & Internet page.