D.C. District Court Limits the HIPAA Privacy Rule Requirement for Covered Entities to Provide Access to Records

On January 23, 2020, the D.C. District Court narrowed an individual’s right to have HIPAA covered entities furnish the individual’s own protected health information (“PHI”) to a third party at the individual’s request, and removed the cap on the fee covered entities may charge to transmit that PHI to a third party.

Specifically, the Court held that individuals may direct only PHI maintained in an electronic format to such third parties, and that HIPAA covered entities and their business associates are not subject to the reasonable, cost-based fee limitation for PHI directed to third parties.

The HIPAA Privacy Rule grants individuals rights to access their PHI in a designated record set, and it specifies the data formats and permissible fees that HIPAA covered entities (and their business associates) may charge for such production. See 45 C.F.R. § 164.524. When individuals request copies of their own PHI, the Privacy Rule permits a HIPAA covered entity (or its business associate) to charge a reasonable, cost-based fee that excludes, for example, search and retrieval costs. See 45 C.F.R. § 164.524(c)(4). But when an individual requests that his or her own PHI be sent to a third party, both the required format of that data (electronic or otherwise) and the fees that a covered entity may charge for that service have been the subject of additional OCR guidance over the years—guidance that the D.C. District Court has now, in part, vacated.

The Health Information Technology for Economic and Clinical Health Act of 2009 (HITECH Act) set a statutory cap on the fee that a covered entity may charge an individual for delivering records in an electronic form. 42 U.S.C. § 17935(e)(3). Then, in the 2013 Omnibus Rule, developed pursuant to Administrative Procedure Act rulemaking, the Department of Health and Human Services, Office for Civil Rights (“HHS OCR”) implemented the HITECH Act statutory fee cap in two ways. First, OCR determined that the fee cap applied regardless of the format of the PHI—electronic or otherwise. Second, OCR stated that the fee cap also applied if the individual requested that a third party receive the PHI. 78 Fed. Reg. 5566, 5631 (Jan. 25, 2013). Finally, in its 2016 guidance document on individual access rights, OCR provided additional information regarding these provisions of the HIPAA Privacy Rule, including an FAQ on this topic.

The D.C. District Court struck down OCR’s 2013 and 2016 implementation of the HITECH Act, in part. Specifically, the Court held that OCR’s 2013 HIPAA Omnibus Final Rule, insofar as it compels delivery of PHI to third parties regardless of the records’ format, is arbitrary and capricious because it goes beyond the statutory requirements set by Congress; the statute requires only that covered entities, upon an individual’s request, transmit PHI to a third party in electronic form. Additionally, the Court held that OCR’s broadening of the fee limitation under 45 C.F.R. § 164.524(c)(4) in the 2016 guidance document titled “Individuals’ Right under HIPAA to Access their Health Information 45 C.F.R. Sec. 164.524” violates the APA because HHS did not follow the requisite notice-and-comment procedure. Ciox Health, LLC v. Azar, et al., No. 18-cv-0040 (D.D.C. January 23, 2020).

All other requirements for patient access remain the same, including required time frames for the provision of access to individuals, and to third parties designated by such individuals. It remains to be seen, however, how HHS will move forward after these developments from a litigation perspective and how this decision will affect other HHS priorities, such as interoperability and information blocking.


© Polsinelli PC, Polsinelli LLP in California

For more on HIPAA Regulation, see the National Law Review Health Law & Managed Care section.

The Shell Game Played with Your DNA, or 23 and Screwing Me

So you want to know how much Neanderthal is in your genes.

You are curious about the percentage of Serbo-Croatian, Hmong, Sephardim or Ashanti blood that runs through your veins. Or maybe you hope to find a rich great-aunt near death, seeking an heir.

How much is this worth to you?  Two hundred bucks? That makes sense.

But what about other costs:

– like sending your cousin to prison for life (and discovering that you grew up with a serial killer)?

– like all major companies refusing to insure you due to your genetic make-up?

– like ruining your family holidays when you find that your grandfather is not really genetically linked to you and grandma had been playing the field?

– like pharma companies making millions using your genetic code to create new drugs and not crediting you at all (not even with discounts on the drugs created by testing your cells)?

– like finding that your “de-identified” genetic code has been re-identified on the internet, exposing genetic propensity for alcoholism or birth defects that turn your fiancé’s parents against you?

How much are these costs worth to you?

According to former FDA commissioner Peter Pitts, writing in Forbes, “The [private DNA testing] industry’s rapid growth rests on a dangerous delusion that genetic data is kept private. Most people assume this sensitive information simply sits in a secure database, protected from hacks and misuse. Far from it. Genetic-testing companies cannot guarantee privacy. And many are actively selling user data to outside parties.” Including law enforcement.

Nothing in US Federal health law protects the privacy of DNA test subjects at “non-therapeutic” labs like Ancestry or 23andMe. Information gleaned from the DNA can be used for almost anything.  As Pitts said, “Imagine a political campaign exposing a rival’s elevated risk of Alzheimer’s. Or an employer refusing to hire someone because autism runs in her family. Imagine a world where people can have their genomic building blocks held against them. Such abuses represent a profound violation of privacy. That’s an inherent risk in current genetic-testing practices.”

Genetic testing companies quietly (and, some would argue, without adequate explanation of the facts and harms, which are lost in a thousand words of fine print that most data subjects won’t read) push their customers to allow further testing and research on the customer samples provided. Up to 80% of 23andMe customers consent to this activity, likely not knowing that the company plans to make money off the drugs developed from customer DNA. Federal laws require labs like those used by 23andMe for drug development to keep information for more than 10 years, so once the company has your data, despite the rights to erasure provided by California and the EU, 23andMe can refuse to drop your data from its tests.

Go see the HBO movie starring Oprah Winfrey about the medical exploitation of the cell lines of Henrietta Lacks, or better yet, read the bestselling book it was based on. Observe that an engaging, vivacious woman who couldn’t afford health insurance was farmed for a line of her cancer cells that assisted medical science for decades and made millions of dollars for pharma companies, without any permission from or benefit to the woman whose cells were taken, or any benefit to her family once cancer killed her. Companies secured over 11,000 patents using her cell lines. This is the business model now adopted by 23andMe: take your valuable data under the guise of providing information to you, while quietly turning that data into profitable products for shareholders’ and executives’ benefit. Not to mention that 23andMe can change its policies at any time.

As part of selling your genetic secrets to the highest bidder, 23andMe is constantly pushing surveys out to its customers. According to an article in Wired, 23andMe co-founder Anne Wojcicki said, “We specialize in capturing phenotypic data on people longitudinally—on average 300 data points on each customer. That’s the most valuable by far.” Which means they are selling not only your DNA information, but all the other data you give them about your family and lifestyle.

This deep ethical compromise by 23andMe is personal for me, and not because I have sent them any DNA samples – I haven’t and I never would. But because, when questioned publicly about their trustworthiness by me and others close to me, 23andMe has not tried to explain its policies, but has simply attacked the questioners in public. Methinks the amoral vultures doth protest too much.

For example, a couple of years ago, my friend, co-author and privacy expert Theresa Payton noted on a Fox News segment that people who provide DNA information to 23andMe do not know how such data will be used because the industry is not regulated and the company could change its policies any time. 23andMe was prompt and nasty in its response, attacking Ms. Payton on Twitter and probably elsewhere, claiming that the 23andMe privacy policy, as it existed at the time, was proof that no surprises could ever be in store for naïve consumers who gave their most intimate secrets to this company.

[BTW, for the inevitable attacks coming from 23andMe and their army of online protectors, the FTC endorsement guidelines require that if there is a material connection between you and 23andMe, paid or otherwise, you need to clearly and conspicuously disclose it.]

Clearly Ms. Payton was correct and 23andMe’s attacks on her were simply wrong.

Guess what? According to the Wall Street Journal, 23andMe recently sold a $300 million stake in itself to GlaxoSmithKline and, “For 23andMe, using genetic data for drug research ‘was always part of the vision,’ according to Emily Drabant Conley, vice president and head of business development.” So this sneaky path is not even a new tactic. According to the same WSJ story, “23andMe has long wanted to use genetic data for drug development. Initially, it shared its data with drug makers including Pfizer Inc. and Roche Holding AG’s Genentech but wasn’t involved in subsequent drug discovery. It later set up its own research unit but found it lacked the scale required to build a pipeline of medicines. Its partnership with Glaxo is now accelerating those efforts.”

And now 23andMe has licensed an antibody it developed to treat inflammatory diseases to Spanish drug maker Almirall SA. “This is a seminal moment for 23andMe,” said Conley. “We’ve now gone from database to discovery to developing a drug.” In the WSJ, Arthur Caplan, a professor of bioethics at NYU School of Medicine, said, “You get this gigantic valuable treasure chest, and people are going to wind up paying for it twice. All the people who sent in DNA will be paying the same price for any drugs that are developed as anybody else.”

So this adds another ironic dimension to the old television adage, “You aren’t the customer, you are the product.” You pay to provide your DNA – the code to your entire physical existence – to a private company. Why? Possibly because you want information that may affect your healthcare, but in all likelihood you simply intend to use the information for general entertainment and information purposes.

You likely send a swab to the DNA company because you want to learn your ethnic heritage and/or see what interesting things they can tell you about why you have a photic sneeze reflex, whether you are genetically inclined to react strongly to caffeine, or whether you are a carrier of a loathsome disease (which you could learn for an additional fee). But the company uses the physical cells from your body not only to build databases of commercially valuable information, but to develop drugs and sell them to the pharmaceutical industry. So who is the DNA company’s customer? 23andMe and its competitors take physical specimens from you and sell products made from those specimens to their real customers, the drug companies and the data aggregators.

These DNA processing firms may be the tip of the spear, because huge data companies are coming for your health information. According to the Wall Street Journal,

“Google has struck partnerships with some of the country’s largest hospital systems and most-renowned health-care providers, many of them vast in scope and few of their details previously reported. In just a few years, the company has achieved the ability to view or analyze tens of millions of patient health records in at least three-quarters of U.S. states, according to a Wall Street Journal analysis of contractual agreements. In certain instances, the deals allow Google to access personally identifiable health information without the knowledge of patients or doctors. The company can review complete health records, including names, dates of birth, medications and other ailments, according to people familiar with the deals.”

And medical companies are now tracking patient information with wearables like smartwatches, so that personally captured daily health data is now making its way into these databases.

And, of course, other risk issues affect the people who provide data to such services.  We know through reporting following the capture of the Golden State Killer that certain genetic testing labs (like GEDMatch) have been more free than others with sharing customer DNA with law enforcement without asking for warrants, subpoenas or court orders, and that such data can not only implicate the DNA contributors but their entire families as well. In addition, while DNA testing companies claim to only sell anonymized data, the information may not remain that way.

Linda Avey, co-founder of 23andMe, concedes that nothing is foolproof. She told an online magazine, “It’s a fallacy to think that genomic data can be fully anonymized.” The article showed that researchers have already re-identified people from their publicly available genomic data. For example, one 2013 study matched Y-chromosome data with names posted in places such as genealogy sites. In another study that same year, Harvard Professor Latanya Sweeney re-identified 84 to 97 percent of a sample of Personal Genome Project volunteers by comparing gender, postal code and date of birth with public records.

A 2015 study re-identified nearly a quarter of a sample of users sequenced by 23andMe who had posted their information to the sharing site openSNP. “The matching risk will continuously increase with the progress of genomic knowledge, which raises serious questions about the genomic privacy of participants in genomic datasets,” concludes the paper in Proceedings on Privacy Enhancing Technologies. “We should also recall that, once an individual’s genomic data is identified, the genomic privacy of all his close family members is also potentially threatened.” DNA data is the ultimate genie: once released from the bottle, it can’t be changed, shielded or stuffed back inside, and it threatens both the data subject and her entire family for generations.
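To see how little it takes to link an “anonymized” genome back to a name, here is a minimal sketch of the quasi-identifier matching used in the studies above. Every name, record and field value here is invented for illustration; real studies work with far larger public rolls and messier data.

```python
# Hypothetical illustration of quasi-identifier linkage; all records are invented.
QUASI_IDENTIFIERS = ("gender", "postal_code", "dob")

anonymized_genomes = [
    {"genome_id": "G-0417", "gender": "F", "postal_code": "98105", "dob": "1984-03-02"},
    {"genome_id": "G-0912", "gender": "M", "postal_code": "85004", "dob": "1977-11-19"},
]

public_records = [
    {"name": "Jane Example", "gender": "F", "postal_code": "98105", "dob": "1984-03-02"},
    {"name": "John Sample", "gender": "M", "postal_code": "85004", "dob": "1990-06-30"},
]

def reidentify(genomes, records):
    # Index the public records by the quasi-identifier tuple, then look up each genome.
    index = {tuple(r[k] for k in QUASI_IDENTIFIERS): r["name"] for r in records}
    matches = {}
    for g in genomes:
        key = tuple(g[k] for k in QUASI_IDENTIFIERS)
        if key in index:
            matches[g["genome_id"]] = index[key]
    return matches

print(reidentify(anonymized_genomes, public_records))  # {'G-0417': 'Jane Example'}
```

The point of the sketch is that no genetic expertise is needed: an exact match on a handful of mundane attributes is enough to put a name on a genome.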

And let us not forget the most basic risk involved in gathering important data. This article has focused on how 23andMe and other private DNA companies have chosen to use the data – probably in ways that their DNA contributing customers did not truly understand – to turn a profit for investors.  But collecting such data could have unintended consequences.  It can be lost to hackers, spies or others who might steal it for their own purposes.  It can be exposed in government investigations through subpoenas or court orders that a company is incapable of resisting.

So people planning to plaster their deepest internal and family secrets into private company databases should consider the risks that the private DNA mills don’t want you to think about.


Copyright © 2020 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on health data privacy, see the National Law Review Health Law & Managed Care section.

My Business Is In Arizona, Why Do I Care About California Privacy Laws? How the CCPA Impacts Arizona Businesses

Arizona businesses are not typically concerned about complying with the newest California laws going into effect. However, one California law in particular—the CCPA or California Consumer Privacy Act—has a scope that extends far beyond California’s border with Arizona. Indeed, businesses all over the world that have customers or operations in California must now be mindful of whether the CCPA applies to them and, if so, whether they are in compliance.

What is the CCPA?

The CCPA is a comprehensive data privacy law enacted by the California Legislature that became effective on January 1, 2020. It was signed into law in June 2018 and has undergone a series of substantive amendments since then.

Generally, the CCPA gives California consumers a series of rights with respect to how companies acquire, store, use, and sell their personal data. The CCPA’s combination of mandatory disclosures and notices, rights of access, rights of deletion, statutory fines, and threat of civil lawsuits is a significant move towards empowering consumers to control their personal data.

Many California businesses are scrambling to implement the necessary policies and procedures to comply with the CCPA in 2020. In fact, you may have begun to notice privacy notices on the primary landing page for national businesses. However, Arizona businesses cannot assume that the CCPA stops at the Arizona border.

Does the CCPA apply to my business in Arizona?

The CCPA has specific criteria for whether a company is considered a California business. The CCPA applies to for-profit businesses “doing business in the State of California” that also:

  • Have annual gross revenues in excess of twenty-five million dollars; or
  • Annually buy, receive, sell, or share the personal information of 50,000 or more California consumers, households, or devices; or
  • Derive 50% or more of their annual revenue from selling California consumers’ personal information

The CCPA does not include an express definition of what it means to be “doing business” in California. While it will take courts some time to interpret the scope of the CCPA, any business with significant sales, employees, property, or operations in California should consider whether the CCPA might apply to them.
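For readers who think in code, the screening test can be summarized in a short sketch. This is a simplified illustration only, not legal advice: the field names are hypothetical, the thresholds track the disjunctive test described above, and whether an entity is “doing business” in California is a legal question the sketch simply takes as an input.

```python
from dataclasses import dataclass

@dataclass
class BusinessProfile:
    """Illustrative inputs; whether an entity is 'doing business' in
    California is a separate legal question not modeled here."""
    does_business_in_california: bool
    annual_gross_revenue_usd: float
    ca_consumers_households_devices_per_year: int
    revenue_from_selling_ca_personal_info_usd: float

def ccpa_may_apply(b: BusinessProfile) -> bool:
    # The three statutory thresholds are disjunctive: meeting any one of
    # them (plus doing business in California) can trigger the CCPA.
    if not b.does_business_in_california:
        return False
    revenue_test = b.annual_gross_revenue_usd > 25_000_000
    volume_test = b.ca_consumers_households_devices_per_year >= 50_000
    share_test = (
        b.annual_gross_revenue_usd > 0
        and b.revenue_from_selling_ca_personal_info_usd / b.annual_gross_revenue_usd >= 0.50
    )
    return revenue_test or volume_test or share_test

# Example: an Arizona retailer with $30M in revenue that sells to Californians online.
print(ccpa_may_apply(BusinessProfile(True, 30_000_000, 12_000, 0)))  # True
```

Note that the revenue threshold alone is enough in the example, even though the business handles relatively few California records and sells no personal information.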

How do I know if I am collecting a consumer’s personal information?

“Personal information” under the CCPA generally includes any information that “identifies, relates to, describes, is capable of being associated with, or could reasonably be linked” with a specific consumer. As the legalese of this definition implies, “personal information” includes a wide swath of data that your company may already be collecting about consumers.

There is no doubt that personal identifiers like name, address, email addresses, social security numbers, etc. are personal information. But information like biometric data, search and browsing activity, IP addresses, purchase history, and professional or employment-related information are all expressly included under the CCPA’s definition. Moreover, the broad nature of the CCPA means that other categories of data collected—although not expressly identified by the CCPA—may be deemed to be “personal information” in an enforcement action.

What can I do to comply with the CCPA?

If the CCPA might apply to your company, now is the time to take action. Compliance will necessarily be different for each business depending on the nature of its operation and the use(s) of personal information. However, there are some common steps that each company can take.

The first step towards compliance with the CCPA is understanding what data your company collects, how it is stored, whether it is transferred or sold, and whether any vendors or subsidiaries also have access to the data. Next, an organization should prepare a privacy notice that complies with the CCPA to post on its website and include in its app interface.

The most substantial step in complying with the CCPA is to develop and implement policies and procedures that help the company conform to the various provisions of the CCPA. The policies will need to provide up-front disclosures to consumers, allow consumers to opt-out, handle consumer requests to produce or delete personal information, and guard against any perceived discrimination against consumers that exercise rights under the CCPA.

The company will also need to review contracts with third-party service providers and vendors to ensure it can comply with the CCPA. For example, if a third-party cloud service will be storing personal information, the company will want to verify that its contract allows it to assemble and produce that information within statutory deadlines if requested by a consumer.

At least you have some time!

The good news is that the CCPA includes a grace period until July 1, 2020 before the California Attorney General can bring enforcement actions. Thus, Arizona businesses that may have ignored the quirky California privacy law to this point have a window to bring their operations into compliance. However, Arizona companies that may need to comply with the CCPA should consult with counsel as soon as possible to begin the process. The attorneys at Ryley Carlock & Applewhite are ready to help you analyze your risk and comply with the CCPA.


Copyright © 2020 Ryley Carlock & Applewhite. A Professional Association. All Rights Reserved.

Learn more about the California Consumer Privacy Act (CCPA) on the National Law Review Communications, Media & Internet Law page.

Florida’s Legislature to Consider Consumer Data Privacy Bill Akin to California’s CCPA

Florida lawmakers have proposed data privacy legislation that, if adopted, would impose significant new obligations on companies offering a website or online service to Florida residents, including allowing consumers to “opt out” of the sale of their personal information. While the bill (SB 1670 and HB 963) does not go as far as did the recent California Consumer Privacy Act, its adoption would mark a significant increase in Florida residents’ privacy rights. Companies that have an online presence in Florida should study the proposed legislation carefully. Our initial take on the proposed legislation appears below.

The proposed legislation requires an “operator” of a website or online service to provide consumers with (i) a “notice” regarding the personal information collected from consumers on the operator’s website or through the service and (ii) an opportunity to “opt out” of the sale of certain of a consumer’s personal information, known as “covered information” in the draft statute.

The “notice” would need to include several items. Most importantly, the operator would have to disclose “the categories of covered information that the operator collects through its website or online service about consumers who use [them] … and the categories of third parties with whom the operator may share such covered information.” The notice would also have to disclose “a description of the process, if applicable, for a consumer who uses or visits the website or online service to review and request changes to any of his or her covered information. . . .” The bill does not otherwise list when this “process” would be “applicable,” and it nowhere else appears to create for consumers any right to review and request changes.

While the draft legislation obligates operators to stop selling data of a consumer who submits a verified request to do so, it does not appear to require a description of those rights in the “notice.” That may just be an oversight in drafting. In any event, the bill is notable as it would be the first Florida law to require an online privacy notice. Further, a “sale” is defined as an exchange of covered information “for monetary consideration,” which is narrower than its CCPA counterpart, and contains exceptions for disclosures to an entity that merely processes information for the operator.

There are also significant questions about which entities would be subject to the proposed law. An “operator” is defined as a person who owns or operates a website or online service for commercial purposes, collects and maintains covered information from Florida residents, and purposefully directs activities toward the state. That “and” is assumed, as the proposed bill does not state whether those three requirements are conjunctive or disjunctive.

Excluded from the definition of “operator” is a financial institution (such as a bank or insurance company) already subject to the Gramm-Leach-Bliley Act, and an entity subject to the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Outside of the definition of “operator,” the proposed legislation appears to further restrict the companies to which it would apply, to eliminate its application to smaller companies based in Florida, described as entities “located in this state,” whose “revenue is derived primarily from a source other than the sale or lease of goods, services, or credit on websites or online services,” and “whose website or online service has fewer than 20,000 unique visitors per year.” Again, that “and” is assumed as the bill does not specify “and” or “or.”

Lastly, the Department of Legal Affairs appears to be vested with authority to enforce the law. The proposed legislation states explicitly that it does not create a private right of action, although it also says that it is in addition to any other remedies provided by law.

The proposed legislation is part of an anticipated wave of privacy legislation under consideration across the country. California’s CCPA took effect in January and imposes significant obligations on covered businesses. Last year, Nevada passed privacy legislation that bears a striking resemblance to the proposed Florida legislation. Other privacy legislation has been proposed in Massachusetts and other jurisdictions.


©2011-2020 Carlton Fields, P.A.

For more on new and developing legislation in Florida and elsewhere, see the National Law Review Election Law & Legislative News section.

2020 Predictions for Data Businesses

It’s a new year, a new decade, and a new experience for me writing for the HeyDataData blog.  My colleagues asked for input and discussion around 2020 predictions for technology and data protection.  Dom has already written about a few.  I’ve picked out four:

  1. Experiential retail

Retailers will offer technology-infused shopping experiences in their stores.  Even today, without pulling out my phone or opening an app, I can experience a retailer’s products and services with store-provided technology.  I can try on a pair of glasses or wear a new lipstick color just by putting my face in front of a screen.  We will see how creative companies can be in luring us to the store by offering us an experience that we have to try.  This type of experiential retail technology is a bit ahead of the Amazon checkout technology, but passive payment methods are coming, too.  [But if we still don’t want to go to the store, companies will continue to offer us more mobile ordering—for pick-up or delivery.]

  2. Consumers will still tell companies their birthdays and provide emails for coupons (well, maybe not in California)

We will see whether the California Consumer Privacy Act (CCPA) will meaningfully change consumers’ perception about giving their information to companies—usually lured by financial incentives (like loyalty programs, coupons, etc., or a free app).  I tend to think that we will continue to download apps and give information if it is convenient or cheaper for us, and that companies will think it is good for business (and their shareholders, if applicable) to continue to engage with their consumers.  This is an extension of number 1, really, because embedding technology in the retail experience will allow companies to offer new (hopefully better) products (and gather data they may find a use for later. . . ).  Even though I think consumers will still hand over their data, I also think consumer privacy advocates will try harder to shift their perceptions (enter CCPA 2.0 and others).

  3. More “wearables” will hit the market

We already have “smart” refrigerators, watches, TVs, garage doors, vacuum cleaners, stationary bikes and treadmills.  Will we see other, traditionally disconnected items connect?  I think yes.  Clothes, shoes, purses, backpacks, and other “wearables” are coming.

  4. Computers will help with decisions

We will see more technology-aided (trained with lots of data) decision making.  Just yesterday, one of the most-read stories described how an artificial intelligence system detected cancer, matching or outperforming radiologists who looked at the same images.  Over the college football bowl season, I saw countless commercials for insurance companies showing how their policyholders can lower their rates if they let an app track how they drive.  More applications will continue to pop up.

Those are my predictions.  And I have one wish to go with them.  Those kinds of advances create tension among open innovation, ethics and the law.  I do not predict that we will solve this in 2020, but my #2020vision is that we will make progress.


Copyright © 2020 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on data use in retail, health, and more, see the National Law Review Communications, Media & Internet law page.

Employee Video Surveillance: Position of the European Court of Human Rights

On October 17, 2019, the European Court of Human Rights (ECHR) approved the installation of a Closed-Circuit Television (“CCTV”) surveillance system which was used to monitor supermarket cashiers without informing those employees of the fact that it had been installed.

In this case, a Spanish supermarket manager decided to install cameras in the supermarket because of suspected thefts. He installed (i) visible cameras pointing at the supermarket’s entrance and exit of which he had informed the staff and (ii) hidden cameras pointing at the cash registers of which neither employees nor staff representatives had been informed.

The hidden cameras revealed that thefts were being committed by several employees at the cash registers. The employees concerned were dismissed. Some of them brought an action before the Spanish Labor court, arguing that the use of CCTV without their prior knowledge was a breach of their right to privacy and that such evidence could not be admitted in the dismissal procedure.

Like French law, Spanish law requires the person responsible for a CCTV system to inform the concerned employees of the existence, purpose, and methods of the collection of their personal data, prior to implementation of the system.

The case was brought before the ECHR, which gave a first decision on January 9, 2018, concluding that Article 8 of the European Convention for the Protection of Human Rights, relating to the right to privacy, had been breached. The case was then referred to the Grand Chamber.

The issue raised was to find the proportionality and the balance between (i) the reasons justifying the implementation of a CCTV system (i.e., the right of the employer to ensure the protection of its property and the proper functioning of its business) and (ii) the employees’ right to privacy.

The ECHR stated that “domestic courts must ensure that the introduction by an employer of surveillance measures that infringe the employees’ privacy rights is proportionate and is implemented with adequate and sufficient safeguards against abuse”, referring to its previous case law [1].

The ECHR considered that in order to ensure the proportionality of CCTV measures in the workplace, domestic courts should take into account the following factors when balancing the interests involved:

  1. Has the employee been informed of the possibility of being subject to a video surveillance measure?
  2. What is the extent of the video surveillance and what is the degree of intrusion into the employee’s private life?
  3. Has the use of video surveillance been justified by the employer on legitimate grounds?
  4. Was there an alternative surveillance system based on less intrusive means and measures available to the employer?
  5. What were the consequences of the surveillance for the employee who was subject to it?
  6. Was the employee concerned by the video surveillance measure offered adequate guarantees?

Therefore, prior notification to the employees is only one of the criteria taken into account in the balance of interests.

In this particular case, the ECHR approved the examination of proportionality of the video surveillance measure. The Judges decided that despite the lack of prior notification to the employees, the CCTV was (i) justified by suspicions of theft, (ii) limited in space (only a few checkout counters), and (iii) limited in time (10 days). The Court also noted that very few people watched the recordings and then concluded that the degree of intrusion into the employees’ privacy was limited.

Consequently, the Grand Chamber considered that there was no violation of the employees’ privacy rights.

Although this decision does not directly concern France, it remains very interesting since French regulations (i.e., the Data Protection Act, the General Data Protection Regulations, and the Labor Code) provide:

  • that the monitoring measures implemented by an employer must not impose restrictions on the employees’ rights and freedoms which would neither be proportionate nor justified by the nature of the task to be performed (Article L. 1121-1 of the Labor Code); and
  • that concerned employees and staff representatives must be informed prior to the implementation of a video surveillance system (Article L. 1222-4 of the Labor Code).

According to French case law, any system that is not compliant with the above is considered illicit, and the information collected cannot be used as evidence of an employee’s misconduct [2].

The ECHR’s decision seems to challenge French case law: where French judges treat the absence of prior notification to employees as an insurmountable obstacle, the ECHR considers it merely one of several criteria to be taken into account when assessing the proportionality of the infringement of the employee’s right to privacy.

The question that remains is: what will be the impact of the ECHR’s decision in France?


NOTES

[1] ECHR, Grand Chamber, September 5, 2017, n°61496/08, Bărbulescu v. Romania; ECHR, decision, October 5, 2010, n°420/07, Köpke v. Germany.

[2] See French Supreme Court, June 7, 2006, n°04-43866 ; French Supreme Court, September 20, 2018, n°16-26482.


Copyright 2019 K & L Gates

ARTICLE BY Christine Artus of K&L Gates.
For more on employee privacy rights, see the National Law Review Labor & Employment Law section.

Facing Facts: Do We Sacrifice Security Out of Fear?

Since long before recorded history, humans have displayed physical characteristics as identification tools. Animals do the same to distinguish each other. Crows use facial recognition on humans.  Even plants can tell their siblings from unrelated plants of the same species.

We present our physical forms to the world, and different traits identify us to anyone who is paying attention. So why, now that identity theft is rampant and security is challenged, do we place limits on the easiest and best ID system available? Are we sacrificing future security due to fear of an unlikely dystopia?

In one of the latest cases arising from Illinois’ private right of action under the state’s Biometric Information Privacy Act (BIPA), Rogers v. BNSF Railway Company[1], the court ruled that a railroad hauling hazardous chemicals through major urban areas needed to change, and probably diminish, its security procedures for deciding whom it allows into restricted space. Why? Because the railroad used biometric security to identify authorized entrants, and because BIPA is not preempted by federal rail security regulations, BIPA forces the railroad to obtain the consent of each person authorized to enter restricted space.

The court’s decision, based on the fact that federal rail security rules do not specifically regulate biometrics, is a reasonable reading of the law. However, because BIPA provides no exception for biometric security, it will impede the adoption and effectiveness of biometric-based security systems and force some businesses to settle for weaker security. This case illustrates how BIPA reduces security in our most vulnerable and dangerous places.

I can understand some of the reasons Illinois, Texas, Washington and others want to restrict the unchecked use of biometrics. Gathering physical traits – even public traits like faces and voices – into large searchable databases can lead to overreaching by businesses. The company holding the biometric database may run tests and make decisions based on physical properties.  If your voice shows signs of strain, maybe the price of your insurance should rise to cover the risk that stress puts on your body. But this kind of concern can be addressed by regulating what can be done with biometric readings.

There are also some concerns that may not have the foundation they once had. Two decades ago, many biometric systems stored bio data as direct copies, so that if someone stole the file, that person would have your fingerprint, voiceprint or iris scan.  Now, nearly all of the better biometric systems store bio readings as mathematical templates that can’t be read or reconstructed by systems outside the one that took the sample. So some of the safety concerns are no longer valid.
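For readers curious what a template-based design looks like, here is a rough sketch of the enroll-and-verify flow, with a stubbed-out feature extractor and an arbitrary threshold: only a derived numeric template is stored and compared, never the raw scan. Real systems use purpose-built extractors designed so the original biometric cannot be reconstructed from the stored template; the toy extractor below is for illustration only.

```python
import math

def extract_template(raw_scan: bytes) -> list:
    """Stand-in for a proprietary feature extractor. Real systems derive a
    template (minutiae points, an embedding, etc.), not a copy of the image,
    and design it so the raw biometric cannot be rebuilt from it."""
    return [b / 255 for b in raw_scan[:16]]

def similarity(a: list, b: list) -> float:
    # Cosine similarity between two templates.
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms if norms else 0.0

MATCH_THRESHOLD = 0.98  # arbitrary, for illustration only

def enroll(raw_scan: bytes) -> list:
    # Only the derived template is stored; the raw scan is discarded.
    return extract_template(raw_scan)

def verify(stored_template: list, new_scan: bytes) -> bool:
    return similarity(stored_template, extract_template(new_scan)) >= MATCH_THRESHOLD

stored = enroll(b"enrollment scan bytes .....")
print(verify(stored, b"enrollment scan bytes ....."))   # True: identical sample matches
print(verify(stored, b"a different person's scan.."))   # False: templates diverge below threshold
```

The design choice worth noticing is that a stolen template file is not a usable fingerprint or iris image; it is only meaningful to a system that runs the same matching pipeline.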

I propose a more nuanced thinking about biometric readings. While requiring data subject consent is harmless in many situations, the consent regime is a problem for security systems that use biometric indications of identity. And these systems are generally the best for securing important spaces.  Despite what you see in the movies, 2019 biometric security systems can be nearly impossible to trick into false positive results. If we want to improve our security for critical infrastructure, we should be encouraging biometrics, not throwing hurdles in the path of people choosing to use it.

Illinois should, at the very least, provide an exception to BIPA for physical security systems, even if that exception is limited to critical facilities like nuclear, rail and hazardous shipping restricted spaces. The state can include limits on how the biometric samples are used by the companies taking them, so that only security needs are served.

The field of biometrics may scare some people, but it is a natural outgrowth of how humans have always told each other apart.  If we limit its use for critical security, we are likely to suffer for the decision.

[1] 2019 WL 5699910 (N.D. Ill.).


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on biometric identifier privacy, see the National Law Review Communications, Media & Internet law page.

AI and Evidence: Let’s Start to Worry

When researchers at University of Washington pulled together a clip of a faked speech by President Obama using video segments of the President’s earlier speeches run through artificial intelligence, we watched with a queasy feeling. The combination wasn’t perfect – we could still see some seams and stitches showing – but it was good enough to paint a vision of the future. Soon we would not be able to trust our own eyes and ears.

Now the researchers at University of Washington (who clearly seem intent on ruining our society) have developed the next level of AI visual wizardry – fake people good enough to fool real people. As reported recently in Wired Magazine, the professors embarked on a Turing beauty contest, generating thousands of virtual faces that look like they are alive today, but aren’t.

Using some of the same tech that makes deepfake videos, the Husky professors ran a game for their research subjects called Which Face is Real? In it, subjects were shown a real face and a faked face and asked to choose which was real. “On average, players could identify the reals nearly 60 percent of the time on their first try. The bad news: Even with practice, their performance peaked at around 75 percent accuracy.” Wired observes that the tech will only get better at fooling people “and so will chatbot software that can put false words into fake mouths.”

We should be concerned. As with all digital technologies (and maybe most tech of all types, if you look at it a certain way), the first industrial applications we have seen occur in the sex industry. The sex industry has lax rules (if they exist at all), and the basest instincts of humanity find enough participants to make a new tech financially viable. As reported by the BBC, “96% of these videos are of female celebrities having their likenesses swapped into sexually explicit videos – without their knowledge or consent.”

Of course, given the level of mendacity that populism drags in its fetid wake, we should soon expect to see deepfakes offered on television news as additional support for the “alternate facts” ginned up by politicians, or generated to smear an otherwise blameless accuser of (faked) horrible behavior.  It is hard to believe that certain corners of the press would be able to resist showing the AI-created video.

But, as lawyers, we have an equally valid concern about how this phenomenon plays in court. Clearly, we have rules to authenticate evidence.  New Evidence Rule 902(13) allows authentication of records “generated by an electronic process or system that produces an accurate result” if “shown by the certification of a qualified person” in a particular way. But with the testimony of someone who was wrong, fooled or simply lying about the provenance of an AI-generated video, the false digital file can be easily introduced as evidence.

Some courts, under the “silent witness” theory, have allowed a video to speak for itself. Either way, courts will need to tighten up authentication rules in the coming days of cheap and easy deepfakes being present everywhere. As every litigator knows, no matter what a judge tells a jury, once a video is seen and heard, its effects can dominate a juror’s mind.

I imagine that a new field of video veracity expertise will arise, as one side tries to prove its opponent’s evidence was a deepfake, and the opponent works to establish its evidence as “straight video.” One of the problems in this space is not just that deepfakes will slip their way into court, damning the innocent and exonerating the guilty, but that the simple existence of deepfakes allows unscrupulous (or zealously protective) lawyers to cast doubt on real, honest, naturally created video. A significant part of that new field of video veracity experts will be employed to cast shade on real evidence – “We know that deepfakes are easy to make and this is clearly one of them.” While real direct video that goes to the heart of a matter is often conclusive in establishing a crime, it can be successfully challenged, even when its message is true.  Ask John DeLorean.

So I now place a call to the legal technology community.  As the software to make deepfakes continues to improve, please help us develop parallel technology to identify them. Lawyers and litigants need to be able to clearly authenticate genuine video evidence and to strike deepfaked video as such.  I am certain that somewhere in Langley, Fort Meade, Tel Aviv, Moscow and/or Shanghai both of these technologies have already been mastered and are being used, but we in the non-intelligence world may not know about them for a decade. We need some civilian/commercial help in wrangling the truth out of this increasingly complex and frightening technology.


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more artificial intelligence, see the National Law Review Communications, Media & Internet law page.

China’s TikTok Facing Privacy & Security Scrutiny from U.S. Regulators, Lawmakers

Perhaps it is a welcome reprieve for Facebook, Google and YouTube. A competing video-sharing social media company based in China has drawn the attention of U.S. privacy officials and lawmakers, with a confidential investigation under way and public hearings taking place on Capitol Hill.

Reuters broke the story that the Treasury Department’s Committee on Foreign Investment in the United States (CFIUS) is conducting a national security review of the owners of TikTok, a social media video-sharing platform that claims a young but formidable U.S. audience of 26.5 million users. CFIUS is engaged in the context of TikTok owner ByteDance Technology Co.’s $1 billion acquisition of U.S. social media app Musical.ly two years ago, a deal ByteDance did not present to the agency for review.

Meanwhile, U.S. legislators are concerned about censorship of political content, such as coverage of protests in Hong Kong, and the location and security of personal data the company stores on U.S. citizens.

Sen. Josh Hawley (R-Mo.), Chairman of the Judiciary Committee’s Subcommittee on Crime and Terrorism, invited TikTok and others to testify in Washington this week for hearings titled “How Corporations and Big Tech Leave Our Data Exposed to Criminals, China, and Other Bad Actors.”

While TikTok did not send anyone to testify, the company’s recently appointed General Manager for North America and Australia Vanessa Pappas, formerly with YouTube, sent a letter indicating that it did not store data on U.S. citizens in China. She explained in an open letter on the TikTok website, which reads similarly to that reportedly sent to the subcommittee, that the company is very much aware of its privacy obligations and U.S. regulations and is taking a number of measures to address its obligations.

For nearly eight years, Pappas served as Global Head of Creative Insights, and before that Audience Development, for YouTube. In late 2018 she was a strategic advisor to ByteDance, and in January 2019 she became TikTok’s U.S. General Manager. In July her territory expanded to North America and Australia. Selecting someone who held such a leadership position at YouTube, widely used and familiar to Americans, to lead U.S. operations may serve to calm the nerves of U.S. regulators. But given U.S. tensions with China over trade, security and intellectual property, TikTok and Pappas have a way to go.

Some commentators think Facebook must enjoy watching TikTok getting its turn in the spotlight, especially since TikTok is a growing competitor to Facebook in the younger market. If only briefly, it may divert some of the global attention being paid to the social media giant’s privacy and data collection practices, and the many fines.

It’s clear that TikTok has Facebook’s attention. TikTok, which allows users to create and share short videos with special effects, did a great deal of advertising on Facebook. The ads were clearly targeting the teen demographic and were apparently successful. Facebook CEO Mark Zuckerberg recently said in a speech that mentions of the Hong Kong protests were censored in TikTok feeds in China and in feeds to the United States, something TikTok denied. In a case of unfortunate timing, Zuckerberg this week posted that 100 or so software developers may have improperly accessed Facebook user data.

Since TikTok is largely a short-video sharing application, it competes at some level with YouTube in the youth market. In the third quarter of 2019, 81 percent of U.S. internet users aged 15 to 25 accessed YouTube, according to figures collected by Statista. YouTube boasts more than 126 million monthly active users in the U.S., 100 million more than TikTok.

Potential counterintelligence threat ‘we cannot ignore’

Last month, U.S. Senate Minority Leader Chuck Schumer (D-NY) and Senator Tom Cotton (R-AR) asked the Acting Director of National Intelligence to conduct a national security probe of TikTok and other Chinese companies. They expressed concern about the collection of user data, about whether the Chinese government censors content feeds to the U.S., as Zuckerberg suggested, and about whether foreign influencers were using TikTok to advance their objectives.

“With over 110 million downloads in the U.S. alone,” the Schumer and Cotton letter read, “TikTok is a potential counterintelligence threat we cannot ignore. Given these concerns, we ask that the Intelligence Community conduct an assessment of the national security risks posed by TikTok and other China-based content platforms operating in the U.S. and brief Congress on these findings.” They must be happy with Sen. Hawley’s hearings.

In her statement, TikTok GM Pappas offered the following assurances:

  • U.S. user data is stored in the United States with backup in Singapore — not China.
  • TikTok’s U.S. team does what’s best for the U.S. market, with “the independence to do so.”
  • The company is committed to operating with greater transparency.
  • California-based employees lead TikTok’s moderation efforts for the U.S.
  • TikTok uses machine learning tools and human content reviews.
  • Moderators review content for adherence to U.S. laws.
  • TikTok has a dedicated team focused on cybersecurity and privacy policies.
  • The company conducts internal and external reviews of its security practices.
  • TikTok is forming a committee of users to serve them responsibly.
  • The company has banned political advertising.

Both TikTok and YouTube have been stung by failing to follow the rules when it comes to the youth and children’s market. In February, TikTok agreed to pay $5.7 million to settle the FTC’s case alleging that, through the Musical.ly app, the company illegally collected personal information from children. At the time it was the largest civil penalty ever obtained by the FTC in a case brought under the Children’s Online Privacy Protection Act (COPPA). The law requires websites and online services directed at children to obtain parental consent before collecting personal information from kids under 13. That record was smashed in September, though, when Google and its YouTube subsidiary agreed to pay $170 million to settle allegations brought by the FTC and the New York Attorney General that YouTube was also collecting personal information from children without parental consent. The settlement required Google and YouTube to pay $136 million to the FTC and $34 million to New York.

Quality degrades when near-monopolies exist

What I am watching for here is whether (and how) TikTok and other social media platforms respond to these scandals by competing on privacy.

For example, in its early years Facebook lured users with the promise of privacy. It was eventually successful in defeating competitors that offered little in the way of privacy, such as MySpace, which fell from a high of 75.9 million users to 8 million today. But as Facebook developed a dominant position in social media through acquisition of competitors like Instagram or by amassing data, the quality of its privacy protections degraded. This is to be expected where near-monopolies exist and anticompetitive mergers are allowed to close.

Now perhaps the pendulum is swinging back. As privacy regulation and publicity around privacy transgressions increase, competitive forces may come back into play, forcing social media platforms to compete on the quality of their consumer privacy protections once again. That would be a great development for consumers.

 


© MoginRubin LLP

ARTICLE BY Jennifer M. Oliver of MoginRubin.
Edited by Tom Hagy for MoginRubin LLP.
For more on social media app privacy concerns, see the National Law Review Communications, Media & Internet law page.

CCPA Alert: California Attorney General Releases Draft Regulations

On October 10, 2019, the California Attorney General released the highly anticipated draft regulations for the California Consumer Privacy Act (CCPA). The regulations focus heavily on three main areas: 1) notices to consumers, 2) consumer requests and 3) verification requirements. While the regulations focus heavily on these three topics, they also discuss special rules for minors, non-discrimination standards and other aspects of the CCPA. Despite high hopes, the regulations do not provide the clarity many companies desired. Instead, the regulations layer on new requirements while sprinkling in further ambiguities.

The most surprising new requirements proposed in the regulations include:

  • New disclosure requirements for businesses that collect personal information from more than 4,000,000 consumers
  • Businesses must acknowledge the receipt of consumer requests within 10 days
  • Businesses must honor “Do Not Sell” requests within 15 days and inform any third parties who received the personal information of the request within 90 days
  • Businesses must obtain consumer consent to use personal information for a use not disclosed at the time of collection

The following are additional highlights from each of the three main areas:

1. Notices to consumers

The regulations discuss four types of notices to consumers: notice at the time of collection, notice of the right to opt-out of the sale of personal information, notice of financial incentives and a privacy policy. All required notices must be:

  • Easy to read in plain, straightforward language
  • In a format that draws the consumer’s attention to the notice
  • Accessible to those with disabilities
  • Available in all languages in which the company regularly conducts business

The regulations make clear that it is necessary, but not sufficient, to update your privacy policy to be compliant with CCPA. You must also provide notice to consumers at the time of data collection, which must be visible and accessible before any personal information is collected. The regulations make clear that no personal information may be collected without proper notice. You may use your privacy policy as the notice at the time of collection, but you must link to a specific section of your privacy policy that provides the statutorily required notice.

The regulations specifically provide that for offline collection, businesses could provide a paper version of the notice or post prominent signage. Similar to General Data Protection Regulation (GDPR), a company may only use personal information for the purposes identified at the time of collection. Otherwise, the business must obtain explicit consent to use the personal information for a new purpose.

In addition to the privacy policy requirements in the statute itself, the regulations require more privacy policy disclosures. For example, the business must include instructions on how to verify a consumer request and how to exercise consumer rights through an agent. Further, the privacy policy must identify the following information for each category of personal information collected: the sources of the information, how the information is used and the categories of third parties to whom the information is disclosed. For businesses that collect personal information of 4,000,000 or more consumers, the regulations require additional disclosures related to the number of consumer requests and the average response times. Given the additional nuances of the disclosure requirements, we recommend working with counsel to develop your privacy policy.

If a business provides financial incentives to a consumer for allowing the sale of their personal information, then the business must provide a notice of the financial incentive. The notice must include a description of the incentive, its material terms, instructions on how to opt-in to the incentive, how to withdraw from the incentive and an explanation of why the incentive is permitted by CCPA.

Finally, the regulations state that service providers that collect personal information on behalf of a business may not use that personal information for their own purposes. Instead, they are limited to performing only their obligations under the contract between the business and service provider. The contract between the parties must also include the provisions described in CCPA to ensure that the relationship is a service provider/business relationship, and not a sale of personal information between a business and third party.

2. Consumer requests

Businesses must provide at least two methods for consumers to submit requests (most commonly an online form and a toll-free number), and one of the methods must reflect the manner in which the business primarily interacts with the consumer. In addition, businesses that substantially interact with consumers offline must provide an offline method for consumers to exercise their right to opt-out, such as providing a paper form. The regulations specifically call out that in-person retailers may therefore need three methods: a paper form, an online form and a toll-free number.

The regulations do limit some consumer request rights by prohibiting the disclosure of Social Security numbers, driver’s license numbers, financial account numbers, medical-related identification numbers, passwords, and security questions and answers. Presumably, this is for two reasons: the individual should already know this information and most of these types of information are subject to exemptions from CCPA.

One of the most notable clarifications related to requests is that the 45-day timeline to respond to a consumer request includes any time required to verify the request. Additionally, the regulations introduce a new timeline requirement for consumer requests. Specifically, businesses must confirm receipt of a request within 10 days. Another new requirement is that businesses must respond to opt-out requests within 15 days and must inform all third parties to stop selling the consumer’s information within 90 days. Further, the regulations require that businesses maintain request records logs for 24 months.
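A minimal sketch of how a compliance team might track these clocks appears below. The step names are hypothetical and the day counts are treated as calendar days purely for illustration; how days are actually counted, and which clock applies to a given request, should be confirmed with counsel.

```python
from datetime import date, timedelta

# Response-timing figures described in the draft regulations (illustrative labels).
DEADLINES_DAYS = {
    "confirm_receipt_of_request": 10,
    "honor_do_not_sell_request": 15,
    "substantive_response": 45,            # includes time spent verifying the request
    "notify_third_parties_of_opt_out": 90,
}

def due_dates(request_received: date) -> dict:
    """Map each compliance step to its due date, counting from receipt."""
    return {step: request_received + timedelta(days=n) for step, n in DEADLINES_DAYS.items()}

for step, due in due_dates(date(2020, 1, 15)).items():
    print(f"{step}: {due.isoformat()}")
```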

3. Verification requirements

The most helpful guidance in the regulations relates to verification requests. The regulations provide that a more rigorous verification process should apply to more sensitive information. That is, businesses should not release sensitive information without being highly certain about the identity of the individual requesting the information. Businesses should, where possible, avoid collecting new personal information during the verification process and should instead rely on confirming information already in the business’ possession. Verification can be through a password-protected account provided that consumers re-authenticate themselves. For websites that provision accounts to users, requests must be made through that account. Matching two data points provided by the consumer with data points maintained by the business constitutes verification to a reasonable degree of certainty, and the matching of three data points constitutes a high degree of certainty.
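As a rough illustration of the data-point-matching tiers, here is a minimal sketch that counts how many consumer-supplied data points match records the business already holds. The field names are hypothetical, and a real verification program would weigh the sensitivity and reliability of each data point (and may require more, such as a signed declaration) rather than simply counting matches.

```python
def verification_level(submitted: dict, on_file: dict) -> str:
    """Count data points supplied by the consumer that match existing records
    and map the count to the draft regulations' certainty tiers.
    Field names are hypothetical; this is an illustration, not a compliance tool."""
    matches = sum(
        1
        for field, value in submitted.items()
        if field in on_file and on_file[field] == value
    )
    if matches >= 3:
        return "high degree of certainty"
    if matches == 2:
        return "reasonable degree of certainty"
    return "unverified"

# Hypothetical example: the consumer supplies a name, an email and a last order date.
request = {"name": "Pat Consumer", "email": "pat@example.com", "last_order": "2019-11-02"}
records = {"name": "Pat Consumer", "email": "pat@example.com", "last_order": "2019-10-15"}
print(verification_level(request, records))  # reasonable degree of certainty
```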

The regulations also provide prescriptive steps of what to do in cases where an identity cannot be verified. For example, if a business cannot verify the identity of a person making a request for access, then the business may proceed as if the consumer requested disclosure of only the categories of personal information, as opposed to the content of such personal information. If a business cannot verify a request for deletion, then the business should treat the request as one to opt-out of the sale of personal information.

Next steps

These draft regulations add new wrinkles, and some clarity, to what is required for CCPA compliance. As we move closer to January 1, 2020, companies should continue to focus on preparing compliant disclosures and notices, finalizing their privacy policies and establishing procedures to handle consumer requests. Despite the need to press forward on compliance, the regulations are open to initial public comment until December 6, 2019, with a promise to finalize the regulations in the spring of 2020. We expect further clarity as these draft regulations go through the comment process and privacy professionals, attorneys, businesses and other stakeholders weigh in on their clarity and reasonableness.


Copyright © 2019 Godfrey & Kahn S.C.

For more on CCPA implementation, see the National Law Review Consumer Protection law page.