U.S. Health & Human Services – Office for Civil Rights Issues Guidance Regarding HIPAA Privacy and the Novel Coronavirus

Last month, the Office for Civil Rights (OCR) provided guidance and a reminder to HIPAA covered entities and their business associates about sharing protected health information (PHI) under the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule during an outbreak or emergency, such as the Novel Coronavirus (2019-nCoV) outbreak we are all facing right now.

The OCR guidance focused on sharing patient information in several areas, including treatment; public health activities; disclosures to family, friends, and others involved in an individual’s care; and disclosures to prevent a serious and imminent threat.

The HIPAA Privacy Rule allows a covered entity to disclose PHI to the Centers for Disease Control and Prevention (CDC) or to state or local health departments that are authorized to collect or receive such information, for the purpose of preventing disease and protecting public health.  This includes disclosure to the CDC, and/or state or local health departments, of PHI as needed to report cases of patients exposed to, suspected of having, or confirmed to have the Novel Coronavirus.

The OCR’s message in the guidance document is clear: it emphasizes the balance between protecting the privacy of patients’ PHI and permitting the appropriate uses and disclosures of that information to protect public health. For more information and resources, see the HHS interactive decision tool, which helps covered entities determine how the Privacy Rule applies to disclosures of PHI in emergency situations.


Copyright © 2020 Robinson & Cole LLP. All rights reserved.

For more on HIPAA regulation, see the National Law Review Health Law & Managed Care section.

6 Months Until Brazil’s LGPD Takes Effect – Are You Ready?

In August 2018, Brazil took a significant step by passing comprehensive data protection legislation: the General Data Protection Law (Lei Geral de Proteção de Dados Pessoais – Law No. 13,709/2018, as amended) (LGPD). The substantive part of the legislation takes effect August 16, 2020, leaving fewer than six short months for companies to prepare.

While the LGPD is similar to the EU’s General Data Protection Regulation (GDPR) in many respects, there are key differences that companies must consider when building their LGPD compliance programs.

Application

The LGPD takes a broad, multi-sectoral approach, applying to both public and private organizations and to businesses operating online and offline. The LGPD applies to any legal entity, regardless of its location in the world, that:

  • processes personal data in Brazil;
  • processes personal data that was collected in Brazil; or
  • processes personal data to offer or provide goods or services in Brazil.

Thus, like the GDPR, the LGPD has an extraterritorial impact. A business collecting or processing personal data need not be headquartered, or even have a physical presence, in Brazil for the LGPD to apply.

Enforcement and Penalties

After many debates and delays, the Brazilian Congress approved the creation of the National Data Protection Authority (ANPD), an entity linked to the executive branch of the Brazilian government, which will be tasked with LGPD enforcement and issuing guidance.

Violations of the LGPD may result in fines and other sanctions; however, the fine structure is more lenient than the GDPR’s. Under the LGPD, fines may be levied up to 2% of the Brazil-sourced income of the organization (defined to include any legal entity, its group, or its conglomerate), net of taxes, for the preceding fiscal year, limited to R$ 50,000,000.00 (approximately $11 million) per infraction. There is also the possibility of a daily fine to compel the entity to cease violations. The LGPD assigns to the ANPD the authority to apply sanctions and determine how the fines shall be calculated.
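For readers who want to see the arithmetic, here is a minimal sketch in Python of how the 2% fine and the R$ 50 million per-infraction ceiling described above interact. The revenue figures and the function are hypothetical illustrations only, not legal or compliance guidance.

```python
# Illustrative sketch of the LGPD fine ceiling described above: up to 2% of
# Brazil-sourced revenue (net of taxes) for the preceding fiscal year,
# capped at R$ 50,000,000.00 per infraction. Revenue figures are hypothetical.

LGPD_RATE_CAP = 0.02            # 2% of Brazil-sourced net revenue
LGPD_ABSOLUTE_CAP = 50_000_000  # R$ 50 million ceiling per infraction

def max_lgpd_fine(brazil_net_revenue_brl: float) -> float:
    """Return the maximum fine for a single infraction, in BRL."""
    return min(LGPD_RATE_CAP * brazil_net_revenue_brl, LGPD_ABSOLUTE_CAP)

# R$ 1 billion in Brazil-sourced net revenue: 2% is R$ 20 million, below the cap.
print(max_lgpd_fine(1_000_000_000))   # 20000000.0

# R$ 10 billion: 2% would be R$ 200 million, so the R$ 50 million ceiling applies.
print(max_lgpd_fine(10_000_000_000))  # 50000000
```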

Legal Basis for Processing

Similar to the GDPR, an organization must have a valid legal basis for processing personal data. Personal data may only be processed if the processing satisfies one of the legal bases below:

  • with an individual’s consent;
  • when necessary to fulfill the legitimate interests of the organization or a third party, except when the individual’s fundamental rights and liberties outweigh the organization’s interest;
  • based on a contract with the individual;
  • to comply with a legal or regulatory obligation;
  • for public administration and judicial purposes;
  • for studies by research entities;
  • for the protection of life or physical safety of the individual or a third party;
  • by health professionals or by health entities for health care purposes; or
  • to protect an individual’s credit.

Sensitive personal information (race, ethnicity, health data, etc.) and children’s information may only be processed with the consent of the individual or of a parent or legal guardian, as applicable, or as required by law or public administration.

Individual Rights

Brazilian residents have a number of rights over their personal data. Many of these rights are similar to those found in the GDPR, but the LGPD also introduces additional rights not included in the GDPR.

Established privacy rights, materially included in the GDPR

  • access to personal data
  • deletion of personal data processed with the consent of the individual
  • correction of incomplete, inaccurate, or out-of-date personal data
  • anonymization, blocking, or deletion of unnecessary or excessive data or personal data not processed in compliance with the LGPD
  • portability of personal data to another service or product provider
  • information about the possibility of denying consent and revoking consent

Additional rights provided by the LGPD

  • access to information about entities with whom the organization has shared the individual’s personal data
  • access to information on whether or not the organization holds particular data

Transferring Data Out of Brazil

Organizations may transfer personal data to other countries that provide an adequate level of data protection, although Brazil has not yet identified which countries it considers as providing an adequate level of protection. For all other transfers, organizations may not transfer personal data collected in Brazil out of the country unless the organization has a valid legal method for such transfers. There are two main ways organizations can transfer data internationally:

  • with the specific and express consent of the individual, which must be obtained in advance and separately from consent given for other purposes;
  • through contractual instruments such as binding corporate rules and standard clauses, committing the organization to comply with the LGPD principles, individual rights, and the Brazilian data protection regime.

Governance & Oversight

In addition to the requirements above, under the LGPD, organizations must, in most circumstances:

  • Appoint an officer to “be in charge of the processing of data”; the officer, together with the organization, is jointly liable for remedying any damage, whether individual or collective, caused by processing that violates the personal data protection legislation (there is little specificity around the role or responsibilities of this data processing officer; however, the officer is not required to be located in Brazil);
  • Maintain a record of their processing activities;
  • Perform data protection impact assessments;
  • Design their products and services with privacy as a default;
  • Adopt security, technical, and administrative measures able to protect personal data from unauthorized access, as well as from accidental or unlawful destruction, loss, alteration, or communication (likely similar standards to those established under the Brazilian Internet Act); and
  • Notify government authorities and individuals in the case of a data breach.

Meeting these requirements will likely be a significant administrative burden for organizations, especially as they work to meet varying documentation and governance requirements among the GDPR, the CCPA, and the LGPD. This effort is complicated by the lack of clarity in some of the LGPD’s administrative requirements. For example, while the LGPD requires a record of processing, it does not delineate what that record should include, and while it establishes that privacy impact assessments should be carried out, it does not indicate when such assessments are required.

Final Thoughts

Given that August 2020 is right around the corner, global organizations processing personal data from or in Brazil should consider immediately moving forward with a review of their current data protection program to identify and address any LGPD compliance gaps that exist. As privacy law changes and global compliance requirements are top of mind for many clients with global operations, we will be sure to provide timely updates on the LGPD and any ANPD guidance issued.

Greenberg Traurig is not licensed to practice law in Brazil and does not advise on Brazilian law. Specific LGPD questions and Brazilian legal compliance issues will be referred to lawyers licensed to practice law in Brazil.


©2020 Greenberg Traurig, LLP. All rights reserved.

For more on privacy laws around the globe, see the National Law Review Communications, Media & Internet law section.

D.C. District Court Limits the HIPAA Privacy Rule Requirement for Covered Entities to Provide Access to Records

On January 23, 2020, the D.C. District Court narrowed an individual’s right to request that HIPAA covered entities furnish the individual’s own protected health information (“PHI”) to a third party at the individual’s request, and removed the cap on the fee covered entities may charge to transmit that PHI to a third party.

Specifically, the Court held that individuals may direct only PHI maintained in an electronic format to such third parties, and that HIPAA covered entities and their business associates are not limited to reasonable, cost-based fees for PHI directed to third parties.

The HIPAA Privacy Rule grants individuals rights to access their PHI in a designated record set, and it specifies the data formats and permissible fees that HIPAA covered entities (and their business associates) may charge for such production. See 45 C.F.R. § 164.524. When individuals request copies of their own PHI, the Privacy Rule permits a HIPAA covered entity (or its business associate) to charge a reasonable, cost-based fee that excludes, for example, search and retrieval costs. See 45 C.F.R. § 164.524(c)(4). But when an individual requests that his or her own PHI be sent to a third party, both the required format of that data (electronic or otherwise) and the fees that a covered entity may charge for that service have been the subject of additional OCR guidance over the years—guidance that the D.C. District Court has now, in part, vacated.

The Health Information Technology for Economic and Clinical Health Act of 2009 (HITECH Act) set a statutory cap on the fee that a covered entity may charge an individual for delivering records in an electronic form. 42 U.S.C. § 17935(e)(3). Then, in the 2013 Omnibus Rule, developed pursuant to Administrative Procedure Act rulemaking, the Department of Health and Human Services, Office for Civil Rights (“HHS OCR”) implemented the HITECH Act statutory fee cap in two ways. First, OCR determined that the fee cap applied regardless of the format of the PHI—electronic or otherwise. Second, OCR stated that the fee cap also applied if the individual requested that a third party receive the PHI. 78 Fed. Reg. 5566, 5631 (Jan. 25, 2013). Finally, in its 2016 guidance document on individual access rights, OCR provided additional information regarding these provisions of the HIPAA Privacy Rule. OCR’s FAQ on this topic is available here.

The D.C. District Court struck down OCR’s 2013 and 2016 implementation of the HITECH Act, in part. Specifically, the court held that OCR’s 2013 HIPAA Omnibus Final Rule compelling delivery of protected health information (PHI) to third parties regardless of the records’ format is arbitrary and capricious insofar as it goes beyond the statutory requirements set by Congress; the statute requires only that covered entities, upon an individual’s request, transmit PHI to a third party in electronic form. Additionally, the court held that OCR’s broadening of the fee limitation under 45 C.F.R. § 164.524(c)(4) in the 2016 guidance document titled “Individuals’ Right under HIPAA to Access their Health Information 45 C.F.R. Sec. 164.524” violates the APA, because HHS did not follow the requisite notice-and-comment procedure. Ciox Health, LLC v. Azar, et al., No. 18-cv-0040 (D.D.C. Jan. 23, 2020).

All other requirements for patient access remain the same, including required time frames for the provision of access to individuals, and to third parties designated by such individuals. It remains to be seen, however, how HHS will move forward after these developments from a litigation perspective and how this decision will affect other HHS priorities, such as interoperability and information blocking.


© Polsinelli PC, Polsinelli LLP in California

For more on HIPAA Regulation, see the National Law Review Health Law & Managed Care section.

The Shell Game Played with Your DNA, or 23 and Screwing Me

So you want to know how much Neanderthal is in your genes.

You are curious about the percentage of Serbo-Croatian, Hmong, Sephardim or Ashanti blood that runs through your veins. Or maybe you hope to find a rich great-aunt near death, seeking an heir.

How much is this worth to you?  Two hundred bucks? That makes sense.

But what about other costs:

– like sending your cousin to prison for life (and discovering that you grew up with a serial killer)?

– like all major companies refusing to insure you due to your genetic make-up?

– like ruining your family holidays when you find that your grandfather is not really genetically linked to you and grandma had been playing the field?

– like pharma companies making millions using your genetic code to create new drugs and not crediting you at all (not even with discounts on the drugs created by testing your cells)?

– like finding that your “de-identified” genetic code has been re-identified on the internet, exposing genetic propensity for alcoholism or birth defects that turn your fiancé’s parents against you?

How much are these costs worth to you?

According to former FDA commissioner Peter Pitts, writing in Forbes, “The [private DNA testing] industry’s rapid growth rests on a dangerous delusion that genetic data is kept private. Most people assume this sensitive information simply sits in a secure database, protected from hacks and misuse. Far from it. Genetic-testing companies cannot guarantee privacy. And many are actively selling user data to outside parties.” Including law enforcement.

Nothing in US Federal health law protects the privacy of DNA test subjects at “non-therapeutic” labs like Ancestry or 23andMe. Information gleaned from the DNA can be used for almost anything.  As Pitts said, “Imagine a political campaign exposing a rival’s elevated risk of Alzheimer’s. Or an employer refusing to hire someone because autism runs in her family. Imagine a world where people can have their genomic building blocks held against them. Such abuses represent a profound violation of privacy. That’s an inherent risk in current genetic-testing practices.”

Genetic testing companies quietly, and some would argue without adequate explanation of the facts and harms (which are buried in a thousand words of fine print that most data subjects won’t read), push their customers to allow genetic testing on the samples they provide. Up to 80% of 23andMe customers consent to this activity, likely not knowing that the company plans to make money off the drugs developed from customer DNA. Federal laws require labs like those used by 23andMe for drug development to keep information for more than 10 years, so once they have it, despite the rights to erasure provided by California and the EU, 23andMe can refuse to drop your data from its tests.

Go see the HBO movie starring Oprah Winfrey about the medical exploitation of the cell lines of Henrietta Lacks, or better yet, read the bestselling book it was based on. Observe that an engaging, vivacious woman who couldn’t afford health insurance was farmed for a line of her cancer cells that assisted medical science for decades and made millions of dollars for pharma companies, without any permission from or benefit to the woman whose cells were taken, or any benefit to her family once cancer killed her. Companies secured over 11,000 patents using her cell lines. This is the business model now adopted by 23andMe: take your valuable data under the guise of providing information to you, then quietly turn that data into profitable products for the benefit of their shareholders and executives. Not to mention that 23andMe can change its policies at any time.

As part of selling your genetic secrets to the highest bidder, 23andMe is constantly pushing surveys out to its customers. According to an article in Wired, 23andMe co-founder Anne Wojcicki said, “We specialize in capturing phenotypic data on people longitudinally—on average 300 data points on each customer. That’s the most valuable by far.” Which means they are selling not only your DNA information, but all the other data you give them about your family and lifestyle.

This deep ethical compromise by 23andMe is personal for me, and not because I have sent them any DNA samples – I haven’t and I never would. But because, when questioned publicly about their trustworthiness by me and others close to me, 23andMe has not tried to explain its policies, but has simply attacked the questioners in public. Methinks the amoral vultures doth protest too much.

For example, a couple of years ago, my friend, co-author and privacy expert Theresa Payton noted on a Fox News segment that people who provide DNA information to 23andMe do not know how such data will be used because the industry is not regulated and the company could change its policies any time. 23andMe was prompt and nasty in its response, attacking Ms. Payton on Twitter and probably elsewhere, claiming that the 23andMe privacy policy, as it existed at the time, was proof that no surprises could ever be in store for naïve consumers who gave their most intimate secrets to this company.

[BTW, for the inevitable attacks coming from 23andMe and their army of online protectors, the FTC endorsement guidelines require that if there is a material connection between you and 23andMe, paid or otherwise, you need to clearly and conspicuously disclose it.]

Clearly Ms. Payton was correct and 23andMe’s attacks on her were simply wrong.

Guess what? According to the Wall Street Journal, 23andMe recently sold a $300 million stake in itself to GlaxoSmithKline and, “For 23andMe, using genetic data for drug research ‘was always part of the vision,’ according to Emily Drabant Conley, vice president and head of business development.” So this sneaky path is not even a new tactic. According to the same WSJ story, “23andMe has long wanted to use genetic data for drug development. Initially, it shared its data with drug makers including Pfizer Inc. and Roche Holding AG’s Genentech but wasn’t involved in subsequent drug discovery. It later set up its own research unit but found it lacked the scale required to build a pipeline of medicines. Its partnership with Glaxo is now accelerating those efforts.”

And now 23andMe has licensed an antibody it developed to treat inflammatory diseases to Spanish drug maker Almirall SA. “This is a seminal moment for 23andMe,” said Conley. “We’ve now gone from database to discovery to developing a drug.” In the WSJ, Arthur Caplan, a professor of bioethics at NYU School of Medicine, said, “You get this gigantic valuable treasure chest, and people are going to wind up paying for it twice. All the people who sent in DNA will be paying the same price for any drugs that are developed as anybody else.”

So this adds another ironic dimension to the old television adage, “You aren’t the customer, you are the product.” You pay to provide your DNA – the code to your entire physical existence – to a private company. Why? Possibly because you want information that may affect your healthcare, but in all likelihood you simply intend to use the information for general entertainment and information purposes.

You likely send a swab to the DNA company because you want to learn your ethnic heritage and/or see what interesting things they can tell you about why you have a photic sneeze reflex, if you are genetically inclined to react strongly to caffeine, or if you are a carrier of a loathsome disease (which you could learn for an additional fee). But the company uses the physical cells from your body not only to build databases of commercially valuable information, but to develop drugs and sell them to the pharmaceutical industry. So who is the DNA company’s customer? 23andMe and its competitors take physical specimens from you and sell products made from those specimens to their real customers, the drug companies and the data aggregators.

These DNA processing firms may be the tip of the spear, because huge data companies are coming for your health information. According to the Wall Street Journal,

“Google has struck partnerships with some of the country’s largest hospital systems and most-renowned health-care providers, many of them vast in scope and few of their details previously reported. In just a few years, the company has achieved the ability to view or analyze tens of millions of patient health records in at least three-quarters of U.S. states, according to a Wall Street Journal analysis of contractual agreements. In certain instances, the deals allow Google to access personally identifiable health information without the knowledge of patients or doctors. The company can review complete health records, including names, dates of birth, medications and other ailments, according to people familiar with the deals.”

And medical companies are now tracking patient information with wearables like smartwatches, so that personally captured daily health data is now making its way into these databases.

And, of course, other risk issues affect the people who provide data to such services.  We know through reporting following the capture of the Golden State Killer that certain genetic testing labs (like GEDMatch) have been more free than others with sharing customer DNA with law enforcement without asking for warrants, subpoenas or court orders, and that such data can not only implicate the DNA contributors but their entire families as well. In addition, while DNA testing companies claim to only sell anonymized data, the information may not remain that way.

Linda Avey, co-founder of 23andMe, concedes that nothing is foolproof. She told an online magazine, “It’s a fallacy to think that genomic data can be fully anonymized.” That article showed that researchers have already re-identified people from their publicly available genomic data. For example, one 2013 study matched Y-chromosome data with names posted in places such as genealogy sites. In another study that same year, Harvard Professor Latanya Sweeney re-identified 84 to 97 percent of a sample of Personal Genome Project volunteers by comparing gender, postal code, and date of birth with public records.

A 2015 study re-identified nearly a quarter of a sample of users sequenced by 23andMe who had posted their information to the sharing site openSNP. “The matching risk will continuously increase with the progress of genomic knowledge, which raises serious questions about the genomic privacy of participants in genomic datasets,” concludes the paper in Proceedings on Privacy Enhancing Technologies. “We should also recall that, once an individual’s genomic data is identified, the genomic privacy of all his close family members is also potentially threatened.” DNA data is the ultimate genie: once released from the bottle, it can’t be changed, shielded, or stuffed back inside, and it threatens both the data subject and her entire family for generations.
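For readers curious about the mechanics behind the studies described above, re-identification of this kind generally works by joining a “de-identified” dataset to public records on shared quasi-identifiers such as gender, postal code, and date of birth. The sketch below, in Python with pandas, uses invented records and column names purely for illustration; it is not drawn from any of the studies cited.

```python
# Hypothetical illustration of quasi-identifier linkage: joining a
# "de-identified" genomic dataset to a public directory on gender, postal
# code, and date of birth. All records below are invented.
import pandas as pd

deidentified = pd.DataFrame({
    "sample_id": ["S-001", "S-002"],
    "gender":    ["F", "M"],
    "zip":       ["02138", "60614"],
    "dob":       ["1961-07-15", "1984-03-02"],
    "variant_of_interest": [True, False],
})

public_records = pd.DataFrame({
    "name":   ["Jane Roe", "John Doe"],
    "gender": ["F", "M"],
    "zip":    ["02138", "60614"],
    "dob":    ["1961-07-15", "1984-03-02"],
})

# An inner join on the three quasi-identifiers re-attaches names to the
# "anonymous" samples wherever the combination is unique in both datasets.
reidentified = deidentified.merge(public_records, on=["gender", "zip", "dob"])
print(reidentified[["sample_id", "name", "variant_of_interest"]])
```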

And let us not forget the most basic risk involved in gathering important data. This article has focused on how 23andMe and other private DNA companies have chosen to use the data – probably in ways that their DNA contributing customers did not truly understand – to turn a profit for investors.  But collecting such data could have unintended consequences.  It can be lost to hackers, spies or others who might steal it for their own purposes.  It can be exposed in government investigations through subpoenas or court orders that a company is incapable of resisting.

So people planning to plaster their deepest internal and family secrets into private company databases should consider the risks that the private DNA mills don’t want you to think about.


Copyright © 2020 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on health data privacy, see the National Law Review Health Law & Managed Care section.

My Business Is In Arizona, Why Do I Care About California Privacy Laws? How the CCPA Impacts Arizona Businesses

Arizona businesses are not typically concerned about complying with the newest California laws going into effect. However, one California law in particular—the CCPA or California Consumer Privacy Act—has a scope that extends far beyond California’s border with Arizona. Indeed, businesses all over the world that have customers or operations in California must now be mindful of whether the CCPA applies to them and, if so, whether they are in compliance.

What is the CCPA?

The CCPA is a comprehensive data privacy law enacted by the California Legislature that became effective on January 1, 2020. It was passed in 2018 and has undergone a series of substantive amendments over the past year and a few months.

Generally, the CCPA gives California consumers a series of rights with respect to how companies acquire, store, use, and sell their personal data. The CCPA’s combination of mandatory disclosures and notices, rights of access, rights of deletion, statutory fines, and threat of civil lawsuits is a significant move towards empowering consumers to control their personal data.

Many California businesses are scrambling to implement the necessary policies and procedures to comply with the CCPA in 2020. In fact, you may have begun to notice privacy notices on the primary landing page for national businesses. However, Arizona businesses cannot assume that the CCPA stops at the Arizona border.

Does the CCPA apply to my business in Arizona?

The CCPA has specific criteria for whether a company is considered a California business. The CCPA applies to for-profit businesses “doing business in the State of California” that also:

  • Have annual gross revenues in excess of twenty-five million dollars; or
  • Annually buy, receive, sell, or share the personal information of 50,000 or more California consumers, households, or devices; or
  • Derive 50% or more of annual revenue from selling California consumers’ personal information.

The CCPA does not include an express definition of what it means to be “doing business” in California. While it will take courts some time to interpret the scope of the CCPA, any business with significant sales, employees, property, or operations in California should consider whether the CCPA might apply to them.
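As a rough illustration of the threshold test described above (and only that: “doing business in California” and each statutory term require their own legal analysis), the screen can be sketched in a few lines of Python. The function name, inputs, and simplifications are hypothetical and are not a compliance determination.

```python
# Rough, non-authoritative sketch of the CCPA applicability screen described
# above. Real applicability turns on statutory definitions and facts that
# this simplification does not capture.

def ccpa_thresholds_met(
    does_business_in_california: bool,
    annual_gross_revenue_usd: float,
    ca_consumers_households_or_devices_per_year: int,
    pct_revenue_from_selling_ca_personal_info: float,
) -> bool:
    """Return True if the business appears to meet the basic CCPA screen."""
    if not does_business_in_california:
        return False
    return (
        annual_gross_revenue_usd > 25_000_000
        or ca_consumers_households_or_devices_per_year >= 50_000
        or pct_revenue_from_selling_ca_personal_info >= 50.0
    )

# Example: an Arizona retailer doing business in California with $30 million
# in annual gross revenue would meet the revenue prong.
print(ccpa_thresholds_met(True, 30_000_000, 12_000, 0.0))  # True
```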

How do I know if I am collecting a consumer’s personal information?

“Personal information” under the CCPA generally includes any information that “identifies, relates to, describes, is capable of being associated with, or could reasonably be linked” with a specific consumer. As the legalese of this definition implies, “personal information” includes a wide swath of data that your company may already be collecting about consumers.

There is no doubt that personal identifiers like name, address, email addresses, social security numbers, etc. are personal information. But information like biometric data, search and browsing activity, IP addresses, purchase history, and professional or employment-related information are all expressly included under the CCPA’s definition. Moreover, the broad nature of the CCPA means that other categories of data collected—although not expressly identified by the CCPA—may be deemed to be “personal information” in an enforcement action.

What can I do to comply with the CCPA?

If the CCPA might apply to your company, now is the time to take action. Compliance will necessarily be different for each business depending on the nature of its operation and the use(s) of personal information. However, there are some common steps that each company can take.

The first step towards compliance with the CCPA is understanding what data your company collects, how it is stored, whether it is transferred or sold, and whether any vendors or subsidiaries also have access to the data. Next, an organization should prepare a privacy notice that complies with the CCPA to post on its website and include in its app interface.

The most substantial step in complying with the CCPA is to develop and implement policies and procedures that help the company conform to the various provisions of the CCPA. The policies will need to provide up-front disclosures to consumers, allow consumers to opt-out, handle consumer requests to produce or delete personal information, and guard against any perceived discrimination against consumers that exercise rights under the CCPA.

The company will also need to review contracts with third-party service providers and vendors to ensure it can comply with the CCPA. For example, if a third-party cloud service will be storing personal information, the company will want to verify that its contract allows it to assemble and produce that information within statutory deadlines if requested by a consumer.

At least you have some time!

The good news is that the CCPA includes a grace period until July 1, 2020 before the California Attorney General can bring enforcement actions. Thus, Arizona businesses that may have ignored the quirky California privacy law to this point have a window to bring their operations into compliance. However, Arizona companies that may need to comply with the CCPA should consult with counsel as soon as possible to begin the process. The attorneys at Ryley Carlock & Applewhite are ready to help you analyze your risk and comply with the CCPA.


Copyright © 2020 Ryley Carlock & Applewhite. A Professional Association. All Rights Reserved.

Learn more about the California Consumer Privacy Act (CCPA) on the National Law Review Communications, Media & Internet Law page.

Florida’s Legislature to Consider Consumer Data Privacy Bill Akin to California’s CCPA

Florida lawmakers have proposed data privacy legislation that, if adopted, would impose significant new obligations on companies offering a website or online service to Florida residents, including allowing consumers to “opt out” of the sale of their personal information. While the bill (SB 1670 and HB 963) does not go as far as did the recent California Consumer Privacy Act, its adoption would mark a significant increase in Florida residents’ privacy rights. Companies that have an online presence in Florida should study the proposed legislation carefully. Our initial take on the proposed legislation appears below.

The proposed legislation requires an “operator” of a website or online service to provide consumers with (i) a “notice” regarding the personal information collected from consumers on the operator’s website or through the service and (ii) an opportunity to “opt out” of the sale of certain of a consumer’s personal information, known as “covered information” in the draft statute.

The “notice” would need to include several items. Most importantly, the operator would have to disclose “the categories of covered information that the operator collects through its website or online service about consumers who use [them] … and the categories of third parties with whom the operator may share such covered information.” The notice would also have to disclose “a description of the process, if applicable, for a consumer who uses or visits the website or online service to review and request changes to any of his or her covered information. . . .” The bill does not otherwise list when this “process” would be “applicable,” and it nowhere else appears to create for consumers any right to review and request changes.

While the draft legislation obligates operators to stop selling data of a consumer who submits a verified request to do so, it does not appear to require a description of those rights in the “notice.” That may just be an oversight in drafting. In any event, the bill is notable as it would be the first Florida law to require an online privacy notice. Further, a “sale” is defined as an exchange of covered information “for monetary consideration,” which is narrower than its CCPA counterpart, and contains exceptions for disclosures to an entity that merely processes information for the operator.

There are also significant questions about which entities would be subject to the proposed law. An “operator” is defined as a person who owns or operates a website or online service for commercial purposes, collects and maintains covered information from Florida residents, and purposefully directs activities toward the state. That “and” is assumed, as the proposed bill does not state whether those three requirements are conjunctive or disjunctive.

Excluded from the definition of “operator” is a financial institution (such as a bank or insurance company) already subject to the Gramm-Leach-Bliley Act, and an entity subject to the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Outside of the definition of “operator,” the proposed legislation appears to further restrict the companies to which it would apply, to eliminate its application to smaller companies based in Florida, described as entities “located in this state,” whose “revenue is derived primarily from a source other than the sale or lease of goods, services, or credit on websites or online services,” and “whose website or online service has fewer than 20,000 unique visitors per year.” Again, that “and” is assumed as the bill does not specify “and” or “or.”

Lastly, the Department of Legal Affairs appears to be vested with authority to enforce the law. The proposed legislation states explicitly that it does not create a private right of action, although it also says that it is in addition to any other remedies provided by law.

The proposed legislation is part of an anticipated wave of privacy legislation under consideration across the country. California’s CCPA took effect in January and imposes significant obligations on covered businesses. Last year, Nevada passed privacy legislation that bears a striking resemblance to the proposed Florida legislation. Other privacy legislation has been proposed in Massachusetts and other jurisdictions.


©2011-2020 Carlton Fields, P.A.

For more on new and developing legislation in Florida and elsewhere, see the National Law Review Election Law & Legislative News section.

2020 Predictions for Data Businesses

It’s a new year, a new decade, and a new experience for me writing for the HeyDataData blog.  My colleagues asked for input and discussion around 2020 predictions for technology and data protection.  Dom has already written about a few.  I’ve picked out four:

  1. Experiential retail

Retailers will offer a technology-infused shopping experience in their stores.  Even today, without using my phone, I can experience a retailer’s products and services with store-provided technology, without needing to open an app.  I can try on a pair of glasses or wear a new lipstick color just by putting my face in front of a screen.  We will see how creative companies can be in luring us to the store by offering an experience that we have to try.  This experiential retail technology is a bit ahead of the Amazon checkout technology, but passive payment methods are coming, too.  [But if we still don’t want to go to the store, companies will continue to offer us more mobile ordering—for pick-up or delivery.]

  2. Consumers will still tell companies their birthdays and provide emails for coupons (well, maybe not in California)

We will see whether the California Consumer Privacy Act (CCPA) will meaningfully change consumers’ perceptions about giving their information to companies—usually lured by financial incentives (like loyalty programs, coupons, etc., or a free app).  I tend to think that we will continue to download apps and give information if it is convenient or cheaper for us, and that companies will think it is good for business (and their shareholders, if applicable) to continue to engage with their consumers.  This is an extension of number 1, really, because embedding technology in the retail experience will allow companies to offer new (hopefully better) products (and gather data they may find a use for later. . . ).  Even though I think consumers will still give up their data, I also think consumer privacy advocates will try harder to shift their perceptions (enter CCPA 2.0 and others).

  3. More “wearables” will hit the market

We already have “smart” refrigerators, watches, TVs, garage doors, vacuum cleaners, stationary bikes and treadmills.  Will we see other, traditionally disconnected items connect?  I think yes.  Clothes, shoes, purses, backpacks, and other “wearables” are coming.

  4. Computers will help with decisions

We will see more technology-aided (trained with lots of data) decision making.  Just yesterday, one of the most-read stories described how an artificial intelligence system detected cancer, matching or outperforming radiologists who looked at the same images.  Over the college football bowl season, I saw countless commercials for insurance companies showing how their policyholders can lower their rates if they let an app track how they are driving.  More applications will continue to pop up.

Those are my predictions.  And I have one wish to go with them.  Those kinds of advances create tension among open innovation, ethics, and the law.  I do not predict that we will solve this in 2020, but my #2020vision is that we will make progress.


Copyright © 2020 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on data use in retail, health, and more, see the National Law Review Communications, Media & Internet law page.

Employee Video Surveillance: Position of the European Court of Human Rights

On October 17, 2019, the European Court of Human Rights (ECHR) approved the use of a closed-circuit television (“CCTV”) surveillance system to monitor supermarket cashiers without informing those employees that it had been installed.

In this case, a Spanish supermarket manager decided to install cameras in the supermarket because of suspected thefts. He installed (i) visible cameras pointing at the supermarket’s entrance and exit, of which he had informed the staff, and (ii) hidden cameras pointing at the cash registers, of which neither employees nor staff representatives had been informed.

The hidden cameras revealed that thefts were being committed by several employees at the cash registers. The employees concerned were dismissed. Some of them brought an action before the Spanish labor courts, arguing that the use of CCTV without their prior knowledge was a breach of their right to privacy and that such evidence could not be admitted in the dismissal procedure.

Like French law, Spanish law requires the person responsible for a CCTV system to inform the concerned employees of the existence, purpose, and methods of the collection of their personal data, prior to implementation of the system.

The case was brought before the ECHR, which gave a first decision on January 9, 2018, concluding that Article 8 of the European Convention for the Protection of Human Rights, relating to the right to privacy, had been breached. The case was then referred to the Grand Chamber.

The issue raised was how to strike a proportionate balance between (i) the reasons justifying the implementation of a CCTV system (i.e., the right of the employer to ensure the protection of its property and the proper functioning of its business) and (ii) the employees’ right to privacy.

The ECHR stated that “domestic courts must ensure that the introduction by an employer of surveillance measures that infringe the employees’ privacy rights is proportionate and is implemented with adequate and sufficient safeguards against abuse”, referring to its previous case law [1].

The ECHR considered that in order to ensure the proportionality of CCTV measures in the workplace, domestic courts should take into account the following factors when balancing the interests involved:

  1. Has the employee been informed of the possibility of being subject to a video surveillance measure?
  2. What is the extent of the video surveillance and what is the degree of intrusion into the employee’s private life?
  3. Has the use of video surveillance been justified by the employer on legitimate grounds?
  4. Was there an alternative surveillance system based on less intrusive means and measures available to the employer?
  5. What were the consequences of the surveillance for the employee who was subject to it?
  6. Was the employee subject to the video surveillance measure provided with adequate safeguards?

Therefore, prior notification to the employees is only one of the criteria taken into account in the balance of interests.

In this particular case, the ECHR approved the domestic courts’ examination of the proportionality of the video surveillance measure. The judges decided that, despite the lack of prior notification to the employees, the CCTV was (i) justified by suspicions of theft, (ii) limited in space (only a few checkout counters), and (iii) limited in time (10 days). The Court also noted that very few people watched the recordings, and it concluded that the degree of intrusion into the employees’ privacy was limited.

Consequently, the Grand Chamber considered that there was no violation of the employees’ privacy rights.

Although this decision does not directly concern France, it remains very interesting since French regulations (i.e., the Data Protection Act, the General Data Protection Regulation, and the Labor Code) provide:

  • that the monitoring measures implemented by an employer must not impose restrictions on the employees’ rights and freedoms that are not proportionate and justified by the nature of the task to be performed (Article L. 1121-1 of the Labor Code); and
  • that concerned employees and staff representatives must be informed prior to the implementation of a video surveillance system (Article L. 1222-4 of the Labor Code).

According to French case law, any system that is not compliant with the above is considered illicit, and the information collected cannot be used as evidence of an employee’s misconduct [2].

The ECHR’s decision seems to challenge French case law: where French judges treat the absence of prior notification to employees as a decisive obstacle, the ECHR considers it merely one of several criteria to be taken into account in assessing the proportionality of the infringement of the employee’s right to privacy.

The question that remains is: what will be the impact of the ECHR’s decision in France?


NOTES

[1] ECHR, Grand Chamber, September 5, 2017, n°61496/08, Bărbulescu v. Romania; ECHR, decision, October 5, 2010, n°420/07, Köpke v. Germany.

[2] See French Supreme Court, June 7, 2006, n°04-43866; French Supreme Court, September 20, 2018, n°16-26482.


Copyright 2019 K & L Gates

ARTICLE BY Christine Artus of K&L Gates.
For more on employee privacy rights, see the National Law Review Labor & Employment Law section.

Facing Facts: Do We Sacrifice Security Out of Fear?

Since the dawn of time, humans have displayed physical characteristics as identification tools. Animals do the same to distinguish each other. Crows use facial recognition on humans.  Even plants can tell their siblings from unrelated plants of the same species.

We present our physical forms to the world, and different traits identify us to anyone who is paying attention. So why, now that identity theft is rampant and security is challenged, do we place limits on the easiest and best ID system available? Are we sacrificing future security due to fear of an unlikely dystopia?

In one of the latest cases rolling out of Illinois’ private right of action under the state’s Biometric Information Privacy Act (BIPA), Rogers v. BNSF Railway Company[1], the court ruled that a railroad hauling hazardous chemicals through major urban areas needed to change, and probably diminish, its security procedures for deciding whom it allows into restricted space. Why? Because the railroad used biometric security to identify authorized entrants, because BIPA requires the railroad to obtain the consent of each person authorized to enter restricted space, and because BIPA is not preempted by federal rail security regulations.

The court’s decision, based on the fact that federal rail security rules do not specifically regulate biometrics, is a reasonable reading of the law. However, with BIPA not providing exceptions for biometric security, BIPA will impede the adoption and effectiveness of biometric-based security systems, and force some businesses to settle for weaker security. This case illustrates how BIPA reduces security in our most vulnerable and dangerous places.

I can understand some of the reasons Illinois, Texas, Washington, and others want to restrict the unchecked use of biometrics. Gathering physical traits – even public traits like faces and voices – into large searchable databases can lead to overreaching by businesses. The company holding the biometric database may run tests and make decisions based on physical properties.  If your voice shows signs of strain, maybe the price of your insurance should rise to cover the risk that stress puts on your body. But this kind of concern can be addressed by regulating what can be done with biometric readings.

There are also some concerns that may not have the foundation they once had. Two decades ago, many biometric systems stored biometric data as direct copies, so that if someone stole the file, that person would have your fingerprint, voiceprint, or iris scan.  Now, nearly all of the better biometric systems store readings as mathematical templates, derived by the system’s own algorithms, that can’t be read by computers outside the system that took the sample. So some of the safety concerns are no longer valid.
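To illustrate the design difference described above, here is a minimal, hypothetical sketch of template-based matching in Python: the system stores only a derived numeric template and compares fresh samples against it, never retaining the raw scan. The extract_features function is a stand-in for a vendor’s proprietary feature extractor and is invented for illustration.

```python
# Hypothetical sketch of template-based biometric matching: only a derived
# feature vector (template) is stored, not the raw scan, and matching is done
# by comparing a fresh sample's template against the enrolled template.
import numpy as np

def extract_features(raw_sample: np.ndarray) -> np.ndarray:
    # Stand-in for a proprietary, system-specific feature extractor; real
    # systems derive templates that cannot be reversed into the raw scan.
    vec = raw_sample.astype(float).flatten()
    return vec / (np.linalg.norm(vec) + 1e-9)

def enroll(raw_sample: np.ndarray) -> np.ndarray:
    """Store only the template; the raw scan is discarded."""
    return extract_features(raw_sample)

def matches(enrolled_template: np.ndarray, raw_probe: np.ndarray,
            threshold: float = 0.95) -> bool:
    """The cosine similarity of the two templates must clear the threshold."""
    probe_template = extract_features(raw_probe)
    return float(np.dot(enrolled_template, probe_template)) >= threshold

# Example with synthetic "scans" (random arrays standing in for sensor data).
rng = np.random.default_rng(0)
enrollment_scan = rng.random((8, 8))
door_scan = enrollment_scan + rng.normal(0, 0.01, (8, 8))  # same person, noisy
template = enroll(enrollment_scan)
print(matches(template, door_scan))           # True: near-identical template
print(matches(template, rng.random((8, 8))))  # likely False: different person
```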

I propose more nuanced thinking about biometric readings. While requiring data subject consent is harmless in many situations, the consent regime is a problem for security systems that use biometric indications of identity. And these systems are generally the best for securing important spaces.  Despite what you see in the movies, 2019 biometric security systems can be nearly impossible to trick into false positive results. If we want to improve our security for critical infrastructure, we should be encouraging biometrics, not throwing hurdles in the path of people choosing to use it.

Illinois should, at the very least, provide an exception to BIPA for physical security systems, even if that exception is limited to critical facilities like nuclear, rail and hazardous shipping restricted spaces. The state can include limits on how the biometric samples are used by the companies taking them, so that only security needs are served.

The field of biometrics may scare some people, but it is a natural outgrowth of how humans have always told each other apart.  If we limit its use for critical security, we are likely to suffer from the decision.

[1] 2019 WL 5699910 (N.D. Ill.).


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on biometric identifier privacy, see the National Law Review Communications, Media & Internet law page.

AI and Evidence: Let’s Start to Worry

When researchers at the University of Washington pulled together a clip of a faked speech by President Obama using video segments of the President’s earlier speeches run through artificial intelligence, we watched with a queasy feeling. The combination wasn’t perfect – we could still see some seams and stitches showing – but it was good enough to paint a vision of the future. Soon we would not be able to trust our own eyes and ears.

Now the researchers at the University of Washington (who clearly seem intent on ruining our society) have developed the next level of AI visual wizardry – fake people good enough to fool real people. As reported recently in Wired Magazine, the professors embarked on a Turing beauty contest, generating thousands of virtual faces that look like they are alive today, but aren’t.

Using some of the same tech that makes deepfake videos, the Husky professors ran a game for their research subjects called Which Face is Real? In it, subjects were shown a real face and a faked face and asked to choose which was real. “On average, players could identify the reals nearly 60 percent of the time on their first try. The bad news: Even with practice, their performance peaked at around 75 percent accuracy.” Wired observes that the tech will only get better at fooling people “and so will chatbot software that can put false words into fake mouths.”

We should be concerned. As with all digital technologies (and maybe most tech of all types, if you look at it a certain way), the first industrial applications we have seen occur in the sex industry. The sex industry has lax rules (if they exist at all), and the basest instincts of humanity find enough participants to make a new tech financially viable. As reported by the BBC, “96% of these videos are of female celebrities having their likenesses swapped into sexually explicit videos – without their knowledge or consent.”

Of course, given the level of mendacity that populism drags in its fetid wake, we should expect to see examples of deepfakes offered on television news soon as additional support of the “alternate facts” ginned up by politicians, or generated to smear an otherwise blameless accuser of (faked) horrible behavior.  It is hard to believe that certain corners of the press would be able to resist showing the AI created video.

But, as lawyers, we have an equally valid concern about how this phenomenon plays in court. Clearly, we have rules to authenticate evidence.  New Evidence Rule 902(13) allows authentication of records “generated by an electronic process or system that produces an accurate result” if “shown by the certification of a qualified person” in a particular way. But with the testimony of someone who was wrong, fooled, or simply lying about the provenance of an AI-generated video, a false digital file can easily be introduced as evidence.

Some courts, under the “silent witness” theory, have allowed a video to speak for itself. Either way, courts will need to tighten up authentication rules in the coming days of cheap and easy deepfakes being present everywhere. As every litigator knows, no matter what a judge tells a jury, once a video is seen and heard, its effects can dominate a juror’s mind.

I imagine that a new field of video veracity expertise will arise, as one side tries to prove its opponent’s evidence was a deepfake, and the opponent works to establish its evidence as “straight video.” One of the problems in this space is not just that deepfakes will slip their way into court, damning the innocent and exonerating the guilty, but that the simple existence of deepfakes allows unscrupulous (or zealously protective) lawyers to cast doubt on real, honest, naturally created video. A significant part of that new field of video veracity experts will be employed to cast shade on real evidence – “We know that deepfakes are easy to make and this is clearly one of them.” While real direct video that goes to the heart of a matter is often conclusive in establishing a crime, it can be successfully challenged, even when its message is true.  Ask John DeLorean.

So I now place a call to the legal technology community.  As the software to make deepfakes continues to improve, please help us develop parallel technology to identify them. Lawyers and litigants need to be able to clearly authenticate genuine video evidence and to expose and strike deepfaked video as such.  I am certain that somewhere in Langley, Fort Meade, Tel Aviv, Moscow and/or Shanghai both of these technologies have already been mastered and are being used, but we in the non-intelligence world may not know about them for a decade. We need some civilian/commercial help in wrangling the truth out of this increasingly complex and frightening technology.


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more artificial intelligence, see the National Law Review Communications, Media & Internet law page.