U.S. Health & Human Services – Office for Civil Rights Issues Guidance Regarding HIPAA Privacy and Novel Coronavirus

The Office for Civil Rights (OCR) last month provided guidance, and a reminder, to HIPAA covered entities and their business associates regarding the sharing of protected health information (PHI) under the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule during an outbreak or emergency such as the current Novel Coronavirus (2019-nCoV) outbreak.

The OCR guidance focused on sharing patient information in several areas, including: treatment, public health activities, disclosures to family, friends, and others involved in an individual’s care, and disclosures to prevent a serious and imminent threat.

The HIPAA Privacy Rule allows a covered entity to disclose PHI to the Centers for Disease Control and Prevention (CDC) or to state or local health departments that are authorized to collect or receive such information, for the purpose of preventing disease and protecting public health.  This would include disclosure to the CDC, and/or state or local health departments, of PHI as needed to report prospective cases of patients exposed to, or suspected or confirmed to have, Novel Coronavirus.

The OCR message in the guidance document is clear: it emphasizes the balance between protecting the privacy of patients' PHI and permitting the appropriate uses and disclosures of such information to protect the public health. For more information and resources, see the HHS interactive decision tool, which helps covered entities determine how the Privacy Rule applies to disclosures of PHI in emergency situations.


Copyright © 2020 Robinson & Cole LLP. All rights reserved.

For more on HIPAA regulation, see the National Law Review Health Law & Managed Care section.

6 Months Until Brazil’s LGPD Takes Effect – Are You Ready?

In August 2018, Brazil took a significant step by passing comprehensive data protection legislation: the General Data Protection Law (Lei Geral de Proteção de Dados Pessoais – Law No. 13,709/2018, as amended) (LGPD). The substantive part of the legislation takes effect August 16, 2020, leaving fewer than six months for companies to prepare.

While the LGPD is similar to the EU’s General Data Protection Regulation (GDPR) in many respects, there are key differences that companies must consider when building an LGPD compliance program.

Application

The LGPD takes a broad, multi-sectoral approach, applying to both public and private organizations and to businesses operating online and offline. The LGPD applies to any legal entity, regardless of where in the world it is located, that:

  • processes personal data in Brazil;
  • processes personal data that was collected in Brazil; or
  • processes personal data to offer or provide goods or services in Brazil.

Thus, like the GDPR, the LGPD has an extraterritorial impact. A business collecting or processing personal data need not be headquartered, or even have a physical presence, in Brazil for the LGPD to apply.

Enforcement and Penalties

After many debates and delays, the Brazilian Congress approved the creation of the National Data Protection Authority (ANPD), an entity linked to the executive branch of the Brazilian government, which will be tasked with LGPD enforcement and issuing guidance.

Violations of the LGPD may result in fines and other sanctions; however, the fine structure is more lenient than the GDPR’s. Under the LGPD, fines may be levied up to 2% of the Brazil-sourced revenue of the organization (the organization being any legal entity, its group, or conglomerate), net of taxes, for the preceding fiscal year, capped at R$ 50,000,000.00 (approximately US$11 million) per infraction. There is also the possibility of a daily fine to compel an entity to cease violations. The LGPD assigns to the ANPD the authority to apply sanctions and to determine how fines are calculated.
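
As a rough illustration, the fine ceiling described above reduces to a simple calculation. The sketch below is for orientation only, not legal advice, and the revenue figures are hypothetical:

```python
# Sketch of the LGPD fine ceiling described above: up to 2% of
# Brazil-sourced revenue (net of taxes) for the preceding fiscal year,
# capped at R$50,000,000.00 per infraction.

BRL_CAP = 50_000_000.00


def lgpd_max_fine(brazil_revenue_net_of_taxes: float) -> float:
    """Return the maximum fine for a single infraction, in BRL."""
    return min(0.02 * brazil_revenue_net_of_taxes, BRL_CAP)


print(lgpd_max_fine(1_000_000_000))  # 20000000.0 (2% is below the cap)
print(lgpd_max_fine(5_000_000_000))  # 50000000.0 (2% would be R$100M; the cap applies)
```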

Legal Basis for Processing

Similar to the GDPR, an organization must have a valid legal basis for processing personal data. Personal data may be processed only if the processing meets one of the legal bases below:

  • with an individual’s consent;
  • when necessary to fulfill the legitimate interests of the organization or a third party, except when the individual’s fundamental rights and liberties outweigh the organization’s interest;
  • based on a contract with the individual;
  • to comply with a legal or regulatory obligation;
  • for the execution of public policies by the public administration;
  • for the exercise of rights in judicial, administrative, or arbitral proceedings;
  • for studies by research entities;
  • for the protection of life or physical safety of the individual or a third party;
  • by health professionals or by health entities for health care purposes; or
  • to protect an individual’s credit.

Sensitive personal information (race, ethnicity, health data, etc.) and children’s information may only be processed with the individual or a parent or legal guardian’s consent, as applicable, or as required by law or public administration.

Individual Rights

Brazilian residents have a number of rights over their personal data. Many of these rights are similar to those found in the GDPR, but the LGPD also introduces additional rights not included in the GDPR.

Established privacy rights, materially included in the GDPR

  • access to personal data
  • deletion of personal data processed with the consent of the individual
  • correction of incomplete, inaccurate, or out-of-date personal data
  • anonymization, blocking, or deletion of unnecessary or excessive data or personal data not processed in compliance with the LGPD
  • portability of personal data to another service or product provider
  • information about the possibility of denying consent and revoking consent

Additional rights provided by the LGPD

  • access to information about entities with whom the organization has shared the individual’s personal data
  • access to information on whether or not the organization holds particular data

Transferring Data Out of Brazil

Organizations may transfer personal data to other countries that provide an adequate level of data protection, although Brazil has not yet identified which countries it considers as providing an adequate level of protection. For all other transfers, organizations may not transfer personal data collected in Brazil out of the country unless the organization has a valid legal method for such transfers. There are two main ways organizations can transfer data internationally:

  • with the specific and express consent of the individual, given in advance and separately from other consent requests;
  • through contractual instruments such as binding corporate rules and standard clauses, committing the organization to comply with the LGPD principles, individual rights, and the Brazilian data protection regime.

Governance & Oversight

In addition to the requirements above, under the LGPD, organizations must, in most circumstances:

  • Appoint an officer to “be in charge of the processing of data,” who, together with the organization, is jointly liable for remedying any damage, whether individual or collective, caused in violation of the personal data protection legislation (the LGPD offers little specificity about the role or responsibilities of this officer; notably, the officer is not required to be located in Brazil);
  • Maintain a record of their processing activities;
  • Perform data protection impact assessments;
  • Design their products and services with privacy as a default;
  • Adopt security, technical, and administrative measures able to protect personal data from unauthorized access, as well as accidental or unlawful destruction, loss, alteration, communication (likely similar standards to those established under the Brazilian Internet Act); and
  • Notify government authorities and individuals in the case of a data breach.

Meeting these requirements will likely be a significant administrative burden for organizations, especially as they work to meet varying documentation and governance requirements between the GDPR, CCPA, and LGPD. This effort is made more complicated by the lack of clarity in some of the LGPD administrative requirements. For example, while the LGPD requires a record of processing, it does not delineate what should be included in the document, and while it establishes that privacy impact assessments should be carried out, it does not indicate when such assessments are required.

Final Thoughts

Given August 2020 is right around the corner, global organizations processing personal data from or in Brazil should consider immediately moving forward with a review of their current data protection programs to identify and address any LGPD compliance gaps. As privacy law changes and global compliance requirements are top of mind for many clients with global operations, we will be sure to provide timely updates on the LGPD and any ANPD guidance issued.

Greenberg Traurig is not licensed to practice law in Brazil and does not advise on Brazilian law. Specific LGPD questions and Brazilian legal compliance issues will be referred to lawyers licensed to practice law in Brazil.


©2020 Greenberg Traurig, LLP. All rights reserved.

For more privacy laws around the globe, see the National Law Review Communications, Media & Internet law section.

D.C. District Court Limits the HIPAA Privacy Rule Requirement for Covered Entities to Provide Access to Records

On January 23, 2020, the D.C. District Court narrowed an individual’s right to have HIPAA covered entities furnish the individual’s own protected health information (“PHI”) to a third party at the individual’s request, and removed the cap on the fee covered entities may charge to transmit that PHI to a third party.

Specifically, the Court held that individuals may direct PHI to such third parties only in an electronic format, and that HIPAA covered entities and their business associates are no longer limited to reasonable, cost-based fees for PHI directed to third parties.

The HIPAA Privacy Rule grants individuals the right to access their PHI in a designated record set, and it specifies the data formats and permissible fees that HIPAA covered entities (and their business associates) may charge for such production. See 45 C.F.R. § 164.524. When individuals request copies of their own PHI, the Privacy Rule permits a HIPAA covered entity (or its business associate) to charge a reasonable, cost-based fee that excludes, for example, search and retrieval costs. See 45 C.F.R. § 164.524(c)(4). But when an individual requests that his or her own PHI be sent to a third party, both the required format of that data (electronic or otherwise) and the fees that a covered entity may charge for that service have been the subject of additional OCR guidance over the years—guidance that the D.C. District Court has now, in part, vacated.

The Health Information Technology for Economic and Clinical Health Act of 2009 (HITECH Act) set a statutory cap on the fee that a covered entity may charge an individual for delivering records in an electronic form. 42 U.S.C. § 17935(e)(3). Then, in the 2013 Omnibus Rule, developed pursuant to Administrative Procedure Act rulemaking, the Department of Health and Human Services, Office for Civil Rights (“HHS OCR”) implemented the HITECH Act statutory fee cap in two ways. First, OCR determined that the fee cap applied regardless of the format of the PHI—electronic or otherwise. Second, OCR stated the fee cap also applied if the individual requested that a third party receive the PHI. 78 Fed. Reg. 5566, 5631 (Jan. 25, 2013). Finally, in its 2016 guidance document on individual access rights, OCR provided additional information regarding these provisions of the HIPAA Privacy Rule. OCR’s FAQ on this topic is available here.

The D.C. District Court struck down OCR’s 2013 and 2016 implementation of the HITECH Act, in part. Specifically, the Court held that OCR’s 2013 HIPAA Omnibus Final Rule, to the extent it compels delivery of PHI to third parties regardless of the records’ format, is arbitrary and capricious insofar as it goes beyond the statutory requirement set by Congress, which requires only that covered entities, upon an individual’s request, transmit PHI to a third party in electronic form. Additionally, the Court held that OCR’s broadening of the fee limitation under 45 C.F.R. § 164.524(c)(4) in the 2016 guidance document titled “Individuals’ Right under HIPAA to Access their Health Information 45 C.F.R. Sec. 164.524” violates the APA, because HHS did not follow the requisite notice-and-comment procedure. Ciox Health, LLC v. Azar, et al., No. 18-cv-0040 (D.D.C. Jan. 23, 2020).

All other requirements for patient access remain the same, including required time frames for the provision of access to individuals, and to third parties designated by such individuals. It remains to be seen, however, how HHS will move forward after these developments from a litigation perspective and how this decision will affect other HHS priorities, such as interoperability and information blocking.


© Polsinelli PC, Polsinelli LLP in California

For more on HIPAA Regulation, see the National Law Review Health Law & Managed Care section.

How Law Firms Can Prevent Phishing and Malware

Law firms harbor information directly linked to politics, public figures, intellectual property, and sensitive personal information. Because lawyers rely on email to manage cases and interact with clients, hackers exploit technical vulnerabilities and people via email. After cybercriminals infiltrate a law firm’s systems in a successful phishing or malware attack, they leverage breached information for financial gain.

Starting with email, law firms must control the availability, confidentiality, and integrity of data. Or they will suffer breaches that bring increased insurance premiums, loss of intellectual property, lost contract revenue, and reputational damage.

Law firms aren’t securing their cloud technology

As lawyers adopt technology best practices, they’re moving client data and confidential documents from on-premises systems to cloud-hosted databases. 58% of firms use cloud technology to manage their clients and run their firms, according to the 2019 Legal Technology Survey Report on Cybersecurity and Cloud Computing from the American Bar Association’s Legal Technology Resource Center.

Migrating data to the cloud is a good thing, despite concerns about its availability. Data is more secure when stored in a system with modern infrastructure and security protocols, instead of stored locally on an outdated system no longer supported by vendors — such as a desktop device still running Windows 7 software, rather than Windows 10.

Even though the cloud is safe, law firms inevitably fall victim to cloud-based cyberattacks like phishing and malware.

26% of lawyers reported a security breach at their firm. TECHREPORT’s other findings explain why the breach rate is so high:

  • Fewer than half (41%) of all respondents changed their security practices after migrating to the cloud.

  • Only 35% of lawyers adopt more than one standard security measure — like encryption, anti-malware, anti-phishing, and network security.

  • 14% of respondents using cloud-based technology to manage their firm do not have any preventative security measures in place.

Changes to your firm's security policies.

Source: 2019 ABA TECHREPORT

How law firms can prevent phishing and malware

Lawyers know data breaches create downtime, loss of billable hours, and reputational harm. But they’re less aware of how to prevent those outcomes.

Phishing explained

Phishing happens via email, when hackers impersonate trusted senders to trick recipients into divulging sensitive or confidential information. Most often, phishers trick victims into clicking a malicious URL and interacting with spoofed login pages. Microsoft is the most spoofed brand in the world because it is the hub for organizations to collaborate and exchange information. If a lawyer enters their Office 365 credentials on a spoofed login page, the username and password go directly to the hacker’s server.

Most common brands in phishing attacks.

Source: TechRadar

Successful credential-harvesting phishing attacks allow hackers to access data-dense services like Office 365, online banking, and practice management software. Stolen credentials lead to account takeover scenarios that result in further exploits, including network infiltration, database infiltration, and data exfiltration.

3 common characteristics of phishing attacks

  1. Subject lines that appear highly urgent

Many subject lines in phishing emails are in all-caps to pressure the recipient. Beware of subject lines that say “URGENT” or “Are you available?” An infographic from cybersecurity firm KnowBe4 reveals the top phishing email subject lines from 2019.

Top-clicked phishing tests.

Source: KnowBe4

  2. Spelling errors, grammar errors, and awkward language

Hackers need to deceive content-scanning technology, such as Optical Character Recognition (OCR), that identifies suspicious content and blocks the message. To bypass anti-phishing algorithms, they’ll intentionally misspell words, use special characters that look like letters, and replace letters with lookalike numbers. Phishing URLs are often misspelled, or the domain name does not match the content of the page. Read every URL carefully to see whether its words and letters match the content of the page.
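
To make the lookalike trick concrete, here is a minimal Python sketch of the kind of normalization a filter (or a careful reader) can apply to a domain name. The substitution table is a small illustrative sample, not an exhaustive homoglyph list:

```python
# Normalize common lookalike substitutions in a domain name and compare
# it against a trusted name. Illustrative sample only.

LOOKALIKES = {"0": "o", "1": "l", "3": "e", "5": "s", "rn": "m", "vv": "w"}


def normalize(domain: str) -> str:
    d = domain.lower()
    for fake, real in LOOKALIKES.items():
        d = d.replace(fake, real)
    return d


def looks_like(domain: str, trusted: str) -> bool:
    """True if `domain` normalizes to the same string as `trusted`."""
    return normalize(domain) == normalize(trusted)


print(looks_like("rnicrosoft.com", "microsoft.com"))  # True: "rn" mimics "m"
print(looks_like("paypa1.com", "paypal.com"))         # True: "1" mimics "l"
```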

  3. Unexpected or unusual requests for documents or money

Phishers can spoof the sender name and domain of trusted contacts’ email addresses to lull recipients into a false sense of trust and compliance. Requests for sensitive information (bank routing numbers, trust account numbers, login credentials, document access, etc.) should be confirmed over the phone or any other communication channel besides that same email thread.

6 ways to prevent phishing at your law firm

  1. Check if email addresses associated with the firm were involved in high-profile breaches

Have I Been Pwned is a website that identifies compromised email addresses and passwords across online services that have been breached so that victims can change their password and prevent account access. Set up alerts through the website to monitor any future breaches.
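
Under the hood, Have I Been Pwned’s Pwned Passwords API uses a k-anonymity scheme: only the first five characters of the password’s SHA-1 hash ever leave your machine, and matching against breached hashes happens locally. A minimal Python sketch of the client-side hashing (the HTTP request itself is omitted):

```python
import hashlib

# Split a password's SHA-1 hash into the 5-character prefix sent to the
# Pwned Passwords range API and the suffix that stays on your machine.


def hibp_range_query_parts(password: str) -> tuple:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]  # (sent to the API, kept local)


prefix, suffix = hibp_range_query_parts("password")
# The API would be queried as https://api.pwnedpasswords.com/range/<prefix>
print(prefix)  # 5BAA6
```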

 Check if you have an account that has been compromised in a data breach.

Source: HaveIBeenPwned.com

  2. Install password managers

The best passwords don’t need to be memorized. 25% of people reuse the same password for everything, according to OpenVPN. Password manager services like 1Password (paid) and LastPass (free) use browser plug-ins and mobile applications to create, remember, and autofill complex, randomly-generated passwords. They identify weak or reused passwords across websites, and run a program to simultaneously rewrite and save new passwords on those sites.
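
The core of what these generators do can be sketched in a few lines of Python using the standard library’s cryptographically secure `secrets` module. This is a simplified illustration, not the actual 1Password or LastPass implementation:

```python
import secrets
import string

# Generate a long random password from a large character set using a
# cryptographically secure random number generator.


def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


print(generate_password())  # a different 20-character password every run
```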

LastPass password management software

Source: LastPass.com

  3. Make multi-factor authentication (MFA) mandatory at the firm

Multi-factor authentication, a secure login method using two or more pieces of confirmation, adds another step to the login process to prevent account takeover and the breach of confidential data. When username and password credentials are submitted to the login page, MFA generates and sends a unique alphanumeric code to the account holder’s email or phone for use as a secondary password. Unless this code is submitted on the follow-up login screen in a timely manner, it will expire.

Because email accounts and cell phone numbers are publicly available and can be compromised, use app-based and hardware-based MFA instead.

Solo and small/medium firms should use the Google Authenticator app, which continuously creates dynamic codes that swap out every 30 seconds and are unique to the device on which the app was installed.
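
Those rotating 30-second codes follow the TOTP standard (RFC 6238): an HMAC over a time-based counter, truncated to six digits. A minimal Python sketch, checked against the RFC’s published test secret rather than any real account:

```python
import hashlib
import hmac
import struct
import time

# Minimal TOTP (RFC 6238): HMAC-SHA1 over a 30-second counter,
# dynamically truncated to a six-digit code.


def totp(secret: bytes, timestamp=None, step: int = 30) -> str:
    counter = int((time.time() if timestamp is None else timestamp) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 1_000_000
    return f"{code:06d}"


# RFC 6238's test secret; t=59s falls in the second 30-second window.
print(totp(b"12345678901234567890", timestamp=59))  # 287082
```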

Larger firms should adopt physical MFA. These “keys” plug into your laptop, tablet, or mobile device ports to authenticate access to software — and even the device itself. Because the keys are unique, hackers can’t access accounts supported by hardware MFA keys like Yubico’s YubiKey, which is used by every Google employee. If the key is lost, account access can be gained through backup codes or MFA codes delivered via email, mobile, or authentication apps.

Make Multi-Factor authentication mandatory at the law firm.

YubiKeys (Source: Wired Store)

  4. Participate in phishing awareness training programs

These software programs regularly educate and train employees on the characteristics of spam, phishing, malware, ransomware, and social engineering attack methods. Microsoft’s Attack Simulator and KnowBe4 offer free programs that train users not to interact with phishing attempts and give visibility into how well they’re trained, based on their click rate during the attack simulations. The 2019 Verizon Data Breach Investigation Report found that lawyers and other professional service workers were the third most likely group to click on phishing emails.

2019 Verizon Data Breach Investigation Report

Source: 2019 Verizon Data Breach Investigation Report, Figure 45

  5. Only connect to secure WiFi

Connecting to public WiFi in a cafe, airport, or hotel is dangerous. Malicious worms can transfer from one device to another if they are connected on the same network. When traveling, use a virtual private network (VPN) to extend a remote private network across the public network and secure the WiFi connection.

  6. Report suspicious emails

Popular email clients like Office 365 and Gmail offer suspicious-message reporting. Use this built-in tool to improve the provider’s anti-phishing algorithms. If applicable, contact the firm’s IT or cybersecurity staff so they can update security configurations in the email client or in any third-party security tool the firm uses.

What is malware?

Malware is any malicious file that launches scripts to hijack a device, steal confidential data, or launch a Distributed Denial of Service (DDoS) attack. Most malware is delivered via email. The 2019 Verizon Data Breach Investigation Report found that 51% of phishing attacks involve malware injections into a network. These malicious scripts are usually injected via spoofed DocuSign and Adobe attachments, or fraudulent billing and invoicing documents.

Ransomware is a subset of malware that hackers use to hold information or access hostage until a ransom is paid. Ransomware exploits frequently involve blackmailing tactics, and “sextortion” phishing emails (in which hackers purport to have footage of the victim watching pornography) are gaining popularity.

The 2019 ABA TECHREPORT noted that 36% of firms have had systems infected, and about a quarter (26%) of firms were unaware whether they had been infected by malware. Larger firms, which tend to use on-premises software because of the up-front work associated with cloud migration, are the least likely to know whether they’ve suffered a malware attack.

3 ways to prevent malware

  1. Monitor and update outdated software and hardware 

Application updates are necessary and should not be treated as optional. These upgrades implement essential security features that ward off new strains of attacks. Skipping software and hardware updates provides short-term savings but will be very costly in the long run.

Be aware that:

  • Windows 7 has not been supported since January 2020.

  • MS Office 2010 will no longer be supported as of October 2020.

  • Support for Adobe Acrobat X Reader/Standard/Pro, Adobe Acrobat XI, and Reader XI has ended. 88% of attorneys continue to use these highly-vulnerable Adobe programs, according to the 2019 ABA TECHREPORT.

  2. Monitor email for links and executables (including macro-enabled Office docs)

Executable files automatically launch actions, based on the code in the file. Apply software restrictions on your device to prevent executable files from starting up without your consent. Microsoft found that 98% of Office-targeted threats use macros. In 2016, Microsoft pushed a macro-blocking feature in Word to prevent malware infection.
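
A crude version of such a restriction can be sketched as an extension filter that flags file types capable of executing code, including macro-enabled Office formats. The list below is illustrative, not a complete policy:

```python
# Flag attachments whose extensions can run code when opened,
# including macro-enabled Office formats. Illustrative sample only.

RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".bat",
                    ".docm", ".xlsm", ".pptm"}


def is_risky_attachment(filename: str) -> bool:
    name = filename.lower()
    return any(name.endswith(ext) for ext in RISKY_EXTENSIONS)


print(is_risky_attachment("Invoice_2020.docm"))  # True: macro-enabled Word doc
print(is_risky_attachment("Invoice_2020.docx"))  # False: no macros
```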

Block macros and prevent malware in Microsoft Office Word.

Source: Microsoft Security Blog

  3. Hire a Managed Service Provider (MSP) for cybersecurity

MSPs offer an affordable portfolio of solutions to manage cyber risk across firm operations.

The solution: control the login process and data access in cloud-based apps

Lawyers are obligated to protect sensitive client information from phishing, malware, and ransomware. As breaches continue to make headlines, clients are selecting firms based on their data security. Law firms educated on confidentiality, security, and data control will be able to reassure security-conscious clients.

Cloud security — especially in email and document storage — relies on identity and access management. Establish a secure login process, govern user privileges in applications, and ensure that everyone at the firm can spot suspicious emails and attachments.

Choose cloud providers with a reputation for secure software and identify third-party security vendors for anti-phishing, anti-malware, and MFA.


© Copyright 2020 PracticePanther

Written by Reece Guida of PracticePanther.
For more on cybersecurity for legal and other businesses, see the National Law Review Communications, Media & Internet law section.

The Shell Game Played with Your DNA, or 23 and Screwing Me

So you want to know how much Neanderthal is in your genes.

You are curious about the percentage of Serbo-Croatian, Hmong, Sephardim or Ashanti blood that runs through your veins. Or maybe you hope to find a rich great-aunt near death, seeking an heir.

How much is this worth to you?  Two hundred bucks? That makes sense.

But what about other costs:

– like sending your cousin to prison for life (and discovering that you grew up with a serial killer)?

– like all major companies refusing to insure you due to your genetic make-up?

– like ruining your family holidays when you find that your grandfather is not really genetically linked to you and grandma had been playing the field?

– like pharma companies making millions using your genetic code to create new drugs and not crediting you at all (not even with discounts on the drugs created by testing your cells)?

– like finding that your “de-identified” genetic code has been re-identified on the internet, exposing genetic propensity for alcoholism or birth defects that turn your fiancé’s parents against you?

How much are these costs worth to you?

According to former FDA commissioner Peter Pitts, writing in Forbes, “The [private DNA testing] industry’s rapid growth rests on a dangerous delusion that genetic data is kept private. Most people assume this sensitive information simply sits in a secure database, protected from hacks and misuse. Far from it. Genetic-testing companies cannot guarantee privacy. And many are actively selling user data to outside parties.” Including law enforcement.

Nothing in US Federal health law protects the privacy of DNA test subjects at “non-therapeutic” labs like Ancestry or 23andMe. Information gleaned from the DNA can be used for almost anything.  As Pitts said, “Imagine a political campaign exposing a rival’s elevated risk of Alzheimer’s. Or an employer refusing to hire someone because autism runs in her family. Imagine a world where people can have their genomic building blocks held against them. Such abuses represent a profound violation of privacy. That’s an inherent risk in current genetic-testing practices.”

Genetic testing companies quietly push their customers to allow further genetic testing on the samples they provide, and some would argue they do so without adequate explanation of the facts and harms, which are buried in a thousand words of fine print that most data subjects won’t read. Up to 80% of 23andMe customers consent to this activity, likely not knowing that the company plans to make money off the drugs developed from customer DNA. Federal laws require labs like those 23andMe uses for drug development to retain information for more than 10 years, so once the company has your data, it can refuse to delete it from its tests despite the rights to erasure provided by California and the EU.

Go see the HBO movie starring Oprah Winfrey about medical exploitation of the cell lines of Henrietta Lacks, or better yet, read the bestselling book it was based on. Observe that an engaging, vivacious woman who couldn’t afford health insurance was farmed for a line of her cancer cells that assisted medical science for decades and made millions of dollars for pharma companies without any permission from or benefit to the woman whose cells were taken.  Or any benefit to her family once cancer killed her. Companies secured over 11,000 patents using her cell lines. This is the business model now adopted by 23andMe. Take your valuable data under the guise of providing information to you, but quietly turning that data into profitable products for their shareholders’ and executives’ benefit. Not to mention that 23andMe can change its policies at any time.

As part of selling your genetic secrets to the highest bidder, 23andMe constantly pushes surveys out to its customers. According to an article in Wired, 23andMe founder Anne Wojcicki said, “We specialize in capturing phenotypic data on people longitudinally—on average 300 data points on each customer. That’s the most valuable by far.” Which means they are selling not only your DNA information, but all the other data you give them about your family and lifestyle.

This deep ethical compromise by 23andMe is personal for me, and not because I have sent them any DNA samples – I haven’t and I never would. But because, when questioned publicly about their trustworthiness by me and others close to me, 23andMe has not tried to explain its policies, but has simply attacked the questioners in public. Methinks the amoral vultures doth protest too much.

For example, a couple of years ago, my friend, co-author and privacy expert Theresa Payton noted on a Fox News segment that people who provide DNA information to 23andMe do not know how such data will be used because the industry is not regulated and the company could change its policies any time. 23andMe was prompt and nasty in its response, attacking Ms. Payton on Twitter and probably elsewhere, claiming that the 23andMe privacy policy, as it existed at the time, was proof that no surprises could ever be in store for naïve consumers who gave their most intimate secrets to this company.

[BTW, for the inevitable attacks coming from 23andMe and their army of online protectors, the FTC endorsement guidelines require that if there is a material connection between you and 23andMe, paid or otherwise, you need to clearly and conspicuously disclose it.]

Clearly Ms. Payton was correct and 23andMe’s attacks on her were simply wrong.

Guess what? According to the Wall Street Journal, 23andMe recently sold a $300 million stake in itself to GlaxoSmithKline and, “For 23andMe, using genetic data for drug research ‘was always part of the vision,’ according to Emily Drabant Conley, vice president and head of business development.” So this sneaky path is not even a new tactic. According to the same WSJ story, “23andMe has long wanted to use genetic data for drug development. Initially, it shared its data with drug makers including Pfizer Inc. and Roche Holding AG’s Genentech but wasn’t involved in subsequent drug discovery. It later set up its own research unit but found it lacked the scale required to build a pipeline of medicines. Its partnership with Glaxo is now accelerating those efforts.”

And now 23andMe has licensed an antibody it developed to treat inflammatory diseases to Spanish drug maker Almirall SA. “This is a seminal moment for 23andMe,” said Conley. “We’ve now gone from database to discovery to developing a drug.” In the WSJ, Arthur Caplan, a professor of bioethics at NYU School of Medicine said “You get this gigantic valuable treasure chest, and people are going to wind up paying for it twice. All the people who sent in DNA will be paying the same price for any drugs that are developed as anybody else.”

So this adds another ironic dimension to the old television adage, “You aren’t the customer, you are the product.” You pay to provide your DNA – the code to your entire physical existence – to a private company. Why? Possibly because you want information that may affect your healthcare, but in all likelihood you simply intend to use the information for general entertainment and information purposes.

You likely send a swab to the DNA company because you want to learn your ethnic heritage and/or see what interesting things they can tell you about why you have a photic sneeze reflex, whether you are genetically inclined to react strongly to caffeine, or whether you are a carrier of a loathsome disease (which you could learn for an additional fee). But the company uses the physical cells from your body not only to build databases of commercially valuable information, but to develop drugs and sell them to the pharmaceutical industry. So who is the DNA company’s customer? 23andMe and its competitors take physical specimens from you and sell products made from those specimens to their real customers, the drug companies and the data aggregators.

These DNA processing firms may be the tip of the spear, because huge data companies are coming for your health information. According to the Wall Street Journal,

“Google has struck partnerships with some of the country’s largest hospital systems and most-renowned health-care providers, many of them vast in scope and few of their details previously reported. In just a few years, the company has achieved the ability to view or analyze tens of millions of patient health records in at least three-quarters of U.S. states, according to a Wall Street Journal analysis of contractual agreements. In certain instances, the deals allow Google to access personally identifiable health information without the knowledge of patients or doctors. The company can review complete health records, including names, dates of birth, medications and other ailments, according to people familiar with the deals.”

And medical companies are now tracking patient information with wearables like smartwatches, so that personally captured daily health data is now making its way into these databases.

And, of course, other risk issues affect the people who provide data to such services.  We know through reporting following the capture of the Golden State Killer that certain genetic testing services (like GEDmatch) have been freer than others in sharing customer DNA with law enforcement without requiring warrants, subpoenas or court orders, and that such data can implicate not only the DNA contributors but their entire families as well. In addition, while DNA testing companies claim to sell only anonymized data, the information may not remain that way.

Linda Avey, co-founder of 23andMe, concedes that nothing is foolproof. She told an online magazine, “It’s a fallacy to think that genomic data can be fully anonymized.” The article showed that researchers have already re-identified people from their publicly available genomic data. For example, one 2013 study matched Y-chromosome data with names posted in places such as genealogy sites. In another study that same year, Harvard Professor Latanya Sweeney re-identified 84 to 97 percent of a sample of Personal Genome Project volunteers by comparing gender, postal code and date of birth with public records.

A 2015 study re-identified nearly a quarter of a sample of users sequenced by 23andMe who had posted their information to the sharing site openSNP. “The matching risk will continuously increase with the progress of genomic knowledge, which raises serious questions about the genomic privacy of participants in genomic datasets,” concludes the paper in Proceedings on Privacy Enhancing Technologies. “We should also recall that, once an individual’s genomic data is identified, the genomic privacy of all his close family members is also potentially threatened.” DNA data is the ultimate genie: once released from the bottle, it can’t be changed, shielded or stuffed back inside, and it threatens both the data subject and her entire family for generations.
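The re-identification technique described in these studies amounts to joining an “anonymized” dataset against public records on shared quasi-identifiers such as gender, postal code and date of birth. A minimal sketch of the idea, using entirely hypothetical data (the record fields and names here are invented for illustration, not drawn from any real study or dataset):

```python
# Hypothetical illustration of quasi-identifier linkage: an "anonymized"
# genomic dataset still carries gender, ZIP code and birth date, which can
# be joined against a public roster that includes names.
anonymized = [
    {"id": "G-001", "gender": "F", "zip": "02138", "dob": "1975-03-09"},
    {"id": "G-002", "gender": "M", "zip": "85004", "dob": "1980-11-30"},
]
public_records = [
    {"name": "Jane Doe", "gender": "F", "zip": "02138", "dob": "1975-03-09"},
    {"name": "John Roe", "gender": "M", "zip": "90210", "dob": "1980-11-30"},
]

def reidentify(anon_rows, public_rows):
    """Link rows whose (gender, zip, dob) quasi-identifier tuples match."""
    index = {(p["gender"], p["zip"], p["dob"]): p["name"] for p in public_rows}
    matches = {}
    for row in anon_rows:
        key = (row["gender"], row["zip"], row["dob"])
        if key in index:
            matches[row["id"]] = index[key]
    return matches

print(reidentify(anonymized, public_records))  # {'G-001': 'Jane Doe'}
```

The join needs no genomic expertise at all: the more demographic detail an “anonymized” record retains, the more likely its quasi-identifier tuple is unique in the population, which is why Sweeney-style attacks succeed at such high rates.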

And let us not forget the most basic risk involved in gathering important data. This article has focused on how 23andMe and other private DNA companies have chosen to use the data – probably in ways that their DNA-contributing customers did not truly understand – to turn a profit for investors.  But collecting such data could have unintended consequences.  It can be lost to hackers, spies or others who might steal it for their own purposes.  It can be exposed in government investigations through subpoenas or court orders that a company is incapable of resisting.

So people planning to plaster their deepest internal and family secrets into private company databases should consider the risks that the private DNA mills don’t want you to think about.


Copyright © 2020 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on health data privacy, see the National Law Review Health Law & Managed Care section.

My Business Is In Arizona, Why Do I Care About California Privacy Laws? How the CCPA Impacts Arizona Businesses

Arizona businesses are not typically concerned about complying with the newest California laws going into effect. However, one California law in particular—the CCPA or California Consumer Privacy Act—has a scope that extends far beyond California’s border with Arizona. Indeed, businesses all over the world that have customers or operations in California must now be mindful of whether the CCPA applies to them and, if so, whether they are in compliance.

What is the CCPA?

The CCPA is a comprehensive data privacy regulation enacted by the California Legislature that became effective on January 1, 2020. It was passed on September 13, 2018 and has undergone a series of substantive amendments over the past year and a few months.

Generally, the CCPA gives California consumers a series of rights with respect to how companies acquire, store, use, and sell their personal data. The CCPA’s combination of mandatory disclosures and notices, rights of access, rights of deletion, statutory fines, and threat of civil lawsuits is a significant move towards empowering consumers to control their personal data.

Many California businesses are scrambling to implement the necessary policies and procedures to comply with the CCPA in 2020. In fact, you may have begun to notice privacy notices on the primary landing page for national businesses. However, Arizona businesses cannot assume that the CCPA stops at the Arizona border.

Does the CCPA apply to my business in Arizona?

The CCPA has specific criteria for whether a company is considered a California business. The CCPA applies to for-profit businesses “doing business in the State of California” that also:

  • Have annual gross revenues in excess of twenty-five million dollars; or
  • Buy, receive, sell, or share the personal information of 50,000 or more California consumers, households, or devices per year; or
  • Derive 50% or more of annual revenue from selling California consumers’ personal information.

The CCPA does not include an express definition of what it means to be “doing business” in California. While it will take courts some time to interpret the scope of the CCPA, any business with significant sales, employees, property, or operations in California should consider whether the CCPA might apply to them.
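The threshold tests above can be sketched as a simple check. This is a hypothetical illustration only, not legal advice: the function name and parameters are invented, the dollar and record figures come from the criteria listed above, and a real applicability analysis (especially of “doing business in” California) requires counsel:

```python
def ccpa_may_apply(annual_gross_revenue: float,
                   ca_consumers_households_or_devices: int,
                   pct_revenue_from_selling_ca_data: float,
                   does_business_in_california: bool) -> bool:
    """Rough sketch of the CCPA threshold criteria described above.

    A for-profit entity "doing business in" California need satisfy
    only ONE of the three prongs for the statute to apply.
    """
    if not does_business_in_california:
        return False
    return (annual_gross_revenue > 25_000_000
            or ca_consumers_households_or_devices >= 50_000
            or pct_revenue_from_selling_ca_data >= 50.0)

# An out-of-state retailer with modest revenue but heavy California web
# traffic can be swept in by the consumer/household/device prong alone:
print(ccpa_may_apply(5_000_000, 80_000, 0.0, True))  # True
```

Note that the prongs are disjunctive: a company far below the revenue threshold can still be covered if it handles enough California records, which is exactly why Arizona businesses cannot assume they are outside the statute’s reach.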

How do I know if I am collecting a consumer’s personal information?

“Personal information” under the CCPA generally includes any information that “identifies, relates to, describes, is capable of being associated with, or could reasonably be linked” with a specific consumer. As the legalese of this definition implies, “personal information” includes a wide swath of data that your company may already be collecting about consumers.

There is no doubt that personal identifiers like name, address, email addresses, social security numbers, etc. are personal information. But information like biometric data, search and browsing activity, IP addresses, purchase history, and professional or employment-related information are all expressly included under the CCPA’s definition. Moreover, the broad nature of the CCPA means that other categories of data collected—although not expressly identified by the CCPA—may be deemed to be “personal information” in an enforcement action.

What can I do to comply with the CCPA?

If the CCPA might apply to your company, now is the time to take action. Compliance will necessarily be different for each business depending on the nature of its operation and the use(s) of personal information. However, there are some common steps that each company can take.

The first step towards compliance with the CCPA is understanding what data your company collects, how it is stored, whether it is transferred or sold, and whether any vendors or subsidiaries also have access to the data. Next, an organization should prepare a privacy notice that complies with the CCPA to post on its website and include in its app interface.

The most substantial step in complying with the CCPA is to develop and implement policies and procedures that help the company conform to the various provisions of the CCPA. The policies will need to provide up-front disclosures to consumers, allow consumers to opt-out, handle consumer requests to produce or delete personal information, and guard against any perceived discrimination against consumers that exercise rights under the CCPA.

The company will also need to review contracts with third-party service providers and vendors to ensure it can comply with the CCPA. For example, if a third-party cloud service will be storing personal information, the company will want to verify that its contract allows it to assemble and produce that information within statutory deadlines if requested by a consumer.

At least you have some time!

The good news is that the CCPA includes a grace period until July 1, 2020 before the California Attorney General can bring enforcement actions. Thus, Arizona businesses that may have ignored the quirky California privacy law to this point have a window to bring their operations into compliance. However, Arizona companies that may need to comply with the CCPA should consult with counsel as soon as possible to begin the process. The attorneys at Ryley Carlock & Applewhite are ready to help you analyze your risk and comply with the CCPA.


Copyright © 2020 Ryley Carlock & Applewhite. A Professional Association. All Rights Reserved.

Learn more about the California Consumer Privacy Act (CCPA) on the National Law Review Communications, Media & Internet Law page.

Florida’s Legislature to Consider Consumer Data Privacy Bill Akin to California’s CCPA

Florida lawmakers have proposed data privacy legislation that, if adopted, would impose significant new obligations on companies offering a website or online service to Florida residents, including allowing consumers to “opt out” of the sale of their personal information. While the bill (SB 1670 and HB 963) does not go as far as did the recent California Consumer Privacy Act, its adoption would mark a significant increase in Florida residents’ privacy rights. Companies that have an online presence in Florida should study the proposed legislation carefully. Our initial take on the proposed legislation appears below.

The proposed legislation requires an “operator” of a website or online service to provide consumers with (i) a “notice” regarding the personal information collected from consumers on the operator’s website or through the service and (ii) an opportunity to “opt out” of the sale of certain of a consumer’s personal information, known as “covered information” in the draft statute.

The “notice” would need to include several items. Most importantly, the operator would have to disclose “the categories of covered information that the operator collects through its website or online service about consumers who use [them] … and the categories of third parties with whom the operator may share such covered information.” The notice would also have to disclose “a description of the process, if applicable, for a consumer who uses or visits the website or online service to review and request changes to any of his or her covered information. . . .” The bill does not otherwise list when this “process” would be “applicable,” and it nowhere else appears to create for consumers any right to review and request changes.

While the draft legislation obligates operators to stop selling data of a consumer who submits a verified request to do so, it does not appear to require a description of those rights in the “notice.” That may just be an oversight in drafting. In any event, the bill is notable as it would be the first Florida law to require an online privacy notice. Further, a “sale” is defined as an exchange of covered information “for monetary consideration,” which is narrower than its CCPA counterpart, and contains exceptions for disclosures to an entity that merely processes information for the operator.

There are also significant questions about which entities would be subject to the proposed law. An “operator” is defined as a person who owns or operates a website or online service for commercial purposes, collects and maintains covered information from Florida residents, and purposefully directs activities toward the state. That “and” is assumed, as the proposed bill does not state whether those three requirements are conjunctive or disjunctive.

Excluded from the definition of “operator” is a financial institution (such as a bank or insurance company) already subject to the Gramm-Leach-Bliley Act, and an entity subject to the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Outside of the definition of “operator,” the proposed legislation appears to further restrict the companies to which it would apply, to eliminate its application to smaller companies based in Florida, described as entities “located in this state,” whose “revenue is derived primarily from a source other than the sale or lease of goods, services, or credit on websites or online services,” and “whose website or online service has fewer than 20,000 unique visitors per year.” Again, that “and” is assumed as the bill does not specify “and” or “or.”

Lastly, the Department of Legal Affairs appears to be vested with authority to enforce the law. The proposed legislation states explicitly that it does not create a private right of action, although it also says that it is in addition to any other remedies provided by law.

The proposed legislation is part of an anticipated wave of privacy legislation under consideration across the country. California’s CCPA took effect in January and imposes significant obligations on covered businesses. Last year, Nevada passed privacy legislation that bears a striking resemblance to the proposed Florida legislation. Other privacy legislation has been proposed in Massachusetts and other jurisdictions.


©2011-2020 Carlton Fields, P.A.

For more on new and developing legislation in Florida and elsewhere, see the National Law Review Election Law & Legislative News section.

2020 Predictions for Data Businesses

It’s a new year, a new decade, and a new experience for me writing for the HeyDataData blog.  My colleagues asked for input and discussion around 2020 predictions for technology and data protection.  Dom has already written about a few.  I’ve picked out four:

  1. Experiential retail

Stores will offer technology-infused shopping experiences.  Even today, I can experience a retailer’s products and services with store-provided technology, without pulling out my phone or opening an app.  I can try on a pair of glasses or wear a new lipstick color just by putting my face in front of a screen.  We will see how creative companies can be in luring us to the store with an experience we have to try.  This type of experiential retail technology is a bit ahead of Amazon’s checkout technology, but passive payment methods are coming, too.  [But if we still don’t want to go to the store, companies will continue to offer us more mobile ordering—for pick-up or delivery.]

  2. Consumers will still tell companies their birthdays and provide emails for coupons (well, maybe not in California)

We will see whether the California Consumer Privacy Act (CCPA) will meaningfully change consumers’ perceptions about giving their information to companies—usually lured by financial incentives (like loyalty programs, coupons, etc. or a free app).  I tend to think that we will continue to download apps and give information if it is convenient or cheaper for us and that companies will think it is good for business (and their shareholders, if applicable) to continue to engage with their consumers.  This is an extension of number 1, really, because embedding technology in the retail experience will allow companies to offer new (hopefully better) products (and gather data they may find a use for later. . . ).  Even though I think consumers will still give up their data, I also think consumer privacy advocates will try harder to shift their perceptions (enter CCPA 2.0 and others).

  3. More “wearables” will hit the market

We already have “smart” refrigerators, watches, TVs, garage doors, vacuum cleaners, stationary bikes and treadmills.  Will we see other, traditionally disconnected items connect?  I think yes.  Clothes, shoes, purses, backpacks, and other “wearables” are coming.

  4. Computers will help with decisions

We will see more technology-aided (trained with lots of data) decision making.  Just yesterday, one of the most-read stories described how an artificial intelligence system detected cancer, matching or outperforming the radiologists who reviewed the same images.  Over the college football bowl season, I saw countless commercials for insurance companies showing how their policyholders can lower their rates if they let an app track how they are driving.  More applications will continue to pop up.

Those are my predictions.  And I have one wish to go with them.  These kinds of advances create tension among open innovation, ethics and the law.  I do not predict that we will resolve that tension in 2020, but my #2020vision is that we will make progress.


Copyright © 2020 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on data use in retail & health & more, see the National Law Review Communications, Media & Internet law page.

CCPA Notice of Collection – Are You Collecting Geolocation Data, But Do Not Know It?

Businesses subject to the California Consumer Privacy Act (“CCPA”) are working diligently to comply with the CCPA’s numerous mandates, although final regulatory guidance has yet to be issued. Many of these businesses are learning that AB25, passed in October, requires that employees, applicants, and certain other California residents be provided a notice of collection, at least for the next 12 months. These businesses need to think about what must be included in these notices.

A Business Insider article explains that iPhones maintain a detailed list of every location the user of the phone frequents, including how long it took to get to that location and how long the user stayed there. The article provides helpful information about where that information is stored on the phone, how the data can be deleted, and, perhaps more importantly, how to stop the tracking of that information. This information may be important for users, as well as companies that provide iPhones to their employees to use in connection with their work.

AB25 excepted natural persons acting as job applicants, employees, owners, directors, officers, medical staff members, and contractors of a CCPA-covered business from all of the CCPA protections except two: (i) providing them a notice of collection under Cal. Civ. Code Sec. 1798.100(b), and (ii) the right to bring a private civil action against a business in the event of a data breach caused by the business’s failure to maintain reasonable safeguards to protect personal information. The notice of collection must inform these persons as to the categories of personal information collected by the business and how those categories are used.

The CCPA’s definition of personal information includes eleven categories of personal information, one of which is geolocation data. As many businesses think about the categories of personal information they collect from employees, applicants, etc. for this purpose, geolocation may be the last thing that comes to mind. This is especially true for businesses whose workforces come into the office every day and which, unlike transportation, logistics, and home health care businesses, have no business need to know where their employees are. But they still may provide their workforce members a company-owned iPhone or other smart device with similar capabilities, without realizing all of its capabilities or configurations.

As many businesses that have gone through compliance with the European Union’s General Data Protection Regulation have learned, the CCPA and other laws that may come after it in the U.S. will require businesses to think more carefully about the personal information they collect. They will likely find that such information is being collected without their knowledge and not at their express direction, and they may have to communicate that collection (and use) to their employees.


Jackson Lewis P.C. © 2019

Facing Facts: Do We Sacrifice Security Out of Fear?

Since long before recorded history, humans have displayed physical characteristics as identification tools. Animals do the same to distinguish each other. Crows use facial recognition on humans.  Even plants can tell their siblings from unrelated plants of the same species.

We present our physical forms to the world, and different traits identify us to anyone who is paying attention. So why, now that identity theft is rampant and security is challenged, do we place limits on the easiest and best ID system available? Are we sacrificing future security due to fear of an unlikely dystopia?

In one of the latest cases rolling out of Illinois’ private right of action under the state’s Biometric Information Privacy Act (BIPA), Rogers v. BNSF Railway Company[1], the court ruled that a railroad hauling hazardous chemicals through major urban areas needed to change, and probably diminish, its security procedures for who it allows into restricted space. Why? Because the railroad used biometric security to identify authorized entrants, because BIPA forces the railroad to obtain the consent of each person authorized to enter restricted space, and because BIPA is not preempted by federal rail security regulations.

The court’s decision, based on the fact that federal rail security rules do not specifically regulate biometrics, is a reasonable reading of the law. However, because BIPA provides no exceptions for biometric security, it will impede the adoption and effectiveness of biometric-based security systems and force some businesses to settle for weaker security. This case illustrates how BIPA reduces security in our most vulnerable and dangerous places.

I can understand some of the reasons Illinois, Texas, Washington and others want to restrict the unchecked use of biometrics. Gathering physical traits – even public traits like faces and voices – into large searchable databases can lead to overreaching by businesses. The company holding the biometric database may run tests and make decisions based on physical properties.  If your voice shows signs of strain, maybe the price of your insurance should rise to cover the risk that stress puts on your body. But this kind of concern can be addressed by regulating what can be done with biometric readings.

There are also some concerns that may not have the foundation they once had. Two decades ago, many biometric systems stored biometric data as direct copies, so that if someone stole the file, that person would have your fingerprint, voiceprint or iris scan.  Now, nearly all of the better biometric systems store readings as mathematical templates that can’t be read or reconstructed by systems outside the one that took the sample. So some of the safety concerns are no longer valid.

I propose a more nuanced thinking about biometric readings. While requiring data subject consent is harmless in many situations, the consent regime is a problem for security systems that use biometric indications of identity. And these systems are generally the best for securing important spaces.  Despite what you see in the movies, 2019 biometric security systems can be nearly impossible to trick into false positive results. If we want to improve our security for critical infrastructure, we should be encouraging biometrics, not throwing hurdles in the path of people choosing to use it.

Illinois should, at the very least, provide an exception to BIPA for physical security systems, even if that exception is limited to critical facilities like nuclear, rail and hazardous shipping restricted spaces. The state can include limits on how the biometric samples are used by the companies taking them, so that only security needs are served.

The field of biometrics may scare some people, but it is a natural outgrowth of how humans have always told each other apart.  If we limit its use for critical security, we are likely to suffer from the decision.

[1] 2019 WL 5699910 (N.D. Ill.).


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on biometric identifier privacy, see the National Law Review Communications, Media & Internet law page.