Can We Really Forget?

I expected this post would turn out differently.

I had intended to commend the European Court of Justice for placing sensible limits on the extraterritorial enforcement of the EU’s Right to be Forgotten. They did, albeit in a limited way,[1] and it was a good decision. There.  I did it. In 154 words.

Now for the remaining 1400 or so words.

But reading the decision pushes me back into frustration at the entire Right to be Forgotten regime and its illogical and destructive basis. That a court recognizes the EU cannot (generally) force foreign companies to violate the laws of their own countries on internet sites intended for use within those countries (and NOT the EU) does not come close to offsetting the logical, practical and societal problems with the way the EU perceives and enforces the Right to be Forgotten.

As a lawyer whose thinking is grounded in the U.S. Constitution, I am comfortable with the First Amendment’s protection of Freedom of Speech – that nearly any truthful utterance or publication is inviolate, and that the foundation of our political and social system depends on open exposure of facts to sunlight. Intentionally shoving those true facts into the dark is wrong in our system, and openness will be protected by U.S. courts.

Believe it or not, the European Union has such a concept at the core of its foundation too. Article 10 of the European Convention on Human Rights states that:

“Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.”

So we have the same values, right? In both jurisdictions the right to impart information can be exercised without interference by public authority.  Not so fast.  The same article contains a litany of restrictions on this right, including one that allows free speech to be limited in order to protect the reputation of others.

This seems like a complete evisceration of a right to open communication if a court can force obfuscation of facts just to protect someone’s reputation.  Does this person deserve a bad reputation? Has he or she committed a crime, failed to pay his or her debts, harmed animals or children, stalked an ex-lover, or violated an oath of office, marriage, priesthood or citizenship? It doesn’t much matter in the EU. The right of that person to hide his/her bad or dangerous behavior outweighs both the allegedly fundamental right to freedom to impart true information AND the public’s right to protect itself from someone who has proven himself/herself to be a risk to the community.

So how does this tension play out over the internet? In the EU, it is law that Google and other search engines must remove links to true facts about any wrongdoer who feels his/her reputation may be tarnished by the discovery of the truth about that person’s behavior. Get into a bar fight? Don’t worry, the EU will put the entire force of law behind your request to wipe that off your record. Stiff your painting contractors for tens of thousands of Euros despite their good performance? Don’t worry, the EU will make sure nobody can find out. Get fired, removed from office or defrocked for dishonesty? Don’t worry, the EU has your back.

And that undercutting of speech rights has now been codified in Article 17 of Regulation (EU) 2016/679 (the GDPR), the Right to be Forgotten.

And how does this new decision affect the rule? In the past couple of weeks, the Grand Chamber of the EU Court of Justice issued an opinion limiting the extraterritorial reach of the Right to be Forgotten (Google v. CNIL, Case C‑507/17). The decision confirms that search engines must remove links to certain embarrassing instances of true reporting, but need only do so on the versions of the search engine that intentionally serve the EU, and not necessarily on versions of the search engine aimed at non-EU jurisdictions.

The problems with appointing Google to be an extrajudicial magistrate enforcing vague EU-granted rights under a highly ambiguous set of standards, and then fining them when you don’t like a decision you forced them to make, deserve a separate post.

Why did we even need this decision? Because the French data protection authority, known as CNIL, fined Google for not removing presumably true data from non-EU search results concerning, as Reuters described, “a satirical photomontage of a female politician, an article referring to someone as a public relations officer of the Church of Scientology, the placing under investigation of a male politician and the conviction of someone for sexual assaults against minors.”  So, to be clear, while the official French agency believes it should enforce a right for people to obscure from the whole world that they have been convicted of sexual assault against children, the Grand Chamber of the European Court of Justice believes that people convicted of child sexual assault should be protected in their right to obscure these facts only from people in Europe. This is progress.

Of course, in the U.S., politicians and other public figures, under investigation or subject to satire or people convicted of sexual assault against children do not have a right to protect their reputations by forcing Google to remove links to public records or stories in news outlets. We believe both that society is better when facts are allowed to be reported and disseminated and that society is protected by reporting on formal allegations against public figures or criminal convictions of private ones.

I am glad that the EU Court of Justice is willing to confine such rules within its own jurisdiction where they openly conflict with the basic laws of other jurisdictions. The opinion sensibly reasons,

“The idea of worldwide de-referencing may seem appealing on the ground that it is radical, clear, simple and effective. Nonetheless, I do not find that solution convincing, because it takes into account only one side of the coin, namely the protection of a private person’s data.[2] . . . [T]he operator of a search engine is not required, when granting a request for de-referencing, to operate that de-referencing on all the domain names of its search engine in such a way that the links at issue no longer appear, regardless of the place from which the search on the basis of the requester’s name is carried out.”

Any other decision would have been wildly overreaching. Believe me, every country in the EU would be howling in protest if the US decided that its views of personal privacy must be enforced in Europe by European companies for operations aimed only at Europe. It should work both ways. So this was a well-reasoned limitation.

But I just cannot bring myself to be complimentary of a regime that I find so repugnant – where nearly any bad action can be swept under the rug in the name of protecting a person’s reputation.

As I have written in books and articles in the past, government protection of personal privacy is crucial for the clean and correct operation of a democracy.  However, privacy is also the obvious refuge of scoundrels – people prefer to keep the bad things they do private. Who wouldn’t? But one can go overboard protecting this right, and it feels like the EU has institutionalized its leap overboard.

I would rather err on the side of sunshine, giving up some privacy in the service of revealing the truth, than err on the side of darkness, allowing bad deeds to be obscured so that those who commit them can maintain their reputations.  Clearly, the EU doesn’t agree with me.


[1] The Court, in this case, wrote, “The issues at stake therefore do not require that the provisions of Directive 95/46 be applied outside the territory of the European Union. That does not mean, however, that EU law can never require a search engine such as Google to take action at worldwide level. I do not exclude the possibility that there may be situations in which the interest of the European Union requires the application of the provisions of Directive 95/46 beyond the territory of the European Union; but in a situation such as that of the present case, there is no reason to apply the provisions of Directive 95/46 in such a way.”

[2] EU Court of Justice case C-136/17, which states, “While the data subject’s rights [to privacy] override, as a general rule, the freedom of information of internet users, that balance may, however, depend, in specific cases, on the nature of the information in question and its sensitivity for the data subject’s private life and on the interest of the public in having that information. . . .”

 


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on the EU’s GDPR enforcement, see the National Law Review Communications, Media & Internet law page.

Head Hacking: New Devices Gather Brainspray

For more than a decade I have been warning about the vulnerability of brainspray – the brain signals that can be captured from outside your head. In 2008, this article by Jeffrey Goldberg demonstrated that an fMRI machine could easily interpret how a person felt about stimuli provided – which could be a boon to totalitarian governments testing for people’s true feelings about the government or its Dear Leader. Of course, in 2008 the fMRI cost two million dollars and you had to lie still inside it for a useful reading to emerge.

While fMRI mind reading and lie detection is not yet ready for the courtroom, its interpretations are improving all the time and mobile units are under consideration. And its wearable cousins, like smart watches and computerized head gear, are reading signals from within your body: electrocardiogram, heart rate, blood pressure, respiration rate, blood oxygen saturation, blood glucose, skin perspiration, capnography, body temperature and motion, along with data from cardiac implantable devices and ambient parameters. Certain head gear is calibrated just for brain waves.

Some of this is gaming equipment and some helps you meditate.  Biofeedback headsets measure your brain waves using EEG. They’re small bands that sit easily on your head and measure activity through sensors. Companies such as NeuroSky (maker of the MindWave headset), Thync and Versus make such equipment available to the general public.

Of course, if you really want to frighten yourself about how far this technology has advanced, check in on DARPA and the rest of the US military. DARPA has been testing brainwave-filtering binoculars, human brainwave-driven targeting for killer robots, and soldier brain-machine interfaces for military vehicles. And these are just the things they are currently willing to discuss in public.

I wrote six years ago about how big companies like Honda were exploring brainspray capture, and have spoken about how Google, Facebook and other Silicon Valley giants have sunk billions of dollars into creating brain-machine interfaces and reading brainspray for practical purposes.

I will write more on this later, but be aware that hacking of this equipment is always possible, which could give the wrong people access to your brain waves and reveal whether you are thinking of your bank account PIN or other sensitive matters. Your thoughts of any sort should be protected from view.  Thought-crime has always been on the other side of the line.

Now that it is possible to read your brainspray with greater certainty, we should be considering how to regulate this activity.  I don’t mind giving the search engine my information in exchange for efficient, immediate searches.  But I don’t want to open my head to companies or governments.


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on device hacking, see the Communications, Media & Internet law page on the National Law Review.

Ubers of the Future will Monitor Your Vital Signs

Uber has announced that it is considering developing self-driving cars that monitor how passengers feel during the ride, including their vital signs, in order to provide a stress-free and satisfying trip. This concept was outlined in a patent filed by the company in July 2019. Uber envisions passengers connecting their own health-monitoring devices (e.g., smart watches, activity trackers, heart monitors, etc.) to the vehicle to measure the passenger’s reactions. The vehicle would then synthesize the information, along with other measurements that are taken by the car itself (e.g., thermometers, vehicle speed sensors, driving logs, infrared cameras, microphones, etc.). This type of biometric monitoring could potentially allow the vehicle to assess whether it might be going too fast, getting too close to another vehicle on the road, or applying the brakes too hard. The goal is to use artificial intelligence to create a more ‘satisfying’ experience for the riders in the autonomous vehicle.
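The patent reportedly describes this data fusion only at a high level. As a purely hypothetical sketch of the concept, a vehicle might blend a wearable’s heart-rate reading with its own motion sensors into a rough “stress score” and soften its driving accordingly; every name, weight and threshold below is invented for illustration and comes from me, not from Uber’s filing.

```python
# Purely hypothetical sketch; Uber's patent describes the concept at a high
# level, and none of these names, weights or thresholds come from it.

def stress_score(heart_rate_bpm, resting_bpm, brake_g, lateral_g):
    """Blend a wearable's heart-rate reading with vehicle motion into a 0-1 score."""
    hr_component = max(0.0, (heart_rate_bpm - resting_bpm) / resting_bpm)
    motion_component = min(1.0, abs(brake_g) + abs(lateral_g))
    return min(1.0, 0.6 * hr_component + 0.4 * motion_component)

def adjust_driving(score, following_distance_m):
    """If the passenger seems stressed, ease off speed and increase following distance."""
    if score > 0.7:
        return {"max_speed_factor": 0.85,
                "following_distance_m": following_distance_m * 1.5}
    return {"max_speed_factor": 1.0, "following_distance_m": following_distance_m}

# Example: an elevated heart rate plus hard braking nudges the car to back off.
print(adjust_driving(stress_score(125, 70, 0.4, 0.3), 30))
```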

This proposed technology presents yet another way that ride-sharing companies such as Uber can collect more data from their passengers. Of course, passengers would have the choice about whether to use this feature, but this is another consideration for passengers in this data-driven industry.


Copyright © 2019 Robinson & Cole LLP. All rights reserved.

For more about self-driving cars, see the National Law Review Communications, Media & Internet law page.

Will Technology Return Shame to Our Society?

The sex police are out there on the streets
Make sure the pass laws are not broken

Undercover (of the Night), The Rolling Stones

So, now we know that browsing porn in “incognito” mode doesn’t prevent those sites from leaking your dirty data, courtesy of the friendly folks at Google and Facebook.  Ninety-three per cent of porn sites leak user data to a third party. Of the analyzed sites, Google tracks about 74 per cent, Oracle tracks nearly 24 per cent and Facebook tracks nearly 10 per cent.  Yet, despite such stats, 30 per cent of all internet traffic still relates to porn sites.

The hacker who perpetrated the enormous Capital One data breach outed herself by oversharing on GitHub.  Had she been able to keep her trap shut, we’d probably still not know that she was in our wallets.  Did she want to get caught, or was she simply unashamed of having stolen a Queen’s ransom worth of financial data?

Many have lamented that shame (along with irony, truth and proper grammar) is dead.  I disagree.  I think that shame has been on the outward leg of a boomerang trajectory fueled by technology and is accelerating on the return trip to whack us noobs in the back of our unsuspecting heads.

Technology has allowed us to do all sorts of stuff privately that we used to have to muster the gumption to do in public.  Buying Penthouse the old-fashioned way meant you had to brave the drugstore cashier, who could turn out to be a cheerleader at your high school or your Mom’s PTA friend.  Buying the Biggie Bag at Wendy’s meant enduring the disapproving stares of vegans buying salads and diet iced tea.  Let’s not even talk about ED medication or baldness cures.

All your petty vices and vanity purchases can now be indulged in the sanctity of your bedroom.  Or so you thought.  There is no free lunch, naked or otherwise, we are coming to find.  How will society respond?

Country music advises us to dance like no one is watching and to love like we’ll never get hurt. When we are alone, we can act closer to our baser instincts.  This is why privacy is protective of creativity and subversive behaviors, and why in societies without privacy, people’s behavior regresses toward the most socially acceptable responses.  As my partner Ted Claypoole wrote in Privacy in the Age of Big Data,

“We all behave differently when we know we are being watched and listened to, and the resulting change in behavior is simply a loss of freedom – the freedom to behave in a private and comfortable fashion; the freedom to allow the less socially careful branches of our personalities to flower. Loss of privacy reduces the spectrum of choices we can make about the most important aspects of our lives.

By providing a broader range of choices, and by freeing our choices from immediate review and censure from society, privacy enables us to be creative and to make decisions about ourselves that are outside the mainstream. Privacy grants us the room to be as creative and thought-provoking as we want to be. British scholar and law dean Timothy Macklem succinctly argues that the “isolating shield of privacy enables people to develop and exchange ideas, or to foster and share activities, that the presence or even awareness of other people might stifle. For better and for worse, then, privacy is a sponsor and guardian to the creative and the subversive.”

For the past two decades we have let down our guard, exercising our most subversive and embarrassing expressions of id in what we thought was a private space. Now we see that such privacy was likely an illusion, and we feel as if we’ve been somehow gaslighted into showing our noteworthy bad behavior in the disapproving public square.

Exposure of the Ashley Madison affair-seeking population should have taught us this lesson, but it seems that each generation needs to learn in its own way.

The nerds will, inevitably, figure out how to continue to work and play largely unobserved.  But what of the rest of us?  Will the pincer attack of the advancing surveillance state and the denizens of the Dark Web bring shame back as a countervailing force to govern our behavior?  Will the next decade be marked as the New Puritanism?

Dwight Lyman Moody, a prominent 19th century evangelist, author, and publisher, famously said, “Character is what you are in the dark.”  Through the night vision goggles of technology, more and more of your neighbors can see who you really are, and there are very few of us who can bear that kind of scrutiny.  Maybe Mick Jagger had it right all the way back in 1983, when he advised “Curl up baby/Keep it all out of sight.”  Undercover of the night indeed.



Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

You Can be Anonymised But You Can’t Hide

If you think there is safety in numbers when it comes to the privacy of your personal information, think again. A recent study in Nature Communications found that, given a large enough dataset, anonymised personal information is only an algorithm away from being re-identified.

Anonymised data refers to data that has been stripped of any identifiable information, such as a name or email address. Under many privacy laws, anonymising data allows organisations and public bodies to use and share information without infringing an individual’s privacy, or having to obtain necessary authorisations or consents to do so.

But what happens when that anonymised data is combined with other data sets?

Researchers behind the Nature Communications study found that just 15 demographic attributes are enough to re-identify 99.98% of Americans in any incomplete dataset. The finding may fascinate data analysts, but individuals may be alarmed to hear that their anonymised data can be re-identified so easily and then potentially accessed or disclosed by others in ways they never envisaged.
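To make the mechanics concrete, here is a minimal, entirely hypothetical sketch of the kind of linkage attack that drives such re-identification: a dataset stripped of names is joined to a public record that shares a few demographic attributes. The records and column names are invented, and this illustrates the general technique rather than the study’s own method.

```python
# Hypothetical linkage-attack sketch: made-up records, invented column names.
import pandas as pd

# "Anonymised" health data: direct identifiers removed, quasi-identifiers kept.
anonymised = pd.DataFrame({
    "zip_code":   ["30301", "30301", "10027"],
    "birth_year": [1984, 1991, 1984],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["diabetes", "asthma", "hypertension"],
})

# A public record (think voter roll) that still carries names.
public = pd.DataFrame({
    "name":       ["A. Smith", "B. Jones"],
    "zip_code":   ["30301", "10027"],
    "birth_year": [1984, 1984],
    "sex":        ["F", "F"],
})

# Joining on just three shared attributes re-attaches names to diagnoses.
reidentified = public.merge(anonymised, on=["zip_code", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```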

Re-identification techniques were recently used by the New York Times. Earlier this year, the paper pulled together various public data sources, including an anonymised dataset from the Internal Revenue Service, to reveal a decade’s worth of tax information showing Donald Trump’s negative adjusted gross income. His tax returns had been the subject of great public speculation.

What does this mean for business? Depending on the circumstances, it could mean that simply removing personal information such as names and email addresses is not enough to anonymise data and may be in breach of many privacy laws.

To address these risks, companies like Google, Uber and Apple use “differential privacy” techniques, which add “noise” to datasets so that individuals cannot be re-identified, while still allowing access to the aggregate insights they need.
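The “noise” idea can be illustrated with the classic Laplace mechanism. This is a bare-bones sketch of the general technique, not the implementation any of those companies actually uses, and the epsilon value here is arbitrary.

```python
# Sketch of the Laplace mechanism behind differential privacy: calibrated
# random noise is added to an aggregate so no single record is revealed.
import numpy as np

def private_count(records, epsilon=0.5):
    """Return a noisy count; smaller epsilon means more noise and more privacy."""
    true_count = len(records)
    sensitivity = 1  # adding or removing one person changes a count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

users = ["user_%d" % i for i in range(1000)]
print(private_count(users))  # e.g. 1001.7 -- close to 1000, but never exact
```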

It may come as a surprise to the many businesses using data anonymisation as a quick and cost-effective way to de-personalise data that more may be needed to protect individuals’ personal information.

If you would like to know more about other similar studies, check out our previous blog post ‘The Co-Existence of Open Data and Privacy in a Digital World’.

Copyright 2019 K&L Gates
This article is by Cameron Abbott of K&L Gates.
For more on internet privacy, see the National Law Review Communications, Media & Internet law page.

Hush — They’re Listening to Us

Apple and Google have suspended their practice of reviewing recordings from users interacting with their voice assistant programs. Did you know this was happening to begin with?

These companies engaged in “grading,” a process in which they review supposedly anonymized recordings of conversations people had with voice assistant programs like Siri. A recent Guardian article revealed that these recordings were being passed on to service providers around the world to evaluate whether the voice assistant program was prompted intentionally, and the appropriateness of its responses to the questions users asked.

These recordings can include a user’s most private interactions and are vulnerable to being exposed. Google acknowledged “misconduct” regarding a leak of Dutch-language conversations by one of the language experts contracted to refine its Google Assistant program.

Reports indicate that around 1,000 conversations captured by Google Assistant (available in Google Home smart speakers, Android devices and Chromebooks) were leaked to the Belgian news outlet VRT NWS. Google audio snippets are not associated with particular user accounts as part of the review process, but some of those messages revealed sensitive information such as medical conditions and customer addresses.

Google will suspend using humans to review these recordings for at least three months, according to the Associated Press. This is yet another friendly reminder to Google Assistant users that they can turn off storage of audio data to their Google account completely, or choose to auto-delete data every three months or every 18 months. Apple is also suspending grading and will review its process to improve its privacy practices.

Despite Google and Apple’s recent announcements, enforcement authorities are still looking to take action. The German regulator, the Hamburg Commissioner for Data Protection and Freedom of Information, notified Google of its plan to use Article 66 powers of the General Data Protection Regulation (GDPR) to begin an “urgency procedure.” Since the GDPR’s implementation, we haven’t seen this enforcement action utilized, but its impact is significant, as it allows enforcement authorities to halt data processing when there is “an urgent need to act in order to protect the rights and freedoms of data subjects.”

While Google allows users to opt out of some uses of their recordings, Apple has not provided users that ability other than by disabling Siri entirely. Neither company’s privacy policy explicitly warned users of these recordings, but both reserve the right to use the information collected to improve their services. Apple, however, disclosed that it will soon provide a software update to allow Siri users to opt out of participation in grading.

Since we’re talking about Google Assistant and Siri, we have to mention the third member of the voice assistant triumvirate, Amazon’s Alexa. Amazon employs temporary workers to transcribe the voice commands given to Alexa. Users can opt out of “Help[ing] Improve Amazon Services and Develop New Features” and of allowing their voice recordings to be evaluated.

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

Control Freaks and Bond Villains

The hippy ethos that birthed early management of the internet is beginning to look quaint. Even as a military project, the core internet concept was a decentralized network of unlimited nodes that could reroute itself around danger and destruction. No one could control it because no one could truly manage it. And that was the primary feature, not a bug.

Well, not anymore.

I suppose it shouldn’t surprise us that the forces insisting on dominating their societies are generally opposed to an open internet where all information can be free. Dictators gonna dictate.

Beginning July 17, 2019, the government of Kazakhstan began intercepting all HTTPS internet traffic inside its borders. Local Kazakh ISPs must force their users to install a government-issued certificate into all devices to allow local government agents to decrypt users’ HTTPS traffic, examine its content, re-encrypt with a government certificate and send it on to its intended destination. This is the electronic equivalent of opening every envelope, photocopying the material inside, stuffing that material in a government envelope and (sometimes) sending it to the expected recipient. Except with web sites.

According to ZDNet, the Kazakh government, unsurprisingly, said the measure was “aimed at enhancing the protection of citizens, government bodies and private companies from hacker attacks, Internet fraudsters and other types of cyber threats.” As Robin Hood could have told you, the Sheriff’s actions taken to protect travelers and control brigands can easily result in government control of all traffic and information, especially when that was the plan all along. Security Boulevard reports that “Since Wednesday, all internet users in Kazakhstan have been redirected to a page instructing users to download and install the new certificate.”

This is not the first time that Kazakhstan has attempted to force its citizens to install a root certificate; in 2015 the Kazakhs even applied to Mozilla to have a Kazakh root certificate included in Firefox (Mozilla politely declined).
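For the technically curious, one way a client can notice this kind of interception is certificate pinning: compare the certificate a site actually presents against a known-good fingerprint. The sketch below uses a placeholder pin rather than any real fingerprint and is an illustration, not a hardened tool.

```python
# Sketch of certificate pinning: fetch the certificate a site actually presents
# and compare its SHA-256 fingerprint to a pinned, known-good value. A proxy
# that re-encrypts traffic with a government certificate will present a
# different fingerprint. The pin below is a placeholder, not a real value.
import hashlib
import socket
import ssl

PINNED_SHA256 = "replace-with-a-known-good-fingerprint"

def presented_fingerprint(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)  # leaf certificate, DER bytes
    return hashlib.sha256(der).hexdigest()

fp = presented_fingerprint("example.com")
if fp != PINNED_SHA256:
    print("Certificate does not match the pin; possible interception:", fp)
```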

Despite creative technical solutions, we all know that Kazakhstan is not alone in restricting the internet access of its citizens. For one (gargantuan) example, China’s roughly 800 million internet users have deeply restricted access, and, according to the Washington Post, the Chinese citizenry can’t access Google, Facebook, YouTube or the New York Times, among many, many, many others. The Great Firewall of China involves legislation, government monitoring, technology limitations and cooperation from internet and telecommunications companies. China recently clamped down on WhatsApp and VPNs, which had returned a modicum of control and privacy to the people. And China has taken these efforts two steps beyond nearly anyone else in the world by building a culture of investigation and shame, where its citizens can find their pictures on a local billboard for boorish traffic or internet behavior, or find themselves in jail for questioning the ruling party on the internet. All this is well documented.

Twenty-three countries in Asia and seven in Africa restrict torrents, pornography, political media and social media. The only two European nations with the same restrictions are Turkey and Belarus. Politicians in the U.S. and Europe had hoped that the internet would serve as a force for freedom, knowledge and unlimited communications. Countries like Russia, Cuba and Nigeria also see the internet’s potential, but they prefer to throttle the net to choke off this potential threat to their one-party rule.

For these countries, there is no such thing as private. They think of privacy in context: you may keep thoughts or actions private from companies, but not from the government. On the micro level, it reminds me of family dynamics. When your teenagers talk about privacy, they mean keeping information private from the adults in their lives, not from friends, strangers, or even companies. Controlling governments sing the same song of privacy: as long as information is not kept from them, it can be hidden from others.

The promise of Internet freedom is slipping further away from more people each year as dictators and real-life versions of movie villains figure out how to use the technology for surveillance of everyday people and how to limit access to “dangerous” ideas of liberty. ICANN, the internet management organization set up by the U.S. two decades ago, has proven itself bloated and ineffective at protecting the interests of private internet users.  In fact, it would be surprising if the current leaders of ICANN even felt that such protections were within its purview.

The internet is truly a global phenomenon, but it is managed at local levels, leaving certain populations vulnerable to spying and manipulation by their own governments. Those running the system seem to have resigned themselves to allowing national governments to greatly restrict the human rights of their own citizens.

A tool can be used in many different ways.  A hammer can help build a beautiful home or can be the implement of torture and murder. The internet can be a tool for freedom of thought and expression, where everyone has a publishing and communication platform.  Or it can be a tool for repression. We have come to accept more of the latter than I believed possible.

Post Script —

Also, after a harrowing last 2-5 years where freedom to speak on the internet (and social media) has exploded into horrible real-life consequences, large and small, even the most libertarian and laissez faire of First World residents is slapping the screen to find some way to moderate the flow of ignorance, evil, insanity, inanity and stupidity. This is the other side of the story and fodder for a different post.

And it is also probably time to run an updated discussion of ICANN and its role in internet management.  We heard a great deal about internet leadership in 2016, but not so much lately. Stay Tuned.

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.
For more global & domestic internet developments, see the National Law Review Communications, Media & Internet law page.

No Means No

Researchers from the International Computer Science Institute found up to 1,325 Android applications (apps) gathering data from devices despite being explicitly denied permission.

The study looked at more than 88,000 apps from the Google Play store and tracked data transfers after permission was denied. The 1,325 apps used tools, embedded within their code, that take personal data from Wi-Fi connections and from metadata stored in photos.
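To illustrate the photo-metadata channel, here is a minimal sketch, using the Pillow imaging library and a hypothetical file path, of how location can be recovered from a photo’s EXIF data by code that has file access but was never granted the location permission.

```python
# Sketch of the photo-metadata side channel (hypothetical path; uses Pillow).
# A photo taken with location services on embeds GPS coordinates in its EXIF
# data, so code with file access can learn location without the location permission.
from PIL import Image

GPS_IFD_TAG = 0x8825  # standard EXIF pointer to the GPS information block

def gps_from_photo(path):
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(GPS_IFD_TAG)
    if not gps:
        return None
    # GPS tags: 1 = latitude ref (N/S), 2 = latitude (deg, min, sec),
    #           3 = longitude ref (E/W), 4 = longitude (deg, min, sec)
    return gps.get(1), gps.get(2), gps.get(3), gps.get(4)

print(gps_from_photo("/sdcard/DCIM/IMG_0001.jpg"))  # hypothetical file
```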

Consent presents itself in different ways in the world of privacy. The GDPR is clear in defining consent as it pertains to users’ data. Recital 32 notes that “Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data…” Under the CCPA, consumers can opt out of having their personal data sold.

The specificity of consent has always been a tricky subject.  For decades, companies have offered customers the right to either opt in or out of “marketing,” often in exchange for direct payments. Yet, the promises have been slickly unspecific, so that a consumer never really knows what particular choices are being selected.

Does the option include data collection, and if so, how much? Does it include email, text, phone and postal contacts for every campaign or just some? The GDPR’s specificity provision is supposed to address this problem. But companies are choosing not to offer these options, or are ignoring the consumer’s choice altogether.

Earlier this decade, General Motors caused a media dust-up by admitting it would continue collecting information about specific drivers and vehicles even if those drivers refused the OnStar system or turned it off. Now that policy is built into the OnStar terms of service. GM owners are left without a choice on privacy, and are bystanders to their driving and geolocation data being collected and used.

Apps can monitor people’s movements, finances, and health information. Because of these privacy risks, app platforms like Google and Apple make strict demands of developers, including safe storage and processing of data. Seven years ago, Apple, whose app store has almost 1.8 million apps, issued a statement claiming that “Apps that collect or transmit a user’s contact data without their prior permission are in violation of our guidelines.”

Studies like this remind us mere data subjects that some rules were made to be broken, and that even engaging with devices that have become necessities in our daily lives may cause us to share personal information. What’s more, simply saying no to data collection does not seem to suffice.

It will be interesting to see over the next couple of years whether tighter consent laws like the GDPR and the CCPA can cajole app developers not only to provide specific choices to their customers, but also to actually honor those choices.

 

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.
For more on internet and data privacy concerns, see the National Law Review Communications, Media & Internet page.

Lessons in Becoming a Second Rate Intellectual Power – Through Privacy Regulation!

The EU’s endless regulation of data usage has spilled over into academia, providing another lesson in kneecapping your own society by overregulating it. And they wonder why none of the big internet companies arose from the EU (or ever will). This time, the European data regulators seem to be doing everything they can to hamstring clinical trials and drive the research (and the resulting tens of billions of dollars of annual spend) outside the EU. That’s bad for pharma and biotech companies, but it’s also bad for universities that want to attract, retain, and teach top-notch talent.

The European Data Protection Board’s Opinion 3/2019 (the “Opinion”) fires an early and self-wounding shot in the coming war over the GDPR’s meaning and application of “informed consent.” The EU Board insists on defining “informed consent” in a manner that would cripple most serious health research on humans and human tissue that could have taken place in European hospitals and universities.

As discussed in a US law review article by former Microsoft Chief Privacy Counsel Mike Hintze, Science and Privacy: Data Protection Laws and Their Impact on Research (14 Washington Journal of Law, Technology & Arts 103 (2019)), and noted in a recent IAPP story by Hintze and Gary LaFever, both the strict interpretation of “informed consent” and the GDPR’s right to withdraw consent can cripple serious clinical trials. Further, according to LaFever and Hintze, researchers have raised concerns that “requirements to obtain consent for accessing data for research purposes can lead to inadequate sample sizes, delays and other costs that can interfere with efforts to produce timely and useful research results.”

A clinical researcher must have a “legal basis” to use personal information, especially health information, in trials.  One of the primary legal basis options is simply gaining permission from the test subject for data use.  Only this is not so simple.

On its face, the GDPR requires clear affirmative consent for using personal data (including health data) to be “freely given, specific, informed and unambiguous.” The Opinion clarifies that nearly all operations of a clinical trial – start to finish – are considered regulated transactions involving use of personal information, and special “explicit consent” is required for use of health data. Explicit consent requirements are satisfied by written statements signed by the data subject.

That consent would need to include, among other things:

  • the purpose of each of the processing operations for which consent is sought,
  • what (type of) data will be collected and used, and
  • the existence of the right to withdraw consent.

The Opinion is clear that the EU Board authors believe the nature of clinical trials to be one of an imbalance of power between the data subject and the sponsor of the trial, so that consent for use of personal data would likely be coercive and not “freely given.” This raises the specter that not only can the data subject pull out of trials at any time (or insist his/her data be removed upon completion of the trial), but EU privacy regulators are likely to simply cancel the right to use personal health data because the signatures could not be freely given where the trial sponsor had an imbalance of power over the data subject. Imagine spending years and tens of millions of euros conducting clinical trials, only to have the results rendered meaningless because, suddenly, the trial participants are of an insufficient sample size.

Further, if the clinical trial operator does not get permission to use personal information for analytics, academic publication/presentation, or any other use of the trial results, then the trial operator cannot use the results in these manners. This means that either the trial sponsor insists on broad permissions to use clinical results for almost any purpose (which would raise the specter of coercive permissions), or the trial is hobbled by inability to use data in opportunities that might arise later. All in all, using subject permission as a basis for supporting legal use of personal data creates unnecessary problems for clinical trials.

That leaves the following legal bases for use of personal data in clinical trials:

  • a task carried out in the public interest under Article 6(1)(e), in conjunction with Article 9(2)(i) or (j) of the GDPR; or

  • the legitimate interests of the controller under Article 6(1)(f), in conjunction with Article 9(2)(j) of the GDPR.

Not every clinical trial will be able to establish it is being conducted in the public interest, especially where the trial doesn’t fall “within the mandate, missions and tasks vested in a public or private body by national law.”  Relying on this basis means that a trial could be challenged later as not supported by national law, and unless the researchers have legislators or regulators pass or promulgate a clear statement of support for the research, this basis is vulnerable to privacy regulators’ whims.

Further, as observed by Hintze and LaFever, relying on this legal basis “involves a balancing test between those legitimate interests pursued by the controller or by a third party and the risks to the interests or rights of the data subject.” So even the most controller-centric of legal supports can be reversed if the local privacy regulator feels that a legitimate use is outweighed by the interests of the data subject.  I suppose the case of Henrietta Lacks, if it arose in the EU in the present day, would be a clear situation where a non-scientific regulator could squelch a clinical trial because the data subject’s rights to privacy were considered more important than any trial using her genetic material.

So none of the “legal basis” options is either easy or guaranteed not to be reversed later, once millions in resources have been spent on the clinical trial. Further, as Hintze observes, “The GDPR also includes data minimization principles, including retention limitations which may be in tension with the idea that researchers need to gather and retain large volumes of data to conduct big data analytics tools and machine learning.” This means that privacy regulators could step in and decide that a clinician has been too ambitious in her use of personal data, in violation of data minimization rules, and shut down further use of the data for scientific purposes.

The regulators emphasize that “appropriate safeguards” will help protect clinical trials from interference, but I read such promises in the inverse.  If a hacker gains access to data in a clinical trial, or if some of this data is accidentally emailed to the wrong people, or if one of the 50,000 lost laptops each day contains clinical research, then the regulators will pounce with both feet and attack the academic institution (rarely paragons of cutting edge data security) as demonstrating a lack of appropriate safeguards.  Recent staggeringly high fines against Marriott and British Airways demonstrate the presumption of the ICO, at least, that an entity suffering a hack or losing data some other way will be viciously punished.

If clinicians choosing where to conduct human trials knew about this all-encompassing privacy law and how it throws the very nature of their trials into suspicion and possible jeopardy, I can’t see why they would risk holding trials with residents of the European Economic Area. The uncertainty and risk created by aggressively intrusive privacy regulators now taking a specific interest in clinical trials may drive important academic work overseas. If we see a data breach at a European university or an academic enforcement action based on the laws cited above, it will drive home the risks.

In that case, this particular European shot in the privacy wars is likely to end up pushing serious researchers out of Europe, to the detriment of academic and intellectual life in the Union.

Damaging friendly fire indeed.

 

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

Privacy Concerns Loom as Direct-to-Consumer Genetic Testing Industry Grows

The market for direct-to-consumer (“DTC”) genetic testing has increased dramatically over recent years as more people are using at-home DNA tests. The global market for this industry is projected to hit $2.5 billion by 2024.  Many consumers subscribe to DTC genetic testing because the tests can provide insights into their genetic background and ancestry.  However, as more consumers’ genetic data becomes available and is shared, legal experts are growing concerned that safeguards implemented by U.S. companies are not enough to protect consumers from privacy risks.

States vary in how they regulate genetic testing.  According to the National Conference of State Legislatures, the majority of states have “taken steps to safeguard [genetic] information beyond the protections provided for other types of health information.”  Most states generally restrict what certain parties may do with genetic information without consent.  Rhode Island and Washington require that companies receive written authorization to disclose genetic information.  Alaska, Colorado, Florida, Georgia, and Louisiana have each defined genetic information as “personal property.”  Despite these safeguards, some of these laws still do not adequately address critical privacy and security issues relative to genomic data.

Many testing companies also share and sell genetic data to third parties – albeit in accordance with “take-it-or-leave-it” privacy policies.  This genetic data often contains highly sensitive information about a consumer’s identity and health, such as ancestry, personal traits, and disease propensity.

Further, despite promises made in privacy policies, companies cannot guarantee privacy or data protection.  While a large number of companies only share genetic data when given explicit consent from consumers, other companies have less strict safeguards. In some cases, companies share genetic data on a “de-identified” basis.  However, concerns remain about the ability to effectively de-identify genetic data.  Therefore, even when a company agrees to only share de-identified data, privacy concerns may persist because an emerging consensus is that genetic data cannot truly be de-identified. For instance, some report that the existence of powerful computing algorithms accessible to Big Data analysts makes it very challenging to prevent data from being re-identified.

To complicate matters, patients have historically come to expect that their health information will be protected because the Health Insurance Portability and Accountability Act (“HIPAA”) governs most patient information. Given patients’ expectations of privacy under HIPAA, many consumers assume that this information is maintained and stored securely.  Yet HIPAA does not typically govern the activities of DTC genetic testing companies – leaving consumers to agree to privacy and security protections buried in click-through privacy policies.  To protect consumers’ genetic privacy, the Federal Trade Commission (“FTC”) has recommended that consumers hold off on purchasing a kit until they have scrutinized the company’s website and privacy practices regarding how genomic data is used, stored and disclosed.

Although the regulation of DTC genetic testing companies remains uncertain, it is increasingly evident that consumers expect robust privacy and security controls.  As such, even in the absence of clear privacy or security regulations, DTC genetic testing companies should consider implementing robust privacy and security programs to manage these risks.  Companies should also approach data sharing with caution.  For further guidance, companies in this space may want to review the Privacy Best Practices for Consumer Genetic Testing Services issued by the Future of Privacy Forum in July 2018.  Further, the legal and regulatory privacy landscape is rapidly expanding and evolving, such that DTC genetic testing companies and the consumers they serve should be watchful of changes to how genetic information may be collected, used and shared over time.

 

©2019 Epstein Becker & Green, P.C. All rights reserved.
This article was written by Brian Hedgeman and Alaap B. Shah of Epstein Becker & Green, P.C.