China’s TikTok Facing Privacy & Security Scrutiny from U.S. Regulators, Lawmakers

Perhaps it is a welcome reprieve for Facebook, Google and YouTube. A competing video-sharing social media company based in China has drawn the attention of U.S. privacy officials and lawmakers, with a confidential investigation under way and public hearings taking place on Capitol Hill.

Reuters broke the story that the Treasury Department’s Committee on Foreign Investment in the United States (CFIUS) is conducting a national security review of the owner of TikTok, a social media video-sharing platform that claims a young but formidable U.S. audience of 26.5 million users. The review stems from TikTok owner ByteDance Technology Co.’s $1 billion acquisition of U.S. social media app Musical.ly two years ago, a deal ByteDance did not submit to the agency for review.

Meanwhile, U.S. legislators are concerned about censorship of political content, such as coverage of protests in Hong Kong, and the location and security of personal data the company stores on U.S. citizens.

Sen. Josh Hawley (R-Mo.), Chairman of the Judiciary Committee’s Subcommittee on Crime and Terrorism, invited TikTok and others to testify in Washington this week for hearings titled “How Corporations and Big Tech Leave Our Data Exposed to Criminals, China, and Other Bad Actors.”

While TikTok did not send anyone to testify, Vanessa Pappas, the company’s recently appointed General Manager for North America and Australia and a YouTube veteran, sent a letter stating that the company does not store data on U.S. citizens in China. In an open letter on the TikTok website, which reportedly reads much like the one sent to the subcommittee, she explained that the company is well aware of its privacy obligations under U.S. regulations and is taking a number of measures to meet them.

For nearly eight years Pappas served as YouTube’s Global Head of Creative Insights and, before that, led its audience development. In late 2018 she became a strategic advisor to ByteDance, and in January 2019 she became TikTok’s U.S. General Manager. In July her territory expanded to North America and Australia. Selecting someone who held such a leadership position at YouTube, a platform widely used and familiar to Americans, to lead U.S. operations may serve to calm the nerves of U.S. regulators. But given U.S. tensions with China over trade, security and intellectual property, TikTok and Pappas have a way to go.

Some commentators think Facebook must enjoy watching TikTok get its turn in the spotlight, especially since TikTok is a growing competitor to Facebook in the younger market. If only briefly, it may divert some of the global attention being paid to the social media giant’s privacy and data collection practices, and the many fines that have followed.

It’s clear that TikTok has Facebook’s attention. TikTok, which allows users to create and share short videos with special effects, advertised heavily on Facebook. The ads clearly targeted the teen demographic and were apparently successful. CEO Mark Zuckerberg recently said in a speech that mentions of the Hong Kong protests were censored in TikTok feeds in China, and even in feeds reaching the United States, something TikTok denied. In a case of unfortunate timing, Zuckerberg this week posted that 100 or so software developers may have improperly accessed Facebook user data.

Since TikTok is largely a short-video sharing application, it competes at some level with YouTube in the youth market. In the third quarter of 2019, 81 percent of U.S. internet users aged 15 to 25 accessed YouTube, according to figures collected by Statista. YouTube boasts more than 126 million monthly active users in the U.S., 100 million more than TikTok.

Potential counterintelligence threat ‘we cannot ignore’

Last month, U.S. Senate Minority Leader Chuck Schumer (D-NY) and Senator Tom Cotton (R-AR) asked the Acting Director of National Intelligence to conduct a national security probe of TikTok and other Chinese companies. They expressed concern about the collection of user data, about whether the Chinese government censors the content feeds that reach the U.S., as Zuckerberg suggested, and about whether foreign influence campaigns were using TikTok to advance their objectives.

“With over 110 million downloads in the U.S. alone,” the Schumer and Cotton letter read, “TikTok is a potential counterintelligence threat we cannot ignore. Given these concerns, we ask that the Intelligence Community conduct an assessment of the national security risks posed by TikTok and other China-based content platforms operating in the U.S. and brief Congress on these findings.” They must be happy with Sen. Hawley’s hearings.

In her statement, TikTok GM Pappas offered the following assurances:

  • U.S. user data is stored in the United States with backup in Singapore — not China.
  • TikTok’s U.S. team does what’s best for the U.S. market, with “the independence to do so.”
  • The company is committed to operating with greater transparency.
  • California-based employees lead TikTok’s moderation efforts for the U.S.
  • TikTok uses machine learning tools and human content reviews.
  • Moderators review content for adherence to U.S. laws.
  • TikTok has a dedicated team focused on cybersecurity and privacy policies.
  • The company conducts internal and external reviews of its security practices.
  • TikTok is forming a committee of users to serve them responsibly.
  • The company has banned political advertising.

Both TikTok and YouTube have been stung by failing to follow the rules when it comes to the youth and children’s market. In February, TikTok agreed to pay $5.7 million to settle FTC allegations that, through the Musical.ly app, the company illegally collected personal information from children. At the time it was the largest civil penalty ever obtained by the FTC in a case brought under the Children’s Online Privacy Protection Act (COPPA). The law requires websites and online services directed at children to obtain parental consent before collecting personal information from kids under 13. That record was smashed in September, though, when Google and its YouTube subsidiary agreed to pay $170 million to settle allegations brought by the FTC and the New York Attorney General that YouTube was also collecting personal information from children without parental consent. The settlement required Google and YouTube to pay $136 million to the FTC and $34 million to New York.

Quality degrades when near-monopolies exist

What I am watching for here is whether (and how) TikTok and other social media platforms respond to these scandals by competing on privacy.

For example, in its early years Facebook lured users with the promise of privacy. It was eventually successful in defeating competitors that offered little in the way of privacy, such as MySpace, which fell from a high of 75.9 million users to 8 million today. But as Facebook developed a dominant position in social media through acquisition of competitors like Instagram or by amassing data, the quality of its privacy protections degraded. This is to be expected where near-monopolies exist and anticompetitive mergers are allowed to close.

Now perhaps the pendulum is swinging back. As privacy regulation and publicity around privacy transgressions increase, competitive forces may come back into play, forcing social media platforms to compete on the quality of their consumer privacy protections once again. That would be a great development for consumers.

 


© MoginRubin LLP

ARTICLE BY Jennifer M. Oliver of MoginRubin.
Edited by Tom Hagy for MoginRubin LLP.
For more on social media app privacy concerns, see the National Law Review Communications, Media & Internet law page.

CCPA Alert: California Attorney General Releases Draft Regulations

On October 10, 2019, the California Attorney General released the highly anticipated draft regulations for the California Consumer Privacy Act (CCPA). The regulations focus heavily on three main areas: 1) notices to consumers, 2) consumer requests and 3) verification requirements. While the regulations focus heavily on these three topics, they also discuss special rules for minors, non-discrimination standards and other aspects of the CCPA. Despite high hopes, the regulations do not provide the clarity many companies desired. Instead, the regulations layer on new requirements while sprinkling in further ambiguities.

The most surprising new requirements proposed in the regulations include:

  • New disclosure requirements for businesses that collect personal information from more than 4,000,000 consumers
  • Businesses must acknowledge the receipt of consumer requests within 10 days
  • Businesses must honor “Do Not Sell” requests within 15 days and inform any third parties who received the personal information of the request within 90 days
  • Businesses must obtain consumer consent to use personal information for a use not disclosed at the time of collection

The following are additional highlights from each of the three main areas:

1. Notices to consumers

The regulations discuss four types of notices to consumers: notice at the time of collection, notice of the right to opt-out of the sale of personal information, notice of financial incentives and a privacy policy. All required notices must be:

  • Easy to read in plain, straightforward language
  • In a format that draws the consumer’s attention to the notice
  • Accessible to those with disabilities
  • Available in all languages in which the company regularly conducts business

The regulations make clear that updating your privacy policy is necessary, but not sufficient, for CCPA compliance. You must also provide notice to consumers at the time of data collection, and that notice must be visible and accessible before any personal information is collected; without proper notice, no personal information may be collected. You may use your privacy policy as the notice at the time of collection, but you must link to the specific section of your privacy policy that provides the statutorily required notice.

The regulations specifically provide that for offline collection, businesses could provide a paper version of the notice or post prominent signage. Similar to the General Data Protection Regulation (GDPR), a company may only use personal information for the purposes identified at the time of collection. Otherwise, the business must obtain explicit consent to use the personal information for a new purpose.

In addition to the privacy policy requirements in the statute itself, the regulations require more privacy policy disclosures. For example, the business must include instructions on how to verify a consumer request and how to exercise consumer rights through an agent. Further, the privacy policy must identify the following information for each category of personal information collected: the sources of the information, how the information is used and the categories of third parties to whom the information is disclosed. For businesses that collect personal information of 4,000,000 or more consumers, the regulations require additional disclosures related to the number of consumer requests and the average response times. Given the additional nuances of the disclosure requirements, we recommend working with counsel to develop your privacy policy.

If a business provides financial incentives to a consumer for allowing the sale of their personal information, then the business must provide a notice of the financial incentive. The notice must include a description of the incentive, its material terms, instructions on how to opt-in to the incentive, how to withdraw from the incentive and an explanation of why the incentive is permitted by CCPA.

Finally, the regulations state that service providers that collect personal information on behalf of a business may not use that personal information for their own purposes. Instead, they are limited to performing only their obligations under the contract between the business and service provider. The contract between the parties must also include the provisions described in CCPA to ensure that the relationship is a service provider/business relationship, and not a sale of personal information between a business and third party.

2. Consumer requests

Businesses must provide at least two methods for consumers to submit requests (most commonly an online form and a toll-free number), and one of the methods must reflect the manner in which the business primarily interacts with the consumer. In addition, businesses that substantially interact with consumers offline must provide an offline method for consumers to exercise their right to opt-out, such as providing a paper form. The regulations specifically call out that in-person retailers may therefore need three methods: a paper form, an online form and a toll-free number.

The regulations do limit some consumer request rights by prohibiting the disclosure of Social Security numbers, driver’s license numbers, financial account numbers, medical-related identification numbers, passwords, and security questions and answers. Presumably, this is for two reasons: the individual should already know this information and most of these types of information are subject to exemptions from CCPA.

One of the most notable clarifications related to requests is that the 45-day timeline to respond to a consumer request includes any time required to verify the request. Additionally, the regulations introduce new timeline requirements: businesses must confirm receipt of a request within 10 days, must respond to opt-out requests within 15 days, must inform all third parties to stop selling the consumer’s information within 90 days, and must maintain logs of requests for 24 months.
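For teams building request-intake workflows, these various clocks are easy to conflate. Here is a minimal sketch of the deadline calendar the draft implies, assuming calendar days (the draft does not settle calendar versus business days); all function and field names are ours, purely illustrative:

```python
from datetime import date, timedelta

def ccpa_deadlines(received: date) -> dict:
    """Deadlines implied by the draft regulations for a request received on `received`.
    Day counts track the draft text; calendar-day treatment is an assumption."""
    return {
        "confirm_receipt": received + timedelta(days=10),
        "honor_opt_out": received + timedelta(days=15),
        "substantive_response": received + timedelta(days=45),   # includes verification time
        "notify_third_parties": received + timedelta(days=90),   # for "Do Not Sell" requests
        "retain_request_records_until": received + timedelta(days=730),  # 24-month log
    }

# Example: a request received January 2, 2020 must be acknowledged by January 12.
print(ccpa_deadlines(date(2020, 1, 2))["confirm_receipt"])
```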

3. Verification requirements

The most helpful guidance in the regulations relates to verification requests. The regulations provide that a more rigorous verification process should apply to more sensitive information. That is, businesses should not release sensitive information without being highly certain about the identity of the individual requesting the information. Businesses should, where possible, avoid collecting new personal information during the verification process and should instead rely on confirming information already in the business’ possession. Verification can be through a password-protected account provided that consumers re-authenticate themselves. For websites that provision accounts to users, requests must be made through that account. Matching two data points provided by the consumer with data points maintained by the business constitutes verification to a reasonable degree of certainty, and the matching of three data points constitutes a high degree of certainty.
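In code terms, the data-point matching rules reduce to a small decision table. A hypothetical sketch (the sensitivity flag and function names are ours, not the regulations’):

```python
TIERS = {"unverified": 0, "reasonable": 1, "high": 2}

def certainty_tier(matched_points: int) -> str:
    """Map the number of matched data points to the regulations' certainty language."""
    if matched_points >= 3:
        return "high"
    if matched_points == 2:
        return "reasonable"
    return "unverified"

def may_disclose(is_sensitive: bool, matched_points: int) -> bool:
    # More sensitive information demands a more rigorous verification process.
    required = "high" if is_sensitive else "reasonable"
    return TIERS[certainty_tier(matched_points)] >= TIERS[required]

# Two matched points suffice for ordinary categories, but not for sensitive ones.
assert may_disclose(is_sensitive=False, matched_points=2)
assert not may_disclose(is_sensitive=True, matched_points=2)
```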

The regulations also provide prescriptive steps of what to do in cases where an identity cannot be verified. For example, if a business cannot verify the identity of a person making a request for access, then the business may proceed as if the consumer requested disclosure of only the categories of personal information, as opposed to the content of such personal information. If a business cannot verify a request for deletion, then the business should treat the request as one to opt-out of the sale of personal information.

Next steps

These draft regulations add new wrinkles, and some clarity, to what is required for CCPA compliance. As we move closer to January 1, 2020, companies should continue to focus on preparing compliant disclosures and notices, finalizing their privacy policies and establishing procedures to handle consumer requests. Despite the need to press forward on compliance, the regulations are open to initial public comment until December 6, 2019, with a promise to finalize them in the spring of 2020. We expect further clarity as the draft regulations go through the comment process and privacy professionals, attorneys, businesses and other stakeholders weigh in on their clarity and reasonableness.


Copyright © 2019 Godfrey & Kahn S.C.

For more on CCPA implementation, see the National Law Review Consumer Protection law page.

Can We Really Forget?

I expected this post would turn out differently.

I had intended to commend the European Court of Justice for placing sensible limits on the extraterritorial enforcement of the EU’s Right to be Forgotten. They did, albeit in a limited way,[1] and it was a good decision. There.  I did it. In 154 words.

Now for the remaining 1400 or so words.

But reading the decision pushes me back into frustration at the entire Right to be Forgotten regime and its illogical and destructive basis. That a court recognizes the EU cannot (generally) force foreign companies to violate the laws of their own countries on internet sites intended for use within those countries (and NOT the EU) does not come close to offsetting the logical, practical and societal problems with the way the EU perceives and enforces the Right to be Forgotten.

As a lawyer whose thinking is grounded in the U.S. Constitution, I am comfortable with the First Amendment’s protection of freedom of speech – that nearly any truthful utterance or publication is inviolate, and that the foundation of our political and social system depends on exposing facts to sunlight. Intentionally shoving true facts into the dark is wrong in our system, and openness will be protected by U.S. courts.

Believe it or not, the European Union has such a concept at the core of its foundation too. Article 10 of the European Convention on Human Rights states that:

“Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.”

So we have the same values, right? In both jurisdictions the right to impart information can be exercised without interference by public authority. Not so fast. EU law contains a litany of restrictions on this right, including a limitation on free speech in the name of protecting the reputation of others.

This seems like a complete evisceration of a right to open communication if a court can force obfuscation of facts just to protect someone’s reputation.  Does this person deserve a bad reputation? Has he or she committed a crime, failed to pay his or her debts, harmed animals or children, stalked an ex-lover, or violated an oath of office, marriage, priesthood or citizenship? It doesn’t much matter in the EU. The right of that person to hide his/her bad or dangerous behavior outweighs both the allegedly fundamental right to freedom to impart true information AND the public’s right to protect itself from someone who has proven himself/herself to be a risk to the community.

So how does this tension play out over the internet? In the EU, it is law that Google and other search engines must remove links to true facts about any wrongdoer who feels his/her reputation may be tarnished by the discovery of the truth about that person’s behavior. Get into a bar fight? Don’t worry, the EU will put the entire force of law behind your request to wipe that off your record. Stiff your painting contractors for tens of thousands of euros despite their good performance? Don’t worry, the EU will make sure nobody can find out. Get fired, removed from office or defrocked for dishonesty? Don’t worry, the EU has your back.

And that undercutting of speech rights has now been codified in Article 17 of Regulation 2016/679, the Right to be Forgotten.

And how does this new decision affect the rule? In the past couple weeks, the Grand Chamber of the EU Court of Justice issued an opinion limiting the extraterritorial reach of the Right to be Forgotten. (Google vs CNIL, Case C‑507/17) The decision confirms that search engines must remove links to certain embarrassing instances of true reporting, but must only do so on the versions of the search engine that are intentionally servicing the EU, and not necessarily in versions of the search engines for non-EU jurisdictions.

The problems with appointing Google to be an extrajudicial magistrate, enforcing vague EU-granted rights under a highly ambiguous set of standards, and then fining it when you don’t like a decision you forced it to make, deserve a separate post.

Why did we even need this decision? Because the French data privacy protection agency, known as CNIL, fined Google for not removing presumably true data from non-EU search results concerning, as Reuters described, “a satirical photomontage of a female politician, an article referring to someone as a public relations officer of the Church of Scientology, the placing under investigation of a male politician and the conviction of someone for sexual assaults against minors.” So, to be clear, while the official French agency believes it should enforce a right for people to obscure from the whole world that they have been convicted of sexual assault against children, the Grand Chamber of the European Court of Justice believes that people convicted of child sexual assault should be protected in their right to obscure these facts only from people in Europe. This is progress.

Of course, in the U.S., politicians and other public figures, under investigation or subject to satire or people convicted of sexual assault against children do not have a right to protect their reputations by forcing Google to remove links to public records or stories in news outlets. We believe both that society is better when facts are allowed to be reported and disseminated and that society is protected by reporting on formal allegations against public figures or criminal convictions of private ones.

I am glad that the EU Court of Justice is willing to restrict rules to remain within its jurisdiction where they openly conflict with the basic laws of other jurisdictions. The Court sensibly held,

“The idea of worldwide de-referencing may seem appealing on the ground that it is radical, clear, simple and effective. Nonetheless, I do not find that solution convincing, because it takes into account only one side of the coin, namely the protection of a private person’s data.[2] . . . [T]he operator of a search engine is not required, when granting a request for de-referencing, to operate that de-referencing on all the domain names of its search engine in such a way that the links at issue no longer appear, regardless of the place from which the search on the basis of the requester’s name is carried out.”

Any other decision would be wildly overreaching. Believe me, every country in the EU would be howling in protest if the U.S. decided that its views of personal privacy must be enforced in Europe by European companies whose operations are aimed only at Europe. It should work both ways. So this was a well-reasoned limitation.

But I just cannot bring myself to be complimentary of a regime that I find so repugnant – where nearly any bad action can be swept under the rug in the name of protecting a person’s reputation.

As I have written in books and articles in the past, government protection of personal privacy is crucial for the clean and correct operation of a democracy.  However, privacy is also the obvious refuge of scoundrels – people prefer to keep the bad things they do private. Who wouldn’t? But one can go overboard protecting this right, and it feels like the EU has institutionalized its leap overboard.

I would rather err on the side of sunshine, giving up some privacy in the service of revealing the truth, than err on the side of darkness, allowing bad deeds to be obscured so that those who commit them can maintain their reputations.  Clearly, the EU doesn’t agree with me.


[1] The Court, in this case, wrote, “The issues at stake therefore do not require that the provisions of Directive 95/46 be applied outside the territory of the European Union. That does not mean, however, that EU law can never require a search engine such as Google to take action at worldwide level. I do not exclude the possibility that there may be situations in which the interest of the European Union requires the application of the provisions of Directive 95/46 beyond the territory of the European Union; but in a situation such as that of the present case, there is no reason to apply the provisions of Directive 95/46 in such a way.”

[2] EU Court of Justice case C-136/17, which states, “While the data subject’s rights [to privacy] override, as a general rule, the freedom of information of internet users, that balance may, however, depend, in specific cases, on the nature of the information in question and its sensitivity for the data subject’s private life and on the interest of the public in having that information. . . .”

 


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on the EU’s GDPR enforcement, see the National Law Review Communications, Media & Internet law page.

Head Hacking: New Devices Gather Brainspray

For more than a decade I have been warning about the vulnerability of brainspray – the brain signals that can be captured from outside your head. In 2008, this article by Jeffrey Goldberg demonstrated that an fMRI machine could easily interpret how a person felt about stimuli provided – which could be a boon to totalitarian governments testing for people’s true feelings about the government or its Dear Leader. Of course, in 2008 the fMRI cost two million dollars and you had to lie still inside it for a useful reading to emerge.

While fMRI mind reading and lie detection are not yet ready for the courtroom, the interpretations are improving all the time and mobile units are under consideration. And fMRI’s wearable cousins, like Apple Watches and computerized head gear, are reading changes from within your body: electrocardiogram, heart rate, blood pressure, respiration rate, blood oxygen saturation, blood glucose, skin perspiration, capnography, body temperature, motion, output from cardiac implantable devices and ambient parameters. Certain head gear is calibrated just for brain waves.

Some of this is gaming equipment and some helps you meditate. Biofeedback headsets measure your brain waves using EEG. They’re small bands that sit easily on your head and measure activity through sensors. Companies like NeuroSky (maker of the MindWave headset), Thync and Versus make such equipment available to the general public.

Of course, if you really want to frighten yourself about how far this technology has advanced, check in on DARPA and the rest of the U.S. military. DARPA has been testing brainwave-filtering binoculars, human brainwave-driven targeting for killer robots, and soldier brain-machine interfaces for military vehicles. And these are just the things they are currently willing to discuss in public.

I wrote six years ago about how big companies like Honda were exploring brainspray capture, and have spoken about how Google, Facebook and other Silicon Valley giants have sunk billions of dollars into creating brain-machine interfaces and reading brainspray for practical purposes.

I will write more on this later, but be aware that hacking of this equipment is always possible, which could give the wrong people access to your brain waves and reveal whether you are thinking of your bank account PIN or other sensitive matters. Your thoughts of any sort should be protected from view. Thought-crime has always been on the other side of the line.

Now that it is possible to read your brainspray with greater certainty, we should be considering how to regulate this activity. I don’t mind giving the search engine my information in exchange for efficient, immediate searches. But I don’t want to open my head to companies or governments.


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on device hacking, see the Communications, Media & Internet law page on the National Law Review.

Ubers of the Future will Monitor Your Vital Signs

Uber has announced that it is considering developing self-driving cars that monitor passengers’ vital signs, rather than merely asking passengers how they feel during the ride, in order to provide a stress-free and satisfying trip. This concept was outlined in a patent the company filed in July 2019. Uber envisions passengers connecting their own health-monitoring devices (e.g., smart watches, activity trackers, heart monitors, etc.) to the vehicle to measure the passenger’s reactions. The vehicle would then synthesize that information along with other measurements taken by the car itself (e.g., thermometers, vehicle speed sensors, driving logs, infrared cameras, microphones, etc.). This type of biometric monitoring could potentially allow the vehicle to assess whether it might be going too fast, getting too close to another vehicle on the road, or applying the brakes too hard. The goal is to use artificial intelligence to create a more ‘satisfying’ experience for the riders in the autonomous vehicle.
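As a purely hypothetical sketch of the kind of data fusion the patent contemplates (none of these signal names or thresholds come from the filing), a vehicle might flag moments where a passenger’s heart rate spikes at the same time as hard braking:

```python
def stress_events(heart_rate, braking_g, window=5):
    """Flag seconds where a heart-rate spike coincides with hard braking.
    Inputs are per-second samples; the 20% spike and 0.4 g thresholds are invented."""
    events = []
    for t in range(window, min(len(heart_rate), len(braking_g))):
        baseline = sum(heart_rate[t - window:t]) / window
        if heart_rate[t] > 1.2 * baseline and braking_g[t] > 0.4:
            events.append(t)  # candidate moments to soften the driving profile
    return events

# A calm ride with one hard stop at t=6 that spooks the passenger:
print(stress_events([70, 71, 70, 72, 71, 70, 95], [0, 0, 0, 0, 0, 0, 0.6]))  # [6]
```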

This proposed technology presents yet another way that ride-sharing companies such as Uber can collect more data from their passengers. Of course, passengers would have the choice about whether to use this feature, but this is another consideration for passengers in this data-driven industry.


Copyright © 2019 Robinson & Cole LLP. All rights reserved.

For more about self-driving cars, see the National Law Review Communications, Media & Internet law page.

Will Technology Return Shame to Our Society?

The sex police are out there on the streets
Make sure the pass laws are not broken

Undercover (of the Night), The Rolling Stones

So, now we know that browsing porn in “incognito” mode doesn’t prevent those sites from leaking your dirty data, courtesy of the friendly folks at Google and Facebook. Ninety-three per cent of porn sites leak user data to a third party. Google tracks about 74 per cent of the analyzed porn sites, while Oracle tracks nearly 24 per cent and Facebook nearly 10 per cent. Yet, despite such stats, 30 per cent of all internet traffic still relates to porn sites.

The hacker who perpetrated the enormous Capital One data breach outed herself by oversharing on GitHub. Had she been able to keep her trap shut, we’d probably still not know that she was in our wallets. Did she want to get caught, or was she simply unashamed of having stolen a Queen’s ransom worth of financial data?

Many have lamented that shame (along with irony, truth and proper grammar) is dead.  I disagree.  I think that shame has been on the outward leg of a boomerang trajectory fueled by technology and is accelerating on the return trip to whack us noobs in the back of our unsuspecting heads.

Technology has allowed us to do all sorts of stuff privately that we used to have to muster the gumption to do in public.  Buying Penthouse the old-fashioned way meant you had to brave the drugstore cashier, who could turn out to be a cheerleader at your high school or your Mom’s PTA friend.  Buying the Biggie Bag at Wendy’s meant enduring the disapproving stares of vegans buying salads and diet iced tea.  Let’s not even talk about ED medication or baldness cures.

All your petty vices and vanity purchases can now be indulged in the sanctity of your bedroom.  Or so you thought.  There is no free lunch, naked or otherwise, we are coming to find.  How will society respond?

Country music advises us to dance like no one is watching and to love like we’ll never get hurt. When we are alone, we can act closer to our baser instincts.  This is why privacy is protective of creativity and subversive behaviors, and why in societies without privacy, people’s behavior regresses toward the most socially acceptable responses.  As my partner Ted Claypoole wrote in Privacy in the Age of Big Data,

“We all behave differently when we know we are being watched and listened to, and the resulting change in behavior is simply a loss of freedom – the freedom to behave in a private and comfortable fashion; the freedom to allow the less socially-careful branches of our personalities to flower. Loss of privacy reduces the spectrum of choices we can make about the most important aspects of our lives.

By providing a broader range of choices, and by freeing our choices from immediate review and censure from society, privacy enables us to be creative and to make decisions about ourselves that are outside the mainstream. Privacy grants us the room to be as creative and thought-provoking as we want to be. British scholar and law dean Timothy Macklem succinctly argues that the “isolating shield of privacy enables people to develop and exchange ideas, or to foster and share activities, that the presence or even awareness of other people might stifle. For better and for worse, then, privacy is a sponsor and guardian to the creative and the subversive.”

For the past two decades we have let down our guard, exercising our most subversive and embarrassing expressions of id in what we thought was a private space. Now we see that such privacy was likely an illusion, and we feel as if we’ve been somehow gaslighted into displaying our noteworthy bad behavior in the disapproving public square.

Exposure of the Ashley Madison affair-seeking population should have taught us this lesson, but it seems that each generation needs to learn in its own way.

The nerds will, inevitably, figure out how to continue to work and play largely unobserved.  But what of the rest of us?  Will the pincer attack of the advancing surveillance state and the denizens of the Dark Web bring shame back as a countervailing force to govern our behavior?  Will the next decade be marked as the New Puritanism?

Dwight Lyman Moody, a prominent 19th-century evangelist, author and publisher, famously said, “Character is what you are in the dark.” Through the night-vision goggles of technology, more and more of your neighbors can see who you really are, and there are very few of us who can bear that kind of scrutiny. Maybe Mick Jagger had it right all the way back in 1983, when he advised “Curl up baby/Keep it all out of sight.” Undercover of the night indeed.



Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

You Can be Anonymised But You Can’t Hide

If you think there is safety in numbers when it comes to the privacy of your personal information, think again. A recent study in Nature Communications found that, given a large enough dataset, anonymised personal information is only an algorithm away from being re-identified.

Anonymised data refers to data that has been stripped of any identifiable information, such as a name or email address. Under many privacy laws, anonymising data allows organisations and public bodies to use and share information without infringing an individual’s privacy, or having to obtain necessary authorisations or consents to do so.

But what happens when that anonymised data is combined with other data sets?

Researchers behind the Nature Communications study found that using only 15 demographic attributes can re-identify 99.98% of Americans in any incomplete dataset. While fascinating for data analysts, individuals may be alarmed to hear that their anonymised data can be re-identified so easily and potentially then accessed or disclosed by others in a way they have not envisaged.
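The underlying attack is little more than a filter. A hedged sketch of attribute matching against a notionally anonymised dataset (the column names are invented for illustration):

```python
import pandas as pd

def matching_records(anon: pd.DataFrame, known: dict) -> pd.DataFrame:
    """Rows of an 'anonymised' dataset consistent with everything known about a target."""
    mask = pd.Series(True, index=anon.index)
    for column, value in known.items():
        mask &= anon[column] == value
    return anon[mask]

# If exactly one row survives the filter, the 'anonymous' record is re-identified.
# With 15 demographic attributes to match on, a unique survivor is overwhelmingly likely:
# matching_records(df, {"zip": "53703", "birth_year": 1984, "sex": "F", "vehicle": "hatchback"})
```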

Re-identification techniques were recently used by the New York Times. In March this year, the Times pulled together various public data sources, including an anonymised dataset from the Internal Revenue Service, in order to reveal a decade’s worth of Donald Trump’s tax returns showing negative adjusted gross income. His tax returns had been the subject of great public speculation.

What does this mean for business? Depending on the circumstances, it could mean that simply removing personal information such as names and email addresses is not enough to anonymise data and may be in breach of many privacy laws.

To address these risks, companies like Google, Uber and Apple use “differential privacy” techniques, which add “noise” to datasets so that individuals cannot be re-identified, while still allowing access to the information outcomes they need.
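For a concrete sense of what “adding noise” means, here is a minimal Laplace-mechanism sketch, the textbook form of differential privacy rather than any particular company’s implementation:

```python
import numpy as np

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count perturbed with Laplace noise.
    Adding or removing one person changes a count by at most 1 (sensitivity 1),
    so noise drawn at scale 1/epsilon masks any individual's presence."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Smaller epsilon means more noise: stronger privacy, less accuracy.
print(dp_count(10_000, epsilon=0.1))  # e.g. 9991.7
```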

It comes as a surprise to many businesses using data anonymisation as a quick and cost-effective way to de-personalise data that more may be needed to protect individuals’ personal information.

If you would like to know more about other similar studies, check out our previous blog post ‘The Co-Existence of Open Data and Privacy in a Digital World’.

Copyright 2019 K & L Gates
This article is by Cameron Abbott of K&L Gates.
For more on internet privacy, see the National Law Review Communications, Media & Internet law page.

Hush — They’re Listening to Us

Apple and Google have suspended their practice of reviewing recordings from users interacting with their voice assistant programs. Did you know this was happening to begin with?

These companies engaged in “grading,” a process in which they review supposedly anonymized recordings of conversations people had with voice assistant programs like Siri. A recent Guardian article revealed that these recordings were being passed on to service providers around the world to evaluate whether the voice assistant was prompted intentionally, and the appropriateness of its responses to the questions users asked.

These recordings can include a user’s most private interactions and are vulnerable to being exposed. Google acknowledged “misconduct” regarding a leak of Dutch-language conversations by one of the language experts contracted to refine its Google Assistant program.

Reports indicate that around 1,000 conversations captured by Google Assistant (available in Google Home smart speakers, Android devices and Chromebooks) were leaked to Belgian news outlet VRT NWS. Google audio snippets are not associated with particular user accounts as part of the review process, but some of those messages revealed sensitive information such as medical conditions and customer addresses.

Google will suspend using humans to review these recordings for at least three months, according to the Associated Press. This is yet another friendly reminder to Google Assistant users that they can stop audio data from being stored to their Google account entirely, or choose to have it auto-deleted every three months or every 18 months. Apple is also suspending grading and will review its process to improve its privacy practices.

Despite Google and Apple’s recent announcements, enforcement authorities are still looking to take action. The German regulator, the Hamburg Commissioner for Data Protection and Freedom of Information, notified Google of its plan to use Article 66 powers of the General Data Protection Regulation (GDPR) to begin an “urgency procedure.” Since the GDPR’s implementation, we haven’t seen this enforcement action utilized, but its impact is significant: it allows enforcement authorities to halt data processing when there is “an urgent need to act in order to protect the rights and freedoms of data subjects.”

While Google allows users to opt out of some uses of their recordings, Apple has not provided users that ability other than by disabling Siri entirely. Neither company’s privacy policy explicitly warned users of these recordings, but both reserve the right to use the information collected to improve their services. Apple, however, disclosed that it will soon provide a software update to allow Siri users to opt out of participation in grading.

Since we’re talking about Google Assistant and Siri, we have to mention the third member of the voice assistant triumvirate, Amazon’s Alexa. Amazon employs temporary workers to transcribe the voice commands Alexa captures. Users can opt out of “Help[ing] Improve Amazon Services and Develop New Features” and of allowing their voice recordings to be evaluated.

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

Control Freaks and Bond Villains

The hippy ethos that birthed early management of the internet is beginning to look quaint. Even as a military project, the core internet concept was a decentralized network of unlimited nodes that could reroute itself around danger and destruction. No one could control it because no one could truly manage it. And that was the primary feature, not a bug.

Well, not anymore.

I suppose it shouldn’t surprise us that the forces insisting on dominating their societies are generally opposed to an open internet where all information can be free. Dictators gonna dictate.

Beginning July 17, 2019, the government of Kazakhstan began intercepting all HTTPS internet traffic inside its borders. Local Kazakh ISPs must force their users to install a government-issued certificate into all devices to allow local government agents to decrypt users’ HTTPS traffic, examine its content, re-encrypt with a government certificate and send it on to its intended destination. This is the electronic equivalent of opening every envelope, photocopying the material inside, stuffing that material in a government envelope and (sometimes) sending it to the expected recipient. Except with web sites.
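One way a user or researcher can spot this kind of interception is to inspect who issued the certificate a site presents; under the scheme described, the issuer on every site becomes the government authority rather than the site’s real CA. A rough sketch in Python (the notion of an “expected” issuer is our illustration):

```python
import socket
import ssl

def leaf_issuer(hostname: str, port: int = 443) -> dict:
    """Return the issuer fields of the certificate presented for `hostname`."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # `issuer` arrives as nested tuples of (field, value) pairs; flatten it.
    return {field: value for rdn in cert["issuer"] for (field, value) in rdn}

# A state CA appearing here instead of a well-known commercial one is a strong
# hint that the connection is being intercepted and re-encrypted in transit.
print(leaf_issuer("example.com").get("organizationName"))
```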

According to ZDNet, the Kazakh government, unsurprisingly, said the measure was “aimed at enhancing the protection of citizens, government bodies and private companies from hacker attacks, Internet fraudsters and other types of cyber threats.” As Robin Hood could have told you, the Sheriff’s actions taken to protect travelers and control brigands can easily result in government control of all traffic and information, especially when that was the plan all along. Security Boulevard reports that “[s]ince Wednesday, all internet users in Kazakhstan have been redirected to a page instructing users to download and install the new certificate.”

This is not the first time that Kazakhstan has attempted to force its citizens to install a root certificate; in 2015 the Kazakhs even applied to Mozilla to have the Kazakh root certificate included in Firefox (Mozilla politely declined).

Despite creative technical solutions, we all know that Kazakhstan is not alone in restricting the internet access of its citizens. For one (gargantuan) example, China’s roughly 800 million internet users face deeply restricted access; according to the Washington Post, the Chinese citizenry can’t reach Google, Facebook, YouTube or the New York Times, among many, many, many others. The Great Firewall of China involves legislation, government monitoring, technological limitations and cooperation from internet and telecommunications companies. China recently clamped down on WhatsApp and VPNs, which had returned a modicum of control and privacy to the people. And China has taken these efforts two steps beyond nearly anyone else in the world by building a culture of investigation and shame, where citizens can find their pictures on a local billboard for boorish traffic or internet behavior, or find themselves in jail for questioning the ruling party on the internet. All this is well documented.

Twenty-three countries in Asia and seven in Africa restrict torrents, pornography, political media and social media. The only two European nations with the same restrictions are Turkey and Belarus. Politicians in the U.S. and Europe had hoped that the internet would serve as a force for freedom, knowledge and unlimited communication. Countries like Russia, Cuba and Nigeria also see the internet’s potential, but they prefer to throttle the net to choke off this potential threat to their one-party rule.

For these countries, there is no such thing as private. They think of privacy in context – you may keep thoughts or actions private from companies, but not from the government. On the micro level, it reminds me of family dynamics – when your teenagers talk about privacy, they mean keeping information private from the adults in their lives, not from friends, strangers or even companies. Controlling governments sing the song of privacy: as long as information is not kept from them, it can be hidden from others.

The promise of internet freedom is slipping further away from more people each year as dictators and real-life versions of movie villains figure out how to use the technology to surveil everyday people and to limit access to “dangerous” ideas of liberty. ICANN, the internet governance organization set up by the U.S. two decades ago, has proven itself bloated and ineffective at protecting the interests of private internet users. In fact, it would be surprising if the current leaders of ICANN even felt that such protections were within its purview.

The internet is truly a global phenomenon, but it is managed at local levels, leaving certain populations vulnerable to spying and manipulation by their own governments. Those running the system seem to have resigned themselves to allowing national governments to greatly restrict the human rights of their own citizens.

A tool can be used in many different ways.  A hammer can help build a beautiful home or can be the implement of torture and murder. The internet can be a tool for freedom of thought and expression, where everyone has a publishing and communication platform.  Or it can be a tool for repression. We have come to accept more of the latter than I believed possible.

Post Script —

Also, after a harrowing last 2-5 years where freedom to speak on the internet (and social media) has exploded into horrible real-life consequences, large and small, even the most libertarian and laissez faire of First World residents is slapping the screen to find some way to moderate the flow of ignorance, evil, insanity, inanity and stupidity. This is the other side of the story and fodder for a different post.

And it is also probably time to run an updated discussion of ICANN and its role in internet management.  We heard a great deal about internet leadership in 2016, but not so much lately. Stay Tuned.

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.
For more global and domestic internet developments, see the National Law Review Communications, Media & Internet law page.

No Means No

Researchers from the International Computer Science Institute found up to 1,325 Android applications (apps) gathering data from devices despite being explicitly denied permission.

The study looked at more than 88,000 apps from the Google Play store, and tracked data transfers post denial of permission. The 1,325 apps used tools, embedded within their code, that take personal data from Wi-Fi connections and metadata stored in photos.
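Photo metadata is one of the quieter side channels here: a photo sitting in shared storage can carry precise GPS coordinates in its EXIF block, readable without any location permission. A minimal sketch using the Pillow library (illustrative of the technique, not the studied apps’ actual code):

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPS_IFD = 0x8825  # EXIF pointer to the GPS sub-directory

def photo_gps(path: str) -> dict:
    """Return the GPS tags embedded in an image, or {} if there are none."""
    exif = Image.open(path).getexif()
    return {GPSTAGS.get(tag, tag): value
            for tag, value in exif.get_ifd(GPS_IFD).items()}

def dms_to_decimal(dms, hemisphere: str) -> float:
    """Convert (degrees, minutes, seconds) plus an N/S/E/W ref to a decimal degree."""
    degrees, minutes, seconds = (float(x) for x in dms)
    value = degrees + minutes / 60 + seconds / 3600
    return -value if hemisphere in ("S", "W") else value

# e.g. dms_to_decimal(gps["GPSLatitude"], gps["GPSLatitudeRef"]) -> 37.7749
```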

Consent presents itself in different ways in the world of privacy. The GDPR is clear in defining consent. Recital 32 notes that “Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data…” Consumers, pursuant to the CCPA, can opt out of having their personal data sold.

The specificity of consent has always been a tricky subject.  For decades, companies have offered customers the right to either opt in or out of “marketing,” often in exchange for direct payments. Yet, the promises have been slickly unspecific, so that a consumer never really knows what particular choices are being selected.

Does the option include data collection and, if so, how much? Does it include email, text, phone and postal contacts for every campaign, or just some? The GDPR’s specificity provision is supposed to address this problem. But some companies are choosing not to offer these options, or are ignoring the consumer’s choice altogether.

Earlier this decade, General Motors caused a media dust-up by admitting it would continue collecting information about specific drivers and vehicles even if those drivers refused the Onstar system or turned it off. Now that policy is built into the Onstar terms of service. GM owners are left without a choice on privacy, and are bystanders to their driving and geolocation data being collected and used.

Apps can monitor people’s movements, finances, and health information. Because of these privacy risks, app platforms like Google and Apple make strict demands of developers including safe storage and processing of data. Seven years ago, Apple, whose app store has almost 1.8 million apps, issued a statement claiming that “Apps that collect or transmit a user’s contact data without their prior permission are in violation of our guidelines.”

Studies like this remind us mere data subjects that some rules were made to be broken, and that merely engaging with devices that have become necessities in our daily lives may cause us to share personal information. What’s more, simply saying no to data collection does not seem to suffice.

It will be interesting to see over the next couple of years whether tighter consent laws like the GDPR and the CCPA can cajole app developers not only to provide specific choices to their customers, but to actually honor those choices.

 

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.
For more on internet and data privacy concerns, see the National Law Review Communications, Media & Internet page.