California May Be Headed Towards Sweeping Consumer Privacy Protections

On June 21st, California legislature Democrats reached a tentative agreement with a group of consumer privacy activists spearheading a ballot initiative for heightened consumer privacy protections. Under the agreement, the activists would withdraw the existing ballot initiative in exchange for the California legislature passing, and Governor Jerry Brown signing into law, a similar piece of legislation, with some concessions, by June 28th, the final deadline to withdraw ballot initiatives.  If enacted, the Act would take effect January 1, 2020.

In the “compromise bill,” Assemblyman Ed Chau (D-Arcadia) amended the California Consumer Privacy Act of 2018 (AB 375) to ensure that both the consumer privacy activists and, conversely, the ballot initiative’s business opponents would be comfortable with its terms.

Some of the key consumer rights provided for in AB 375 include:

  • A consumer’s right to request deletion of personal information, which would require the business to delete the information upon receipt of a verified request;

  • A consumer’s right to request that a business that sells the consumer’s personal information, or discloses it for a business purpose, disclose the categories of information it collects, the categories of information it has sold or disclosed, and the identity of any third parties to which the information was sold or disclosed;

  • A consumer’s right to opt out of the sale of personal information by a business, coupled with a prohibition on the business discriminating against the consumer for exercising this right, including a prohibition on charging the consumer who opts out a different price or providing the consumer a different quality of goods or services, except if the difference is reasonably related to the value provided by the consumer’s data.

Covered entities under AB 375 would include any entity that does business in the State of California and satisfies one or more of the following: (i) annual gross revenue in excess of $25 million; (ii) alone or in combination, annually buys, receives for the business’s commercial purposes, sells, or shares for commercial purposes the personal information of 50,000 or more consumers, households, or devices; or (iii) derives 50 percent or more of its annual revenues from selling consumers’ personal information.
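For illustration only, this three-part coverage test can be reduced to a simple predicate. The function and variable names below are assumptions of the sketch, not statutory language:

```python
# A minimal sketch of the AB 375 coverage thresholds described above.
# Illustrative only; names are assumptions, not statutory text.

def is_covered_business(annual_gross_revenue: float,
                        consumer_records_per_year: int,
                        share_of_revenue_from_selling_pi: float) -> bool:
    """True if the business meets one or more of the three AB 375 thresholds."""
    return (
        annual_gross_revenue > 25_000_000              # (i)  revenue in excess of $25M
        or consumer_records_per_year >= 50_000         # (ii) data on 50,000+ consumers/households/devices
        or share_of_revenue_from_selling_pi >= 0.50    # (iii) 50%+ of revenue from selling personal info
    )

# Example: a $5M business handling data on 60,000 households is covered under (ii).
print(is_covered_business(5_000_000, 60_000, 0.10))  # True
```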

Though far-reaching, the amended AB 375 limits legal damages and provides significant concessions to business opponents of the bill. For example, the bill allows a business 30 days to “cure” any alleged violations before the California Attorney General initiates legal action. Similarly, while a private action is permissible, a consumer is required to provide a business 30 days’ written notice before instituting an action, during which time the business has the same 30 days to “cure” any alleged violations.  Specifically, the bill provides: “In the event a cure is possible, if within the 30 days the business actually cures the noticed violation and provides the consumer an express written statement that the violations have been cured and that no further violations shall occur, no action for individual statutory damages or class-wide statutory damages may be initiated against the business.”  Civil penalties for actions brought by the Attorney General are capped at $7,500 for each intentional violation.  Damages in any private action brought by a consumer are the greater of actual damages or statutory damages of not less than one hundred dollars ($100) and not greater than seven hundred fifty dollars ($750) per consumer per incident.
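As a rough sketch of the private-action damages formula described above (the greater of actual damages or a $100–$750 statutory award per consumer per incident), assuming hypothetical function and variable names:

```python
# Illustrative only: the bill's private-action damages formula as described above.
# The names and the clamping of the per-consumer award are assumptions of this sketch.

def private_action_damages(consumers: int, incidents: int,
                           per_consumer_award: float, actual_damages: float) -> float:
    """Greater of actual damages or statutory damages in the $100-$750 band."""
    award = min(max(per_consumer_award, 100.0), 750.0)  # keep award within the statutory band
    return max(actual_damages, consumers * incidents * award)

# Example: 1,000 consumers, one incident, a $100 per-consumer award, $25,000 actual damages.
print(private_action_damages(1_000, 1, 100.0, 25_000.0))  # 100000.0
```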

Overall, consumer privacy advocates are pleased with the amended legislation, which is “substantially similar to our initiative,” said Alastair Mactaggart, a San Francisco real estate developer leading the ballot initiative. “It gives more privacy protection in some areas, and less in others.”

The consumer rights provided for in the amended version of the California Consumer Privacy Act of 2018 are reminiscent of those found in the European Union’s sweeping privacy regulation, the General Data Protection Regulation (“GDPR”) (see Does the GDPR Apply to Your U.S. Based Company?), which took effect May 25, 2018. Moreover, California is not the only United States locality considering far-reaching privacy protections. Recently, the Chicago City Council introduced the Personal Data Collection and Protection Ordinance, which, inter alia, would require opt-in consent from Chicago residents to use, disclose or sell their personal information. On the federal level, several legislative proposals are being considered to heighten consumer privacy protection, including the Consumer Privacy Protection Act and the Data Security and Breach Notification Act.

 

Jackson Lewis P.C. © 2018
This post was written by Joseph J. Lazzarotti of Jackson Lewis P.C.

GDPR May 25th Deadline Approaching – Businesses Globally Will Feel Impact

In less than four months, the General Data Protection Regulation (the “GDPR” or the “Regulation”) will take effect in the European Union/European Economic Area, giving individuals in the EU/EEA greater control over their personal data and imposing a sweeping set of privacy and data protection rules on data controllers and data processors alike. Failure to comply with the Regulation’s requirements could result in substantial fines of up to the greater of €20 million or 4% of a company’s annual worldwide gross revenues. Although many American companies that do not have a physical presence in the EU/EEA may have been ignoring GDPR compliance based on the mistaken belief that the Regulation’s burdens and obligations do not apply outside of the EU/EEA, they are doing so at their own peril.
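The headline fine cap lends itself to simple arithmetic. A minimal sketch, with hypothetical names:

```python
# Illustrative only: the GDPR's maximum fine as described above, i.e. the
# greater of EUR 20 million or 4% of annual worldwide gross revenue.

def max_gdpr_fine_eur(annual_worldwide_revenue_eur: float) -> float:
    """Upper bound on a fine for the most serious GDPR infringements."""
    return max(20_000_000.0, 0.04 * annual_worldwide_revenue_eur)

# Example: a company with EUR 2 billion in revenue faces a cap of EUR 80 million.
print(max_gdpr_fine_eur(2_000_000_000.0))  # 80000000.0
```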

A common misconception is that the Regulation only applies to EU/EEA-based corporations or multinational corporations with operations within the EU/EEA. However, the GDPR’s broad reach applies to any company that is offering goods or services to individuals located within the EU/EEA or monitoring the behavior of individuals in the EU/EEA, even if the company is located outside of the European territory. All companies within the GDPR’s ambit also must ensure that their data processors (i.e., vendors and other partners) process all personal data on the companies’ behalf in accordance with the Regulation, and are fully liable for any damage caused by their vendors’ non-compliant processing. Unsurprisingly, companies are using indemnity and insurance clauses in data processing agreements with their vendors to contractually shift any damages caused by non-compliant processing activities back onto the non-compliant processors, even if those vendors are not located in the EU/EEA. As a result, many American organizations that do not have direct operations in the EU/EEA nevertheless will need to comply with the GDPR because they are receiving, storing, using, or otherwise processing personal data on behalf of customers or business partners that are subject to the Regulation and its penalties. Indeed, all companies with a direct or indirect connection to the EU/EEA – including business relationships with entities that are covered by the Regulation – should be assessing the potential implications of the GDPR for their businesses.

Compliance with the Regulation is a substantial undertaking that, for most organizations, necessitates a wide range of changes, including:

  • Implementing “Privacy by Default” and “Privacy by Design”;
  • Maintaining appropriate data security;
  • Notifying European data protection agencies and consumers of data breaches on an expedited basis;
  • Taking responsibility for the security and processing practices of third-party vendors;
  • Conducting “Data Protection Impact Assessments” on new processing activities;
  • Instituting safeguards for cross-border transfers; and
  • Recordkeeping sufficient to demonstrate compliance on demand.

Failure to comply with the Regulation’s requirements carries significant risk. Most prominently, the GDPR empowers regulators to impose fines for non-compliance of up to the greater of €20 million or 4% of worldwide annual gross revenue. In addition to fines, regulators also may block non-compliant companies from accessing the EU/EEA marketplace through a variety of legal and technological methods. Even setting these potential penalties aside, simply being investigated for a potential GDPR violation will be costly, burdensome and disruptive, since during a pending investigation regulators have the authority to demand records demonstrating a company’s compliance, impose temporary data processing bans, and suspend cross-border data flows.

The impending May 25, 2018 deadline means that there are only a few months left for companies to get their compliance programs in place before regulators begin enforcement. In light of the substantial regulatory penalties and serious contractual implications of non-compliance, any company that could be required to meet the Regulation’s obligations should be assessing its current operations and implementing the necessary controls to ensure that it is processing personal data in a GDPR-compliant manner.

 

© 2018 Neal, Gerber & Eisenberg LLP.
More on the GDPR at the NLR European Union Jurisdiction Page.

Privacy Hat Trick: Three New State Laws to Juggle

Nevada, Oregon and New Jersey recently passed laws focusing on the collection of consumer information, serving as a reminder for advertisers, retailers, publishers and data collectors to keep up-to-date, accurate and compliant privacy and information collection policies.

Nevada: A Website Privacy Notice is Required

Nevada joined California and Delaware in explicitly requiring websites and online services to post an accessible privacy notice. The Nevada law, effective October 1, 2017, requires disclosure of the following:

  • The categories of “covered information” collected about consumers who visit the website or online service;

  • The categories of third parties with whom the operator may share such information;

  • A description of the process, if any, through which consumers may review and request changes to their information;

  • A description of the process by which operators will notify consumers of material changes to the notice;

  • Whether a third party may collect covered information about the consumer’s online activities over time and across different Internet websites or online services; and

  • The effective date of the notice.

“Covered Information” is defined to include a consumer’s name, address, email address, telephone number, social security number, an identifier that allows a specific person to be contacted physically or online, and any other information concerning a person maintained by the operator in combination with an identifier.

Takeaway: Website and online service operators (including Ad Techs and other data collectors) should review their privacy policies to ensure they are disclosing all collection of information that identifies, can be used to contact, or that is combined with information that identifies consumers. Website operators should also be sure that they are aware of, and are properly disclosing, any information that is shared with or collected by their third-party service providers and how that information is used.

Oregon: Misrepresentation of Privacy Practices = Unlawful Trade Practice

Oregon expanded its definition of an “unlawful trade practice,” effective January 1, 2018, to expressly include using, disclosing, collecting, maintaining, deleting or disposing of information in a manner materially inconsistent with any statement or representation published on a business’s website or in a consumer agreement related to a consumer transaction. The new Oregon law is broader than other similar state laws, which limit their application to “personal information.” Oregon’s law, which does not define “information,” could apply to misrepresentations about any information collection practices, even if not related to consumer personal information.

Takeaway: Businesses should be mindful when drafting privacy policies, terms of use, sweepstakes and contest rules and other consumer-facing policies and statements not to misrepresent their practices with respect to any information collected, not just personal information.

New Jersey: ID Cards Can Only be Scanned for Limited Purposes (not Advertising)

New Jersey’s new Personal Information and Privacy Protection Act, effective October 1, 2017, limits the purposes for which a retail establishment may scan a person’s identification card to the following:

  • To verify the authenticity of the card or the identity of the person paying for goods or services with a method other than cash, returning an item or requesting a refund or exchange;

  • To verify the person’s age when providing age-restricted goods or services to the person;

  • To prevent fraud or other criminal activity using a fraud prevention service company or system if the person returns an item or requests a refund or exchange;

  • To prevent fraud or other criminal activity related to a credit transaction to open or manage a credit account;

  • To establish or maintain a contractual relationship;

  • To record, retain, or transmit information required by State or federal law;

  • To transmit information to a consumer reporting agency, financial institution, or debt collector to be used as permitted by the Fair Credit Reporting Act and the Fair Debt Collection Practices Act; or

  • To record, retain, or transmit information governed by the medical privacy and security rules of the Health Insurance Portability and Accountability Act.

The law also prohibits the retention of information scanned from an identification card for verification purposes and specifically prohibits the sharing of information scanned from an identification card with a third party for marketing, advertising or promotional activities, or any other purpose not specified above. The law does make an exception to permit a retailer’s automated return fraud system to share ID information with a third party for purposes of issuing a reward coupon to a loyal customer.

Takeaway: Retail establishments with locations in New Jersey should review their point-of-sale practices to ensure they are not scanning ID cards for marketing, advertising, promotional or any other purposes not permitted by the New Jersey law.

Read more legal analysis at the National Law Review.

This post was written by Julie Erin Rubash of Sheppard Mullin Richter & Hampton LLP.

“WannaCry” Ransomware Attack Causes Disruption Globally – With Worst Yet to Come

Ransomware known as “WannaCry” affected 200,000 people in 150 countries over the weekend, locking computer files and demanding payment to release them. As of this morning, Australian and New Zealand users seem to have avoided the brunt of the attack, with the Federal Government confirming only three reports of Australian companies being affected.  That said, ransomware attacks tend not to be reported: the rate of payment by affected users is quite high, as the pricing is deliberately cheaper than most alternatives unless your back-up process is very good.

The ransomware utilises vulnerabilities in out-of-date, unpatched versions of Microsoft Windows to infect devices. It spreads from computer to computer as it finds exposed targets, without the user having to open an e-mail attachment or click a link, as is commonplace in most attacks. Ransom demands start at US$300 and double after three days.

The U.K. National Health Service (NHS) was among the worst-hit organisations, with hospitals forced to cancel appointments and delay operations because they could not access their patients’ medical records. The Telegraph suggested that 90 percent of NHS trusts were using the 16-year-old Windows XP, which was particularly vulnerable to the attack. More attacks are anticipated throughout the working week as companies and organisations turn on their devices.

The U.K. National Cyber Security Center has released guidance to help both home users and organisations limit the impact of the attacks. It can be read here.

Edwin Tan is co-author of this article. 

The Department Of Homeland Security Proposes New Rules Affecting Federal Government Contractors

This week, the Department of Homeland Security (“DHS”) issued three proposed rules expanding data security and privacy requirements for contractors and subcontractors. The proposed rules build upon other recent efforts by various federal agencies to strengthen safeguarding requirements for sensitive government information.  Given the increasing emphasis on data security and privacy, contractors and subcontractors are well advised to familiarize themselves with these new requirements and undertake a careful review of their current data security and privacy procedures to ensure they comply.

  • Privacy Training

DHS contracts currently require contractor and subcontractor employees to complete privacy training before accessing a Government system of records; handling Personally Identifiable Information and/or Sensitive Personally Identifiable Information; or designing, developing, maintaining, or operating a Government system of records. DHS proposes to include this training requirement in the Homeland Security Acquisition Regulation (“HSAR”) and to make the training more easily accessible by hosting it on a public website.  By including the rule in the HSAR, DHS would standardize the obligation across all DHS contracts.  The new rule would require the training to be completed within thirty days of the award of a contract and on an annual basis thereafter.

DHS invites comment on the proposed rule. In particular, DHS asks commenters to offer their views on the burden, if any, associated with the requirement to complete DHS-developed privacy training.  DHS also asks whether the industry should be given the flexibility to develop its own privacy training.  Comments must be submitted on or before March 20, 2017.

  • Information Technology Security Awareness Training

DHS currently requires contractor and subcontractor employees to complete information technology security awareness training before accessing DHS information systems and information resources. DHS proposes to amend the HSAR to require IT security awareness training for all contractor and subcontractor employees who will access (1) DHS information systems and information resources or (2) contractor owned and/or operated information systems and information resources capable of collecting, processing, storing or transmitting controlled unclassified information (“CUI”) (defined below).  DHS will require employees to undergo training and to sign DHS’s Rules of Behavior (“RoB”) before they are granted access to those systems and resources.  DHS also proposes to make this training and the RoB more easily accessible by hosting them on a public website.  Thereafter, annual training will be required.  In addition, contractors will be required to submit training certification and signed copies of the RoB to the contracting officer and maintain copies in their own records.

Through this proposed rule, DHS intends to require contractors to identify employees who will require access, to ensure that those employees complete training before they are granted access and annually thereafter, and to provide to the government, and maintain, evidence that training has been conducted. Comments on the proposed rule are due on or before March 20, 2017.

  • Safeguarding of Controlled Unclassified Information

DHS’s third proposed rule will implement new security and privacy measures, including handling and incident reporting requirements, in order to better safeguard CUI. According to DHS, “[r]ecent high-profile breaches of Federal information further demonstrate the need to ensure that information security protections are clearly, effectively, and consistently addressed in contracts.”  Accordingly, the proposed rule – which addresses specific safeguarding requirements outlined in an Office of Management and Budget document outlining policy on managing government data – is intended to “strengthen[] and expand[]” upon existing HSAR language.

DHS’s proposed rule broadly defines “CUI” as “any information the Government creates or possesses, or an entity creates or possesses for or on behalf of the Government (other than classified information) that a law, regulation, or Government-wide policy requires or permits an agency to handle using safeguarding or dissemination controls[,]” including any “such information which, if lost, misused, disclosed, or, without authorization is accessed, or modified, could adversely affect the national or homeland security interest, the conduct of Federal programs, or the privacy of individuals.” The new safeguarding requirements, which apply to both contractors and subcontractors, include mandatory contract clauses; collection, processing, storage, and transmittal guidelines (which incorporate by reference any existing DHS policies and procedures); incident reporting timelines; and inspection provisions. Comments on the proposed rule are due on or before March 20, 2017.

  • Other Recent Efforts To Safeguard Contract Information

DHS’s new rules follow a number of other recent efforts by the federal government to better control CUI and other sensitive government information.

Last fall, for example, the National Archives and Record Administration (“NARA”) issued a final rule standardizing marking and handling requirements for CUI. The final rule, which went into effect on November 14, 2016, clarifies and standardizes the treatment of CUI across the federal government.

NARA’s final rule defines “CUI” as an intermediate level of protected information between classified information and uncontrolled information.  As defined, it includes such broad categories of information as proprietary information, export-controlled information, and certain information relating to legal proceedings.  The final rule also makes an important distinction between two types of systems that process, store or transmit CUI:  (1) information systems “used or operated by an agency or by a contractor of an agency or other organization on behalf of an agency”; and (2) other systems that are not operated on behalf of an agency but that otherwise store, transmit, or process CUI.

Although the final rule directly applies only to federal agencies, it directs agencies to include CUI protection requirements in all federal agreements (including contracts, grants and licenses) that may involve such information.  As a result, its requirements indirectly extend to government contractors.  At the same time, however, it is likely that some government contractor systems will fall into the second category of systems and will not have to abide by the final rule’s restrictions.  A pending FAR case and anticipated forthcoming FAR regulation will further implement this directive for federal contractors.

Similarly, last year the Department of Defense (“DOD”), General Services Administration, and the National Aeronautics and Space Administration issued a new subpart and contract clause (52.204-21) to the FAR “for the basic safeguarding of contractor information systems that process, store, or transmit Federal contract information.”  The provision adds a number of new information security controls with which contractors must comply.

DOD’s final rule imposes a set of fifteen “basic” security controls for covered “contractor information systems” upon which “Federal contract information” transits or resides.  The new controls include:

  • Limiting access to the information to authorized users;
  • Limiting information system access to the types of transactions and functions that authorized users are permitted to execute;
  • Verifying controls on connections to external information systems;
  • Imposing controls on information that is posted or processed on publicly accessible information systems;
  • Identifying information system users and processes acting on behalf of users or devices;
  • Authenticating or verifying the identities of users, processes, and devices before allowing access to an information system;
  • Sanitizing or destroying information system media containing Federal contract information before disposal, release, or reuse;
  • Limiting physical access to information systems, equipment, and operating environments to authorized individuals;
  • Escorting visitors and monitoring visitor activity, maintaining audit logs of physical access, and controlling and managing physical access devices;
  • Monitoring, controlling, and protecting organizational communications at external boundaries and key internal boundaries of information systems;
  • Implementing subnetworks for publicly accessible system components that are physically or logically separated from internal networks;
  • Identifying, reporting, and correcting information and information system flaws in a timely manner;
  • Providing protection from malicious code at appropriate locations within organizational information systems;
  • Updating malicious code protection mechanisms when new releases are available; and
  • Performing periodic scans of the information system and real-time scans of files from external sources as files are downloaded, opened, or executed.

“Federal contract information” is broadly defined to include any information provided by or generated for the federal government under a government contract.  It does not, however, include either:  (1) information provided by the Government to the public, such as on a website; or (2) simple transactional information, such as that needed to process payments.  A “covered contractor information system” is defined as one that is:  (1) owned or operated by a contractor; and (2) “possesses, stores, or transmits” Federal contract information.

ARTICLE BY Connie N. Bertram, Amy Blackwood & Emilie Adams of Proskauer Rose LLP

Law Firm Data Breaches: Big Law, Big Data, Big Problem

The Year of the Breach

2016 was the year that law firm data breaches landed and stayed squarely in both the national and international headlines. There have been numerous law firm data breaches, involving incidents ranging from lost or stolen laptops and other portable media to deep intrusions exposing everything in the law firm’s network. In March, the FBI issued a warning that a cybercrime insider-trading scheme was targeting international law firms to gain non-public information to be used for financial gain. In April, perhaps the largest-volume data breach of all time involved the Panamanian law firm Mossack Fonseca: millions of documents and terabytes of leaked data aired the (dirty) laundry of dozens of companies, celebrities and global leaders. Finally, Chicago law firm Johnson & Bell Ltd. was in the news in December when a proposed class action accusing it of failing to protect client data was unsealed.

A Duty to Safeguard

Law firms are warehouses of client information, and how that information is protected is being increasingly regulated and scrutinized. The legal ethics rules require attorneys to take competent and reasonable measures to safeguard information relating to clients (ABA Model Rules 1.1, 1.6 and Comments). Attorneys also have contractual and regulatory obligations to protect information relating to clients and other personally identifiable information, such as financial and health information.

American Bar Association’s 2016 TechReport

Annually, the ABA conducts a Legal Technology Survey (Survey) to gauge the state of our industry vis-à-vis technology and data security. The Survey revealed that the largest firms (500 or more attorneys) reported experiencing the most security breaches, with 26% of respondents admitting they had experienced some type of breach. This continues a generally upward trend from past years, and analysts expect this number only to rise, likely because larger firms have more people, more technology and more data, creating a greater exposure surface and many more risk touch-points.

Consequences of Breach

The most serious consequence of a law firm security breach is loss of, or unauthorized access to, sensitive client data. However, the Survey shows there was a low incidence of this: only about 2% of breaches overall resulted in loss of client data. Other consequences of the breaches are significant, though: 37% reported business downtime/loss of billable hours, 28% reported hefty fees for correction, including consulting fees, 22% reported costs associated with having to replace hardware/software, and 14% reported loss of important files and information.

Employing & Increasing Safeguards Commonly Used in Other Industries

The 2016 Survey shows that while many law firms are employing some safeguards and generally increasing and diversifying their use of those safeguards, our industry may not be using common security measures that other industries employ.

1. Programs and Policies. The first step of any organization in protecting its data is establishing a comprehensive data security program. Security programs should include measures to prevent breaches (like policies that regulate the use of technology) and measures to identify, protect, detect, respond to and recover from data breaches and security incidents. Any program should designate an individual, like a full-time privacy officer or information security director, who is responsible for coordinating security. However, the numbers show that the legal industry may not be up to speed on this basic need. Survey respondents reported their firms had the following documented policies:

  • Document or records management and retention policy: 56%
  • Email use policy: 49%
  • Internet use/computer use policy: 41%
  • Social media use policy: 34%

2. Assessments. Using security assessments conducted by independent third parties has been a growing security practice for other industries; however, law firms have been slow to adopt this security tool, with only 18% of law firms overall reporting that they had a full assessment.

3. Standards/Frameworks. Other industries use security standards and frameworks, like those published by the International Organization for Standardization (ISO) to provide approaches to information security programs or to seek formal security certification from one of these bodies. Overall, only 5% of law firms reported that they have received such a certification.

4. Encryption. Security professionals view encryption as a basic safeguard that should be widely deployed, and it is increasingly being required by law for any personal information; however, only 38% of overall respondents reported use of file encryption, and only 15% use drive encryption. Email encryption has become inexpensive for businesses and easier to use with commercial email services, yet overall only 26% of respondents reported using email encryption for confidential/privileged communications or documents sent to clients.
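To make the file-encryption safeguard concrete, here is a minimal sketch using the open-source Python cryptography package; the file names are hypothetical, and this is a generic example rather than any tool discussed in the Survey:

```python
# Illustrative only: symmetric, authenticated file encryption with the
# "cryptography" package (pip install cryptography). File names are hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, store this key securely (e.g., a key manager)
fernet = Fernet(key)

with open("client_memo.docx", "rb") as f:        # hypothetical client document
    ciphertext = fernet.encrypt(f.read())

with open("client_memo.docx.enc", "wb") as f:    # encrypted copy safe to store or send
    f.write(ciphertext)

plaintext = fernet.decrypt(ciphertext)           # raises InvalidToken if tampered with
```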

5. Cybersecurity Insurance. Many general liability and malpractice policies do not cover security incidents or data breaches, thus there is an increasing need for businesses to supplement their coverage with cybersecurity insurance. Unfortunately, only 17% of attorneys reported that they have cyber coverage.

Conclusion

It is important to note that the figures revealed by the 2016 Survey, while dismaying, may also be extremely conservative, as law firms have a vested interest in keeping a breach of their clients’ data as quiet as possible. There is also the very real possibility that many firms don’t yet know that they have been breached. The 2016 Survey demonstrates that there is still a lot of room for improvement in the privacy and data security space for law firms. As law firms continue to make the news for these types of incidents, it is likely that improvement will come sooner rather than later.

President Donald J. Trump – What Lies Ahead for Privacy, Cybersecurity, e-Communication?

Following a brutal campaign – one laced with Wikileaks’ email dumps, confidential Clinton emails left unprotected, and flurries of Twitter and other social media activity – it will be interesting to see how a Trump Administration will address the serious issues of privacy, cybersecurity and electronic communications, including in social media.

Mr. Trump had not been too specific about many of his positions while campaigning, so it is difficult to have a sense of where his administration might focus. But one place to look is his campaign website, where the now President-elect outlined a vision, summarized as follows:

  • Order an immediate review of all U.S. cyber defenses and vulnerabilities by individuals from the military, law enforcement, and the private sector, the “Cyber Review Team.”

  • The Cyber Review Team will provide specific recommendations for safeguarding with the best defense technologies tailored to the likely threats.

  • The Cyber Review Team will establish detailed protocols and mandatory cyber awareness training for all government employees.

  • Instruct the U.S. Department of Justice to coordinate responses to cyber threats.

  • Develop the offensive cyber capabilities we need to deter attacks by both state and non-state actors and, if necessary, to respond appropriately.

There is nothing new here as these positions appear generally to continue the work of prior administrations in the area of cybersecurity. Perhaps insight into President-elect Trump’s direction in these areas will be influenced by his campaign experiences.

Should we expect a tightening of cybersecurity requirements through new statutes and regulations?

Mr. Trump has expressed a desire to reduce regulation, not increase it. However, political party hackings and unfavorable email dumps from Wikileaks, coupled with continued data breaches affecting private and public sector entities, may prompt his administration and Congress to do more. Politics aside, cybersecurity clearly is a top national security threat, and it is having a significant impact on private sector risk management strategies and individual security. Some additional regulation may be coming.

An important question for many, especially for organizations that have suffered a multi-state data breach, is whether we will see a federal data breach notification standard, one that would “trump” the current patchwork of state laws. With Republicans in control of the executive and legislative branches, at least for the next two years, and considering the past legislative activity in this area, a federal law on data breach notification that supersedes state law does not seem likely.

Should we expect an expansion of privacy rights or other protections for electronic communication such as email or social media communication?

Again, much has been made of the disclosure of private email during the campaign, and President-elect Trump is famous (or infamous) for his use of social media, particularly his Twitter account. For some time, however, many have expressed concern that federal laws such as the Electronic Communications Privacy Act and the Stored Communications Act are in need of significant updates to address new technologies and usage, while others continue to have questions about the application of the Communications Decency Act. We also have seen an increase in scrutiny over the content of electronic communications by the National Labor Relations Board, and more than twenty states have passed laws concerning the privacy of social media and online personal accounts. Meanwhile, the emergence of Big Data, artificial intelligence, IoT, cognitive computing and other technologies continue to spur significant privacy questions about the collection and use of data.

While there may be a tightening of the rules concerning how certain federal employees handle work emails, based on what we have seen, it does not appear at this point that a Trump Administration will make these issues a priority for the private sector.

We’ll just have to wait and see.

Jackson Lewis P.C. © 2016

IP Addresses Constitute Personal Data According to Court of Justice of European Union

In a decision dated 19 October 2016, the Court of Justice of the European Union (CJEU) provided much-needed clarification on a long-standing issue in EU data protection law.

A German politician brought an action concerning websites operated by the Federal Republic of Germany that stored personal data, including IP addresses, in logfiles for two weeks.  The question before the CJEU was: are IP addresses personal data?  According to Article 2(a) of EU Directive 95/46, “personal data” is any information relating to an identified or identifiable natural person. An identifiable person is one who can be identified, directly or indirectly, from the data.

The CJEU ruled that dynamic IP addresses constitute personal data for an online media service provider (here the Federal Republic of Germany) that makes a website accessible.

A dynamic IP address means that the computer’s IP address is newly assigned each time the website is visited.  Unlike with a static IP address, a dynamic IP address does not, using only files which are accessible to the public, permit an identifiable link to be drawn between the user’s computer and the physical connection to the internet provider’s network. Hence, the data included in a dynamic IP address does not, by itself, enable the online media service provider to identify the user.

However, according to the CJEU, a dynamic IP address will be personal data if the additional data necessary to identify the user of a website is stored by the user’s internet service provider. The website provider only needs to have legal means enabling it to identify the user with the help of the internet service provider. Such legal means exist, for example, in the event of cyber attacks, and do not actually have to be exercised in the specific case.

This decision has significant practical implications for all website providers, because their storage of user information such as dynamic IP addresses falls under data protection laws. Ultimately, the website provider needs the consent of the user to store the dynamic IP address. This will also apply after the General Data Protection Regulation (GDPR) comes into force in May 2018, because Article 2 of Directive 95/46 is incorporated in almost the same words in Article 4(1) of the GDPR.

© Copyright 2016 Squire Patton Boggs (US) LLP

Espionage and Export Controls: iPhone Hack Highlights New World of Warfare

Last week, researchers at Citizen Lab uncovered sophisticated new spyware that allowed hackers to take complete control of anyone’s iPhone, turning the phone into a pocket-spy to intercept communications, track movements and harvest personal data. The malicious software, codenamed “Pegasus,” is believed to have been developed by the NSO Group, an Israeli company (whose majority shareholder is a San Francisco based private equity firm) that describes itself as a “leader in cyber warfare” and sells its software – with a price tag of $1 million – primarily to foreign governments. The software apparently took advantage of three previously unknown security flaws in Apple’s iOS software, and was described by experts as “the most sophisticated” ever seen on the market. Apple quickly released a patch of its software, iOS 9.3.5, and urged users to download it immediately.

Citizen Lab learned about Pegasus from Ahmed Mansoor, a UAE human rights activist, who received text messages baiting him to click on a link to discover “new secrets about the torture” of Emirati prisoners. Mr. Mansoor had been prey to hackers before, so he contacted Citizen Lab. When researchers tested the link, they discovered software had been remotely implanted onto the phone, and brought in Lookout, a mobile security firm, to reverse-engineer the spyware. Citizen Lab later identified the same software as having been used to track a Mexican journalist whose writings have criticized Mexico’s President. Citizen Lab and Lookout also determined that Pegasus could have been used across Turkey, Israel, Thailand, Qatar, Kenya, Uzbekistan, Mozambique, Morocco, Yemen, Hungary, Saudi Arabia, Nigeria, and Bahrain, based on domains registered by NSO.

NSO Group, the architect of Pegasus, claims to provide “authorized governments with technology that helps them combat terror and crime,” insisting that its products are only used in lawful ways. NSO spokesperson Zamir Dahbash told reporters that the company “fully complies with strict export control laws and regulations.” The Citizen Lab researcher who disassembled the malicious program, however, compared it to “defusing a bomb.” All of which raises the question: what laws or regulations govern the export of cyber-weapons by an Israeli firm (likely controlled by U.S. investors) to foreign governments around the world?

Cyber weapons are becoming increasingly interchangeable with traditional weapons. Governments (or terrorists) no longer need bombs or missiles to inflict large-scale destruction, such as taking down a power grid, since such attacks can now be conducted from anywhere there is a computer. Do export controls – which have long been used as foreign policy and national security tools, and which would regulate the transfer of traditional weapons – play any real role in regulating the transfer of weapons of cyber-surveillance or destruction? In fact, the legal framework underlying current export controls has not caught up (and maybe never will) to the capabilities of technological tools used in cyberwarfare. Proposals to regulate malware have been met with resistance from the technology industry because malware technology is often dual-use and the practical implications of requiring licenses would impede technological innovation and business activities in drastic ways.

The Wassenaar Arrangement

The Wassenaar Arrangement (WA) was established in 1996 as a multilateral nonproliferation regime to promote regional security and stability through greater transparency and responsibility in the transfer of arms and sensitive technologies. The United States is a member. Israel is not, but has aligned its export controls with the Wassenaar lists.

In December 2013, the list of export controlled technologies under WA was amended to include commercial surveillance software, largely to curb human rights abuses by repressive governments’ use of spyware on citizens. Earlier this year, the Department of Commerce issued recommendations that the definition of “intrusion software” in the WA be modified to encompass the concept of “authorization” so that malware such as Pegasus, in which the user does not truly understand the nature of the consequences, would be controlled. Those proposals have not been implemented.

U.S. Export Controls of Malware

In 2015, following data breaches at the Office of Personnel Management and several private companies, the Department of Commerce published proposed rules to harmonize concepts embedded in the WA into the U.S. regulatory framework for export controls. One critical proposal was a definition of “intrusion software” that would require a license for the export and use of malware tools. But the definition covered much more than malware. Cybersecurity experts were alarmed by the rule’s over-inclusive and vague language. The rules would have impeded critical business activities, stifled international research and cross-border exchanges of technology, and hindered responses to cyber threats.

NSO Group has been described by researchers as “incredibly committed to stealth,” and reportedly has close partnerships with other Israeli surveillance firms that seek to sell spyware, suggesting an inevitable increase in cyber mayhem. As malware becomes more sophisticated, widespread, and threatening, the need for strictly tailored export controls is not going to go away.

Regulating software is challenging, at least in part, because there is no workable legal definition of what constitutes a cyber weapon. Because malware is largely dual-use, the only way to determine whether particular software constitutes a cyber weapon is retroactively: if software has been used as a weapon, it is considered a cyber weapon. But that determination arrives far too late to control the dissemination of the code. Moreover, controlling components of that software would likely be over-inclusive, since the same code that can exploit flaws to break in to devices can also have benign uses, such as detecting vulnerabilities to help manufacturers like Apple learn what needs patching. Another challenge is that obtaining export licenses can take months, which, in the fast-moving tech world, is as good as denial.

The revelation of the Pegasus iPhone spyware highlights questions that have perplexed national security and export control experts in recent years. As the use and sophistication of malware continue their explosive growth, not only must individuals and governments face the chilling realities of cyber warfare, but regulators must quickly understand the technological issues, address the risks, and work with the cyber security and technological communities to find a path forward.

Location Data Gathering Under Europe’s New Privacy Laws

Why are EU regulators particularly concerned about location data?

Location-specific data can reveal very specific and intimate details about a person: where they go, what establishments they frequent and what their habits or routines are. Some location-specific data garners heightened protections, such as where and how often a person obtains medical care or where a person attends religious services.

In the U.S., consumers typically agree to generalized privacy policies by clicking a box prior to purchase, download or use of a new product or service. But the new EU regulations may require more informed notice and consent be obtained for each individual use of the data that a company acquires. For example, a traffic app may collect location data to offer geographically-focused traffic reports and then also use that data to better target advertisements to the consumer, a so-called “secondary use” of the data.

The secondary use is what concerns EU regulators. They want to give citizens back control over their personal data, which means meaningfully and fully informing them of how and when it is used. For example, personal data can only be gathered for legitimate purposes, meaning companies should not continue to collect location data beyond what is necessary to support the functionality of their business model; additional consent would also need to be obtained each time the company wants to re-purpose or re-analyze the data it has collected. This puts an affirmative obligation on companies to know if, when and how their partners are using consumer data and to make sure such use has been consented to by the consumer.

What should a company do that collects location data in the EU? 

  1. Consumers should be clearly informed about what location information is being gathered and how it will be used; this means not just the primary use of the data, but any ancillary uses, such as targeting advertisements, etc.;

  2. Consumers should be given the opportunity to decline to have their data collected, or to be able to “opt out” of any of the primary or secondary uses of their data;

  3. Companies need to put a mechanism in place to make consumers aware if the company’s data collection policies change; for example, a company may not have a secondary use for the data now, but in two years it may plan on packaging and reselling that data to an aggregator; and

  4. Companies must have agreements in place with their partners in the “business ecosystem” to ensure their partners are adhering to the data collection permissions that the company has obtained (a rough sketch of such per-purpose permission tracking follows below).
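For companies implementing these steps, a minimal sketch of per-purpose consent tracking is shown here; all names are assumptions of the example, not regulatory terms:

```python
# Illustrative only: default-deny, per-purpose consent records so that each
# use of location data (primary or secondary) is checked against an opt-in.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    user_id: str
    purposes: dict = field(default_factory=dict)  # purpose name -> opted_in (bool)

    def grant(self, purpose: str) -> None:
        self.purposes[purpose] = True

    def revoke(self, purpose: str) -> None:
        self.purposes[purpose] = False

    def may_use(self, purpose: str) -> bool:
        # Default-deny: any purpose the user has not opted into is off-limits.
        return self.purposes.get(purpose, False)

consent = ConsentRecord(user_id="user-123")
consent.grant("traffic_reports")             # primary use, consented at collection
print(consent.may_use("traffic_reports"))    # True
print(consent.may_use("ad_targeting"))       # False: secondary use needs separate consent
```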

© Polsinelli PC, Polsinelli LLP in California