Small and Mid-Sized Businesses Continue to Be Targeted by Cybercriminals

A recent Ponemon Institute study finds that small and mid-sized businesses continue to be targeted by cybercriminals, and are struggling to direct an appropriate amount of resources to combat the attacks.

The Ponemon study finds that 76 percent of the 592 companies surveyed had experienced a cyber-attack in the previous year, up from 70 percent the year before. Phishing and social engineering attacks were the most common form of attack, reported by 57 percent of the companies, while 44 percent of those surveyed said the attack came through a malicious website that a user accessed. I attended a meeting of Chief Information Security Officers this week and was shocked at one statistic that was discussed: a large company filters out 97 percent of the email directed at its employees every day. That means that only 3 percent of all email addressed to users in a company is legitimate business.

A recent Accenture report shows that 43 percent of all cyber-attacks are aimed at small businesses, but only 14 percent of them are prepared to respond. Business insurance company Hiscox estimates that the average cost of a cyber-attack for small companies is $200,000, and that 60 percent of those companies go out of business within six months of the attack.

These statistics confirm what we all know: cyber-attackers are targeting the lowest hanging fruit—small to mid-sized businesses, and municipalities and other governmental entities that are known to have limited resources to invest in cybersecurity defensive tools. Small and mid-sized businesses that cannot devote sufficient resources to protecting their systems and data may wish to consider other ways to limit risk, including prohibiting employees from accessing websites or emails for personal reasons during working hours. This may sound Draconian, but employees are putting companies at risk by surfing the web while at work and clicking on malicious emails that promise free merchandise. Stopping risky digital behavior is no different than prohibiting other forms of risky behavior in the working environment—we’ve just never thought of it this way before.

Up to this point, employers have allowed employees to access their personal phones, email and websites during working hours. This has contributed to the crisis we now face, with companies often being attacked as a result of their employees’ behavior. No matter how much money is devoted to securing the perimeter with firewalls, spam filters or blacklisting, employees still cause a large majority of security incidents or breaches because they visit malicious websites or are duped into clicking on a malicious email. We have to figure out how employees can do their jobs while also protecting their employers.


Copyright © 2019 Robinson & Cole LLP. All rights reserved.

For more on cybersecurity, see the National Law Review Communications, Media & Internet law page.

Is Your iPhone Spying on You (Again)?

In the latest installment of this seemingly ongoing tale, Google uncovered (for the second time in a month) security flaws in Apple’s iOS, which put thousands of users at risk of inadvertently installing spyware on their iPhones. For two years.

Google’s team of hackers – working on Project Zero – say the cyberattack occurred when Apple users visited a seemingly genuine webpage, with the spyware then installing itself on their phones. It was then capable of sending the user’s texts, emails, photos, real-time location, contacts and account details (you get the picture) almost instantaneously back to the perpetrators of the hack (which some reports suggest was a nation state). The hack wasn’t limited to Apple apps either, with reports that the malware was able to extract data from WhatsApp, Google Maps and Gmail.

For us, the scare factor goes beyond data from our smart devices inadvertently revealing secret locations, or being used against us in court – the data and information the cyberspies could have had access to could wreak absolute havoc on the lives of everyday iPhone users (and the people whose details they keep in their phones).

We’re talking about this in the past tense because, while it was only discovered by Project Zero recently, Apple reportedly fixed the vulnerability without much ado in February this year by releasing a software update.

So how do you protect yourself from being spied on? It seems there’s no sure-fire way to entirely prevent yourself from becoming a victim, or, if you were a victim of this particular attack, to mitigate the damage. But, according to Apple, “keeping your software up to date is one of the most important things you can do to maintain your Apple product’s security”. We might not be ignoring those pesky “a new update is available for your phone” messages anymore.


Copyright 2019 K&L Gates

ARTICLE BY Cameron Abbott and Allison Wallace of K&L Gates.
For more on device cyber-vulnerability, see the National Law Review Communications, Media & Internet law page.

Facebook “Tagged” in Certified Facial Scanning Class Action

Recently, the Ninth Circuit Court of Appeals held that an Illinois class of Facebook users can pursue a class action lawsuit arising out of Facebook’s use of facial scanning technology. A three-judge panel in Nimesh Patel, et al. v. Facebook, Inc., Case No. 18-15982, issued a unanimous ruling that the mere collection of an individual’s biometric data was a sufficient actual or threatened injury under the Illinois Biometric Information Privacy Act (“BIPA”) to establish standing to sue in federal court. The Court affirmed the district court’s decision certifying a class. This creates significant financial risk for Facebook, because the BIPA provides for statutory damages of $1,000 to $5,000 for each use of Facebook’s facial scanning technology in the State of Illinois.

This case is important for several reasons. First, the decision recognizes that the mere collection of biometric information may be actionable, because it creates harm to an individual’s privacy. Second, the decision highlights the possible extraterritorial application of state data privacy laws, even those that were passed by state legislatures intending to protect only their own residents. Third, the decision lays the groundwork for a potential circuit split on what constitutes a “sufficiently concrete injury” to confer standing under the U.S. Supreme Court’s landmark 2016 decision in Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016). Fourth, due to the Illinois courts’ liberal construction and interpretation of the statute, class actions in this sphere are likely to continue to increase.

The Illinois class is challenging Facebook’s “Tag Suggestions” program, which scans for and identifies people in uploaded photographs for photo tagging. The class plaintiffs alleged that Facebook collected and stored biometric data without prior notice or consent, and without a data retention schedule that complies with BIPA. Passed in 2008, Illinois’ BIPA prohibits gathering the “scan of hand or face geometry” without users’ permission.

The district court previously denied Facebook’s numerous motions to dismiss the BIPA action on both procedural and substantive grounds and certified the class. In moving to decertify the class, Facebook argued that any BIPA violations were merely procedural and did not amount to “an injury of a concrete interest” as required by Spokeo.

In its ruling, the Ninth Circuit determined that Facebook’s use of facial recognition technology without users’ consent “invades an individual’s private affairs and concrete interests.” According to the Court, such privacy concerns were a sufficient injury-in-fact to establish standing, because “Facebook’s alleged collection, use, and storage of plaintiffs’ face templates here is the very substantive harm targeted by BIPA.” The Court cited with approval Rosenbach v. Six Flags Entertainment Corp., — N.E.3d —, 2019 IL 123186 (Ill. 2019), a recent Illinois Supreme Court decision similarly finding that individuals can sue under BIPA even if they suffered no damage beyond mere violation of the statute. The Ninth Circuit also suggested that “[s]imilar conduct is actionable at common law.”

On the issue of class certification, the Ninth Circuit’s decision creates a precedent for extraterritorial application of the BIPA. Facebook unsuccessfully argued that (1) the BIPA did not apply because Facebook’s collection of biometric data occurred on servers located outside of Illinois, and (2) even if BIPA could apply, individual trials must be conducted to determine whether users uploaded photos in Illinois. The Ninth Circuit rejected both arguments. The Court determined that (1) the BIPA applied if users uploaded photos or had their faces scanned in Illinois, and (2) jurisdiction could be decided on a class-wide basis. Given the cross-border nature of data use, the Court’s reasoning could be influential in future cases where a company challenges the applicability of data breach or data privacy laws that have been passed by state legislatures intending to protect their own residents.

The Ninth Circuit’s decision also lays the groundwork for a potential circuit split. In two cases from December 2018 and January 2019, a federal judge in the Northern District of Illinois reached a different conclusion than the Ninth Circuit on the issue of BIPA standing. In both cases, the Northern District of Illinois ruled that retaining an individual’s private information is not a sufficiently concrete injury to satisfy Article III standing under Spokeo. One of these cases, which concerned Google’s free Google Photos service that collects and stores face-geometry scans of uploaded photos, is currently on appeal to the Seventh Circuit.

The Ninth Circuit’s decision paves the way for a class action trial against Facebook. The case was previously only weeks away from trial when the Ninth Circuit accepted Facebook’s Rule 23(f) appeal, so the litigation is expected to return to the district court’s trial calendar soon. If Facebook is found to have violated the Illinois statute, it could be held liable for substantial damages – as much as $1,000 for every “negligent” violation and $5,000 for every “reckless or intentional” violation of BIPA.

BIPA class action litigation has become increasingly popular since the Illinois Legislature enacted it: over 300 putative class actions asserting BIPA violations have been filed since 2015. Illinois’ BIPA has also opened the door to other recent state legislation regulating the collection and use of biometric information. Two other states, Texas and Washington, already have specific biometric identifier privacy laws in place, although enforcement of those laws is accomplished by the state Attorney General, not private individuals. A similar California law is set to go into effect in 2020. Legislation similar to Illinois’ BIPA is also currently pending in several other states.

The Facebook case will continue to be closely watched, both in terms of the standing ruling as well as the potential extended reach of the Illinois law.


© Polsinelli PC, Polsinelli LLP in California

For more on biometric data privacy, see the National Law Review Communications, Media & Internet law page.

DOJ Gets Involved in Antitrust Case Against Symantec and Others Over Malware Testing Standards

The U.S. Department of Justice Antitrust Division has inserted itself into a case that questions whether the Anti-Malware Testing Standards Organization, Inc. (AMTSO) and some of its members are creating standards in a manner that violates antitrust laws.

AMTSO says it is exempt from such per se claims under the Standards Development Organization Advancement Act of 2004 (SDOAA). Symantec Corp., an AMTSO member, says the more flexible “rule of reason” applies – that it must be proven that standards actually undermine competition, which the recommended guidelines do not.

NSS Labs, Inc. is an Austin, Texas-based cybersecurity testing company that offers services including “data center intrusion prevention” and “threat detection analytics.”

In addition to Symantec, AMTSO members include widely recognized names like McAfee and Microsoft, as well as names well known in cybersecurity circles: Carbon Black, CrowdStrike, FireEye, ICSA, and Trend Micro. NSS Labs also is a member, but says it is among a small number of testing service providers. NSS maintains that the organization is dominated by product vendors, who can easily outvote service providers like NSS, AV-Comparatives, AV-Test and SKD LABS, a claim the organization disputes.

On Sept. 19, 2018, NSS Labs filed suit in the U.S. District Court for the Northern District of California against AMTSO, CrowdStrike (since voluntarily dismissed), Symantec, and ESET, alleging the product companies used their power in AMTSO to control the design of the malware testing standards, “actively conspiring to prevent independent testing that uncovers product deficiencies to prevent consumers from finding out about them.” The industry standard amounts to a group boycott that restrains trade, NSS Labs argues, hurting service providers (NSS Labs v. CrowdStrike, et al., No. 5:18-cv-05711-BLF, N.D. Calif.).

The case is before U.S. District Judge Beth Labson Freeman in Palo Alto, who has presided over a number of high-profile matters.

AMTSO moved to dismiss NSS Labs’ suit, citing its exemption from per se antitrust claims because of its status as a standards development organization (SDO). Further, it argues that the group is open to anyone and, while there are three times more vendors than testing service providers in the organization, that reflects the market itself.

On June 26, the DOJ Antitrust Division asked the court not to dismiss the case because further evidence is needed to determine whether the exemption under the SDOAA is justified.

AMTSO countered that the primary reason the case should be dismissed has “nothing to do” with the SDOAA. NSS failed to allege that AMTSO participated in any boycott, the organization says. All the group has done is “adopt a voluntary standard and foster debate about its merits, which is not illegal at all, let alone per se illegal,” the group says, adding that the Antitrust Division is asking the court to “eviscerate the SDOAA.”

Symantec first responded to the suit with a public attack on NSS Labs itself, criticizing its methodology and lack of transparency in its testing procedures, as well as the company’s technical capability and its “pay to play” model in conducting public tests. NSS Labs’ leadership team includes a former principal engineer in the Office of the Chief Security Architect at Cisco, a former Hewlett-Packard professional who established and managed competitive intelligence network programs, and an information systems management professional who formerly held senior management positions at Deloitte, IBM and Aon Hewitt.

On July 8, Symantec responded to the Antitrust Division’s statement of interest. It argued that the SDOAA does not provide an exemption from antitrust laws. Instead, it offers “a legislative determination that the rule of reason – not the per se rule” applies to standard-setting activities. “That simply means the plaintiff must prove actual harm to competition, rather than relying on an inflexible rule of law,” Symantec says.

The company wrote that the government may have a point, albeit a moot one. “Symantec does not believe so, but perhaps the Division is right that there is a factual question about whether AMTSO’s membership lacks the balance the statute requires for the exclusion from per se analysis to apply,” Symantec says. Either way, the company argues, it doesn’t matter to the motions for dismissal because the per se rule does not apply.

Judge Freeman has set deadlines for disclosures, discovery, expert designations, and Daubert motions, with a trial date of Feb. 7, 2022.

Commentary

The antitrust analysis of standards setting is one of the sharpest of two-edged swords: When it works properly, it reflects a technology-driven process of reaching an industry consensus that often brings commercialization and interoperability of new technologies to market. When it is undermined, however, it reflects concerted action among competitors that agree to exclude disfavored technologies in a way that looks very much like a group boycott, a per se violation of Section 1 of the Sherman Act.

Accordingly, the Standards Development Organization Advancement Act of 2004 (SDOAA) recognizes that exempting bona fide standards development organizations (SDOs) from liability for per se antitrust violations can promote the pro-competitive standard-setting process when those organizations are functioning properly. But when do SDOs “function properly”? The answer is entirely procedural, and is embodied in the statutory definition of an SDO: an organization that “incorporate[s] the attributes of openness, balance of interests, due process, an appeals process, and consensus … “

The essential claim in the complaint by NSS Labs, therefore, is that the rules and procedures followed by AMTSO do not provide sufficient procedural safeguards to ensure that the organization arrives at a pro-competitive industry consensus rather than a group boycott for the benefit of one or a few industry players dressed in the garb of standard setting.

This is a factual inquiry that cannot be countered by a legal defense that simply declares the defendant is an SDO and, therefore, immune to suit under the statute. Whether the AMTSO is an SDO under the law or not depends on how it conducts itself, the make-up of its members, and its fidelity to the procedural principles embodied in the statute. The plaintiff’s claim is that AMTSO has not followed the procedural principles required to qualify as an SDO under the Act. This is a purely factual issue and, as such, cannot be resolved on a motion to dismiss.

The DOJ should be commended for urging the court to proceed to discovery to adduce the facts needed to distinguish between legitimate standard setting and an unlawful group boycott, and it should continue to be vigilant in the face of SDOs and would-be SDOs that might be tempted to use the wrong side of the standard-setting sword to commit anticompetitive acts instead of the right side to produce welfare-enhancing industry consensus.

This is particularly true in vital industries like cybersecurity. Government agencies, businesses, and consumers are constantly and increasingly at risk from ever-evolving cyber threats. It is therefore imperative that the cybersecurity market remains competitive to ensure development of the most effective security products.


© MoginRubin LLP
This article was written by Jonathan Rubin and Timothy Z. LaComb of MoginRubin & edited by Tom Hagy for MoginRubin.
For more DOJ Antitrust activities, see the National Law Review Antitrust & Trade Regulation page.

Heavy Metal Murder Machines and the People Who Love Them

What is the heaviest computer you own?  Chances are, you are driving it.

And with all of the hacking news flying past us day after day, our imaginations have not even begun to grasp what could happen if a hostile actor decided to hack our automotive computers – individually or en masse. What better way to attack the American way of life than to disable and crash armies of cars, stranding them on the road, killing tens of thousands, and shutting down the functionality of every city? Set every Ford F-150 to accelerate to 80 miles an hour at the same time on the same day and don’t stick around to clean up the mess.

We learned that cyberwarfare could turn physical with the US/Israeli Stuxnet bug, which forced Iran’s nuclear centrifuges to overwork and physically break themselves (along with a few stray Indian centrifuges caught in the crossfire). This seems like a classic solution for terror attacks – slip malicious code into machines that will actually kill people. Imagine if the World Trade Center attack had been carried out from a distance by simply taking over the airplanes’ computer operations and programming them to fly into public buildings. Spectacular mission achieved, and no terrorist would be at risk.

This would be easy to do with automobiles. For example, buy a recent-model used car on credit at most U.S. lots and the car comes with a remote operation tool that allows the lender to shut the car off, keep it from starting up, and home in on its location so the car can either be “bricked” or grabbed by agents of the lender for non-payment. We know that a luxury car contains more than 100 million lines of code, while a Boeing 787 Dreamliner contains merely 6.5 million lines of code and a U.S. Air Force F-22 Raptor jet holds only 1.7 million lines of code. Such complexity leads to further vulnerability.

The diaphanous separation between the real and electronic worlds is thinning every day, and not enough people are concentrating on the problem of keeping enormous, powerful machines from being hijacked from afar. We are a society that loves its freedom machines, but that love may lead to our downfall.

An organization called Consumer Watchdog has issued a report subtly titled KILL SWITCH: WHY CONNECTED CARS CAN BE KILLING MACHINES AND HOW TO TURN THEM OFF, which urges auto manufacturers to install physical kill switches in cars and trucks that would allow the vehicles to be disconnected from the internet. The switch would cost about fifty cents and could prevent an apocalyptic loss of control of nearly every vehicle on the road at the same time (the IoT definition of a bad day).

“Experts agree that connecting safety-critical components to the internet through a complex information and entertainment device is a security flaw. This design allows hackers to control a vehicle’s operations and take it over from across the internet. . . . By 2022, no less than two-thirds of new cars on American roads will have online connections to the cars’ safety-critical system, putting them at risk of deadly hacks.”

And if that isn’t frightening enough, the report continued,

“Millions of cars on the internet running the same software means a single exploit can affect millions of vehicles simultaneously. A hacker with only modest resources could launch a massive attack against our automotive infrastructure, potentially causing thousands of fatalities and disrupting our most critical form of transportation.”

If the government dictates seat belts and auto emissions standards, why on earth wouldn’t the Transportation Department require a certain level of connectivity security and software invulnerability from the auto industry? We send millions of multi-ton killing machines capable of blinding speeds out on our roads every day, and there seems to be no standard for securing these machines against hacking. Why not?

And why not require the 50-cent kill switch that can isolate each vehicle from the internet?

Fifty years ago, Ralph Nader’s Unsafe at Any Speed demonstrated the need for government regulation of the auto industry so that car companies’ raw greed would not override customer safety concerns. Soon after, Lee Iacocca led a Ford design team that calculated it was worth the horrific flaming deaths of 180 Ford customers each year in 2,100 vehicle explosions due to a flawed gas tank design that was eventually fixed with a fix costing less than one dollar per car.

Granted, safety is a much more important issue for auto manufacturers now than it was in the 1970s, but if so, why have we not seen industry teams meeting to devise safety standards in auto electronics the same way standards have been accepted in auto mechanics? If the industry won’t take this standard-setting task seriously, then the government should force it to do so.

And the government should be providing help in this space anyway. Vehicle manufacturers have only a commercially reasonable amount of money to spend addressing this electronic safety problem. The Russian and Iranian governments have a commercially unreasonable amount of money to spend attacking us. Who makes up the difference in this critical infrastructure space? Recognizing our current state of cyber warfare – hostile government-sponsored hackers are already attacking our banking and power systems on a regular basis, not to mention attempting to manipulate our electorate – our government should be rushing in to bolster electronic and software security for the automotive and trucking sectors. Why doesn’t the TSB regulate the area and provide professional assistance to build better protections based on military-grade standards?

Nothing in our daily lives is more dangerous than our vehicles out of control. Nearly 1.25 million people die in road crashes each year, an average of 3,287 deaths a day. An additional 20-50 million per year are injured or disabled. A terrorist or hostile-government attack on the electronic infrastructure controlling our cars would easily multiply these numbers and, for all practical purposes, shut down U.S. roads, the economy and the health care system.

We are not addressing the issue now with nearly the seriousness that it demands.

How many true car-mageddons will need to occur before we all take electronic security seriously?


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

This article was written by Theodore F. Claypoole of Womble Bond Dickinson (US) LLP.
For more on vehicle security, please see the National Law Review Consumer Protection law page.

You Can be Anonymised But You Can’t Hide

If you think there is safety in numbers when it comes to the privacy of your personal information, think again. A recent study in Nature Communications found that, given a large enough dataset, anonymised personal information is only an algorithm away from being re-identified.

Anonymised data refers to data that has been stripped of any identifiable information, such as a name or email address. Under many privacy laws, anonymising data allows organisations and public bodies to use and share information without infringing an individual’s privacy, or having to obtain necessary authorisations or consents to do so.

But what happens when that anonymised data is combined with other data sets?

Researchers behind the Nature Communications study found that using only 15 demographic attributes can re-identify 99.98% of Americans in any incomplete dataset. While fascinating for data analysts, individuals may be alarmed to hear that their anonymised data can be re-identified so easily and potentially then accessed or disclosed by others in a way they have not envisaged.
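To see how such re-identification works in principle, here is a minimal, hypothetical sketch of a linkage attack: an “anonymised” dataset (names removed) is joined to a public dataset on a handful of shared demographic attributes. The column names and records below are invented purely for illustration and are not drawn from the study.

```python
import pandas as pd

# Hypothetical "anonymised" dataset: direct identifiers removed,
# but quasi-identifiers (ZIP code, birth year, gender) retained.
anonymised = pd.DataFrame({
    "zip": ["60601", "60601", "60614"],
    "birth_year": [1985, 1972, 1985],
    "gender": ["F", "M", "F"],
    "diagnosis": ["asthma", "diabetes", "hypertension"],
})

# Hypothetical public dataset (e.g., a voter roll) containing names
# alongside the same demographic attributes.
public = pd.DataFrame({
    "name": ["A. Jones", "B. Smith", "C. Lee"],
    "zip": ["60601", "60601", "60614"],
    "birth_year": [1985, 1972, 1985],
    "gender": ["F", "M", "F"],
})

# Joining on the shared attributes re-attaches names to "anonymous" records.
reidentified = anonymised.merge(public, on=["zip", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```

The more shared attributes available, the more likely each combination is unique, which is why the study found that a modest set of demographics is enough to single out almost everyone.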

Re-identification techniques were recently used by the New York Times. In March this year, it pulled together various public data sources, including an anonymised dataset from the Internal Revenue Service, to reveal a decade’s worth of tax data showing Donald Trump’s negative adjusted income. His tax returns had been the subject of great public speculation.

What does this mean for business? Depending on the circumstances, it could mean that simply removing personal information such as names and email addresses is not enough to anonymise data and may be in breach of many privacy laws.

To address these risks, companies like Google, Uber and Apple use “differential privacy” techniques, which add “noise” to datasets so that individuals cannot be re-identified, while still allowing access to the aggregate insights they need.
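As a rough illustration of the idea (not any particular company’s implementation), the sketch below applies the standard Laplace mechanism to a simple count query. The dataset, threshold and epsilon value are invented for the example.

```python
import numpy as np

def dp_count(values, threshold, epsilon=0.5):
    """Return a differentially private count of values above a threshold.

    Adds Laplace noise scaled to the query's sensitivity (1 for a count),
    so the presence or absence of any single individual cannot be
    reliably inferred from the released number.
    """
    true_count = sum(1 for v in values if v > threshold)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: ages in a small survey dataset
ages = [23, 35, 41, 29, 52, 47, 38, 61]
print(dp_count(ages, threshold=40, epsilon=0.5))  # noisy count; varies per run
```

The smaller the epsilon, the more noise is added and the stronger the privacy guarantee, at the cost of accuracy, which is exactly the trade-off these companies are balancing.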

It may come as a surprise to the many businesses that use data anonymisation as a quick and cost-effective way to de-personalise data that more may be needed to protect individuals’ personal information.

If you would like to know more about other similar studies, check out our previous blog post ‘The Co-Existence of Open Data and Privacy in a Digital World’.

Copyright 2019 K&L Gates
This article is by Cameron Abbott of K&L Gates.
For more on internet privacy, see the National Law Review Communications, Media & Internet law page.

Louisiana Governor Declares Statewide Emergency After Cyber-Attacks Against School Systems

Louisiana Governor John Bel Edwards, for the first time in history, declared a statewide cybersecurity emergency last week, following cyber-attacks against several school systems in the state.

By declaring a cybersecurity emergency, the state is able to garner needed resources, including cybersecurity experts from the Louisiana National Guard, State Police, the Office of Technology Services, the Governor’s Office of Homeland Security and Emergency Preparedness, Louisiana State University, and others, to assist school systems in Sabine, Morehouse and Ouachita parishes that were compromised by malware attacks.

According to the Governor’s office, although these resources are working on the incident, the threat is ongoing. The Governor established a statewide Cyber Security Commission in 2017 and stated that these incidents against school systems in the State are the reason the Commission was established.

Several states, but not all, have established Cyber Security Commissions or similar public-private partnerships in order to prepare for and respond to cyber-attacks that affect state resources. Setting up the Commission in advance of attacks like the ones that occurred in Louisiana will assist states in responding quickly to these attacks and provide appropriate resources and help to those affected.

Copyright © 2019 Robinson & Cole LLP. All rights reserved.
This article is by Linn F. Freedman of Robinson & Cole LLP.
For more on cybersecurity issues, please see the Communications, Media & Internet law page on the National Law Review.

DNA Information of Thousands of Individuals Exposed Online for Years

It is being reported that Vitagene, a company that provides DNA testing to build personalized wellness, diet and exercise plans based on customers’ biological traits, left more than 3,000 user files publicly accessible on Amazon Web Services servers that were not configured properly.
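The exposure is attributed to misconfigured cloud storage rather than a sophisticated attack. As a hedged illustration of the kind of routine check that can catch this class of misconfiguration, the sketch below uses the AWS boto3 SDK to flag S3 buckets that grant public read access. It assumes AWS credentials are already configured; it is a generic audit sketch, not a description of Vitagene’s environment.

```python
import boto3

ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def find_public_buckets():
    """List S3 buckets in the current AWS account whose ACLs grant access to everyone."""
    s3 = boto3.client("s3")
    public = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        acl = s3.get_bucket_acl(Bucket=name)
        for grant in acl["Grants"]:
            grantee = grant.get("Grantee", {})
            # The AllUsers group URI means "anyone on the internet".
            if grantee.get("URI") == ALL_USERS_URI:
                public.append(name)
                break
    return public

if __name__ == "__main__":
    for name in find_public_buckets():
        print(f"WARNING: bucket '{name}' grants public access")
```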

The information that was involved included customers’ names, dates of birth and genetic information (such as the likelihood of developing medical conditions), as well as contact information and work email addresses. Almost 300 files contained raw genotype DNA that was accessible to the public.

Vitagene has been providing services since 2014, and the exposed records dated from 2015 to 2017. Vitagene was notified of the accessibility of the information on July 1, 2019, and fixed the vulnerability.

Copyright © 2019 Robinson & Cole LLP. All rights reserved.
This article was written by Linn F. Freedman of  Robinson & Cole LLP.

Bombas Settles with NYAG Over Credit Card Data Breach

Modern sock maker Bombas recently settled with the New York Attorney General over a credit card breach, agreeing to pay $65,000 in penalties. According to the NYAG, malicious code was injected into Bombas’ Magento ecommerce platform in 2014. The company addressed the issue over the course of 2014 and early 2015 and, according to the NYAG, determined that bad actors had accessed customer information (names, addresses and credit card numbers) of almost 40,000 people. While the company notified the payment card companies at the time, it concluded that it did not need to notify impacted individuals because the payment card companies “did not require a formal PFI or otherwise pursue the matter beyond basic questions.”

In 2018, Bombas updated its cyber program, which caused it to “revisit” the incident and decide to notify impacted individuals and state attorneys general. The NYAG concluded that the company had delayed in providing notice in violation of New York’s breach notification law, which requires notification “in the most expedient time necessary.” In addition to the $65,000 penalty, the company has agreed to modify how it handles potential future breaches. This includes conducting prompt and thorough investigations, as well as training employees on how to handle potential data breach matters.

Putting it into Practice: This settlement is a reminder to companies to ensure that they have appropriate measures in place to investigate potential breaches, and understand their notification obligations.

 

Copyright © 2019, Sheppard Mullin Richter & Hampton LLP.
For more on financial breaches, please see the Financial Institutions & Banking page on the National Law Review.

The California Consumer Privacy Act Series Part 1: Applicability

California’s new privacy law, the California Consumer Privacy Act (the “CCPA”), goes into effect on January 1, 2020.  It is the most expansive state privacy law in U.S. history, imposing GDPR-like transparency and individual rights requirements on companies.  The law will impact nearly every entity that handles “personal information” regarding California residents, including (at least for now) employees.  An overview of the CCPA’s applicability is set forth below.

Who will the CCPA impact?

Most of the CCPA’s obligations apply directly to a “business,” which is an entity that:

  1. Handles “personal information” about California residents;
  2. Determines the purposes and means of processing that “personal information”; and
  3. Does business in California, and meets one of the following threshold requirements:

(a) Has annual gross revenues in excess of $25 million;

(b) Annually handles “personal information” regarding at least 50,000 consumers, households, or devices; or

(c) Derives 50% or more of its annual revenue from selling “personal information.”
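
For readers who prefer to see the test spelled out mechanically, here is a minimal, illustrative sketch of the “business” definition above. The Entity class and its field names are invented for illustration, the sketch ignores the open interpretive questions discussed below (for example, whose devices count toward the 50,000 figure), and it is of course not legal advice.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    # Hypothetical profile of a for-profit entity; field names are invented.
    handles_ca_personal_information: bool
    determines_purposes_and_means: bool
    does_business_in_california: bool
    annual_gross_revenue_usd: float
    ca_consumers_households_devices_per_year: int
    share_of_revenue_from_selling_pi: float  # 0.0 to 1.0

def is_ccpa_business(e: Entity) -> bool:
    """Illustrative sketch of the CCPA "business" test; not legal advice."""
    if not (e.handles_ca_personal_information
            and e.determines_purposes_and_means
            and e.does_business_in_california):
        return False
    return (
        e.annual_gross_revenue_usd > 25_000_000                   # (a) revenue in excess of $25M
        or e.ca_consumers_households_devices_per_year >= 50_000   # (b) 50,000+ consumers, households, or devices
        or e.share_of_revenue_from_selling_pi >= 0.5              # (c) 50%+ of revenue from selling PI
    )

# Example: a retailer with $30M in revenue that does business in California
print(is_ccpa_business(Entity(True, True, True, 30_000_000, 10_000, 0.0)))  # True
```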

However, “service providers” that handle “personal information” on behalf of a business and other third parties that receive “personal information” will also be impacted.  As currently written, however, the CCPA does not apply to non-profit organizations.

The CCPA’s three threshold requirements seem relatively straightforward, yet upon examination raise additional questions that will need to be clarified down the road.  For example:

  • Does the 50,000 devices threshold cover devices of California residents only, or apply more broadly?
  • Is the $25 million annual revenue trigger applicable only to revenue derived from California or globally?
  • What timeframe do businesses who suddenly find themselves within the CCPA’s ambit have to bring themselves into compliance with its provisions?

What is “personal information” as defined in the CCPA?

The CCPA defines “personal information” broadly in terms of (a) types of individuals and (b) types of data elements.  First, the term “consumer” refers to, and the CCPA applies to data about, any California resident, which ostensibly includes website visitors, B2B contacts and (at least for now) employees.  It is not limited to B2C customers that actually purchase goods or services.  Second, the data elements that constitute “personal information” include non-sensitive items that historically have been less regulated in the U.S., such as Internet browsing histories, IP addresses, product preferences, purchasing histories, and inferences drawn from any other types of personal information described in the statute, including:

  • Identifiers such as name, address, phone number, email address;
  • Characteristics of protected classifications under California and federal law;
  • Commercial information such as property records, products purchased, and other consuming history;
  • Biometric information;
  • Internet or other electronic network activity;
  • Geolocation data;
  • Olfactory, audio, and visual information; and
  • Professional or educational information.

Does the CCPA have any exemptions?

The CCPA will apply to a broad number of businesses, covering nearly all commercial entities that do business in California, regardless of whether the business has a physical location or employees in the State.  However, there are some nuanced exemptions.

As a general matter, the exemptions are based on the types of information that a business collects, and not on the industry of the business collecting the information.  These include information that is collected and used wholly outside of California, subject to other state and federal laws, or sold to or from consumer reporting agencies.  Specifically, the excluded categories of “personal information” include:

      1. Activity “wholly outside” California

The CCPA does not apply to conduct that takes place “wholly outside” of California, although it is unclear how such an exemption will apply in practice.  The statute provides that this exemption applies if:

  • The business collects information while the consumer is outside of California;
  • No part of the sale of the consumer’s “personal information” occurs in California; and
  • No “personal information” collected while the consumer is in California is sold.

Determining when a consumer is outside of California when his or her “personal information” is collected will be challenging for businesses.  For example, given that an IP address is expressly included as “personal information” under the law, is a business supposed to do a reverse-lookup to determine whether an individual’s IP address originates in California?
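
By way of illustration only, one common (if imperfect) way to approach that question is an IP-geolocation lookup. The sketch below uses the MaxMind geoip2 Python library and assumes a local copy of a GeoLite2 City database file; the example IP address is hypothetical, and results can be wrong for VPNs, proxies and mobile carriers, so this is at best an approximation rather than a compliance answer.

```python
import geoip2.database
from geoip2.errors import AddressNotFoundError

def appears_to_be_in_california(ip_address: str,
                                db_path: str = "GeoLite2-City.mmdb") -> bool:
    """Best-effort guess at whether an IP address geolocates to California.

    Assumes a GeoLite2 City database file is available at db_path; results
    are approximate and unreliable for VPNs, proxies and mobile networks.
    """
    with geoip2.database.Reader(db_path) as reader:
        try:
            response = reader.city(ip_address)
        except AddressNotFoundError:
            return False  # address not in the database; location unknown
        return (
            response.country.iso_code == "US"
            and response.subdivisions.most_specific.iso_code == "CA"
        )

# Hypothetical usage:
# print(appears_to_be_in_california("203.0.113.45"))
```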

      2. Data subject to other U.S. laws

While the CCPA exempts certain types of information subject to other laws, importantly it does not exempt entities subject to those laws altogether.  Entities subject to these laws are also not exempt from the CCPA’s statutory damages (i.e., no injury necessary) provisions relating to data breaches.  Likewise, some types of information (clarified below) are not exempt from the data breach liability provision.  At a glance, these exemptions appear helpful; however, they may end up making operationalizing the law even more difficult for certain entities.  For example:

  • Protected Health Information (“PHI”) and “Medical Information.” The CCPA exempts all PHI collected by “covered entities” and “business associates” subject to HIPAA and “medical information” subject to California’s analogous law, the Confidentiality of Medical Information Act (“CMIA”).  It also exempts any patient information to the extent a “covered entity” or “provider of health care,” respectively, maintains the patient information in the same manner as PHI or “medical information.”  However, many of these entities and their “business associates” collect information beyond what is considered PHI, such as employment records, technical data about website visitors, B2B information, and types of research data.  This data may not be eligible for the CCPA exemption.
  • Clinical Trial Information. The CCPA exempts information collected as part of a clinical trial subject to the Federal Policy for the Protection of Human Subjects, also known as the Common Rule.
  • Financial Information. Information processed pursuant to the Gramm-Leach-Bliley Act (“GLBA”) or the California Financial Information Privacy Act (“CalFIPA”) is exempt from the CCPA.  Much like the health-related exemption, this rule does not exempt entities subject to these laws altogether from its requirements to the extent an entity is processing information not expressly subject to GLBA/CalFIPA.  This particular exemption does not apply to the data breach liability provision.
  • Consumer Reporting Information. The CCPA exempts information sold to and from consumer reporting agencies if that information is reported in, or used to generate, a consumer report and use of that information is limited by the Fair Credit Reporting Act.
  • Driver Information. The CCPA also exempts information processed pursuant to the Driver’s Privacy Protection Act of 1994 (“DPPA”).  Importantly, entities subject to this law are not altogether exempt and this exemption does not apply to the data breach liability provision.

Moreover, the differences in definitions of relevant terms (e.g., “personal information” under the CCPA versus “nonpublic personal information” under GLBA) are important to consider when assessing relevant obligations and could result in institutions being only partially exempt from CCPA compliance.

 

© Copyright 2019 Squire Patton Boggs (US) LLP
This post was written by India K. Scarver and Elliot Golding of Squire Patton Boggs.