EU Official Calls for Invalidation of EU–U.S. Safe Harbor Pact

A European Court of Justice (ECJ) advocate general, Yves Bot, has called for the European Union–U.S. Safe Harbor Agreement to be invalidated due to concerns over U.S. surveillance practices (press release here, opinion here). The ECJ has discretion to reject the recommendation, but such opinions are generally followed. A final decision on the issue is expected to be issued late this year or next year.

The issue arises out of the claims of an Austrian law student, Max Schrems, who challenged Facebook’s compliance with EU data privacy laws. (The case is Schrems v. (Irish) Data Protection Commissioner, ECJ C-362/14.) He claims that the Safe Harbor Framework fails to guarantee “adequate” protection of EU citizen data in light of the U.S. National Security Agency’s (NSA) surveillance activities. Although the Irish data protection authority rejected his claim, he appealed and the case was referred to the ECJ.

The European Data Protection Directive prohibits data of EU citizens from being transferred to third countries unless the privacy protections of the third countries are deemed adequate to protect EU citizens’ data. The U.S. and EU signed the Safe Harbor Framework in 2000, which permits companies to self-certify to the U.S. Department of Commerce (DOC) annually that they abide by certain privacy principles when transferring data outside the EU. Companies must agree to provide clear data privacy and collection notices and offer opt-out mechanisms for EU consumers.

In 2013, former NSA contractor Edward Snowden began revealing large-scale interception and collection of data about U.S. and foreign citizens from companies and government sources around the globe. The revelations, which continue, have alarmed officials around the world and have already prompted the European Commission to urge more stringent oversight of data security mechanisms. The European Parliament voted in March 2014 to withdraw recognition from the Safe Harbor Framework. Apparently in response to these concerns, the Federal Trade Commission (FTC) has taken action against over two dozen companies for failing to maintain Safe Harbor certifications while advertising compliance with the Framework, and in some cases claiming compliance without ever certifying in the first place. For more, see here (FTC urged to investigate companies), here (FTC settles with 13 companies in August 2015), and here (FTC settles with 14 companies in July 2014).

Advocate General Bot does not appear to have been mollified by the U.S. efforts, however. He determined that “the law and practice of the United States allow the large-scale collection of the personal data of citizens of the [EU,] which is transferred under the [S]afe [H]arbor scheme, without those citizens benefiting from effective judicial protection.” He concluded that this amounted to interference in violation of the right to privacy guaranteed under EU law, and that, notwithstanding the European Commission’s approval of the Safe Harbor Framework, EU member states have the authority to take measures to suspend data transfers between their countries and the U.S.

While the legal basis of that opinion may be questioned, and larger political realities surrounding EU–U.S. negotiations are at play, if the ECJ follows the opinion, it would become extremely difficult for companies to offer websites and services in the EU. This holds true even for many EU companies, including those with cloud infrastructures that store or process data in U.S. data centers. The opinion could also prompt a new round of negotiations between the U.S. and the European Commission to address heightened EU concerns about surveillance.

Congressional action already underway may help ease some of the tension, with the House Judiciary Committee unanimously approving legislation that would give EU consumers a judicial right of action in the U.S. for violations of their privacy. This legislation was a key requirement of the EU in an agreement in principle that would allow the EU and U.S. to exchange data between law enforcement agencies during criminal and terrorism investigations.

Although the specific outcome of this case will not be known for months, the implications for many businesses are clear: confusion and continued change in the realms of privacy and data security, and uncertainty about the legal rules of the game. Increased fragmentation across the EU may result, with a concomitant need to keep abreast of varying requirements in more countries. Change and a lack of harmonization are surely the new normal.

© 2015 Keller and Heckman LLP

Wearables, Wellness and Privacy

Bloomberg BNA recently reported that this fall the Center for Democracy & Technology (CDT) will be issuing a report on Fitbit Inc.’s privacy practices. Avid runners, walkers, or those up on the latest gadgets likely know about Fitbit and its line of wearable fitness devices. Others may know about Fitbit due to the need to measure progress in their employers’ wellness programs, or even whether they qualify for an incentive. When participating in those programs, employees frequently raise questions about the privacy and security of data collected under such programs, a compliance issue for employers. Earlier this month, Fitbit reported that its wellness platform is HIPAA compliant.

Fitbit’s Charge HR (the one I use) tracks some interesting data in addition to the number of steps: heart rate, calories burned, sleep activity, and caller ID. This and other data can be synced with a mobile app that allows users to, among other things, create a profile with more information about themselves, track progress daily and weekly, and find and communicate with friends using a similar device.

Pretty cool stuff, and these features are reasons why Fitbit is the most popular manufacturer of wearables, with nearly 25 percent of the market, as noted by Bloomberg BNA. But, of course, Fitbit is not the only player in the market, and the same issues have to be considered with the use of wearables regardless of the manufacturer.

According to Bloomberg BNA’s article, one of the concerns raised by CDT about Fitbit and other wearables is that the consumer data collected by the devices may not be protected by federal health privacy laws. However, CDT’s deputy director of the Consumer Privacy Project told Bloomberg BNA that she has “a real sense that privacy matters” to Fitbit. This is a good sign, but the laws that apply to these kinds of devices depend on how they are used.

When it comes to employer-sponsored wellness programs and health plans, a range of laws may apply raising questions about what data can be collected, how it can be used and disclosed, and what security safeguards should be in place. At the federal level, the Health Insurance Portability and Accountability Act (HIPAA), the Americans with Disabilities Act (ADA), and the Genetic Information Nondiscrimination Act (GINA) should be on every employer’s list. State laws, such as California’s Confidentiality of Medical Information Act, also have to be taken into account when using these devices in an employment context.

Recently issued EEOC proposed regulations concerning wellness programs and the ADA address medical information confidentiality. If finalized in their current form, the regulations would, among other safeguards, require employers to provide a notice informing employees about:

  • what medical information will be obtained,

  • who will receive the medical information,

  • how the medical information will be used,

  • the restrictions on its disclosure, and

  • the methods that will be used to prevent improper disclosure.

Preparing these notices for programs using wearables will require knowing more about the capabilities of the devices and how data is accessed, managed, disclosed and safeguarded.

But is all information collected from a wearable “medical information”? Probably not. The number of steps a person takes on a given day, in and of itself, seems unlikely to be medical information. However, data such as heart rate and other biometrics might be considered medical information subject to the confidentiality rule. Big data analytics and IoT may begin to play a greater role here, enabling more detailed pictures to be developed about employees and their activities and health through the many devices they use.

Increasingly, wellness programs seek to incentivize the household, or at least employees and their spouses. Collecting data from the wearables of both employee and spouse may raise issues under GINA, which prohibits employers from providing incentives to obtain genetic information from employees. Genetic information includes the manifestation of disease in family members (yes, spouses are considered family members under GINA). The EEOC is currently working on proposed regulations under GINA that we hope will provide helpful insight into this and other GINA-related issues.

HIPAA too may apply to wearables and their collection of health-related data when related to the operation of a group health plan. Employers will need to consider the implications of this popular set of privacy and security standards including whether (i) changes are needed in the plan’s Notice of Privacy Practices, (ii) business associate agreements are needed with certain vendors, and (iii) the plan’s risk assessment and policies and procedures adequately address the security of PHI in connection with these devices.

Working through plans for the design and implementation of a typical wellness program certainly must involve privacy and security; more so for programs that incorporate wearables. Fitbits and other devices likely raise employees’ interest and desire to get involved, and can ease administration of the program, such as in regard to tracking achievement of program goals. But they raise additional privacy and security issues in an area where the law continues to develop. So, employers need to consider this carefully with their vendors and counselors, and keep a watchful eye for more regulation likely to be coming.

Until then, I need to get a few more steps in…

Article By Joseph J. Lazzarotti of Jackson Lewis P.C.

Nothing to See in This Story about the Electronic Communications Privacy Act

Check out this story.  In it, we learn this:

Andrew Ceresney, director of the Division of Enforcement at the Securities and Exchange Commission, [told] the Senate’s Committee on the Judiciary at a hearing on Wednesday morning that the pending Electronic Communications Privacy Act Amendments Act would impede the ability of the SEC and other civil law enforcement agencies to investigate and uncover financial fraud and other unlawful conduct. Ceresney testified that the bill, intended to modernize portions of the Electronic Communications Privacy Act which became law in 1986, would frustrate the SEC’s efforts to gather evidence, including communications such as emails, directly from an Internet services provider.

So.  Let’s talk about what’s really at issue here.  We’re not talking about emails collected from companies with their own domain names and servers.  If a company maintains its own emails for its own purposes, the company is not a “provider of electronic communication service” under the ECPA and those emails are subject to SEC subpoenas just like its other documents.

But take, say, Google and Yahoo, among many others.  They are providers of electronic communication services.  Here’s what 18 U.S.C. § 2703(a) says about them:

A governmental entity may require the disclosure by a provider of electronic communication service of the contents of a wire or electronic communication, that is in electronic storage in an electronic communications system for one hundred and eighty days or less, only pursuant to a warrant issued using the procedures described in the Federal Rules of Criminal Procedure (or, in the case of a State court, issued using State warrant procedures) by a court of competent jurisdiction. A governmental entity may require the disclosure by a provider of electronic communications services of the contents of a wire or electronic communication that has been in electronic storage in an electronic communications system for more than one hundred and eighty days by the means available under subsection (b) of this section.

In plainer English, the SEC may require Google to disclose the contents of a customer’s emails only if the emails have been in storage for more than 180 days.  For newer emails, the government must have a search warrant, which the SEC, as a civil enforcement authority, cannot obtain.

For the SEC, the ECPA typically comes up when it is investigating people who are not using corporate email addresses.  For example, Ponzi schemes and prime bank frauds are often going to be run on hotmail.com, not citigroup.com.  The problem for the SEC is that people running Ponzi schemes tend to have few issues with deleting incriminating emails.  And Google isn’t obligated to keep those deleted emails for any particular time period.  So if some guy defrauds a bunch of people and then quickly deletes the emails explaining how the fraud happened, there’s not a lot the SEC can do about it.  So it is very, very rare that the SEC succeeds in using the ECPA to get emails from “providers of electronic communication service.”  And so . . . when Andrew Ceresney tells the Senate Judiciary Committee that amendments to the ECPA could impede civil law enforcement’s ability to uncover financial fraud and other unlawful conduct, he’s sort of right.  I might make the same argument if I were in his shoes.  But he’s also saying something that is almost inconsequential.  If the ECPA is not amended, the SEC will have a very hard time getting a hold of useful gmails.  If the ECPA is amended, it will have a very hard time getting a hold of useful gmails.  Just about every other issue in data privacy and securities enforcement is more significant than this one.

Copyright © 2015, Brooks, Pierce, McLendon, Humphrey & Leonard LLP

Part II: Legal Insights on Ashley Madison Hack

As more names emerge from the dark web data dump of Ashley Madison customers, lawyers around the globe have found a very willing group of would-be plaintiffs. Interestingly, all of these plaintiffs are named “Doe,” which must only be a coincidence, and certainly has nothing to do with the backlash that certain well-known ALM clients have experienced. All kidding aside, the size of the claims against ALM is staggering with one suit alleging more than $500 million in damages. How these plaintiffs will prove their damages is a question for another day, but the fact that ALM — which reported earnings of $115 million in 2014 — may soon face financial ruin must give any spectator pause.

The plaintiffs’ bar is certainly not the lone specter haunting ALM’s corridors these days. Although the company touts its cooperation with government officials in attempting to bring criminal charges against the Impact Team, that cooperation will be punctuated by the all-but-certain FTC enforcement action to come — assuming that the FTC’s data breach enforcement team were not among the 15,000 email addresses registered to a .mil or .gov account.

How will that enforcement action proceed? In many cases, the FTC initiates its investigation with a letter, sometimes called an “Access Letter” or an “Informal Inquiry Letter.” Although there is no enforceable authority behind such a letter, companies typically conclude that cooperation is the best course. For more formal investigations (or when the access letter is ignored), the FTC will issue “Civil Investigative Demands,” which are virtually the same as a subpoena, and are enforceable by court order. After collecting materials, the investigators will – in order from best case scenario to worst – drop the matter altogether, negotiate a consent decree, or begin a formal enforcement action via a complaint.

There is, of course, a lot more to an action than what I’ve listed above, and each step deserves a post of its own. For today, the pressing question is – what’s going to happen to ALM when the FTC calls? Under the circumstances, it would make sense for ALM to push as hard as it can for a consent order, given that the likelihood of succeeding in litigation against the Commission is vanishingly low – there is little doubt that ALM failed to comply with its own promised standards for protecting customer data. And, in light of recent revelations about what really happened when customers paid to “delete” their Ashley Madison accounts, ALM will want to forestall the threat of a separate, non-data breach related unfair business practices suit any way it can.

Every consent order looks different, but the FTC has made a few requirements staples of its agreements with offending businesses over the last two decades. These include:

  • Establishing and maintaining a comprehensive information security program to protect consumers’ sensitive personal data, including credit card, social security, and bank account numbers.

  • Establishing and reporting on yearly data security protocol updates and continuing education for decision makers and data security personnel.

  • Working to improve the transparency of data, so that consumers can access their PII without excessive burdens.

  • Guaranteeing that all public statements and advertisements about the nature and extent of a company’s privacy and data security protocols are accurate.

 ALM will undoubtedly offer to take all of these steps, and more, in negotiations with the Commission. But as I mentioned above, the torrent of lawsuits ALM faces in the next year or so may moot any consent decree with the FTC. If ALM liquidates in the face of ruinous lawsuits and legal bills, the FTC’s demands will be meaningless. ALM, then, is likely an example of a company that would have benefited from a more minor security breach and subsequent FTC imposition of the kind of remedial measures that may have stopped this summer’s catastrophic data breach. An ounce of prevention is worth a pound of cure, they say, and ALM may learn that lesson at the cost of its business.

© 2015 Bilzin Sumberg Baena Price & Axelrod LLP

Legal Insights on the Ashley Madison Hack: Part I

Internet commenters and legal analysts alike are buzzing about the Ashley Madison hack. The website — which billed itself as a networking site for anyone who wanted to discreetly arrange an extramarital affair — has already been named in several class action lawsuits, with claims ranging from breach of contract to negligence. As more names are unearthed (and more personal data divulged), additional lawsuits are sure to follow. For those lucky enough to be watching this spectacle from the sidelines, there are some important questions to ask. In the next few posts, I’ll consider some of these issues.

It seems clear that the Impact Team (the group responsible for breaking into Ashley Madison’s servers) were singularly focused on exposing embarrassing personal information as well as sensitive financial data. What is less clear is why they chose Ashley Madison’s parent company Avid Life Media (“ALM”) as the target. Certainly, the general public’s reaction to the data breach was muted if not downright amused, likely because the “victims” here were about as unsympathetic as they come. Still, the choice of Ashley Madison, and the way the hack was announced, demonstrates an important point about data security: self-described “hacktivists” may target secure information for reasons other than financial gain.

The Impact Team appears to be more motivated by shaming than any identifiable monetary benefit, although it is entirely possible that money was a factor. Interestingly, the intended damage from the leak was designed to flow in two directions. The first, and most obvious, was to Ashley Madison users, who clearly faced embarrassment and worse if their behavior were made public. The second direction was to ALM itself, for “fraud, deceit, and stupidity.” In particular, the Impact Team referred to ALM’s promises to customers that it would delete their data permanently, and keep their private information safe. Obviously, that didn’t happen. ALM made matters far worse for itself when it scrambled to provide a response to Impact Team’s threat, and made promises of security it could not keep. Now, in addition to a class action lawsuit alleging half a billion dollars in damages, ALM faces the wrath of a recently emboldened FTC.

One takeaway from this situation from a legal perspective is how ALM was targeted. Black hat groups often solicit suggestions for whom to attack, but typically in a secure fashion that would prevent early warning. LulzSec, responsible for the data breach at Sony Pictures in 2011, made a habit of seeking input as to what government entity or business to target, but kept those suggestions, and the contributors, secret. The Impact Team broke from that pattern and announced, before the breach, that they would release private information unless ALM shut down Ashley Madison and its sister site “Established Men.” Other than a similar demand made to Sony Pictures regarding the film The Interview, I can think of no other instances where hackers/hacktivists telegraphed that a cyber attack was coming.

Realizing this, a few questions immediately sprang to mind:

  • What do you do if your company gets a warning from a web group?
  • How many businesses have received such warnings and silently complied, just to avoid loss of sensitive information or damage to their reputation?
  • What happens to officers and directors who receive these warnings and do nothing? Is that a breach of fiduciary duties? Negligence? A civil conspiracy?

Ultimately, all of these questions merge into the two ongoing themes of data security: How do you protect critical information, and what do you do if you can’t?

In my upcoming articles I will get into the particulars of how some companies respond to cyberattacks, but for now, it makes sense to highlight the importance of planning ahead for your business. Even a basic cyber security protocol is better than a haphazard, post hoc response, and there are many resources that provide guidance about best practices. Longer-term planning requires expertise and commitment, but education can begin any time.

I’ll paraphrase Ashley Madison — Life is short: make a plan.

© 2015 Bilzin Sumberg Baena Price & Axelrod LLP

Reasonable Expectation of Privacy: Are You Free To Eavesdrop on Pocket Dials?

Most people have experienced a “pocket dial” – be it as the sender or receiver – and some have found themselves in embarrassing situations as a consequence.  But should people reasonably expect that conversations overheard during a “pocket dial” call are private and protected? Should the recipient feel obligated to end the call?  The Sixth Circuit says no.

Yesterday, the Sixth Circuit decided whether a reasonable expectation of privacy exists with respect to “pocket dialed” communications.  Carol Spaw, assistant to the CEO of Cincinnati/Northern Kentucky International Airport, received a call from James Huff, chairman of the airport board.  It didn’t take long for Spaw to figure out that she had received a pocket dial, and that the conversation in the background was not intended for her ears.  Spaw stayed on the line for an hour and a half – taking notes and recording the audio as Huff discussed private business matters with another board member, and later with his wife. Spaw sent the recording to a third party company to enhance the quality, and shared the recording with other board members. Huff and his wife sued Spaw for intentionally intercepting their private conversation in violation of Title III of the Omnibus Crime Control and Safe Streets Act of 1968. The district court granted summary judgment in favor of Spaw, finding no “reasonable expectation” that the conversation would not be heard.  On appeal, the Sixth Circuit affirmed in part, reversed in part, and remanded.

Title III only protects communication when the expectation of privacy is subjectively and objectively reasonable.  The Sixth Circuit agreed with the district court that James Huff did not have a reasonable expectation that his conversation was private. Although Mr. Huff did not deliberately dial the call, he knew that “pocket dials” were possible, and did not take any precautions to prevent them.  The court analogized Huff’s situation to a homeowner who neglects to cover his windows with drapes; under the plain view doctrine, the homeowner has no expectation of privacy in his home when the windows are uncovered. Huff could have easily utilized protective settings on his phone to prevent pocket dials.

The Sixth Circuit reversed with respect to Bertha Huff’s claim.  Bertha Huff was communicating with her husband in the privacy of a hotel room. She had a reasonable expectation of privacy in that context, and she was not responsible for her husband’s pocket dial. The Sixth Circuit feared that affirming the district court’s decision with respect to Bertha’s claim would undermine what we currently consider a reasonable expectation of privacy in face-to-face conversations. The court remanded the case back to the district court to decide whether Spaw’s actions made her liable for “intentionally” intercepting oral communications.

The Sixth Circuit’s decision leaves us with this: if you receive a pocket-dialed call, feel free to listen, record, and share (but be wary of the privacy interest of the other participants in the conversation); if you are a pocket dialer, lock your phone.

Lauren Maynard contributed to this article.

© Copyright 2015 Squire Patton Boggs (US) LLP

U.S., U.K. Governments Seek Cyber Innovations from Private Sector

The private sector is likely to produce critical cyber innovations—at least, that is what the U.S. Defense Advanced Research Projects Agency (“DARPA”) and the U.K. Centre for Defence Enterprise (“CDE”) would like to see.

In the United States, although the internet may have been invented at DARPA, DARPA is turning to a private sector competition to protect it.  In March 2014, DARPA launched its “Cyber Grand Challenge”: an open competition to devise automated security systems that can defend against cyberattacks as fast as they are launched.  DARPA pitched the Grand Challenge as a “first of its kind,” “capture the flag”-style competition for computer security experts in academia, industry, and the broader security community.  Over 100 teams registered to compete.  Some likely saw the cash prizes—$2 million for first place, $1 million for second, and $750,000 for third—as nominal incentives compared to the value of shaping future cybersecurity efforts.  On July 8, 2015, DARPA announced its selection of seven finalists for the final round of the competition.  The finalists include computer security experts from industry, start-up incubators, and academia.

Not one of DARPA’s Grand Challenge finalists?  Take heart: DARPA is said to be developing technology that would allow spectators to watch the final contest in real time.  Or better yet, look to the United Kingdom, where the CDE has an open competition seeking “novel approaches to human interaction with cyberspace to increase military situational awareness.”  CDE is asking for “revolutionary approaches” to “rapidly convey” cyberspace information, events, and courses of action to military commanders, analysts, and decision-makers.  Just as DARPA officials acknowledged the limitations of existing cybersecurity strategy and technology, CDE officials have recognized that “the traditional human-computer interface” is inadequate for “current military information processing and sense-making in the cyber domain.”  Up to £500,000 in research funding will be awarded.  A July 9, 2015 presentation given by CDE is available online; slides from a July 16, 2015 webinar could soon be available as well.  The competition closes on September 3, 2015.  Proposals must be submitted through CDE’s online portal.

© 2015 Covington & Burling LLP

Federal Trade Commission: Start with Security

On June 30, 2015, the Federal Trade Commission (FTC) published “Start with Security: A Guide for Businesses” (the Guide).

The Guide is based on 10 “lessons learned” from the FTC’s more than 50 data-security settlements. In the Guide, the FTC discusses specific settlements that illustrate each of the 10 lessons:


  1. Start with security;

  2. Control access to data sensibly;

  3. Require secure passwords and authentication;

  4. Store sensitive personal information securely and protect it during transmission;

  5. Segment networks and monitor anyone trying to get in and out of them;

  6. Secure remote network access;

  7. Apply sound security practices when developing new products that collect personal information;

  8. Ensure that service providers implement reasonable security measures;

  9. Implement procedures to help ensure that security practices are current and address vulnerabilities; and

  10. Secure paper, physical media and devices that contain personal information.

The FTC also offers an online tutorial titled “Protecting Personal Information.”

We expect that the 10 lessons in the Guide will become the FTC’s road map for handling future enforcement actions, making the Guide required reading for any business that processes personal information.

© 2015 McDermott Will & Emery

June 24th – Healthcare Quarterly Update: Cybersecurity and Health Data Privacy by Bloomberg BNA

Washington, DC

Join Bloomberg BNA for this essential event that explores concerns relating to cyber-security and health data privacy. Healthcare industry experts Kirk Nahra and David Holtzman will join HHS’s Iliana Peters for a comprehensive examination of:
• Big data in the healthcare sector and how to protect information
• Protecting patient and organization information
• Federal enforcement of HIPAA Privacy, Security, Data Breach rules
• Practical, up-to-date information on current issues
• And so much more.

Click here to register today!

Identify actionable issues, secure your organization, and earn CLE credits.

A breakfast panel with accomplished scholars and an HHS representative. This conversation will address practical considerations for ensuring that patients’ data is handled in full compliance with all regulations and ethical responsibilities. Healthcare practitioners are increasingly required to address concerns of data privacy and cybersecurity; attending this panel will help you identify actionable points in the law common to many legal practices.

Home Depot Moves to Dismiss Consumer Data Breach Claims for Lack of Standing

Home Depot has staked its defense of consumer claims arising from the 2014 theft of payment card data from the home improvement retailer on the asserted absence of injuries sufficient to confer standing to sue.  Because consumers rarely sustain out-of-pocket losses when their payment card numbers are stolen, lack of standing is typically the primary ground for seeking dismissal of consumer data breach claims. While many courts have been receptive to arguments seeking dismissal of consumer data breach claims for lack of standing, decisions in recent cases – including, most significantly, the Target data breach case – have found that non-pecuniary harms constitute sufficient injury to confer standing.  The survival of the consumer claims will depend on which line of precedent the Home Depot court follows.

Arguments as to standing are grounded in Article III, Section 2 of the United States Constitution, which limits the jurisdiction of federal courts to “cases” or “controversies.”   To constitute a case or controversy, a claim cannot arise from a speculative or potential harm, but rather must concern an actual or imminent injury.  Thus, in Clapper v. Amnesty International USA, 133 S. Ct. 1138 (2013), the Supreme Court ruled that mere interception of private data – in that case, by the National Security Agency, through its wiretaps of telephone and email communications – did not confer standing to sue.  Clapper held that speculation that intercepted data might be misused did not confer Article III standing; actual use or misuse of the intercepted information was required.  Defendants in privacy cases, citing Clapper, have succeeded in dismissing data breach claims for lack of standing where data breach plaintiffs have not alleged actual misuse of their data.  See, e.g., Polanco v. Omnicell, Inc., 988 F. Supp. 2d 451 (D.N.J. 2013); In re Barnes & Noble Pin Pad Litig., No. 12-8617, 2013 WL 4759588 (N.D. Ill. Sep. 3, 2013); Yunker v. Pandora Media, Inc., No. 11-3113, 2013 WL 1282980 (N.D. Cal. Mar. 26, 2013).

Home Depot’s brief in support of its motion to dismiss relies heavily on Clapper to support its argument that none of the named plaintiffs has suffered an actionable injury. Home Depot contends that consumers could not have been injured because card issuers hold consumers harmless for fraudulent charges and Home Depot offered free credit monitoring to affected customers. The brief also rejects plaintiffs’ attempts to plead non-monetary harms, arguing that none of the claimed harms constitutes an injury cognizable under Article III. For example, some plaintiffs alleged that they suffered inconvenience and embarrassment as a result of temporarily frozen bank accounts; according to Home Depot, in the absence of any out-of-pocket losses such harms are not actionable injuries. Some plaintiffs incurred out-of-pocket credit monitoring costs, but Home Depot takes the position that those expenditures were gratuitous in light of the free services Home Depot offered. Still other plaintiffs alleged out-of-pocket costs associated with fraudulent charges on their payment cards, but Home Depot contends that such injuries are not fairly traceable to Home Depot because the charges should have been covered by the card issuers.

There are also plaintiffs who alleged that they suffered identity theft. Home Depot argues that such allegations should be rejected as implausible because, based on plaintiffs’ own allegations, the breach did not compromise social security numbers or date-of-birth information, both of which would be required to successfully steal an identity.

Although Home Depot makes strong arguments why plaintiffs lack standing, it is constrained to admit in its brief that the court hearing the Target data breach cases rejected an identical standing argument that had been advanced by Target. In the opinion denying Target’s motion to dismiss, the court gave Target’s standing arguments cursory treatment, finding that “Plaintiffs have alleged injury” in the form of “unlawful charges, restricted or blocked access to bank accounts, inability to pay other bills, and late payment charges or new card fees.” Although Target, like Home Depot, contended that such alleged injuries are insufficient to confer standing because “Plaintiffs do not allege that their expenses were unreimbursed or say whether they or their bank closed their accounts . . . ,” the court rejected this argument, stating that Target had “set a too-high standard for Plaintiffs to meet at the motion-to-dismiss stage.”

Home Depot characterizes the Target decision as an outlier that offers no reasoned support for its rejection of Target’s standing arguments. Further, the Target decision did not rule out the possibility that the alleged injuries would prove not fairly traceable to Target’s conduct, stating that “[s]hould discovery fail to bear out Plaintiffs’ allegations, Target may move for summary judgment on the issue.” Although the settlement of Target’s consumer claims means that the proposition will not be tested in that case, the Target court’s recognition that injury matters for standing purposes lends some support to Home Depot’s position that the Target decision should be disregarded where it is apparent at the pleading stage that no injury has occurred.